Operating Systems - Concurrency
Presentation Transcript

  • Operating Systems (CMPSCI 377): Concurrency Patterns. Emery Berger, University of Massachusetts Amherst, Department of Computer Science.
  • Finishing Up From Last Time. Avoiding deadlock: is this OK?
        Thread 1: lock(a); lock(b); unlock(b); unlock(a);
        Thread 2: lock(b); lock(a); unlock(a); unlock(b);
  • Finishing Up From Last Time. Not OK: it may deadlock, because each thread can grab its first lock and then wait forever for the lock the other thread holds.
        Thread 1: lock(a); lock(b); unlock(b); unlock(a);
        Thread 2: lock(b); lock(a); unlock(a); unlock(b);
    Solution: impose a canonical (acyclic) lock order, so both threads acquire in the same order:
        Thread 1: lock(a); lock(b); unlock(b); unlock(a);
        Thread 2: lock(a); lock(b); unlock(b); unlock(a);
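    A minimal Java sketch of the canonical-order fix, using ReentrantLock; the class and method names are illustrative, not from the lecture:

        import java.util.concurrent.locks.ReentrantLock;

        // Every thread acquires a before b, so no cycle of waiting threads can form.
        public class LockOrdering {
            static final ReentrantLock a = new ReentrantLock();
            static final ReentrantLock b = new ReentrantLock();

            static void safeCriticalSection() {
                a.lock();              // always first
                try {
                    b.lock();          // always second
                    try {
                        // ... work that needs both locks ...
                    } finally {
                        b.unlock();
                    }
                } finally {
                    a.unlock();
                }
            }

            public static void main(String[] args) {
                // Two threads using the same acquisition order cannot deadlock on a and b.
                new Thread(LockOrdering::safeCriticalSection).start();
                new Thread(LockOrdering::safeCriticalSection).start();
            }
        }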
  • Motivating Example: Web Server. A client (browser) requests HTML and images, e.g. http://server/Easter-bunny/200x100/75.jpg. The server caches requests and sends the data back to the client.
  • Possible Implementation:
        while (true) {
          wait for connection;
          read from socket & parse URL;
          look up URL contents in cache;
          if (!in cache) {
            fetch from disk / execute CGI;
            put in cache;
          }
          send data to client;
        }
  • Possible Implementation, annotated with the resource each step uses:
        while (true) {
          wait for connection;              // net
          read from socket & parse URL;     // cpu
          look up URL contents in cache;    // cpu
          if (!in cache) {
            fetch from disk / execute CGI;  // disk
            put in cache;                   // cpu
          }
          send data to client;              // net
        }
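    For concreteness, a minimal Java sketch of this sequential loop; it is illustrative only (crude request parsing, raw bytes instead of a real HTTP response), and the cache and fetchFromDisk placeholder are assumptions, not the lecture's code:

        import java.io.*;
        import java.net.*;
        import java.util.*;

        // One thread does everything, so a slow disk fetch stalls every waiting client.
        public class SequentialServer {
            static final Map<String, byte[]> cache = new HashMap<>();

            public static void main(String[] args) throws IOException {
                ServerSocket server = new ServerSocket(8080);
                while (true) {
                    try (Socket client = server.accept()) {              // wait for connection (net)
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(client.getInputStream()));
                        String request = in.readLine();                  // e.g. "GET /path HTTP/1.0"
                        if (request == null) continue;
                        String url = request.split(" ")[1];              // crude parse (cpu)
                        byte[] contents = cache.get(url);                // cache lookup (cpu)
                        if (contents == null) {
                            contents = fetchFromDisk(url);               // disk / CGI
                            cache.put(url, contents);                    // cpu
                        }
                        client.getOutputStream().write(contents);        // send data to client (net)
                    }
                }
            }

            static byte[] fetchFromDisk(String url) {
                // Placeholder for the disk / CGI step on the slide.
                return ("contents of " + url + "\n").getBytes();
            }
        }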
  • Problem: Concurrency. A sequential server is fine until there are more clients, a bigger server, or multicore / multiprocessor hardware. Goals: hide the latency of network & disk I/O (don't keep clients waiting) and improve throughput (serve up more pages).
  • Building Concurrent Apps. Patterns / architectures: thread pools, producer-consumer, "bag of tasks", worker threads (work stealing). Goals: minimize latency, maximize parallelism, and keep programs simple to write & maintain.
  • Thread Pools. Thread creation & destruction are relatively expensive. Instead, use a pool of threads: when a new task arrives, take a thread from the pool to work on it, blocking if the pool is empty. This is faster with many tasks and limits the maximum number of threads (the ThreadPoolExecutor class in Java).
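    A minimal Java sketch of the idea using Executors.newFixedThreadPool, which is backed by the ThreadPoolExecutor class the slide names; the task bodies are placeholders:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class PoolDemo {
            public static void main(String[] args) {
                // At most 4 worker threads; submitted tasks reuse them instead of
                // paying for thread creation and destruction on every request.
                ExecutorService pool = Executors.newFixedThreadPool(4);
                for (int i = 0; i < 20; i++) {
                    final int taskId = i;
                    pool.submit(() -> System.out.println(
                            "task " + taskId + " on " + Thread.currentThread().getName()));
                }
                pool.shutdown();   // stop accepting tasks; workers exit once the queue drains
            }
        }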
  • Producer-Consumer. Can get pipeline parallelism: one thread (the producer) does work, e.g. I/O, and hands it off to another thread (the consumer).
  • Producer-Consumer. The handoff can be a LinkedBlockingQueue, which blocks producers on put() when the queue is full and consumers on take() when it is empty.
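    A small, self-contained Java sketch of the handoff (illustrative only; in java.util.concurrent the blocking retrieval is take(), which plays the role of poll() in the slides' pseudocode):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class PipelineDemo {
            public static void main(String[] args) {
                // Bounded queue: put() blocks when full, take() blocks when empty.
                BlockingQueue<String> queue = new LinkedBlockingQueue<>(16);

                Thread producer = new Thread(() -> {
                    try {
                        for (int i = 0; i < 100; i++)
                            queue.put("item-" + i);                   // e.g. the I/O stage
                        queue.put("DONE");                            // simple end-of-stream marker
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });

                Thread consumer = new Thread(() -> {
                    try {
                        String item;
                        while (!(item = queue.take()).equals("DONE"))
                            System.out.println("consumed " + item);   // the CPU stage
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });

                producer.start();
                consumer.start();
            }
        }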
  • Producer-Consumer Web Server. Use 2 threads, a producer and a consumer, connected by queue.put(x) and x = queue.poll(). The generic skeletons are:
        producer:  while (true) { do something…; queue.put (x); }
        consumer:  while (true) { x = queue.poll(); do something…; }
    These get overlaid on the sequential web-server loop shown earlier (wait for connection; read from socket & parse URL; look up URL contents in cache; if not in cache, fetch from disk / execute CGI and put in cache; send data to client).
  • Producer-Consumer Web Server. Pair of threads: one reads, one writes.
        producer:
        while (true) {
          wait for connection;
          read from socket & parse URL;
          queue.put (URL);
        }
        consumer:
        while (true) {
          URL = queue.poll();
          look up URL contents in cache;
          if (!in cache) {
            fetch from disk / execute CGI;
            put in cache;
          }
          send data to client;
        }
  • Producer-Consumer Web Server. More parallelism: optimize the common case (cache hit) by adding a second queue for misses.
        thread 1 (accept):
        while (true) {
          wait for connection;
          read from socket & parse URL;
          queue1.put (URL);
        }
        thread 2 (cache lookup, serves hits):
        while (true) {
          URL = queue1.poll();
          look up URL contents in cache;
          if (!in cache) { queue2.put (URL); continue; }
          send data to client;
        }
        thread 3 (misses):
        while (true) {
          URL = queue2.poll();
          fetch from disk / execute CGI;
          put in cache;
          send data to client;
        }
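    A hedged Java sketch of this two-queue structure; the class, method, and helper names (TwoStageServer, acceptLoop, lookupLoop, missLoop, fetchFromDisk, send) are illustrative stubs, not the lecture's code:

        import java.util.Map;
        import java.util.concurrent.*;

        // Stage 2 serves cache hits itself (the common case) and only hands misses to stage 3.
        public class TwoStageServer {
            static final BlockingQueue<String> queue1 = new LinkedBlockingQueue<>(); // parsed URLs
            static final BlockingQueue<String> queue2 = new LinkedBlockingQueue<>(); // cache misses
            static final Map<String, byte[]> cache = new ConcurrentHashMap<>();

            public static void main(String[] args) {
                new Thread(TwoStageServer::acceptLoop).start();   // stage 1: accept & parse (net)
                new Thread(TwoStageServer::lookupLoop).start();   // stage 2: cache lookup (fast path)
                new Thread(TwoStageServer::missLoop).start();     // stage 3: disk / CGI
            }

            static void acceptLoop() { /* wait for connection, parse URL, queue1.put(url) */ }

            static void lookupLoop() {
                try {
                    while (true) {
                        String url = queue1.take();
                        byte[] contents = cache.get(url);
                        if (contents == null) { queue2.put(url); continue; }  // miss: hand off
                        send(url, contents);                                  // hit: reply directly
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }

            static void missLoop() {
                try {
                    while (true) {
                        String url = queue2.take();
                        byte[] contents = fetchFromDisk(url);
                        cache.put(url, contents);
                        send(url, contents);
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }

            static byte[] fetchFromDisk(String url) { return new byte[0]; }   // placeholder
            static void send(String url, byte[] contents) { }                 // placeholder
        }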
  • When to Use Producer-Consumer. Works well for pairs of threads, and best when producer & consumer are symmetric: they proceed at roughly the same rate and the order of operations matters. It is not as good with many threads, when order doesn't matter, or when the stages progress at different rates.
  • Producer-Consumer Web Server. Caveat for the pipelined version above (queue1 feeding the lookup thread, queue2 feeding the miss-handling thread): you have to be careful to balance load across the threads.
  • Bag of Tasks. A collection of mostly independent tasks, shared among several worker threads.
  • Bag of Tasks. An addWork thread puts tasks into the bag and the workers pull them out. The bag could also be a LinkedBlockingQueue (put, poll).
  • Bag of Tasks Web Server. Re-structure the sequential loop into bag-of-tasks form, with addWork & worker threads using t = bag.poll() or bag.put(t):
        while (true) {
          wait for connection;
          read from socket & parse URL;
          look up URL contents in cache;
          if (!in cache) {
            fetch from disk / execute CGI;
            put in cache;
          }
          send data to client;
        }
  • Bag of Tasks Web Server. Re-structure into addWork & worker, with t = bag.poll() or bag.put(t):
        addWork:
        while (true) {
          wait for connection;
          bag.put (URL);
        }
        worker:
        while (true) {
          URL = bag.poll();
          look up URL contents in cache;
          if (!in cache) {
            fetch from disk / execute CGI;
            put in cache;
          }
          send data to client;
        }
  • Bag of Tasks Web Server. The same addWork and worker loops, now with one addWork thread feeding the bag and several identical worker threads draining it.
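    A compact Java sketch of this structure, taking up the slide's suggestion of a LinkedBlockingQueue as the bag; the class name and the handle() helper are illustrative placeholders for the worker body above:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class BagOfTasks {
            static final BlockingQueue<String> bag = new LinkedBlockingQueue<>();

            public static void main(String[] args) {
                for (int i = 0; i < 4; i++) {                     // identical worker threads
                    new Thread(() -> {
                        try {
                            while (true) handle(bag.take());      // block until a task is available
                        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                    }).start();
                }
                // addWork: in the web server this loop would accept a connection
                // and put each parsed URL into the bag.
                for (int i = 0; i < 100; i++) bag.add("/page-" + i);
            }

            static void handle(String url) {                      // stands in for lookup/fetch/send
                System.out.println(Thread.currentThread().getName() + " serving " + url);
            }
        }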
  • Bag of Tasks vs. Producer-Consumer. The bag exploits more parallelism even with coarse-grained threads (tasks don't have to be broken up too finely), and it is easy to change or add new functionality. But it has one major performance problem…
  • What's the Problem? Contention: a single lock protects the shared bag, and with addWork and every worker hammering on it, that lock becomes a bottleneck to scalability.
  • Work Queues. Give each thread its own work queue (a deque), so there is no single point of contention. The threads are now generic "executors", and the tasks (the colored balls in the slide's diagram) are units of work: blue = parse, yellow = connect, and so on.
  • Work Queues. With per-thread queues there is no single point of contention, but now what happens when a thread's own queue runs dry?
  • Work Stealing. When a thread runs out of work, it steals work from a random other thread.
  • Work Stealing. More precisely, an idle thread steals work from the top of a randomly chosen victim's deque. This is an optimal load-balancing algorithm.
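    The slides don't tie work stealing to a particular library; purely as an illustration, here is a sketch using Executors.newWorkStealingPool(), the JDK's ForkJoinPool-backed work-stealing executor, in which each worker has its own deque and idle workers steal from others:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class StealingDemo {
            public static void main(String[] args) throws InterruptedException {
                ExecutorService pool = Executors.newWorkStealingPool();   // one deque per worker
                for (int i = 0; i < 1000; i++) {
                    final int taskId = i;
                    // Deliberately uneven task sizes: stealing balances the load automatically.
                    pool.submit(() -> busyWork(taskId % 50 == 0 ? 5_000_000 : 50_000));
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.MINUTES);
            }

            static void busyWork(int iterations) {
                long sum = 0;
                for (int i = 0; i < iterations; i++) sum += i;
                if (sum == 42) System.out.println("unlikely");   // keep the loop from being optimized away
            }
        }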
  • Work Stealing Web Server. Re-structure the sequential loop into tasks (readURL, lookUp, addToCache, output), each enqueued onto the current thread's queue, e.g. myQueue.put(new readURL (url)). The loop being split up:
        while (true) {
          wait for connection;
          read from socket & parse URL;
          look up URL contents in cache;
          if (!in cache) {
            fetch from disk / execute CGI;
            put in cache;
          }
          send data to client;
        }
  • Work Stealing Web Server. The first task:
        readURL(url) {
          wait for connection;
          read from socket & parse URL;
          myQueue.put (new lookUp (URL));
        }
  • Work Stealing Web Server. The full set of tasks, each enqueuing its successor:
        readURL(url) {
          wait for connection;
          read from socket & parse URL;
          myQueue.put (new lookUp (URL));
        }
        lookUp(url) {
          look up URL contents in cache;
          if (!in cache) {
            myQueue.put (new addToCache (URL));
          } else {
            myQueue.put (new output(contents));
          }
        }
        addToCache(URL) {
          fetch from disk / execute CGI;
          put in cache;
          myQueue.put (new output(contents));
        }
  • Work Stealing. Works great for heterogeneous tasks: convert addWork and the worker loops into units of work (the different colors in the diagrams). It is flexible (tasks can be coarse, fine-grained, or anything in between), tasks are easy to re-define, load balancing is automatic, and thread logic is separated from functionality. A popular model for structuring servers.
  • The End