Distributed Shared Memory
Paul Krzyzanowski
[email_address]
Distributed Systems

Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License.
Motivation
- SMP systems
  - Run parts of a program in parallel
  - Share a single address space
    - Share data in that space
  - Use threads for parallelism
  - Use synchronization primitives to prevent race conditions
- Can we achieve this with multicomputers?
  - All communication and synchronization must be done with messages
Distributed Shared Memory (DSM)
- Goal: allow networked computers to share a region of virtual memory
- How do you make a distributed memory system appear local?
- Physical memory on each node is used to hold pages of the shared virtual address space. Processes address it like local memory.
Take advantage of the MMU
- The page table entry for a page is valid if the page is held (cached) locally
- An attempt to access a non-local page leads to a page fault
- Page fault handler
  - Invokes the DSM protocol to handle the fault
  - Brings the page in from the remote node
- Operations are transparent to the programmer
  - DSM looks like any other virtual memory system
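User-level DSM implementations typically build on exactly this mechanism: protect the shared region, catch the fault, run the protocol, and let the access retry. The sketch below shows the trick on a single node, assuming a POSIX system with mprotect() and SIGSEGV; the directory lookup and network page transfer that a real DSM system would perform in the handler are reduced to comments.

```c
/* Minimal sketch: catching a "remote page" access with mprotect + SIGSEGV.
 * Assumptions: POSIX system, 4 KB pages; the directory lookup and page
 * transfer a real DSM system would do are only hinted at in comments. */
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define PAGE_SIZE 4096

static void fault_handler(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    /* Align the faulting address down to its page boundary. */
    void *page = (void *)((uintptr_t)si->si_addr & ~((uintptr_t)PAGE_SIZE - 1));
    /* A real DSM handler would ask the directory who holds this page,
     * receive its contents from that node, and copy them in here. */
    mprotect(page, PAGE_SIZE, PROT_READ | PROT_WRITE);  /* mark page present */
}

int main(void)
{
    /* One page of "shared" address space, initially inaccessible (not cached). */
    char *shared = mmap(NULL, PAGE_SIZE, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    shared[0] = 'x';   /* faults; the handler "fetches" the page, then this retries */
    printf("wrote '%c' after the fault was serviced\n", shared[0]);
    return 0;
}
```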
Simplest design
- Each page of the virtual address space exists on only one machine at a time
  - No caching
Simplest design
- On a page fault:
  - Consult a central server (the directory) to find which machine is currently holding the page
- Request the page from the current owner:
  - The current owner invalidates its PTE
  - Sends the page contents
  - The recipient allocates a frame, reads in the page, and sets its PTE
  - The recipient informs the directory of the new location
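The toy simulation below walks through that flow. The array-based central directory, the node ids, and the fetch_page helper are all illustrative; PTE invalidation and the page transfer itself are stand-in print statements.

```c
/* Toy simulation of the no-caching design: one copy of each page, with a
 * central directory mapping page -> current owner. Message passing and PTE
 * manipulation are reduced to printf for illustration. */
#include <stdio.h>

#define NUM_PAGES 16

static int owner[NUM_PAGES];   /* the central directory */

/* Invoked by the page-fault handler when `node` touches `page`. */
static void fetch_page(int node, int page)
{
    int holder = owner[page];                       /* 1. ask the directory  */
    printf("P%d: page %d is at P%d, requesting it\n", node, page, holder);
    printf("P%d: invalidates its PTE and sends the page contents\n", holder);
    owner[page] = node;                             /* 2. tell the directory */
    printf("P%d: maps page %d and resumes\n", node, page);
}

int main(void)
{
    owner[3] = 2;        /* page 3 currently lives on P2 */
    fetch_page(0, 3);    /* P0 faults on page 3 */
    fetch_page(1, 3);    /* later, P1 faults on it too */
    return 0;
}
```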
Problem
- The directory becomes a bottleneck
  - All page query requests must go to this server
- Solution
  - Distributed directory
  - Distribute it among all processors
  - Each node is responsible for a portion of the address space
  - To find the responsible processor:
    - hash(page#) mod num_processors
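A minimal sketch of the lookup; the 4 KB page size, the four-node count, and the trivial hash are illustrative choices, not fixed by the slides.

```c
/* Which node manages the directory entry for a given virtual address?
 * Sketch only: 4 KB pages and 4 nodes are arbitrary example values. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE      4096
#define NUM_PROCESSORS 4

static int directory_node(uintptr_t vaddr)
{
    uintptr_t page = vaddr / PAGE_SIZE;        /* page number */
    return (int)(page % NUM_PROCESSORS);       /* hash(page#) mod num_processors */
}

int main(void)
{
    uintptr_t addrs[] = { 0x0000, 0x1000, 0x6000, 0xB000 };
    for (int i = 0; i < 4; i++)
        printf("address 0x%05lx -> directory on P%d\n",
               (unsigned long)addrs[i], directory_node(addrs[i]));
    return 0;
}
```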
Distributed Directory
[Figure: four nodes, each holding the directory entries (Page -> Location) for the pages that hash to it]
- P0: 0000 -> P3, 0004 -> P1, 0008 -> P1, 000C -> P2, ...
- P2: 0002 -> P3, 0006 -> P1, 000A -> P0, 000E -> --, ...
- P3: 0003 -> P3, 0007 -> P1, 000B -> P2, 000F -> --, ...
- P1: 0001 -> P3, 0005 -> P1, 0009 -> P0, 000D -> P2, ...
Design Considerations: granularity
- Memory blocks are typically a multiple of a node's page size
  - To integrate with the VM system
- Large pages are good
  - The cost of migration is amortized over many localized accesses
- BUT
  - Larger pages increase the chance that multiple objects reside in one page
    - Thrashing
    - False sharing
Design Considerations: replication
- What if we allow copies of shared pages on multiple nodes?
- Replication (caching) reduces the average cost of read operations
  - Simultaneous reads can be executed locally across hosts
- Write operations become more expensive
  - Cached copies need to be invalidated or updated
- Worthwhile if the read/write ratio is high
Replication
- Multiple readers, single writer
  - One host can be granted a read-write copy
  - Or multiple hosts can be granted read-only copies
Replication
- Read operation:
  - If the page is not local
    - Acquire a read-only copy of the page
    - Set access rights to read-only on any writable copy held by other nodes
- Write operation:
  - If the page is not local or there is no write permission
    - Revoke write permission from the other writable copy (if one exists)
    - Get a copy of the page from the owner (if needed)
    - Invalidate all copies of the page at other nodes
Full replication
- Extend the model
  - Multiple hosts have read/write access
  - Requires a multiple-readers, multiple-writers protocol
  - Access to shared data must be controlled to maintain consistency
Dealing with replication
- Keep track of copies of the page
  - A directory with a single node per page is not enough
  - Keep track of the copyset
    - The set of all systems that requested copies
- On getting a request for a copy of a page:
  - The directory adds the requestor to the copyset
  - The page owner sends the page contents to the requestor
- On getting a request to invalidate a page:
  - Issue invalidation requests to all nodes in the copyset and wait for acknowledgements
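A sketch of the directory-side bookkeeping follows. The bitmask copyset representation and the function names are illustrative, and message sends and acknowledgements are printf placeholders.

```c
/* Directory entry for one page: who owns it and who holds cached copies.
 * Sketch only; the real protocol would send messages and wait for acks. */
#include <stdint.h>
#include <stdio.h>

struct dir_entry {
    int      owner;      /* node holding the (only) writable copy */
    uint32_t copyset;    /* bit n set => node n holds a cached copy */
};

/* A node asks for a read-only copy: record it, then the owner ships the page. */
static void grant_read_copy(struct dir_entry *e, int node)
{
    e->copyset |= 1u << node;
    printf("P%d: send page contents to P%d\n", e->owner, node);
}

/* A node wants to write: invalidate every other cached copy first. */
static void grant_write(struct dir_entry *e, int writer)
{
    for (int n = 0; n < 32; n++)
        if ((e->copyset & (1u << n)) && n != writer)
            printf("send INVALIDATE to P%d, wait for ack\n", n);
    e->copyset = 1u << writer;       /* only the writer's copy remains */
    e->owner   = writer;
}

int main(void)
{
    struct dir_entry e = { .owner = 0, .copyset = 1u << 0 };
    grant_read_copy(&e, 2);
    grant_read_copy(&e, 3);
    grant_write(&e, 2);              /* P2 upgrades to read-write */
    return 0;
}
```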
How do you propagate changes?
- Send the entire page
  - Easiest, but may be a lot of data
- Send differences
  - The local system must save the original and compute the differences
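The sketch below illustrates the second option by computing word-granularity differences against a saved copy of the page. The "twin" name follows TreadMarks-style systems and the 4 KB page size is an assumption; the slides only say "save the original and compute differences".

```c
/* Word-granularity diff of a dirty page against its saved original ("twin").
 * Sketch only: the page size and struct layout are illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_WORDS (4096 / sizeof(uint32_t))

struct word_diff { uint32_t index; uint32_t value; };

static size_t compute_diff(const uint32_t *twin, const uint32_t *page,
                           struct word_diff *out)
{
    size_t n = 0;
    for (size_t i = 0; i < PAGE_WORDS; i++)
        if (twin[i] != page[i])
            out[n++] = (struct word_diff){ (uint32_t)i, page[i] };
    return n;    /* only these entries are sent, not the whole page */
}

int main(void)
{
    static uint32_t page[PAGE_WORDS], twin[PAGE_WORDS];
    static struct word_diff diffs[PAGE_WORDS];

    memcpy(twin, page, sizeof(page));  /* save the original before writing */
    page[10]  = 42;                    /* local writes dirty two words     */
    page[700] = 7;

    size_t n = compute_diff(twin, page, diffs);
    printf("%zu of %zu words changed\n", n, (size_t)PAGE_WORDS);
    return 0;
}
```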
Home-based algorithms
- Home-based
  - A node (usually the first writer) is chosen to be the home of the page
  - On a write, a non-home node sends its changes to the home node
    - Other cached copies are invalidated
  - On a read, a non-home node gets the changes (or the whole page) from the home node
- Non-home-based
  - A node always contacts the directory to find the current owner (latest copy) and obtains the page from there
Consistency Model
- Defines when modifications to data may be seen at a given processor
- Defines how memory will appear to a programmer
  - Places restrictions on what values can be returned by a read of a memory location
Consistency Model
- Must be well understood
  - Determines how a programmer reasons about the correctness of a program
  - Determines what hardware and compiler optimizations may take place
Sequential Semantics
- Provided by most (uniprocessor) programming languages/systems
- Program order

"The result of any execution is the same as if the operations of all processors were executed in some sequential order and the operations of each individual processor appear in this sequence in the order specified by the program." (Lamport)
Sequential Semantics
- Requirements
  - All memory operations must execute one at a time
  - All operations of a single processor appear to execute in program order
  - Interleaving among processors is OK
Sequential semantics
[Figure: processors P0 through P4 all issuing operations to a single shared memory]
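A standard way to see what these requirements rule out is the litmus test below (not from the slides): under sequential consistency at least one thread must observe the other's write, so r1 == 0 and r2 == 0 can never happen together. Plain C accesses like these may be reordered by real compilers and CPUs, which is exactly why a DSM system has to enforce the model explicitly.

```c
/* Classic sequential-consistency litmus test (illustrative only; these
 * unsynchronized accesses are a data race in real C and may be reordered). */
#include <pthread.h>
#include <stdio.h>

int x = 0, y = 0;
int r1 = -1, r2 = -1;

static void *proc1(void *arg) { (void)arg; x = 1; r1 = y; return NULL; }
static void *proc2(void *arg) { (void)arg; y = 1; r2 = x; return NULL; }

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, proc1, NULL);
    pthread_create(&b, NULL, proc2, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Sequential consistency allows (1,1), (0,1), (1,0) but never (0,0). */
    printf("r1=%d r2=%d\n", r1, r2);
    return 0;
}
```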
Achieving sequential semantics
- The illusion is efficiently supported in uniprocessor systems
  - Execute operations in program order when they are to the same location or when one controls the execution of another
  - Otherwise, the compiler or hardware can reorder
- Compiler:
  - Register allocation, code motion, loop transformations, ...
- Hardware:
  - Pipelining, multiple issue, ...
Achieving sequential consistency
- A processor must ensure that its previous memory operation is complete before proceeding with the next one
- Program order requirement
  - Determining completion of write operations
    - Get an acknowledgement from the memory system
  - If caching is used
    - A write operation must send invalidate or update messages to all cached copies
    - ALL of these messages must be acknowledged
Achieving sequential consistency
- All writes to the same location must be visible in the same order to all processes
- Write atomicity requirement
  - The value of a write will not be returned by a read until all updates/invalidates are acknowledged
    - Hold off read requests until the write is complete
  - Totally ordered reliable multicast
Improving performance
- Break the rules to achieve better performance
  - The compiler and/or programmer should know what's going on!
- Goals:
  - Combat network latency
  - Reduce the number of network messages
- Relaxing sequential consistency
  - Weak consistency models
Relaxed (weak) consistency
- Relax program order between all operations to memory
  - Reads/writes to different memory locations can be reordered
- Consider:
  - An operation in a critical section (on shared data)
  - One process reading and writing
  - Nobody else accesses the data until the process leaves the critical section
- No need to propagate writes sequentially, or at all, until the process leaves the critical section
Synchronization variable (barrier)
- An operation for synchronizing memory
- All local writes get propagated
- All remote writes are brought in to the local processor
- Block until memory is synchronized
Consistency guarantee
- Accesses to synchronization variables are sequentially consistent
  - All processes see them in the same order
- No access to a synchronization variable can be performed until all previous writes have completed
- No read or write is permitted until all previous accesses to synchronization variables have been performed
  - Memory is updated during a sync
Problems with sync consistency
- Inefficiency
  - On a synchronization, the system cannot tell whether the process has finished its memory accesses or is about to start them
- The system must make sure that:
  - All locally initiated writes have completed
  - All remote writes have been acquired
Can we do better?
- Separate synchronization into two stages:
  1. Acquire access
     - Obtain valid copies of pages
  2. Release access
     - Send invalidations or updates for shared pages that were modified locally to the nodes that have copies

    acquire(R)    // start of critical section
    Do stuff
    release(R)    // end of critical section

Eager Release Consistency (ERC)
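As a sketch of what an eager release involves, the fragment below pushes invalidations for every page written in the critical section to all copyset nodes and waits for their acknowledgements before the release completes. The data layout and the printf message placeholders are illustrative.

```c
/* Eager release consistency, sketched: at release(), invalidate every remote
 * cached copy of every page written inside the critical section and wait for
 * acks before the lock can be handed to the next node. Names are illustrative. */
#include <stdint.h>
#include <stdio.h>

struct page_state {
    int      id;
    int      dirty;      /* written inside the critical section? */
    uint32_t copyset;    /* bit n set => node n caches this page */
};

static void release(struct page_state *pages, int npages, int self)
{
    for (int p = 0; p < npages; p++) {
        if (!pages[p].dirty) continue;
        for (int n = 0; n < 32; n++)
            if ((pages[p].copyset & (1u << n)) && n != self)
                printf("release: INVALIDATE page %d at P%d, wait for ack\n",
                       pages[p].id, n);
        pages[p].dirty = 0;
    }
    printf("release: all acks received, lock can be passed on\n");
}

int main(void)
{
    struct page_state pages[] = {
        { .id = 7, .dirty = 1, .copyset = 0x0D },  /* cached at P0, P2, P3 */
        { .id = 9, .dirty = 0, .copyset = 0x03 },
    };
    /* acquire(R); ... writes to page 7 ... */
    release(pages, 2, /*self=*/0);                 /* release(R) */
    return 0;
}
```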
Let's get lazy
- A release requires
  - Sending invalidations to copyset nodes
  - And waiting for all of them to acknowledge
- Instead, do not make modifications visible globally at release
- On release:
  - Send invalidations only to the directory, or send updates to the home node (owner of the page)
- On acquire: this is where modifications are propagated
  - Check with the directory to see whether a new copy of the page is needed
    - Chances are that not every node will need to do an acquire
- Reduces message traffic on releases

Lazy Release Consistency (LRC)
E.g.: Home-based Lazy Release Consistency
- At release
  - Diffs are computed
  - And sent to the owner (the home node)
  - The home node applies the diffs as soon as they arrive
- At acquire
  - A node requests the updated page from the home node
Finer granularity
- Release consistency
  - Synchronizes all data
  - No relation between a lock and the data it protects
- Use object granularity instead of page granularity
  - Each variable or group of variables can have a synchronization variable
  - Propagate only the writes performed in those sections
  - Cannot rely on the OS and MMU anymore
    - Need smart compilers

Entry Consistency
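A sketch of the entry-consistency idea: bind each synchronization variable to the data it guards, so an acquire only has to bring that object up to date. The binding structure and function names below are illustrative; real entry-consistency systems (e.g., Midway) relied on compiler support for such annotations.

```c
/* Entry consistency, sketched: each lock knows which bytes it protects, so
 * acquiring it only fetches/updates that object, not whole pages. The
 * "fetch latest version" step is a printf placeholder. */
#include <stddef.h>
#include <stdio.h>

struct guarded {
    const char *name;    /* name of the synchronization variable */
    void       *data;    /* the object it protects               */
    size_t      size;
};

static void ec_acquire(struct guarded *g)
{
    /* Only g->data needs to be made current, not every shared page. */
    printf("acquire(%s): fetch the latest %zu bytes it protects\n",
           g->name, g->size);
}

static void ec_release(struct guarded *g)
{
    printf("release(%s): propagate local writes to that object only\n", g->name);
}

int main(void)
{
    int counter = 0;
    struct guarded lock_counter = { "lock_counter", &counter, sizeof(counter) };

    ec_acquire(&lock_counter);
    counter++;                        /* the only data this lock governs */
    ec_release(&lock_counter);
    return 0;
}
```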
The end.