Chapter 4: Distributed Shared Memory (DSM)


  1. Chapter IV: Distributed Shared Memory (DSM)
  2. Introduction
     - There are two basic IPC paradigms in a distributed operating system: message passing and RPC.
     - Distributed Shared Memory (DSM) is a third paradigm:
       - DSM provides the processes in a system with a shared address space.
       - Processes use this address space in the same way they use local memory.
       - The primitives are (a sketch follows this slide):
         - data = Read(address)
         - Write(address, data)
       - DSM is natural in a tightly coupled system.
       - In loosely coupled systems there is no physically shared memory available to support the DSM paradigm.
     - The term DSM refers to the shared-memory paradigm applied to loosely coupled distributed-memory systems (virtual memory).
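To make the programming model concrete, here is a minimal, hypothetical sketch of the two primitives as seen by an application process. The class and method names are illustrative only; they do not come from any real DSM library.

```python
# Minimal sketch of the DSM programming interface: processes use the
# shared address space exactly like local memory. All names are illustrative.

class DSM:
    """Toy stand-in for a shared address space."""

    def __init__(self, size: int):
        self._memory = bytearray(size)  # stands in for the distributed store

    def read(self, address: int) -> int:
        return self._memory[address]    # data = Read(address)

    def write(self, address: int, data: int) -> None:
        self._memory[address] = data    # Write(address, data)

dsm = DSM(size=4096)
dsm.write(100, 42)
assert dsm.read(100) == 42  # same usage pattern as local memory
```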
  3. DSM architecture (block diagram)
     [Figure: Node-1 through Node-n, each with one or more CPUs, a local memory, and a memory-mapping manager, connected by a communication network; together they present a single distributed shared (virtual) memory.]
  4. DSM architecture
     - DSM provides a virtual address space shared among processes on loosely coupled processors.
     - DSM is an abstraction that integrates the local memory of different machines in a network environment into a single logical entity shared by cooperating processes executing on multiple sites.
     - The shared memory exists only virtually, hence DSM is also called distributed shared virtual memory (DSVM).
     - Each node of the system has one or more CPUs and a memory unit.
     - The nodes are connected by a high-speed communication network.
     - The DSM abstraction presents a large shared-memory space to the processors of all nodes.
     - A software memory-mapping manager routine in each node maps the local memory onto the shared virtual memory.
     - To facilitate the mapping operation, the shared-memory space is partitioned into blocks.
     - Data caching is used to reduce network latency: the main memory of each node caches blocks of the shared-memory space.
  5. DSM operation
     - When a process on a node wants to access data from a memory block of the shared-memory space:
       - The local memory-mapping manager takes charge of the request.
       - If the block containing the data is resident in local memory, the request is satisfied immediately.
       - Otherwise, a network block fault is generated and control passes to the operating system.
       - The OS sends a message to the node holding the desired block to fetch it.
       - The missing block migrates from the remote node to the requesting process's node, and the OS maps it into the application's address space.
     - Data blocks keep migrating from node to node on demand, but no communication is visible to the user processes (a sketch of this access path follows).
     - Copies of data cached in local memory reduce network traffic for subsequent accesses.
     - DSM allows replication and migration of data blocks.
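The access path above can be sketched as follows. This is a single-process simulation, not a real protocol: `remote_nodes`, `local_blocks`, and the helper names are all assumptions made for illustration.

```python
BLOCK_SIZE = 4096                               # assumed block granularity

remote_nodes = {"node-2": {7: b"block seven"}}  # node-2 currently owns block 7
local_blocks = {}                               # blocks mapped on this node

def network_block_fault(block_id):
    """OS path: locate the owner node, migrate the block, map it locally."""
    for node, blocks in remote_nodes.items():
        if block_id in blocks:
            block = blocks.pop(block_id)    # the block migrates, not copies
            local_blocks[block_id] = block  # map into the address space
            return block
    raise MemoryError(f"block {block_id} not found on any node")

def read(address):
    block_id = address // BLOCK_SIZE
    if block_id in local_blocks:            # mapping manager: local hit
        return local_blocks[block_id]
    return network_block_fault(block_id)    # miss: fault, OS fetches block

print(read(7 * BLOCK_SIZE))  # first access faults and migrates block 7
print(read(7 * BLOCK_SIZE))  # second access is satisfied locally
```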
  6. Design and implementation issues of DSM
     - Granularity:
       - Refers to the block size of a DSM system: the unit of data sharing and of data transfer across the network.
       - The choice of block size determines both the degree of parallelism exploited and the amount of network traffic generated by block faults.
     - Structure of the shared-memory space:
       - Refers to the layout of the shared data in memory.
       - Depends on the type of application the DSM system is intended to support.
     - Memory coherence and access synchronization:
       - Because replicas of shared data may exist at several nodes simultaneously, keeping all copies coherent/consistent is difficult.
       - Concurrent access to shared data requires synchronization primitives such as semaphores, event counts, and locks.
     - Data location and access:
       - To share data in DSM, the system must be able to locate and retrieve the data accessed by a user process.
  7. Design and implementation issues (continued)
     - Replacement strategy:
       - When the local memory of a node is full, an incoming shared data block must replace a block already cached there.
       - A cache replacement strategy is therefore necessary in DSM.
     - Thrashing:
       - In a DSM system, data blocks migrate between nodes on demand.
       - If two nodes compete for write access to a single data item, the corresponding data block is transferred back and forth at such a high rate that no real work gets done.
       - This situation is called thrashing; a DSM system must use a policy to avoid it.
     - Heterogeneity:
       - The DSM system should be designed to handle heterogeneity, so that it functions properly across machines with different architectures.
  8. Granularity
     - Factors influencing block-size selection:
     - Paging overhead:
       - Shared-memory programs exhibit locality of reference: a process is likely to access a large region of its shared address space in a small amount of time.
       - Paging overhead is therefore lower for large block sizes than for small ones.
     - Directory size:
       - The larger the block size, the smaller the directory, which reduces directory-management overhead.
     - Thrashing:
       - The thrashing problem can occur with any block size.
       - It is more likely with large block sizes, since different regions of the same block may be updated by processes on different nodes.
     - False sharing:
       - Occurs when two different processes access two unrelated variables that happen to reside in the same data block.
       - In this situation, even though the original variables are not shared, the data block appears to be shared by the two processes.
  9. Granularity (continued)
     - The larger the block size, the higher the probability of false sharing.
     - False sharing of a block may in turn lead to thrashing.
     - Using page size as block size:
       - A suitable compromise in granularity, adopted in many DSM systems, is to use the typical page size of a virtual-memory system as the block size.
     - Advantages:
       - Existing page-fault schemes can be reused.
       - The memory coherence problem can be handled at page granularity.
       - Access control integrates readily into the functionality of the memory management unit.
       - As long as a page fits into a network packet, the page-sized block does not impose undue communication overhead.
       - Memory contention is reduced.
     [Figure: false sharing - P1 and P2 access unrelated data that happen to lie in the same data block.]
  10. Structure of the shared-memory space
     - Defines the abstract view of the shared-memory space presented to the application programmers of a DSM system.
     - The space can be viewed as storage for data objects.
     - Three approaches are used for structuring the shared-memory space:
     - No structuring:
       - The space is simply a linear array of words.
       - A suitable page size is chosen as the unit of sharing, and a fixed grain size is used for all applications.
       - Simple, and easy to design and implement with any data structure.
     - Structuring by data type:
       - The space is structured as a collection of data objects or variables.
       - The granularity is the size of an object or variable.
       - Complicates the design and implementation.
     - Structuring as a database:
       - The shared-memory space is ordered as an associative memory: memory is addressed by content rather than by name or address (a tuple space; see the sketch after this slide).
       - Processes select tuples by specifying the number of their fields and their values or types.
       - Access to shared data is non-transparent.
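A toy, Linda-style tuple space illustrates the "structure as a database" approach. The primitive names out/rd follow the Linda tradition; everything here is a local simulation, not a distributed implementation.

```python
# Toy tuple space: memory addressed by content rather than by address.

tuple_space = []

def out(t):
    """Deposit a tuple into the space."""
    tuple_space.append(t)

def rd(pattern):
    """Return a tuple matching the pattern without removing it.
    A pattern field that is a type matches any value of that type."""
    for t in tuple_space:
        if len(t) == len(pattern) and all(
            f == p or (isinstance(p, type) and isinstance(f, p))
            for f, p in zip(t, pattern)
        ):
            return t
    return None

out(("temperature", "room-1", 21.5))
# Select by the number of fields plus their values or types:
print(rd(("temperature", "room-1", float)))  # -> ('temperature', 'room-1', 21.5)
```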
  11. Consistency models
     - A consistency model defines the degree of consistency that has to be maintained for the shared-memory data, as seen by a set of applications.
     - 1. Strict consistency model:
       - The strongest form of memory coherence, with the most stringent consistency requirement.
       - The value returned by a read operation on a memory address is always the same as the value written by the most recent write operation to that address, irrespective of the locations of the processes performing the read/write operations.
       - All write operations are visible to all processes.
     - 2. Sequential consistency model:
       - All processes see the same order of all memory access operations on the shared memory.
       - No new memory operation is started until all previous ones have completed.
       - Uses single-copy (one-copy) semantics.
       - An acceptable consistency model for most distributed applications.
  12. Consistency models (continued)
     - 3. Causal consistency model:
       - A memory reference operation (read or write) A is said to be potentially causally related to an operation B if A might have been influenced in any way by B (e.g., a write performed after a read may be influenced by that read).
       - Write operations that are not potentially causally related (concurrent writes w1, w2) may be seen in different orders by different processes.
       - The system must keep track of which memory reference operations are dependent on which other memory reference operations.
     - 4. Pipelined Random-Access Memory (PRAM) consistency model:
       - Ensures that all write operations performed by a single process are seen by all other processes in the order in which they were performed, as if all the write operations of a single process were in a pipeline.
       - Can be implemented by simply sequencing the write operations performed at each node independently.
       - All write operations of a single process are pipelined.
       - Simple, easy to implement, and has good performance.
  13. Consistency models (continued)
     - 5. Processor consistency model:
       - Ensures that all write operations performed on the same memory location (no matter by which process they are performed) are seen by all processes in the same order.
       - Enhances memory coherence: for any memory location, all processes agree on the same order of all write operations to that location.
     - 6. Weak consistency model:
       - It is not necessary to make the change made by every individual write operation visible to the other processes.
       - The results of several write operations can be combined and sent to other processes only when they are needed.
       - Isolated accesses to shared variables are rare, so propagating and tracking every change as it happens is wasteful.
       - A special variable called a synchronization variable (SV), visible to all processes, is used for memory synchronization.
     - Conditions for accessing a synchronization variable (SV):
       - All accesses to synchronization variables must obey sequential consistency.
       - All previous write operations must be completed before an access to an SV.
       - All previous accesses to SVs must be completed before an access to any other variable.
  14. Consistency models (continued)
     - 7. Release consistency model:
       - All changes made to the memory by a process are propagated to the other nodes.
       - All changes made to the memory by other processes are propagated from the other nodes to the process's node.
       - Uses two synchronization variables, acquire and release (a sketch of the pattern follows this slide):
         - Acquire: used by a process to tell the system it is entering a critical section.
         - Release: used by a process to tell the system it is exiting a critical section.
       - A release consistency model may also use a synchronization mechanism based on barriers: a barrier defines the end of a phase of execution of a group of concurrent processes, before any process is allowed to proceed.
     - Conditions for the release consistency model:
       - All accesses to acquire and release must obey processor consistency.
       - All previous acquires performed by a process must be completed before the process performs an ordinary data access.
       - All previous data accesses by a process must be completed before the process performs a release.
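The acquire/release pattern can be sketched as below. `threading.Lock` models the synchronization variable within one process; `pull_updates` and `propagate_updates` are hypothetical placeholders for the DSM's propagation machinery.

```python
import threading

lock = threading.Lock()   # models the acquire/release synchronization variable
shared = {"x": 0}

def pull_updates():
    """Placeholder: changes made by other processes become visible here."""

def propagate_updates():
    """Placeholder: our buffered changes are pushed to the other nodes."""

def critical_update():
    lock.acquire()          # acquire: announce entry to the critical section
    pull_updates()
    shared["x"] += 1        # ordinary data accesses, buffered locally
    propagate_updates()     # all data accesses complete before the release
    lock.release()          # release: announce exit from the critical section

critical_update()
print(shared["x"])  # -> 1
```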
  15. Implementing the sequential consistency model
     - The most commonly used consistency model in DSM.
     - The protocol used depends on whether the DSM system allows replication and/or migration of shared-memory blocks.
     - The strategies are:
       - Non-Replicated, Non-Migrating Blocks (NRNMBs)
       - Non-Replicated, Migrating Blocks (NRMBs)
       - Replicated, Migrating Blocks (RMBs)
       - Replicated, Non-Migrating Blocks (RNMBs)
     - 1. Non-Replicated, Non-Migrating Blocks (NRNMBs):
       - The simplest strategy.
       - Each block of the shared memory has a single copy whose location is always fixed.
       - All access requests for a block from any node are sent to the owner node of the block, which holds the only copy.
       - On receiving a request from a client node, the memory-mapping manager and OS of the owner node return a response to the client.
       - Serializing data accesses at the owner creates a bottleneck, and parallelism is not possible.
       - Data locating is trivial: there is a single copy of each block in the system, and the location of a block never changes.
     [Figure: the client node sends a request to the owner node; the owner node returns a response.]
  16. 2. Non-Replicated, Migrating Blocks (NRMBs)
     - Each block of the shared memory has a single copy in the entire system, but each access to a block causes it to migrate from its current node to the node from which it is accessed.
     - The owner node of a block changes as soon as the block migrates to a new node.
     - Exploits high locality of reference, but is prone to thrashing, and parallelism is not possible.
     - Data locating in the NRMB strategy (there is a single copy of each block, and its location changes dynamically):
       - Broadcasting: each node maintains an owned-blocks table containing an entry for each block for which the node is the current owner. On a fault, the faulting node broadcasts a request, and the current owner responds by migrating the block.
     [Figure: the client node sends a block request to the owner node, which migrates the block; each node's owned-blocks table maps block addresses to the current owner.]
  17. Centralized-server algorithm
     - A centralized server maintains a block table containing the location information for all blocks in the shared-memory space.
     - On a fault, the faulting node sends a request to the server; the server extracts the block's location information and forwards the request to the current owner node, which transfers the block (see the sketch after this slide).
     - Failure of the server causes the whole DSM to stop functioning.
     - The location and identity of the centralized server are well known to all nodes.
     [Figure: the server's block table has an entry for each block, mapping the block address to its owner node, which changes dynamically.]
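A toy version of the server's block table, under the assumption that ownership moves to the requester when the block migrates (the NRMB case). All names are illustrative.

```python
class BlockServer:
    """Single server holding the current owner of every block."""

    def __init__(self):
        self.owner = {}                       # block address -> owner node

    def locate_and_migrate(self, block_id, requester):
        old_owner = self.owner[block_id]
        self.owner[block_id] = requester      # ownership moves with the block
        return old_owner                      # requester fetches from here

server = BlockServer()
server.owner[7] = "node-2"
print(server.locate_and_migrate(7, "node-1"))  # -> node-2
print(server.owner[7])                         # -> node-1 (new owner)
```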
  18. Fixed distributed-server algorithm
     - A direct extension of the centralized-server scheme.
     - It overcomes the problems of the centralized-server scheme by distributing the role of the server.
     - A block manager runs on each of several nodes, and each block manager is given a predetermined subset of the data blocks to manage; a mapping function maps each block to its manager (see the sketch after this slide).
     [Figure: each of Node-1 ... Node-m runs a block manager whose block table maps its subset of block addresses to the dynamically changing owner nodes.]
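The mapping function can be as simple as a modulo hash, sketched below. The modulo choice is an assumption; any deterministic function that every node computes identically will do.

```python
managers = ["node-1", "node-2", "node-3"]   # nodes running block managers

def manager_for(block_id: int) -> str:
    """Map a block to its fixed block manager."""
    return managers[block_id % len(managers)]

# Every node computes the same manager for a given block, with no lookup
# traffic and no single point of failure:
print(manager_for(7))   # -> node-2
print(manager_for(9))   # -> node-1
```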
  19. Dynamic distributed-server algorithm
     - Does not use any block manager; instead, each node attempts to keep track of the ownership information of all blocks.
     - Each node has a block table containing, for every block, a probable-owner field.
     - This field gives the node a hint about the location of the block's owner; if the hint is wrong, the request is forwarded along the chain of probable owners until the true owner is found (see the sketch after this slide).
     [Figure: each of Node-1 ... Node-m holds a block table with an entry for each block, mapping the block address to its probable owner, which changes dynamically.]
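The probable-owner chase can be sketched as follows; the hint tables and the convention that the true owner points to itself are assumptions made for illustration.

```python
# Per-node hint tables for block 7; node-3 is the true owner (points to self).
probable_owner = {
    "node-1": {7: "node-2"},
    "node-2": {7: "node-3"},
    "node-3": {7: "node-3"},
}

def find_owner(start_node, block_id):
    """Follow the chain of hints to the true owner, then update the hints."""
    path, node = [], start_node
    while probable_owner[node][block_id] != node:
        path.append(node)
        node = probable_owner[node][block_id]   # follow the hint
    for visited in path:                        # shorten future searches
        probable_owner[visited][block_id] = node
    return node

print(find_owner("node-1", 7))      # -> node-3, after following two hints
print(probable_owner["node-1"][7])  # -> node-3 (hint now points directly)
```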
  20. 3. Replicated, Migrating Blocks (RMBs)
     - To increase parallelism, virtually all DSM systems replicate blocks.
     - With replicated blocks, read operations can be carried out at multiple nodes in parallel, so the average cost of a read operation is reduced.
     - However, replication tends to increase the cost of write operations, because before a write to a block all of its replicas must be invalidated or updated to maintain consistency.
     - Replication also complicates the memory coherence protocol.
     - There are two basic protocols for ensuring sequential consistency with replicated blocks:
     - 1. Write-invalidate (figure and sketch below):
       - All copies of a piece of data except one are invalidated before a write can be performed on it.
       - When a write fault occurs, the fault handler copies the accessed block from one of the block's current nodes to its own node and invalidates all other copies by sending an invalidate message to them.
       - The write operation is then performed on the local copy.
       - After the write, the faulting node holds the only valid (modified) copy of the block; the block may again be replicated to other nodes on subsequent accesses.
  21. Write-invalidate
     [Figure: the client node (1) requests the block, (2) receives a replica of it, and (3) sends invalidate messages to Node-1 ... Node-m, the nodes holding valid copies before the write; after the write, the client node has the only valid copy of the data block.]
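A toy rendering of write-invalidate: every other copy is discarded before the write, so the writer ends up with the only valid copy. The data structures are illustrative.

```python
copies = {"node-1": 5, "node-2": 5, "node-3": 5}  # block value at each holder

def write_invalidate(writer, new_value):
    for node in [n for n in copies if n != writer]:
        del copies[node]           # send "invalidate block" to other holders
    copies[writer] = new_value     # perform the write on the writer's copy

write_invalidate("node-1", 9)
print(copies)  # -> {'node-1': 9}: the writer holds the only valid copy
```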
  22. 2. Write-update
     - A write operation is carried out by updating all copies of the data on which the write is performed.
     - When a write fault occurs at a node, the fault handler copies the accessed block from one of the block's current nodes to its own node.
     - It performs the write operation on the local copy and then updates all other copies of the block by sending the address of the modified memory location and its new value to the nodes holding a copy.
     - The write operation completes only after all copies of the block have been successfully updated.
     - After a write completes, all nodes that had a copy of the block before the write also have a valid copy after it (figure and sketch below).
  23. Write-update
     [Figure: the client node (1) requests the block, (2) receives a replica of it, and (3) sends update messages to Node-1 ... Node-m; the nodes holding valid copies before the write still hold valid (updated) copies after it.]
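The contrasting write-update behaviour, in the same toy model: the new value is pushed to every replica, so all copies remain valid after the write.

```python
copies = {"node-1": 5, "node-2": 5, "node-3": 5}  # block value at each holder

def write_update(writer, new_value):
    copies[writer] = new_value     # perform the write on the local copy
    for node in copies:            # send the new value to every other copy
        copies[node] = new_value
    # The write completes only after all copies have been updated.

write_update("node-1", 9)
print(copies)  # -> all three nodes still hold a valid, updated copy
```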
  24. Global sequencing mechanism
     - A global sequencer is used to sequence the write operations of all nodes (sketch below).
     - The intended modification of each write operation is first sent to the global sequencer.
     - The sequencer multicasts the sequenced modification to all the nodes where a replica of the data block is located.
     [Figure: the client node sends a modification to the global sequencer, which multicasts the sequenced modification to nodes 1 ... n holding replicas of the data.]
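A sketch of global sequencing, assuming a single in-process counter stands in for the sequencer and lists stand in for the replica nodes. Because every node applies modifications in sequence-number order, all replicas see the same total order of writes.

```python
import itertools

seq = itertools.count(1)                     # the global sequencer's counter
replicas = {"node-1": [], "node-2": [], "node-3": []}

def sequenced_write(modification):
    stamped = (next(seq), modification)      # sequencer stamps the write ...
    for node in replicas:                    # ... and multicasts it
        replicas[node].append(stamped)

sequenced_write("x = 1")
sequenced_write("x = 2")
print(replicas["node-2"])  # same order everywhere: [(1, 'x = 1'), (2, 'x = 2')]
```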
  25. Data locating in the RMB strategy
     - The write-invalidate protocol raises two issues:
       - Locating the owner of a block.
       - Keeping track of the nodes that currently hold a valid copy of the block.
     - 1. Broadcasting:
       - Each node maintains an owned-blocks table with an entry for each block for which the node is the owner.
       - Each entry has a copy-set field containing the list of nodes that currently hold a valid copy of the corresponding block.
     [Figure: Node-1 ... Node-n each hold an owned-blocks table with an entry per owned block, mapping the (dynamic) block address to its (dynamic) copy-set.]
  26. 2. Centralized-server algorithm
     - Each entry of the block table, managed by the centralized server, has an owner-node field indicating the current owner of the block and a copy-set field listing the nodes that hold a valid copy of the block.
     [Figure: the server's block table has an entry per block: block address (fixed), owner node (dynamic), copy-set (dynamic); Node-1 ... Node-n direct their requests to the server.]
  27. 3. Fixed distributed-server algorithm
     - The role of the centralized server is distributed to several servers.
     - A block manager runs on each of several nodes.
     - Each block manager manages a predetermined subset of the blocks, and a mapping function maps each block to a particular block manager and its corresponding node.
     [Figure: Node-1 ... Node-n each run a block manager whose block table holds entries for a fixed subset of all blocks: block address (fixed), owner node (dynamic), copy-set (dynamic).]
  28. 4. Dynamic distributed-server algorithm
     - Each node has a block table containing an entry for every block.
     - Each entry has a probable-owner field that gives the node a hint about the location of the block's owner.
     - The entry at a block's true owner additionally has a copy-set field listing the nodes that hold a valid copy of the block.
     [Figure: Node-1 ... Node-n each hold a block table with an entry per block: block address (fixed), probable owner (dynamic), and, in the true owner's entry, the copy-set (dynamic).]
  29. 4. Replicated, Non-Migrating Blocks (RNMBs)
     - In this strategy, a shared-memory block may be replicated at multiple nodes of the system.
     - The locations of the replicas are fixed.
     - A read or write access to a memory address is carried out by sending the request to one of the nodes holding a replica of the block containing that address.
     - The write-update protocol is used.
     - Sequential consistency is ensured by using a global sequencer.
     - Assignment: release-consistency DSM system - the MUNIN system.
  30. Replacement strategies in DSM
     - A DSM system allows shared-memory blocks to be migrated/replicated, so when a node's memory is full, space must be made for an incoming block. Two questions arise:
       - Which block should be replaced to make space for the new block?
       - Where should the replaced block be placed?
     - Which block to replace?
     - Usage-based versus non-usage-based algorithms:
       - Usage-based algorithms keep track of the history of usage of a cache line or page and use this information to make replacement decisions; because recently used data is likely to be reused, this usually yields better replacement decisions. Example: LRU (Least Recently Used).
       - Non-usage-based algorithms do not take the usage record of cache lines into account when replacing. Examples: FIFO and Rand (random replacement).
  31. Fixed space versus variable space
     - Fixed space: the cache size is fixed.
     - Variable space: the cache size changes dynamically.
     - Replacement in a fixed-space algorithm simply involves selecting a specific cache line.
     - Replacement in a variable-space algorithm requires a swap-out.
     - Usage-based, fixed-space algorithms are the most suitable for DSM.
     - In DSM, each memory block of a node is classified into one of the following five types:
       - Unused: a free memory block.
       - Nil: an invalidated block.
       - Read-only: a block with read access rights only.
       - Read-owned: a block owned by the node, with read access rights.
       - Writable: a block with write access permission.
  32. Replacement priority
     - Unused and nil blocks have the highest replacement priority.
     - Read-only blocks have the next priority: a copy of a read-only block is available with its owner, so it is possible simply to discard it.
     - Read-owned and writable blocks for which a replica exists on some other node come next: it is easy to pass ownership to one of the replica nodes.
     - Read-owned and writable blocks with no replica have the lowest priority, because replacing them involves transferring the block's ownership as well as the block itself (a sketch of this priority rule follows this slide).
     - Where to place a replaced block?
       - Once a memory block has been selected for replacement, the useful information it contains must not be lost.
       - Using secondary store: transfer the block to a local disk. Advantages: simple, the block is easily retrieved on the next access, and no network access is required.
       - Using the memory space of other nodes: keep track of free memory space at all nodes and transfer the replaced block to the memory of a node with available space. Disadvantage: requires maintaining a table of the free memory space of all nodes.
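The priority rule can be captured in a few lines. Block states and the has_replica flag are plain data here; a lower number means the block is replaced sooner.

```python
def replacement_priority(state: str, has_replica: bool) -> int:
    """Lower value = better replacement candidate."""
    if state in ("unused", "nil"):
        return 0   # highest priority: nothing of value is lost
    if state == "read-only":
        return 1   # the owner still has a copy; just discard this one
    if state in ("read-owned", "writable") and has_replica:
        return 2   # pass ownership to one of the replica nodes
    return 3       # sole copy: ownership and data must both be moved

blocks = [("b1", "read-only", False), ("b2", "writable", True),
          ("b3", "nil", False), ("b4", "writable", False)]
victim = min(blocks, key=lambda b: replacement_priority(b[1], b[2]))
print(victim[0])   # -> b3: the nil block is chosen for replacement
```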
  33. Thrashing
     - Thrashing occurs when the system spends a large amount of time transferring shared data blocks from one node to another, compared to the time spent doing the useful work of executing application processes.
     - It may occur in the following situations:
       - The system allows data blocks to migrate from one node to another on demand.
       - Interleaved data accesses made by two or more nodes cause a data block to move back and forth between nodes in quick succession (the ping-pong effect).
       - Blocks with read-only permissions are repeatedly invalidated soon after they are replicated.
     - Thrashing degrades system performance considerably and reflects poor locality of reference.
     - Methods for solving the thrashing problem:
       - Providing application-controlled locks: locking data prevents other nodes from accessing it for a short period of time.
  34. Nailing a block to a node for a minimum amount of time
     - Disallow a block from being taken away from a node until a minimum amount of time t elapses after its allocation to that node (a sketch follows this slide).
     - The time t may be fixed statically or tuned dynamically based on the observed access patterns.
     - If a process is accessing a block for writing, other processes are prevented from accessing the block until the time t elapses.
     - Dynamic tuning of t is preferable because it reduces processor idle time.
     - The MMU's reference bits can be used to help choose t.
     - t can also depend on the length of the queue of processes waiting to access the block.
     - Tailoring the coherence algorithm to the shared-data usage patterns:
       - Use different coherence protocols for shared data with different usage patterns.
       - For example, use a special coherence protocol for write-shared variables, which avoids thrashing.
     - Complete transparency of the DSM is compromised while minimizing thrashing: application-controlled locks make the DSM non-transparent.
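Nailing can be sketched with timestamps: migration requests for a block are refused until t has elapsed since the block arrived. The fixed value of t below is an assumption; the slide notes it can be tuned dynamically.

```python
import time

T = 0.5        # minimum residence time 't' (assumed fixed here; tunable)
arrival = {}   # block id -> time the block arrived at this node

def on_block_arrival(block_id):
    arrival[block_id] = time.monotonic()

def may_migrate(block_id):
    """A block may leave only after 't' seconds of residence."""
    return time.monotonic() - arrival[block_id] >= T

on_block_arrival(7)
print(may_migrate(7))  # -> False: the block is nailed to this node
time.sleep(T)
print(may_migrate(7))  # -> True: 't' has elapsed; migration is allowed
```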
