This document discusses NFS performance issues related to caching and consistency. It covers several approaches to addressing them:
1. Server-side caching improves I/O performance at the server, but multiple clients can still experience delays.
2. Client-side caching addresses delays from RPC round trips, but can cause stale-data issues if clients have inconsistent views of the data.
3. Synchronous writing solves the problem of modified data lingering unwritten in a client's cache, but limits clients to one write per RPC round trip, hurting performance. Delayed and asynchronous writing models aim to improve performance but introduce crash-recovery challenges.
BITS Pilani, Pilani Campus
NFS Performance
• Every request by a client may be delayed by:
1. RPC Round-trip
2. I/O Wait at the server
– which may be serving multiple clients
• Server-side Caching addresses the second issue.
– Server may still be serving multiple clients
– But the server’s I/O rate will be improved, and so should the client’s wait time
• Client-side Caching addresses the first issue.
– But caching may introduce coherence issues:
• Clients may be accessing stale data
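The coherence problem above can be sketched as a toy model (the class and variable names here are illustrative, not part of NFS): two clients cache reads from one shared server, and neither revalidates its cache, so one client keeps serving stale data after the other writes.

```python
class Server:
    """Toy NFS-like server: holds the authoritative copy of each block."""
    def __init__(self):
        self.blocks = {}

    def read(self, key):
        return self.blocks.get(key)

    def write(self, key, data):
        self.blocks[key] = data

class CachingClient:
    """Client that caches reads and never revalidates (no coherence)."""
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def read(self, key):
        if key not in self.cache:        # only the first read pays the RPC RTT
            self.cache[key] = self.server.read(key)
        return self.cache[key]

    def write(self, key, data):
        self.cache[key] = data
        self.server.write(key, data)     # write-through to the server

server = Server()
a, b = CachingClient(server), CachingClient(server)

a.write("blk0", "v1")
print(b.read("blk0"))   # fetched from the server ("v1"), then cached by b
a.write("blk0", "v2")
print(b.read("blk0"))   # still "v1": b serves stale data from its cache
```

The second read is fast (no RPC) precisely because it never asks the server, which is the stale-data trade-off the slide describes.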
NFS Performance – Inconsistent Data
• Scenario:
A first client writes data that is later read by a second client.
• Two main ways for stale data to be read:
1. Second client has stale data in its cache and does not know that modified data are available.
2. First client has modified data in its cache but has not written those data back to the server.
• Synchronous writing solves the second problem.
– It also results in behavior that is close to the local filesystem.
– But clients are restricted to one write per RPC RTT.
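A minimal sketch of the synchronous-write restriction, with an assumed `RTT` constant standing in for the network round trip: each write blocks for one full round trip before the next write can begin, so N writes cost at least N round trips.

```python
import time

RTT = 0.01  # assumed RPC round-trip time, in seconds (illustrative)

def sync_write(server_store, key, data):
    """Synchronous write: does not return until the server has
    acknowledged the data, i.e. one write per RPC round trip."""
    time.sleep(RTT)               # model the network round trip
    server_store[key] = data      # data is durable at the server on return

store = {}
start = time.monotonic()
for i in range(5):
    sync_write(store, f"blk{i}", b"data")
elapsed = time.monotonic() - start

# Five writes cost at least five round trips.
assert elapsed >= 5 * RTT
```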
NFS Performance Caching[1]
• Delayed writing model:
– Write request returns as soon as data are cached by the client
• Pros:
– The following can now be bundled into a single request to the server (i.e. the last one):
• multiple writes to the same blocks,
• file deletion or file truncation shortly after write(s)
• Cons:
– Client crash may result in loss of data
– Server must notify a client holding a cached copy
• that other client(s) want to read/write the file held by that client.
• This introduces state into the implementation
– Error propagation to the client may be problematic:
• e.g. “Out of space” error
• e.g. client process exiting before error notification
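The bundling benefit of delayed writing can be sketched as follows (the `DelayedWriteClient` class and its methods are hypothetical, not an NFS API): repeated writes to the same block coalesce in the client cache, and a deletion shortly after a write cancels the pending write entirely, so only one request reaches the server at flush time.

```python
class DelayedWriteClient:
    """Delayed writing: write() returns once data is cached locally;
    dirty blocks are pushed to the server later in one bundled flush."""
    def __init__(self, server_store):
        self.server = server_store
        self.dirty = {}              # block -> latest data (earlier writes coalesce)

    def write(self, key, data):
        self.dirty[key] = data       # returns immediately; no RPC yet

    def delete(self, keys):
        for k in keys:               # deletion shortly after write: the
            self.dirty.pop(k, None)  # pending writes are simply dropped

    def flush(self):
        rpcs = len(self.dirty)       # one request per distinct dirty block
        self.server.update(self.dirty)
        self.dirty.clear()
        return rpcs

store = {}
c = DelayedWriteClient(store)
c.write("blk0", "v1")
c.write("blk0", "v2")    # overwrites the cached copy; "v1" never hits the wire
c.write("blk1", "tmp")
c.delete(["blk1"])       # deleted before flush: its write is cancelled
assert c.flush() == 1 and store == {"blk0": "v2"}
```

The con is equally visible in the sketch: if the client crashes before `flush()`, everything in `dirty` is lost.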
NFS Performance Caching[2]
• Asynchronous writing model:
– As soon as data are cached by the client, a write to the server is initiated, and then the write request returns.
• Variants:
– Write-on-close: the flush to the server happens when the file is closed
• Delays are only deferred, not eliminated
– Read-sharing only (e.g. the Sprite file system, a Unix-like distributed file system)
• Cache Verification model
– Client performs cache verification on access
• RPC RTT delays
• Callback model
– Server keeps track of cached copies and notifies them on update
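A sketch of the cache-verification model, assuming the server exposes a per-block version number (an illustrative stand-in for the attribute timestamps a real NFS server returns): every access pays one validation RPC, and a version mismatch triggers a refetch.

```python
class VersionedServer:
    """Toy server that tracks a version number per block for validation."""
    def __init__(self):
        self.blocks, self.versions = {}, {}

    def write(self, key, data):
        self.blocks[key] = data
        self.versions[key] = self.versions.get(key, 0) + 1

    def get_version(self, key):     # the validation RPC (one RTT)
        return self.versions.get(key, 0)

    def read(self, key):
        return self.blocks.get(key), self.versions.get(key, 0)

class VerifyingClient:
    """Cache-verification model: every access first asks the server
    whether the cached copy is still current."""
    def __init__(self, server):
        self.server = server
        self.cache = {}             # key -> (data, version)

    def read(self, key):
        cached = self.cache.get(key)
        current = self.server.get_version(key)      # RPC RTT on EVERY access
        if cached is None or cached[1] != current:
            self.cache[key] = self.server.read(key) # refetch modified data
        return self.cache[key][0]

srv = VersionedServer()
srv.write("blk0", "v1")
cl = VerifyingClient(srv)
assert cl.read("blk0") == "v1"
srv.write("blk0", "v2")
assert cl.read("blk0") == "v2"   # verification catches the update
```

Contrast with the callback model: there the `get_version` round trip disappears, but the server must remember which clients cache which blocks, introducing state.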
NFS Performance Caching[3]
• Leasing model:
– Leases are issued for time intervals.
– As long as the lease holds, the server will call back on update.
– When the lease expires, the client must verify its cache contents and/or obtain a new lease.
• Requires much less server memory than permanent callbacks and reduces traffic.
• Read-caching and write-caching may be given separate leases.
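The leasing model can be sketched as below (`LEASE_SECS`, the class, and its methods are illustrative): reads within the lease interval are served from the cache with no RPC, because the server has promised to call back on update, and the first read after expiry pays one RPC to revalidate and renew the lease.

```python
import time

LEASE_SECS = 0.05   # assumed lease duration (illustrative)

class LeasingClient:
    """Leasing model: while the lease is valid the client trusts its
    cache; once the lease expires it must revalidate and renew."""
    def __init__(self, server_read):
        self.server_read = server_read
        self.cache = {}      # key -> (data, lease_expiry)
        self.rpcs = 0        # count of round trips to the server

    def read(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self.cache.get(key)
        if entry is None or now >= entry[1]:    # no lease, or lease expired
            self.rpcs += 1                      # one RPC: refetch + new lease
            self.cache[key] = (self.server_read(key), now + LEASE_SECS)
        return self.cache[key][0]

data = {"blk0": "v1"}
cl = LeasingClient(lambda k: data[k])
cl.read("blk0", now=0.00)   # miss: RPC, lease granted until t=0.05
cl.read("blk0", now=0.01)   # lease valid: served from cache, no RPC
cl.read("blk0", now=0.06)   # lease expired: one revalidation RPC
assert cl.rpcs == 2
```

Compared with verification-on-every-access, only accesses that straddle a lease expiry pay an RPC; compared with pure callbacks, the server only needs to remember leases until they time out.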
NFS Crash Recovery
• Caching schemes introduce state
– If system crashes, state must be recovered:
• E.g. leases
– If state depends on time (or intervals), recovery time must be accounted for in leasing
• Clocks become critical.
• Server congestion may also lead to failure
– And recovery issues apply here as well.
• Timing issues
– Client and server clocks may drift; lease expiry is only safe if clock skew is bounded.
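One way to account for recovery time, sketched under the assumption that the server knows the longest lease it may have issued and a bound on client–server clock skew (both illustrative parameters): after a crash, the server must not grant conflicting leases until every pre-crash lease has provably expired.

```python
def safe_grant_time(crash_time, max_lease, clock_skew):
    """Earliest time at which a recovering server may grant leases that
    conflict with leases issued before the crash: it must wait out the
    longest possible lease, padded by the worst-case clock skew."""
    return crash_time + max_lease + clock_skew

# Server crashed at t=100s; longest lease is 30s; clocks may disagree by 2s.
assert safe_grant_time(100.0, 30.0, 2.0) == 132.0
```

This is why the slides note that clocks become critical: if skew exceeds the assumed bound, a client may still trust an "expired" lease after the server has handed out a conflicting one.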