Lustre and NFS v4

My presentation contrasting the Lustre file system and NFS v4.

  1. Lustre and NFS v4.0. Chris Sosa, for Grimshaw's Grid Seminar
  2. Lustre – Motivation
     • Need for a file system for large clusters with the following attributes:
       • Highly scalable (> 10,000 nodes)
       • Provides petabytes of storage
       • High throughput (100 GB/sec)
     • Data centers have different needs, so we need a general-purpose back-end file system
  3. Lustre = Linux + Cluster
     • Peter Braam created the design for Lustre at CMU and went on to found Cluster File Systems
     • Cluster File Systems was bought by Sun in late 2007 – Lustre is now part of Sun
     • Lustre is the file system with the largest share in HPC (see BlueGene (or not))
  4. Features of Lustre
     • Open-source, object-based cluster file system
     • Fully compliant with POSIX
     • Features (i.e. what I will discuss):
       • Object Protocols
       • Intent-based Locking
       • Adaptive Locking Policies
       • Aggressive Caching
  5. System Overview
  6. Object Protocols
  7. Intent-based Locking
  8. Adaptive Locking Policies
     • The policy depends on context (see the sketch after this list)
     • Mode 1: operations on resources used mostly by one client (e.g. /home/username)
     • Mode 2: operations on a highly contended resource (e.g. /tmp)
     • The DLM (distributed lock manager) can grant locks on an entire subtree as well as on whole files
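
The two modes can be pictured with a small sketch. The ToyLockManager below, its contention_threshold, and its directory-based heuristic are all invented for illustration; this is not Lustre's DLM code, just a model of "grant a coarse subtree lock when data is mostly private to one client, fall back to per-file locks when it is contended".

```python
import os

class ToyLockManager:
    """Invented policy sketch: coarse subtree locks on mostly-private
    directories, narrow per-file locks on contended ones."""

    def __init__(self, contention_threshold=2):
        self.contention_threshold = contention_threshold
        self.recent_clients = {}  # directory -> set of client ids seen recently

    def request_lock(self, client, path):
        directory = os.path.dirname(path)
        clients = self.recent_clients.setdefault(directory, set())
        clients.add(client)
        if len(clients) < self.contention_threshold:
            # Mode 1: data used mostly by one client -> lock the whole subtree
            return ("subtree-lock", directory)
        # Mode 2: highly contended resource -> lock only the file touched
        return ("file-lock", path)

if __name__ == "__main__":
    lm = ToyLockManager()
    print(lm.request_lock("clientA", "/home/alice/paper.tex"))  # ('subtree-lock', '/home/alice')
    print(lm.request_lock("clientA", "/tmp/scratch"))           # ('subtree-lock', '/tmp')
    print(lm.request_lock("clientB", "/tmp/other"))             # ('file-lock', '/tmp/other')
```
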
  9. Aggressive Caching
     • Keeps a local journal of updates for locked files (see the sketch below)
       • One entry per file operation
       • Hard-linked files get special treatment with subtree locks
     • When a lock is revoked, the journaled updates are flushed and replayed
     • Subtree change times are used to validate cache entries
     • Also features collaborative caching: referrals to another dedicated cache service
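
A minimal sketch of the write-back idea, assuming an invented CachingClient/ToyServer pair rather than Lustre's real client code: one journal entry per file operation, flushed and replayed only when the lock is revoked.

```python
class CachingClient:
    """Hypothetical client that caches updates under a lock."""

    def __init__(self, server):
        self.server = server
        self.journal = []          # one entry per cached file operation

    def write(self, path, offset, data):
        self.journal.append(("write", path, offset, data))

    def setattr(self, path, **attrs):
        self.journal.append(("setattr", path, attrs))

    def on_lock_revoked(self):
        # Flush: replay every journaled update against the server, in order.
        for op in self.journal:
            self.server.apply(op)
        self.journal.clear()

class ToyServer:
    def apply(self, op):
        print("server applied:", op)

if __name__ == "__main__":
    client = CachingClient(ToyServer())
    client.write("/proj/data.bin", 0, b"hello")
    client.setattr("/proj/data.bin", mode=0o644)
    client.on_lock_revoked()   # updates only reach the server here
```
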
  10. On to NFS Version 4.0
  11. Motivation
     • We want a file system that provides distributed, transparent access in a heterogeneous network
     • NFS before v4 had a lot of issues:
       • Caches had no guarantees
       • Terrible failure semantics
         • Hanging locks
         • Server / clients were never sure of anything
       • Data coherency? What's that?
  12. Overview of NFS v4
     • Stateful protocol
     • Compound operations
     • Lease-based locks
     • "Delegation" to clients
     • Close-open cache consistency
     • Better security
  13. Stateful
     • Borrowed model from CIFS (Common Internet File System); see MS (Marty's supporters)
     • Open/Close (see the sketch below)
       • Open also handles creates, etc.
       • Close semantics
       • Opens perform byte-range and file locking atomically with the open
       • Locks / delegations are released on file close
       • Everything is done with file handles
       • There is always a notion of a "current file handle" (cf. pwd)
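
A toy model of the stateful bookkeeping described above; the StatefulServer class and its fields are invented for illustration and do not mirror the actual NFSv4 state structures, but they show opens returning file handles and closes releasing the associated locks.

```python
import itertools

class StatefulServer:
    """Invented sketch: the server tracks open state per file handle."""

    def __init__(self):
        self._handles = itertools.count(1)
        self.open_files = {}    # file handle -> {"client", "path", "locks"}

    def open(self, client, path, create=False):
        # OPEN also covers create; it returns the handle used by later calls.
        fh = next(self._handles)
        self.open_files[fh] = {"client": client, "path": path, "locks": set()}
        return fh

    def lock(self, fh, byte_range):
        self.open_files[fh]["locks"].add(byte_range)

    def close(self, fh):
        # Closing releases the locks (and delegations) tied to this open.
        state = self.open_files.pop(fh)
        print(f"released {len(state['locks'])} lock(s) on {state['path']}")

if __name__ == "__main__":
    srv = StatefulServer()
    fh = srv.open("clientA", "/docs/report.txt", create=True)
    srv.lock(fh, (0, 4096))
    srv.close(fh)
```
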
  14. COMPOUND Ops
     • Problem: normal file system semantics require too many RPCs (boo)
     • Solution: group many calls into one call (yay); see the sketch below
     • Semantics
       • Operations run sequentially
       • Processing fails on the first failure
       • Returns the status of each individual RPC in the compound response (up to and including the failure, or all on success)
     [Slide image: Compound Kitty]
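
The COMPOUND semantics on this slide can be modeled in a few lines. The process_compound helper below is a hypothetical sketch, not a wire-level NFSv4 client: operations run in order, processing stops at the first failure, and a status is returned for every operation that was attempted.

```python
import os

def process_compound(operations):
    """operations: list of (name, callable). Returns list of (name, status)."""
    results = []
    for name, op in operations:
        try:
            op()
            results.append((name, "OK"))
        except OSError as err:
            results.append((name, f"ERR: {err}"))
            break                      # fail on the first failure
    return results

if __name__ == "__main__":
    ops = [
        ("PUTFH",  lambda: None),                       # set current file handle (stubbed)
        ("LOOKUP", lambda: os.stat("/etc/hostname")),   # succeeds on most Linux systems
        ("READ",   lambda: os.stat("/no/such/file")),   # fails -> later ops never run
        ("CLOSE",  lambda: None),
    ]
    for name, status in process_compound(ops):
        print(name, status)
```
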
  15. Lease-based Locks
     • Both byte-range and file locks
     • Heartbeats keep locks alive by renewing the lease (see the sketch below)
     • Every lock carries a lease indicating that the client is still up
     • If the server fails, it waits at least the agreed-upon lease time (a constant) before accepting any other lock requests
     • If a client fails, its locks are released by the server at the end of the lease period
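
A toy model of lease handling, with an invented LeaseServer API rather than the real protocol: each lock carries a lease, heartbeats renew it, and the server reclaims locks whose lease has expired because the client stopped renewing.

```python
import time

LEASE_SECONDS = 2.0   # illustrative constant, not a protocol value

class LeaseServer:
    def __init__(self):
        self.locks = {}   # path -> (client, lease expiry time)

    def lock(self, client, path):
        holder = self.locks.get(path)
        if holder and holder[1] > time.monotonic():
            raise RuntimeError(f"{path} locked by {holder[0]}")
        self.locks[path] = (client, time.monotonic() + LEASE_SECONDS)

    def renew(self, client):
        # Heartbeat: push out the expiry of every lock this client holds.
        for path, (owner, _) in list(self.locks.items()):
            if owner == client:
                self.locks[path] = (owner, time.monotonic() + LEASE_SECONDS)

    def reap_expired(self):
        now = time.monotonic()
        for path, (owner, expiry) in list(self.locks.items()):
            if expiry <= now:
                del self.locks[path]   # client presumed dead; release its locks

if __name__ == "__main__":
    server = LeaseServer()
    server.lock("clientA", "/data/file")
    server.renew("clientA")              # keeps the lock alive
    time.sleep(LEASE_SECONDS + 0.1)      # clientA stops heartbeating
    server.reap_expired()
    server.lock("clientB", "/data/file") # now succeeds
    print(server.locks)
```
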
  16. Delegation
     • Tells a client that no one else has the file (similar to Lustre's first mode); see the sketch below
     • The client exposes callbacks
       • A difference here between 4.0 / 4.1
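
A sketch of the recall flow implied by those callbacks, using invented DelegClient/DelegServer classes rather than the NFSv4 wire protocol: the server delegates a file to one client, and a conflicting open from another client triggers a recall through the first client's callback.

```python
class DelegClient:
    def __init__(self, name):
        self.name = name
        self.delegations = set()

    # Callback exposed by the client so the server can recall a delegation.
    def recall(self, path):
        print(f"{self.name}: flushing cached state for {path}, returning delegation")
        self.delegations.discard(path)

class DelegServer:
    def __init__(self):
        self.holders = {}   # path -> client currently holding the delegation

    def open(self, client, path):
        holder = self.holders.get(path)
        if holder is None:
            # No one else has the file: delegate it to this client.
            self.holders[path] = client
            client.delegations.add(path)
        elif holder is not client:
            holder.recall(path)          # conflicting open -> recall via callback
            del self.holders[path]

if __name__ == "__main__":
    server = DelegServer()
    a, b = DelegClient("A"), DelegClient("B")
    server.open(a, "/shared/file")   # A gets the delegation
    server.open(b, "/shared/file")   # triggers recall of A's delegation
```
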
  17. Close-Open Consistency
     • Any open that happens after a close finishes sees data consistent with that last close (see the sketch below)
     • The last close wins the competition
       • Not coherent (without locks)
       • You have to reopen to see if you won
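
Close-to-open consistency can be illustrated with an in-memory model; the Server and Client classes below are invented for this sketch. Writes stay in the client's cache until close(), and an open that starts after that close sees whatever the last close flushed.

```python
class Server:
    def __init__(self):
        self.files = {}

class Client:
    def __init__(self, name, server):
        self.name, self.server = name, server
        self.cache = {}

    def open(self, path):
        # Revalidate on open: fetch the server's copy as of the last close.
        self.cache[path] = self.server.files.get(path, b"")

    def write(self, path, data):
        self.cache[path] = data          # buffered locally, not yet visible

    def close(self, path):
        self.server.files[path] = self.cache[path]   # flush on close

if __name__ == "__main__":
    srv = Server()
    a, b = Client("A", srv), Client("B", srv)
    a.open("/f"); b.open("/f")
    a.write("/f", b"from A"); b.write("/f", b"from B")
    a.close("/f"); b.close("/f")          # B closes last, so B "wins"
    c = Client("C", srv); c.open("/f")
    print(c.cache["/f"])                  # b'from B'
```
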
  18. Security
     • Uses the GSS-API framework
     • All IDs are formed as
       • [email_address]
       • [email_address]
     • Every implementation must support Kerberos v5
     • Every implementation must support LIPKEY
  19. Other Stuff
     • A replication / migration mechanism was added (see the sketch below)
       • Special error messages indicate migration
       • A special attribute, used for both replication and migration, gives the other / new location
       • If the file system responds too slowly, or the client gets the special error message, it can check the special attribute for a read-only replica (or stop using security)
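
A sketch of the failover behavior described above, with invented names (Moved, read_with_failover, get_locations); in NFSv4 the special error and attribute referred to on the slide are NFS4ERR_MOVED and fs_locations. The client retries against the location named in the attribute when the primary reports that the filesystem has moved.

```python
class Moved(Exception):
    """Stands in for the protocol's 'filesystem has migrated' error."""

def read_file(server, path):
    if server.get("moved"):
        raise Moved()
    return server["files"][path]

def read_with_failover(primary, path, get_locations):
    try:
        return read_file(primary, path)
    except Moved:
        # The special attribute gives the new / replica location; retry there.
        for replica in get_locations(primary):
            return read_file(replica, path)
        raise

if __name__ == "__main__":
    replica = {"files": {"/data": "payload"}, "moved": False}
    primary = {"files": {}, "moved": True, "locations": [replica]}
    print(read_with_failover(primary, "/data", lambda s: s["locations"]))
```
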
  20. Comparison of NFSv3 and NFSv4
  21. Questions?
