Clusters (Distributed computing)


  1. Clusters
     Paul Krzyzanowski [email_address]
     Distributed Systems
     Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License.
  2. Designing highly available systems
     - Incorporate elements of fault-tolerant design
       - Replication, TMR (triple modular redundancy)
     - A fully fault-tolerant system would offer non-stop availability
       - You can't achieve this!
     - Problem: expensive!
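The TMR idea mentioned above can be sketched in a few lines: run three replicas of the same computation and take the majority answer, so one faulty replica cannot corrupt the result. This is a minimal, hypothetical illustration; the replica functions are invented for the example.

```python
# Hypothetical sketch of triple modular redundancy (TMR): run three
# replicas and accept the majority result.
from collections import Counter

def tmr_vote(replicas, *args):
    """replicas: three functions that should compute the same value."""
    results = [f(*args) for f in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one replica failed")
    return value

# Example: one faulty replica is outvoted by the two correct ones.
ok = lambda x: x * x
faulty = lambda x: x * x + 1
print(tmr_vote([ok, ok, faulty], 7))   # prints 49
```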
  3. Designing highly scalable systems
     - SMP architecture
     - Problem: performance gain as f(# processors) is sublinear
       - Contention for resources (bus, memory, devices)
       - Also... the solution is expensive!
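The sublinear-speedup claim can be made concrete with Amdahl's law (not named on the slide, so treat this as an editorial illustration): if a fraction p of the work parallelizes perfectly and the rest is serial, speedup on n processors is 1 / ((1 - p) + p/n), and contention in real SMP systems makes things worse still.

```python
# Illustration of sublinear speedup via Amdahl's law (an editorial
# addition; the slide only states that the gain is sublinear).
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 16, 32):
    print(n, round(speedup(0.95, n), 2))
# Even with 95% parallel work, 32 processors give only ~12.5x speedup;
# bus and memory contention push real SMP numbers lower.
```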
  4. Clustering
     - Achieve reliability and scalability by interconnecting multiple independent systems
     - Cluster: a group of standard, autonomous servers configured so they appear on the network as a single machine
     - Goal: approach a single system image
  5. Ideally...
     - A bunch of off-the-shelf machines
     - Interconnected on a high-speed LAN
     - Appear as one system to external users
     - Processes are load-balanced
       - May migrate
       - May run on different systems
       - All IPC mechanisms and file access available
     - Fault tolerant
       - Components may fail
       - Machines may be taken down
  6. We don't get all that (yet), at least not in one package.
  7. Clustering types
     - Supercomputing (HPC)
     - Batch processing
     - High availability (HA)
     - Load balancing
  8. High Performance Computing (HPC)
  9. The evolution of supercomputers
     - Target complex applications:
       - Large amounts of data
       - Lots of computation
       - Parallelizable application
     - Many custom efforts
       - Typically Linux + message-passing software + remote exec + remote monitoring
  10. Clustering for performance
      - Example: one popular effort, Beowulf
        - Initially built to address problems associated with large data sets in Earth and space science applications
        - From the Center of Excellence in Space Data & Information Sciences (CESDIS), a division of the Universities Space Research Association at the Goddard Space Flight Center
  11. What makes it possible
      - Commodity off-the-shelf computers are cost effective
      - Publicly available software:
        - Linux, GNU compilers & tools
        - MPI (Message Passing Interface)
        - PVM (Parallel Virtual Machine)
      - Low-cost, high-speed networking
      - Experience with parallel software
        - Difficult: solutions tend to be custom
  12. What can you run?
      - Programs that do not require fine-grain communication
      - Nodes are dedicated to the cluster
        - Performance of nodes is not subject to external factors
      - The interconnect network is isolated from the external network
        - Network load is determined only by the application
      - A global process ID is provided
        - Global signaling mechanism
  13. Beowulf configuration
      - Includes:
        - BPROC: Beowulf distributed process space
          - Start processes on other machines
          - Global process ID, global signaling
        - Network device drivers
          - Channel bonding, scalable I/O
        - File system (file sharing is generally not critical)
          - NFS root
          - unsynchronized
          - synchronized periodically via rsync
  14. Programming tools: MPI
      - Message Passing Interface
      - API for sending/receiving messages
        - Optimizations for shared memory & NUMA
        - Group communication support
      - Other features:
        - Scalable file I/O
        - Dynamic process management
        - Synchronization (barriers)
        - Combining results
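A minimal sketch of the MPI facilities listed above (message passing, barriers, combining results), assuming the mpi4py Python bindings purely for illustration; mpi4py is not part of the Beowulf toolchain described in the slides.

```python
# Minimal MPI sketch using mpi4py (illustrative; the slide describes
# MPI in general, not this binding).
# Run with, e.g.:  mpiexec -n 4 python mpi_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank sums its own slice of 0..999 (message passing, no shared memory).
partial = sum(range(rank, 1000, size))

comm.Barrier()                                    # synchronization (barrier)
total = comm.reduce(partial, op=MPI.SUM, root=0)  # combining results
if rank == 0:
    print("total =", total)                       # 499500
```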
  15. Programming tools: PVM
      - Software that emulates a general-purpose heterogeneous computing framework on interconnected computers
      - Presents a view of virtual processing elements
        - Create tasks
        - Use global task IDs
        - Manage groups of tasks
        - Basic message passing
  16. Beowulf programming tools
      - PVM and MPI libraries
      - Distributed shared memory
        - Page based: software-enforced ownership and consistency policy
      - Cluster monitor
      - Global ps, top, uptime tools
      - Process management
        - Batch system
        - Write software to control synchronization and load balancing with MPI and/or PVM
        - Preemptive distributed scheduling: not part of Beowulf (two packages: Condor and MOSIX)
  17. Another example
      - Rocks Cluster Distribution
        - Based on CentOS Linux
        - Mass installation is a core part of the system
          - Mass re-installation for application-specific configurations
        - Front-end central server + compute & storage nodes
        - Rolls: collections of packages
          - The base roll includes PBS (Portable Batch System), PVM (Parallel Virtual Machine), MPI (Message Passing Interface), job launchers, ...
  18. Another example
      - Microsoft HPC Server 2008
        - Windows Server 2008 + clustering package
        - Systems management
          - Management Console: plug-in to the System Center UI with support for Windows PowerShell
          - RIS (Remote Installation Service)
        - Networking
          - MS-MPI (Message Passing Interface)
          - ICS (Internet Connection Sharing): NAT for cluster nodes
          - Network Direct RDMA (Remote DMA)
        - Job scheduler
        - Storage: iSCSI SAN and SMB support
        - Failover support
  19. Batch Processing
  20. Batch processing
      - Common application: graphics rendering
        - Maintain a queue of frames to be rendered
        - Have a dispatcher to remotely exec the rendering process
      - Virtually no IPC needed
      - A coordinator dispatches jobs (see the sketch below)
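A hedged sketch of the single-queue dispatcher idea: a coordinator holds a queue of frames and hands each one to the next free worker node. The node names, the ssh-based remote exec, and the `render` command are all invented for the example.

```python
# Hypothetical single-queue dispatcher: frames wait in one queue and
# are remotely executed on worker nodes; jobs need essentially no IPC.
import queue
import subprocess
import threading

frames = queue.Queue()
for frame in range(1, 101):          # frames 1..100 to be rendered
    frames.put(frame)

def worker(host):
    while True:
        try:
            frame = frames.get_nowait()
        except queue.Empty:
            return
        # Remote-exec the render job on the worker node (command is illustrative).
        subprocess.run(["ssh", host, f"render --frame {frame}"], check=False)
        frames.task_done()

threads = [threading.Thread(target=worker, args=(h,))
           for h in ("node01", "node02", "node03")]   # hypothetical node names
for t in threads:
    t.start()
for t in threads:
    t.join()
```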
  21. Single-queue work distribution
      - Render farms:
        - Pixar:
          - 1,024 2.8 GHz Xeon processors running Linux and RenderMan
          - 2 TB RAM, 60 TB disk space
          - Custom Linux software for articulating/animating/lighting (Marionette), scheduling (Ringmaster), and rendering (RenderMan)
          - Cars: each frame took 8 hours to render and consumes ~32 GB of storage on a SAN
        - DreamWorks:
          - >3,000 servers and >1,000 Linux desktops (HP xw9300 workstations and HP DL145 G2 servers) with 8 GB per server
          - Shrek 3: 20 million CPU render hours; Platform LSF used for scheduling + Maya for modeling + Avid for editing + Python for pipelining; the movie uses 24 TB of storage
  22. Single-queue work distribution
      - Render farms:
        - ILM:
          - 3,000-processor (AMD) render farm; expands to 5,000 by harnessing desktop machines
          - 20 Linux-based SpinServer NAS storage systems and 3,000 disks from Network Appliance
          - 10 Gbps Ethernet
        - Sony Pictures Imageworks:
          - Over 1,200 processors
          - Dell and IBM workstations
          - Almost 70 TB of data for The Polar Express
  23. Batch processing
      - OpenPBS.org:
        - Portable Batch System
        - Developed by Veridian MRJ for NASA
      - Commands
        - Submit job scripts
          - Submit interactive jobs
          - Force a job to run
        - List jobs
        - Delete jobs
        - Hold jobs
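A hedged sketch of how the PBS commands above are typically used, driven from Python for consistency with the other examples. The `#PBS` directives, queue options, and job script contents are illustrative; exact options vary by site and PBS version.

```python
# Hedged sketch: write a PBS job script and submit it.  Directive
# details and resource requests are assumptions, not a site recipe.
import subprocess
import textwrap

script = textwrap.dedent("""\
    #!/bin/sh
    #PBS -N demo_job
    #PBS -l nodes=4
    echo "running on:" && cat $PBS_NODEFILE
""")
with open("demo.pbs", "w") as f:
    f.write(script)

subprocess.run(["qsub", "demo.pbs"])   # submit the job script
subprocess.run(["qstat"])              # list jobs
# Other commands from the slide: qsub -I (interactive job),
# qrun <jobid> (force a job to run), qdel <jobid> (delete),
# qhold <jobid> (hold).
```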
  24. Grid computing
      - Characteristics
        - Coordinate computing resources that are not subject to centralized control
        - Not application-specific
          - Use general-purpose protocols for accessing resources, authenticating, discovery, ...
        - Some form of quality-of-service guarantee
      - Capabilities
        - Computational resources: starting, monitoring
        - Storage access
        - Network resources
  25. Open Grid Services Architecture (OGSA)
      - Globus Toolkit
      - All grid resources are modeled as services
        - Grid Resource Information Protocol (GRIP)
          - Register resources; based on LDAP
        - Grid Resource Access and Management Protocol (GRAM)
          - Allocation & monitoring of resources; based on HTTP
        - GridFTP
          - Data access
  26. Globus services
      - Define the service's interface (GWSDL)
        - Interface definition: an extension of WSDL
      - Implement the service (Java)
      - Define deployment parameters (WSDD)
        - Make the service available through a Grid-enhanced web server
      - Compile & generate a GAR file (Ant)
        - Java build tool (Apache project), similar to make
      - Deploy the service with Ant
        - Copy key files to the public directory tree
  27. Load Balancing for the Web
  28. Functions of a load balancer
      - Load balancing
      - Failover
      - Planned outage management
  29. Redirection
      - Simplest technique
      - HTTP REDIRECT response code
  30. Redirection
      - [Diagram: a client sends a request to www.mysite.com]
  31. Redirection
      - [Diagram: www.mysite.com answers with a REDIRECT to www03.mysite.com]
  32. Redirection
      - [Diagram: the client re-issues the request directly to www03.mysite.com]
  33. Redirection
      - Trivial to implement
      - Successive requests automatically go to the same web server
        - Important for sessions
      - Visible to the customer
        - Some don't like it
      - Bookmarks will usually point at the specific server (e.g. www03), not the site
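A minimal sketch of the redirection technique, using Python's standard http.server: the front end answers every request with an HTTP 302 that points the client at a chosen back-end replica. Only www03.mysite.com appears on the slides; www01 and www02 are hypothetical names added to make the rotation visible.

```python
# Redirect-based "load balancing": the front end never serves content,
# it only tells the client which replica to talk to.
import itertools
from http.server import BaseHTTPRequestHandler, HTTPServer

backends = itertools.cycle(["www01.mysite.com",    # hypothetical replicas
                            "www02.mysite.com",
                            "www03.mysite.com"])   # name used in the slides

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = next(backends)
        self.send_response(302)                    # HTTP REDIRECT
        self.send_header("Location", f"http://{target}{self.path}")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Redirector).serve_forever()
```

Note how the slide's drawbacks show up directly: the client sees (and may bookmark) the replica's name, and all follow-up requests go straight to that replica.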
  34. Software load balancer
      - e.g.: IBM Interactive Network Dispatcher software
      - Forwards requests via load balancing
        - Leaves the original source address intact
        - The load balancer is not in the path of outgoing traffic (high bandwidth)
        - Kernel extensions for routing TCP and UDP requests
          - Each server accepts connections on its own address and on the dispatcher's address
          - The dispatcher changes the MAC address of forwarded packets
  35. Software load balancer
      - [Diagram: a client sends a request to www.mysite.com, the dispatcher's address]
  36. Software load balancer
      - [Diagram: the dispatcher forwards the request (src=bobby, dest=www03) to the chosen server]
  37. Software load balancer
      - [Diagram: the response to that request (src=bobby, dest=www03) returns directly to the client, bypassing the dispatcher]
  38. Load balancing router
      - Routers have been getting smarter
        - Most support packet filtering
        - Add load balancing
      - Cisco LocalDirector, Alteon, F5 BIG-IP
  39. Load balancing router
      - Assign one or more virtual addresses to a physical address
        - An incoming request gets mapped to a physical address
      - Special assignments can be made per port
        - e.g. all FTP traffic goes to one machine
      - Balancing decisions (sketched below):
        - Pick the machine with the fewest TCP connections
        - Factor in weights when selecting machines
        - Pick machines round-robin
        - Pick the fastest-connecting machine (SYN/ACK time)
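A hypothetical sketch of the first three balancing decisions listed above (least connections, weighted selection, round robin). Server names, weights, and connection counts are made up for the example.

```python
# Illustrative back-end selection policies for a load balancer.
import itertools

servers = {"www01": {"weight": 1, "conns": 12},
           "www02": {"weight": 2, "conns": 30},
           "www03": {"weight": 1, "conns": 7}}

def least_connections():
    return min(servers, key=lambda s: servers[s]["conns"])

def weighted_least_connections():
    # Fewer connections per unit of weight wins.
    return min(servers, key=lambda s: servers[s]["conns"] / servers[s]["weight"])

_rr = itertools.cycle(sorted(servers))
def round_robin():
    return next(_rr)

print(least_connections(), weighted_least_connections(), round_robin())
```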
  40. Sessions
      - Watch out for session state (e.g. servlets, ASP)
        - A cookie contains a handle to the session data
      - Where do you store session data?
        - App server (JRun, Tomcat, IBM WebSphere, BEA WebLogic)
        - Database (WebSphere, WebLogic)
        - Replicate among app servers for fault tolerance (WebLogic)
        - Migrate old/stale sessions to a database (WebLogic)
      - The database is slowest
      - The load balancer may have to manage sessions
        - Or have the server direct the request to the appropriate app server
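One hedged sketch of how a balancer can "manage sessions": hash the session cookie so a given session always maps to the same app server. The server names are hypothetical, and this is only one possible policy; the slide's other options (database or replicated session state) avoid the problem of sessions being lost when a server disappears.

```python
# Session affinity by hashing the session cookie: same cookie, same server.
import hashlib

app_servers = ["app01", "app02", "app03"]   # hypothetical server names

def server_for(session_id: str) -> str:
    digest = hashlib.sha1(session_id.encode()).digest()
    return app_servers[int.from_bytes(digest[:4], "big") % len(app_servers)]

print(server_for("JSESSIONID=abc123"))   # deterministic choice per session
```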
  41. High Availability (HA)
  42. High availability (HA)

      Class                                    Level      Annual downtime
      Continuous                               100%       0
      Six nines (carrier-class switches)       99.9999%   ~30 seconds
      Fault tolerant (carrier-class servers)   99.999%    ~5 minutes
      Fault resilient                          99.99%     ~53 minutes
      High availability                        99.9%      ~8.8 hours
      Normal availability                      99-99.5%   44-87 hours
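The downtime column follows directly from the availability level: downtime per year is (1 - availability) times the hours in a year (8,760). A quick check:

```python
# Annual downtime implied by each availability level.
HOURS_PER_YEAR = 365 * 24   # 8760

for avail, label in [(0.999999, "six nines"), (0.99999, "five nines"),
                     (0.9999, "four nines"), (0.999, "three nines")]:
    hours = (1 - avail) * HOURS_PER_YEAR
    if hours < 0.02:
        print(f"{label}: ~{hours * 3600:.0f} seconds/year")
    elif hours < 2:
        print(f"{label}: ~{hours * 60:.0f} minutes/year")
    else:
        print(f"{label}: ~{hours:.1f} hours/year")
# six nines:   ~32 seconds/year
# five nines:  ~5 minutes/year
# four nines:  ~53 minutes/year
# three nines: ~8.8 hours/year
```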
  43. Clustering: high availability
      - Fault-tolerant design
      - Stratus, NEC, Marathon Technologies
        - Applications run uninterrupted on a redundant subsystem
          - NEC and Stratus have applications running in lockstep synchronization
        - Two identical connected systems
        - If one server fails, the other takes over instantly
      - Costly and inefficient
        - But does what it was designed to do
  44. Clustering: high availability
      - Availability addressed by many vendors:
        - Sun, IBM, HP, Microsoft, SteelEye LifeKeeper, ...
      - If one server fails:
        - The fault is isolated to that node
        - The workload is spread over the surviving nodes
        - Allows scheduled maintenance without disruption
        - Nodes may need to take over IP addresses
  45. Example: Windows Server 2003 clustering
      - Network load balancing
        - Addresses web-server bottlenecks
      - Component load balancing
        - Scales middle-tier software (COM objects)
      - Failover support for applications
        - 8-node failover clusters
        - Applications are restarted on a surviving node
        - Shared-disk configuration using SCSI or Fibre Channel
        - Resource group: {disk drive, IP address, network name, service} can be moved during failover
  46. Example: Windows Server 2003 clustering
      - Top tier: cluster abstractions
        - Failover manager, resource monitor, cluster registry
      - Middle tier: distributed operations
        - Global status update, quorum (keeps track of who's in charge), membership
      - Bottom tier: OS and drivers
        - Cluster disk driver, cluster network drivers
        - IP address takeover
  47. Clusters: Architectural models
  48. HA issues
      - How do you detect that a node has failed?
      - How long does it take to detect?
      - How does a dead application move/restart?
      - Where does it move to?
  49. Heartbeat network
      - Machines need to detect faulty systems
        - "Ping" mechanism
      - Need to distinguish system faults from network faults
        - Useful to maintain redundant networks
        - Send a periodic heartbeat to test a machine's liveness
        - Watch out for split-brain!
      - Ideally, use a network with a bounded response time
        - Lucent RCC used a serial-line interconnect
        - Microsoft Cluster Server supports a dedicated "private network"
          - Two network cards connected with a pass-through cable or hub
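A hedged sketch of the heartbeat idea over UDP. Addresses, ports, and timeouts are illustrative, and this is not the protocol of any specific product named above; it only shows "declare the peer suspect after a few missed beats".

```python
# Heartbeat monitor sketch (illustrative addresses and intervals).
# Sender side, run on the monitored node:
#
#   import socket, time
#   s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   while True:
#       s.sendto(b"heartbeat", ("10.0.0.2", 9999))   # private heartbeat network
#       time.sleep(1)
#
# Monitor side: declare the peer dead after a few missed beats.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9999))
sock.settimeout(3.0)   # three missed 1-second heartbeats => suspect a failure

while True:
    try:
        data, addr = sock.recvfrom(64)
    except socket.timeout:
        print("peer missed heartbeats: consider failover (beware split-brain!)")
        break
```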
  50. Failover configuration models
      - Active/passive (N+M nodes)
        - M dedicated failover node(s) for N active nodes
      - Active/active
        - A failed node's workload goes to the remaining nodes
  51. Design options for failover
      - Cold failover
        - Application restart
      - Warm failover
        - The application checkpoints itself periodically
        - Restart from the last checkpointed image
        - May use a write-ahead log (tricky)
      - Hot failover
        - Application state is lockstep-synchronized
        - Very difficult, expensive (resources), prone to software faults
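A hedged sketch of warm failover: the application periodically saves its state, and after a crash it restarts from the last checkpoint instead of from scratch. The file name, state layout, and "work" loop are invented for the example.

```python
# Warm-failover sketch: periodic checkpointing with an atomic rename so
# a crash can never leave a torn checkpoint file.
import os
import pickle

CHECKPOINT = "app.ckpt"   # illustrative file name

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"next_item": 0, "partial_result": 0}

def save_checkpoint(state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)      # atomic on POSIX and Windows

state = load_checkpoint()            # resume from the last checkpoint, if any
for i in range(state["next_item"], 1000):
    state["partial_result"] += i     # the "work"
    state["next_item"] = i + 1
    if i % 100 == 0:
        save_checkpoint(state)       # checkpoint every 100 items
print(state["partial_result"])
```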
  52. Design options for failover
      - With any of these failover styles:
      - Multi-directional failover
        - Failed applications migrate to / restart on whatever systems are available
      - Cascading failover
        - If the backup system fails, the application can be restarted on another surviving system
  53. System support for HA
      - Hot-pluggable devices
        - Minimize downtime for component swapping
      - Redundant devices
        - Redundant power supplies
        - Parity on memory
        - Mirroring on disks (or RAID for HA)
        - Switchover of failed components
      - Diagnostics
        - On-line serviceability
  54. Shared resources (disk)
      - Shared disk
        - Allows multiple systems to share access to disk drives
        - Works well if applications do not generate much disk I/O
        - Disk access must be synchronized
          - Synchronization via a distributed lock manager (DLM)
  55. Shared resources (disk)
      - Shared nothing
        - No shared devices
        - Each system has its own storage resources
        - No need to deal with DLMs
        - If machine A needs resources on B, A sends a message to B
          - If B fails, storage requests have to be switched over to a live node
  56. Cluster interconnects
      - Traditional WANs and LANs may be too slow as a cluster interconnect
        - Connecting server nodes, storage nodes, I/O channels, even memory pages
        - Storage Area Network (SAN)
          - Fibre Channel connectivity to external storage devices
          - Any node can be configured to access any storage through a Fibre Channel switch
        - System Area Network (also SAN)
          - Switched interconnect for cluster resources
          - Low-latency I/O without processor intervention
          - Scalable switching fabric
          - (Compaq/Tandem's ServerNet)
          - Microsoft Windows 2000 supports Winsock Direct for SAN communication
  57. Achieving high availability
      - [Diagram: servers A and B joined by redundant heartbeat links, two LAN switches (A and B), and two Fibre Channel switches providing redundant SAN fabrics (A and B) to shared storage]
  58. Achieving high availability
      - [Diagram: the same topology with Ethernet switches A' and B' and redundant LANs (ethernet A and B) carrying iSCSI in place of the Fibre Channel fabrics]
  59. HA storage: RAID
      - Redundant Array of Independent (Inexpensive) Disks
  60. RAID 0: Performance
      - Striping
      - Advantages:
        - Performance
        - All storage capacity can be used
      - Disadvantage:
        - Not fault tolerant
      - Layout: Disk 0 holds blocks 0, 2, 4, ...; Disk 1 holds blocks 1, 3, 5, ...
  61. RAID 1: HA
      - Mirroring
      - Advantages:
        - Double read speed
        - No rebuild necessary if a disk fails: just copy
      - Disadvantage:
        - Only half the space
      - Layout: Disk 0 and Disk 1 each hold identical copies of blocks 0, 1, 2, ...
  62. RAID 3 and RAID 4: HA
      - Separate parity disk
      - RAID 3 uses byte-level striping; RAID 4 uses block-level striping
      - Advantages:
        - Very fast reads
        - High efficiency: low ratio of parity to data
      - Disadvantages:
        - Slow random I/O performance
        - Only one I/O at a time for RAID 3
      - Layout: Disk 0 holds blocks 0a, 1a, 2a; Disk 1 holds 0b, 1b, 2b; Disk 2 holds 0c, 1c, 2c; Disk 3 holds parity 0, 1, 2
  63. RAID 5: HA
      - Interleaved parity
      - Advantages:
        - Very fast reads
        - High efficiency: low ratio of parity to data
      - Disadvantages:
        - Slower writes
        - Complex controller
      - Layout: Disk 0 holds blocks 0a, 1a, 2a; Disk 1 holds 0b, 1b, parity 2; Disk 2 holds 0c, parity 1, 2b; Disk 3 holds parity 0, 1c, 2c
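An illustration of the parity scheme behind RAID 4 and RAID 5: the parity block is the byte-wise XOR of the data blocks in a stripe, so any single missing block can be rebuilt by XOR-ing all the surviving blocks. The block contents are made up for the example.

```python
# Block parity by XOR: lose any one block, rebuild it from the rest.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

block_a = b"AAAAAAAA"
block_b = b"BBBBBBBB"
block_c = b"CCCCCCCC"
parity  = xor_blocks([block_a, block_b, block_c])

# The disk holding block_b fails: reconstruct it from the survivors + parity.
rebuilt = xor_blocks([block_a, block_c, parity])
assert rebuilt == block_b
```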
  64. RAID 1+0
      - Combines mirroring and striping
        - Striping across a set of disks
        - Mirroring of the entire set onto another set
  65. The end
