Survey of Distributed Storage System

This slide deck introduces DAS, NAS, and SAN, as well as object storage, storage virtualization, and distributed file systems.

  1. Survey of Distributed Storage System
     frankey0207@gmail.com
  2. Outline
     - Background
     - Storage Virtualization
     - Object Storage
     - Distributed File System
  3. Outline
     - Background
     - Storage Virtualization
     - Object Storage
     - Distributed File System
  4. Background
     - As more and more digital devices (e.g. PCs, laptops, iPads, and smartphones) connect to the Internet, massive amounts of new data are created on the web
     - There were 5 exabytes of data online in 2002, which had risen to 281 exabytes by 2009, and the online data growth rate is rising faster than Moore's Law
     - So how can this massive data be stored and managed effectively and efficiently?
     - A natural approach: the Distributed Storage System!
  5. Traditional Storage Architecture
     Direct Attached Storage (DAS)
     - huge management burden
     - limited number of connected hosts
     - severely limited data sharing
     Fabric Attached Storage (FAS)
     - a central system serves data to connected hosts
     - hosts and devices interconnected through Ethernet or Fibre Channel
     - NAS & SAN
  6. FAS Implementations
     Network Attached Storage (NAS)
     - file-based storage architecture
     - data sharing across platforms
     - the file server can become a bottleneck
     Storage Area Network (SAN)
     - scalable performance, high capacity
     - limited ability to share data
     - weak security
     Since the traditional storage architectures cannot satisfy the emerging requirements well, novel approaches need to be proposed!
  7. Outline
     - Background
     - Storage Virtualization
     - Object Storage
     - Distributed File System
  8. Storage Virtualization
     Definitions of storage virtualization by SNIA:
     - the act of abstracting, hiding, or isolating the internal functions of a storage (sub)system or service from applications, computer servers, or general network resources, for the purpose of enabling application- and network-independent management of storage or data
     - the application of virtualization to storage services or devices for the purpose of aggregating, hiding complexity, or adding new capabilities to lower-level storage resources
     Simply speaking, storage virtualization aggregates storage components, such as disks, controllers, and storage networks, in a coordinated way so they can be shared more efficiently among the applications it serves!
  9. Characteristics of an Ideal Solution
     A good storage virtualization solution should:
     - Enhance the storage resources it is virtualizing through the aggregation of services, increasing the return on existing assets
     - Not add another level of complexity in configuration and management
     - Improve performance rather than act as a bottleneck, so that it remains scalable; scalability is the capability of a system to maintain performance linearly as new resources (typically hardware) are added
     - Provide secure multi-tenancy, so that users and data can share virtual resources without exposure to other users' bad behavior or mistakes
     - Not be proprietary, but virtualize other vendors' storage in the same way as its own storage to make management seamless
  10. Types of Storage Virtualization
     Modern storage virtualization technologies can be implemented in three layers of the infrastructure:
     - In the server: some of the earliest forms of storage virtualization came from within the server's operating system
     - In the storage network: network-based storage virtualization embeds the intelligence of managing the storage resources in the network layer
     - In the storage controller: controller-based storage virtualization allows external storage to appear as if it were internal
  11. Server-based
     - Server-based storage virtualization is highly configurable and flexible since it is implemented in the system software.
     - Because most operating systems incorporate this functionality into their system software, it is very cheap.
     - It does not require additional hardware in the storage infrastructure, and it works with any device that can be seen by the operating system.
     - Although it helps maximize the efficiency and resilience of storage resources, it is optimized on a per-server basis only.
     - The task of mirroring, striping, and calculating parity requires additional processing, taking valuable CPU and memory resources away from the application.
     - Since every operating system implements file systems and volume management in different ways, organizations with multiple IT vendors need to maintain different skill sets and processes, at higher cost.
     - When it comes to the migration or replication of data (either locally or remotely), it becomes difficult to keep track of data protection across the entire environment.
  12. Network-based
     Both in-band and out-of-band approaches provide storage virtualization with the ability to:
     - Pool heterogeneous vendor storage products into a seamless, accessible pool.
     - Perform replication between non-like devices.
     - Provide a single management interface.
     Only the in-band approach can cache data for increased performance.
     Both approaches also suffer from a number of drawbacks:
     - Implementation can be very complex because the pooling of storage requires the storage extents to be remapped into virtual extents.
     - The virtualization devices are typically servers running system software and require as much maintenance as a regular server.
     - I/O can suffer from latency, impacting performance and scalability, due to the multiple steps required to complete a request and the limited amount of memory and CPU available in the appliance nodes.
     - Decoupling the virtualization from the storage once it has been implemented is impossible because all the metadata resides in the appliance, thereby making it proprietary.
     - Solutions on the market only exist for Fibre Channel (FC) based SANs.
  13. Controller-based
     - Connectivity to external storage assets is done via industry-standard protocols, with no proprietary lock-in.
     - Complexity is reduced, as it needs no additional hardware to extend the benefits of virtualization. In many cases the requirement for SAN hardware is greatly reduced.
     - Controller-based virtualization is typically cheaper than other approaches due to the ability to leverage existing SAN infrastructure, and the opportunity to consolidate management, replication, and availability tools.
     - Capabilities such as replication, partitioning, migration, and thin provisioning are extended to legacy storage arrays.
     - Heterogeneous data replication between non-like vendors or different storage classes reduces data protection costs.
     - Interoperability issues are reduced as the virtualized controller mimics a server connection to external storage.
     Although a few downsides to controller-based virtualization exist, the advantages not only far outweigh them but also address most of the deficiencies found in server- and network-based approaches.
  14. Outline
     - Background
     - Storage Virtualization
     - Object Storage
     - Distributed File System
  15. Motivation of Object Storage
     Improved device and data sharing
     - platform-dependent metadata moved to the device
     Improved scalability & security
     - devices directly handle client requests
     - object security
     Improved performance
     - data types can be differentiated at the device
     Improved storage management
     - self-managed, policy-driven storage
     - storage devices become more autonomous
  16. Objects in Storage
     - The root object: the OSD itself (one per device)
     - User object: created by SCSI commands from the application or client; carries an object ID, user data, metadata, and attributes
     - Collection object: a group of user objects, such as all .mp3 files
     - Partition object: a container whose contents share common security and space management characteristics
     [Figure: an OSD holding one root object, partition objects P1-P4, collection objects, and user objects such as U1]
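To make the object hierarchy above easier to picture, here is a minimal sketch of how the four object types could be modeled as data structures. It is purely illustrative: the class and field names are invented here and are not taken from the OSD specification or the slides.

```python
# Illustrative sketch (not the OSD data model itself): one possible in-memory
# representation of the four object types held by an OSD.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UserObject:                 # holds user data plus per-object metadata
    object_id: int
    data: bytes = b""
    attributes: Dict[str, str] = field(default_factory=dict)

@dataclass
class CollectionObject:           # groups user objects, e.g. all .mp3 files
    collection_id: int
    members: List[int] = field(default_factory=list)   # object_ids of user objects

@dataclass
class PartitionObject:            # container sharing security/space-management policy
    partition_id: int
    user_objects: Dict[int, UserObject] = field(default_factory=dict)
    collections: Dict[int, CollectionObject] = field(default_factory=dict)

@dataclass
class RootObject:                 # one per OSD, owns all partitions on the device
    device_id: str
    partitions: Dict[int, PartitionObject] = field(default_factory=dict)
```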
  17. Object Storage Device
     Two changes:
     - Object-based storage offloads the storage component to the storage device
     - The device interface changes from blocks to objects
     [Figure: traditional model vs. OSD model. In the traditional model, applications call through the system call interface into the file system user and storage components, which reach the storage device over a block interface via the block I/O manager. In the OSD model, the file system storage component and block I/O manager move into the storage device, and the host's file system user component talks to it over an object interface.]
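The interface change from blocks to objects can be illustrated with a small sketch. The method names below are hypothetical, not the SCSI OSD command set; the point is that a block device exposes fixed-size numbered blocks and leaves layout to the host file system, whereas an OSD exposes named objects with byte ranges and manages its own layout internally.

```python
# Minimal sketch of the interface shift: blocks addressed by logical block
# number vs. objects addressed by object id plus a byte range.
BLOCK_SIZE = 4096

class BlockDevice:
    def __init__(self, num_blocks: int):
        self.blocks = [bytes(BLOCK_SIZE)] * num_blocks

    def read_block(self, lba: int) -> bytes:          # the host must know the layout
        return self.blocks[lba]

    def write_block(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data.ljust(BLOCK_SIZE, b"\0")[:BLOCK_SIZE]

class ObjectStorageDevice:
    def __init__(self):
        self.objects = {}                              # object_id -> bytearray

    def create(self, object_id: int) -> None:
        self.objects[object_id] = bytearray()

    def write(self, object_id: int, offset: int, data: bytes) -> None:
        buf = self.objects[object_id]
        if len(buf) < offset + len(data):
            buf.extend(bytes(offset + len(data) - len(buf)))
        buf[offset:offset + len(data)] = data          # the device manages its own layout

    def read(self, object_id: int, offset: int, length: int) -> bytes:
        return bytes(self.objects[object_id][offset:offset + length])
```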
  18. Object Storage Architecture
     Summary of OSD key benefits:
     - Better data sharing: using objects means less metadata to keep coherent, which makes it possible to share the data across different platforms.
     - Better security: unlike blocks, objects can protect themselves and authorize each I/O.
     - More intelligence: object attributes help the storage devices learn about their users, applications, and workloads. This leads to a variety of improvements, such as better data management through caching. Active disks can be implemented on OSDs to implement database filters. An intelligent OSD can also continuously reorganize the data, manage its own backups, and deal with failures.
  19. Lustre
     Lustre (Linux + Cluster)
     - the first open-sourced system with object storage
     - a massively parallel distributed file system
     - consists of clients, MDS, and OSTs
     - used by fifteen of the top 30 supercomputers in the world
     Components:
     - A single metadata server (MDS) with a single metadata target (MDT) per Lustre filesystem, which stores namespace metadata such as filenames, directories, access permissions, and file layout
     - Clients that access and use the data; concurrent and coherent read and write access to the files is allowed
     - One or more object storage servers (OSSes) that store file data on one or more object storage targets (OSTs)
  20. Ceph
     - Ceph is a distributed file system that provides excellent performance, reliability, and scalability based on object storage devices
     - The metadata cluster stores the cluster map, controls data placement, and manages higher-level POSIX operations (such as open, close, and rename)
  21. Panasas
     Panasas (Panasas, Inc.)
     - consists of OSDs, the Panasas File System, and an MDS
     - claims to be the world's fastest HPC storage system
  22. Outline
     - Background
     - Storage Virtualization
     - Object Storage
     - Distributed File System
  23. Distributed File System
     A distributed file system or network file system is any file system that allows access to files from multiple hosts sharing via a computer network (Wikipedia)
     The history:
     - 1st generation (1980s): NFS, AFS
     - 2nd generation (1990-1995): Tiger Shark, Slice File System
     - 3rd generation (1995-2000): Global File System, General Parallel File System, DiFFS, CXFS, HighRoad
     - 4th generation (2000-now): Lustre, GFS, GlusterFS, HDFS
     Design goals: performance, scalability, reliability, availability, fault tolerance
  24. Google File System (GFS)
     GFS is a scalable distributed file system for large distributed data-intensive applications at Google
     Beyond the traditional choices:
     - component failures are the norm
     - files are huge by traditional standards
     - appending new data rather than overwriting
     - co-designing the applications and the file system API
     GFS interface:
     - create, delete, open, close, read, write
     - snapshot & record append
     Architecture:
     - The master maintains all file system metadata, such as the namespace, access control information, the mapping from files to chunks, and the locations of chunks
     - Clients interact with the master for metadata operations, but all data-bearing communication goes directly to the chunkservers
     - Files are divided into fixed-size (64 MB) chunks, and each chunk is identified by an immutable and globally unique 64-bit chunk handle. Chunkservers store chunks on local disks as Linux files. In addition, each chunk is replicated on multiple chunkservers (3 replicas by default)
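The master's metadata described above can be made concrete with a small sketch. The class below is purely illustrative (the names `GFSMasterSketch`, `add_chunk`, and `lookup` are invented here, not Google's API): it keeps the file-to-chunk-handle mapping and the chunk-to-replica locations, and shows how a byte offset resolves to a chunk handle plus chunkserver locations.

```python
# Hedged sketch of the master-side metadata: namespace, file -> chunk handles,
# chunk handle -> replica locations. Replica selection is simplified.
import uuid

CHUNK_SIZE = 64 * 1024 * 1024          # 64 MB fixed-size chunks
REPLICATION = 3                        # default number of replicas

class GFSMasterSketch:
    def __init__(self, chunkservers):
        self.chunkservers = list(chunkservers)
        self.file_chunks = {}          # path -> [chunk_handle, ...]
        self.chunk_locations = {}      # chunk_handle -> [chunkserver, ...]

    def create(self, path):
        self.file_chunks[path] = []

    def add_chunk(self, path):
        handle = uuid.uuid4().int & (2**64 - 1)     # stand-in for a 64-bit chunk handle
        replicas = self.chunkservers[:REPLICATION]  # the real master picks replicas more carefully
        self.file_chunks[path].append(handle)
        self.chunk_locations[handle] = replicas
        return handle, replicas

    def lookup(self, path, offset):
        """Map a byte offset in a file to (chunk_handle, replica locations)."""
        index = offset // CHUNK_SIZE
        handle = self.file_chunks[path][index]
        return handle, self.chunk_locations[handle]
```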
  25. Write Control and Data Flow
     1. The client asks the master which chunkserver holds the current lease for the chunk and the locations of the other replicas. If no one holds a lease, the master grants one to a replica it chooses.
     2. The master replies with the identity of the primary and the locations of the other (secondary) replicas. The client caches this information.
     3. The client pushes the data to all replicas, in any order.
     4. Once all the replicas have acknowledged receiving the data, the client sends a write request to the primary. The primary assigns consecutive serial numbers to all the mutations it receives and applies each mutation to its own local state in serial-number order.
     5. The primary forwards the write request to the secondary replicas.
     6. The secondaries all reply to the primary indicating that they have completed the operation.
     7. The primary replies to the client.
     Error cases:
     - If the write failed at the primary, it would not have been assigned a serial number and forwarded.
     - The write may have succeeded at the primary and at an arbitrary subset of the secondary replicas.
     The client code handles such errors by retrying the failed mutation.
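A compact way to read the flow above is as client-side pseudocode. The sketch below follows the seven steps just listed; `master`, `primary`, and the `secondaries` are stand-in objects with invented method names, since the real protocol runs over RPC between separate machines.

```python
# Client-side sketch of the write flow; the called methods are hypothetical
# stand-ins for the RPCs exchanged between client, master, and chunkservers.
def gfs_write_sketch(master, path, chunk_index, data):
    # 1-2. Ask the master for the lease holder (primary) and the replica
    #      locations, then cache them for future mutations on this chunk.
    primary, secondaries = master.get_lease_holder(path, chunk_index)

    # 3. Push the data to all replicas, in any order; replicas buffer it.
    for replica in [primary] + secondaries:
        replica.push_data(data)

    # 4. Once every replica has acknowledged the data, send the write request
    #    to the primary, which assigns a serial number and applies it locally.
    serial = primary.apply_mutation(data)

    # 5-6. The primary forwards the request to the secondaries, which apply the
    #      mutation in the same serial-number order and acknowledge completion.
    acks = [s.apply_mutation(data, serial) for s in secondaries]

    # 7. The primary replies to the client; on any failure the client retries
    #    the mutation.
    if not all(acks):
        raise RuntimeError("mutation failed on a secondary; client should retry")
```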
  26. Hadoop Distributed File System (HDFS)
     - The Hadoop Distributed File System (HDFS) is an open source implementation of GFS
     - The NameNode is a master server that manages the file system namespace and regulates access to files by clients
     - DataNodes manage the storage attached to the nodes they run on
     - A file is split into one or more blocks, and these blocks are stored in a set of DataNodes
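The block-splitting idea can be shown with a short, purely illustrative calculation. The 64 MB block size and the round-robin DataNode choice below are simplifying assumptions (real HDFS placement is rack-aware and the block size is configurable).

```python
# Illustrative only: cut a file length into fixed-size blocks and assign each
# block to a set of DataNodes.
BLOCK_SIZE = 64 * 1024 * 1024          # 64 MB, assumed as the default here
REPLICATION = 3

def split_into_blocks(file_length, datanodes):
    blocks = []
    offset = 0
    index = 0
    while offset < file_length:
        length = min(BLOCK_SIZE, file_length - offset)
        # pick REPLICATION consecutive DataNodes, wrapping around the list
        targets = [datanodes[(index + r) % len(datanodes)] for r in range(REPLICATION)]
        blocks.append({"index": index, "offset": offset,
                       "length": length, "datanodes": targets})
        offset += length
        index += 1
    return blocks

# e.g. a 200 MB file on 5 DataNodes yields 4 blocks (three full, one of 8 MB)
print(split_into_blocks(200 * 1024 * 1024, ["dn1", "dn2", "dn3", "dn4", "dn5"]))
```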
  27. Taobao File System
     - Taobao File System (TFS) is a distributed file system optimized for the management of massive numbers of small files (about 1 MB), such as pictures and descriptions of commodities
     - Application/Client: accesses the name server and data servers through TFSClient
     - Name Server: stores metadata, monitors data servers through heartbeat messages, controls I/O balance, and keeps data location info such as <block id, data server>
     - Data Server: stores application data, load balancing, redundant backup
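A rough sketch of the name-server bookkeeping described above: a <block id, data server> map maintained from heartbeat reports. The class and method names are hypothetical and are not taken from the TFS code base.

```python
# Sketch of a name server tracking which data servers hold which blocks,
# using heartbeat timestamps to filter out dead servers.
import time

class TFSNameServerSketch:
    def __init__(self, heartbeat_timeout=30.0):
        self.block_map = {}            # block_id -> [data_server, ...]
        self.last_heartbeat = {}       # data_server -> timestamp of last heartbeat
        self.heartbeat_timeout = heartbeat_timeout

    def heartbeat(self, data_server, block_ids):
        """Data servers periodically report which blocks they hold."""
        self.last_heartbeat[data_server] = time.time()
        for block_id in block_ids:
            holders = self.block_map.setdefault(block_id, [])
            if data_server not in holders:
                holders.append(data_server)

    def locate(self, block_id):
        """Return live data servers holding the block; the client then reads
        the small file directly from one of them."""
        now = time.time()
        return [ds for ds in self.block_map.get(block_id, [])
                if now - self.last_heartbeat.get(ds, 0) < self.heartbeat_timeout]
```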
  28. GlusterFS
     GlusterFS is an open source, clustered file system capable of scaling to several petabytes and handling thousands of clients
     Fundamental shifts:
     - elimination of metadata synchronization and updates: for each individual operation, Gluster calculates metadata using universal algorithms
     - effective distribution of data: file distribution is intelligently handled using elastic hashing (see the sketch after the next slide)
     - highly parallel architecture: there is a far more intelligent relationship between available CPUs and spindles
  29. GlusterFS (cont.)
     Gluster offers multiple ways for users to access volumes in a Gluster storage cluster
     Gluster allows GlusterFS volumes to be configured in different scenarios:
     1) Distributed: distributes files throughout the cluster
     2) Distributed Replicated: replicates data across two or more nodes in the cluster
     3) Distributed Striped: stripes files across multiple nodes in the cluster
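As referenced above, here is a minimal sketch of metadata-free placement in the spirit of elastic hashing: the target brick is computed from the file path alone, so no lookup against a metadata server is needed. The MD5-modulo scheme below is a deliberate simplification of Gluster's actual algorithm, and in a Distributed Replicated volume the file would go to a replica set of bricks rather than a single one.

```python
# Simplified elastic-hash-style placement: hash the path, map it to a brick.
import hashlib

def pick_brick(path, bricks):
    digest = hashlib.md5(path.encode()).hexdigest()
    bucket = int(digest, 16) % len(bricks)     # each brick owns an equal hash range here
    return bricks[bucket]

bricks = ["server1:/export/brick0", "server2:/export/brick0", "server3:/export/brick0"]
for name in ["/photos/a.jpg", "/photos/b.jpg", "/docs/report.pdf"]:
    print(name, "->", pick_brick(name, bricks))
```

Because every client can compute the same answer independently, lookups scale with the number of clients instead of funneling through a central metadata service.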
  30. Sheepdog
     Sheepdog is a distributed storage system for QEMU/KVM
     - an Amazon EBS-like volume pool
     - highly scalable, available, and reliable
     - support for advanced volume management
     - not a general file system; the API is designed specifically for QEMU
     Cluster management:
     - zero configuration of cluster nodes
     - automatically detects added nodes
     - automatically detects removed nodes
  31. Sheepdog (cont.)
     - Volumes are divided into 4 MB objects; each object is identified by a globally unique 64-bit id and replicated to multiple nodes
     - Consistent hashing is used to decide which node stores each object. Each node is also placed on the ring, so the addition or removal of nodes does not significantly change the mapping of objects
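The consistent-hashing placement can be sketched as follows: nodes and object ids are hashed onto the same ring, and an object is stored on the first few distinct nodes found clockwise from its hash, so adding or removing a node only remaps objects in its neighborhood. The code illustrates the general technique, not Sheepdog's implementation; the hash function, replica count, and class names are assumptions.

```python
# Consistent-hashing ring sketch: place nodes and objects on one ring and walk
# clockwise from the object's position to find its replica nodes.
import bisect
import hashlib

def ring_hash(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        self.ring = sorted((ring_hash(n), n) for n in nodes)   # (position, node)

    def add_node(self, node):                  # only objects near the new node move
        bisect.insort(self.ring, (ring_hash(node), node))

    def remove_node(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def locate(self, object_id: int):
        """Return the nodes that should hold this object."""
        start = bisect.bisect(self.ring, (ring_hash(str(object_id)), ""))
        chosen = []
        for i in range(len(self.ring)):
            node = self.ring[(start + i) % len(self.ring)][1]
            if node not in chosen:
                chosen.append(node)
            if len(chosen) == self.replicas:
                break
        return chosen

ring = HashRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.locate(0x1234))
ring.add_node("node-e")                        # most objects keep their old placement
print(ring.locate(0x1234))
```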
  32. References
     [1] A. D. Luca and M. Bhide. Storage Virtualization for Dummies, Hitachi Data Systems Edition. Wiley Publishing, 2010.
     [2] S. Ghemawat, H. Gobioff, and S.-T. Leung. The Google File System. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP 2003), Bolton Landing, NY, USA, October 19-22, 2003.
     [3] R. MacManus. The Coming Data Explosion. Available: http://www.readwriteweb.com/archives/the_coming_data_explosion.php, 2010.
  33. References (cont.)
     [4] Intel white paper: Object-Based Storage: The Next Wave of Storage Technology and Devices, 2003.
     [5] M. Mesnier, G. R. Ganger, and E. Riedel. Object-based storage. IEEE Communications Magazine, August 2003, pp. 84-89.
     [6] Lustre. Available: http://wiki.lustre.org/index.php, 2010.
     [7] Panasas. Available: http://www.panasas.com/.
     [8] Hadoop. Available: http://hadoop.apache.org/.
     [9] TFS. Available: http://code.taobao.org/trac/tfs/wiki/intro.
     [10] GlusterFS. Available: http://www.gluster.org/.
  34. References (cont.)
     [11] Sheepdog. Available: http://www.osrg.net/sheepdog/.
     [12] Ceph. Available: http://ceph.newdream.net/.
     [13] S. A. Weil, S. A. Brandt, E. L. Miller, and D. D. E. Long. Ceph: A Scalable, High-Performance Distributed File System. In Proceedings of the 7th Symposium on Operating Systems Design and Implementation (OSDI '06), November 6-8, 2006, Seattle, WA, USA.
     [14] Gluster whitepaper: Gluster File System Architecture.
