How To Build A Scalable Storage System with OSS at TLUG Meeting 2008/09/13
Presentation Transcript

    • TLUG Meeting 2008/09/13 Gosuke Miyashita
    • My company
      • paperboy&co.
        • Web hosting, blog hosting, EC (e-commerce) hosting, and so on for individuals
        • About 1,000 Linux servers
        • Many single servers ...
    • My goals for a scalable storage system
      • Storage system for a web hosting service
        • High resource availability
        • Flexible I/O distribution
        • Easy to extend
        • Mountable by multiple hosts
        • No SPoF
        • With OSS
        • Without expensive hardware
      • I’m currently trying out technologies for these purposes
    • Technologies
      • cman
      • CLVM
      • GFS2
      • GNBD
      • DRBD
      • DM-MP
    • cman
      • Cluster Manager
      • A component of Red Hat Cluster Suite
      • Membership management
      • Messaging among cluster nodes
      • Needed for CLVM and GFS2
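      A minimal sketch of what the cman side might look like; the cluster name and
      node names here are made up, and a real deployment would also need fencing
      configured:

        # /etc/cluster/cluster.conf -- the same file on every node
        <?xml version="1.0"?>
        <cluster name="storage" config_version="1">
          <clusternodes>
            <clusternode name="node1" nodeid="1"/>
            <clusternode name="node2" nodeid="2"/>
            <clusternode name="node3" nodeid="3"/>
          </clusternodes>
        </cluster>

        # start the cluster manager on each node
        service cman start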
    • CLVM
      • Cluster Logical Volume Manager
      • Cluster-wide version of LVM2
      • Automatically shares LVM2 metadata among all cluster nodes
      • So logical volumes managed with CLVM are available to all cluster nodes
    • CLVM (diagram): clvmd distributes LVM2 metadata among the cluster nodes, so the logical volume on shared storage is presented to each cluster node
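      A rough sketch of how the clustered volume used later in the slides
      (/dev/VG0/LV0 on top of /dev/mapper/mpath0) might be created, assuming the
      lvm2-cluster tools of that era:

        lvmconf --enable-cluster               # switch LVM2 to cluster-wide locking
        service clvmd start                    # run on every cluster node
        pvcreate /dev/mapper/mpath0
        vgcreate -c y VG0 /dev/mapper/mpath0   # -c y marks the volume group as clustered
        lvcreate -n LV0 -l 100%FREE VG0        # becomes /dev/VG0/LV0 on all nodes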
    • GNBD
      • Global Network Block Device
      • Provides block-device access over TCP/IP
      • Similar to iSCSI
      • Advantage over iSCSI is built-in fencing
    • GNBD (diagram): a GNBD server exports a block device over a TCP/IP network to multiple GNBD clients
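      Roughly how an export and import might look; the host name and export name
      are made up, and the imported device should then show up as a gnbd device
      on the client:

        # on the GNBD server
        gnbd_serv                              # start the GNBD server daemon
        gnbd_export -d /dev/drbd0 -e export0   # export a local block device

        # on each client node
        gnbd_import -i gnbd-server1            # import the exports from that server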
    • GFS2
      • Global File System 2
      • One of several cluster-aware file systems
      • Multiple nodes can simultaneously access this filesystem
      • Uses cman’s DLM (Distributed Lock Manager) to maintain file system integrity
      • OCFS is another cluster-aware file system
    • GFS2 (diagram): multiple nodes, each running a GNBD client and cman, access the GFS2 file system on the GNBD server simultaneously
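      A sketch of creating and mounting the file system, assuming the cluster name
      "storage" from the cman sketch above and three nodes that will mount it:

        # one journal (-j) per node that will mount the file system
        mkfs.gfs2 -p lock_dlm -t storage:gfs0 -j 3 /dev/VG0/LV0

        # on every node, with cman and clvmd already running
        mount -t gfs2 /dev/VG0/LV0 /mnt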
    • DRBD
      • Distributed Replicated Block Device
      • RAID1 over a network
      • Mirrors a whole block device over TCP/IP
      • Active/active operation is possible with cluster file systems
    • DRBD (diagram): two servers, each with its own block device, replicating between each other
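      A sketch of a dual-primary resource in DRBD 8-style configuration; the host
      names, backing disks, and addresses are made up:

        # /etc/drbd.conf -- identical on both servers
        resource r0 {
          protocol C;
          net {
            allow-two-primaries;   # required for active/active under a cluster FS
          }
          on storage1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.0.11:7788;
            meta-disk internal;
          }
          on storage2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.0.12:7788;
            meta-disk internal;
          }
        }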
    • DM-MP
      • Device-Mapper Multipath
      • Bundles multiple I/O paths into one virtual I/O path
      • Can choose active/passive or active/active
    • DM-MP with SAN storage (diagram): /dev/sda1 and /dev/sdb1, reached through HBA1/HBA2, two SAN switches, and two storage controllers, are seen as one device, /dev/mapper/mpath0, in active/passive or active/active mode
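      A sketch of the multipath configuration where the active/passive vs.
      active/active choice is made; the WWID is a placeholder for the real ID of
      the LUN (see multipath -ll):

        # /etc/multipath.conf
        defaults {
          user_friendly_names yes
        }
        multipaths {
          multipath {
            wwid  <wwid-of-the-lun>           # replace with the real WWID
            alias mpath0
            path_grouping_policy multibus     # active/active; use "failover" for active/passive
          }
        }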
    • A scalable storage system (diagram): client nodes running cman and GNBD mount /dev/VG0/LV0 (CLVM); each backend pair is two GNBD servers (GFS2) replicated with DRBD, whose exports /dev/gnbd0 and /dev/gnbd1 are bundled into /dev/mapper/mpath0 with DM-MP (and /dev/gnbd2 and /dev/gnbd3 into /dev/mapper/mpath1)
    • How to extend (diagram): add another pair of GNBD servers (GFS2), bundle their exports /dev/gnbd4 and /dev/gnbd5 into /dev/mapper/mpath2 with DM-MP, and extend /dev/VG0/LV0 (CLVM) across the new device
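      A sketch of how the extension might look in terms of the commands above,
      assuming the new pair’s exports have already been bundled as
      /dev/mapper/mpath2:

        pvcreate /dev/mapper/mpath2
        vgextend VG0 /dev/mapper/mpath2      # add the new multipath device to the volume group
        lvextend -l +100%FREE /dev/VG0/LV0   # grow the clustered logical volume
        gfs2_grow /mnt                       # grow the mounted GFS2 file system online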
    • I wonder ...
      • Do the many components cause trouble?
      • How about overhead and performance?
      • How about stability?
      • Is there a better way?
      • How about distributions other than Red Hat Linux?