GlusterFS and OpenStack
Transcript

  • 1. GlusterFS & OpenStack – Vijay Bellur, Red Hat
  • 2. Agenda ➢ What is GlusterFS? ➢ Architecture ➢ Concepts ➢ Algorithms ➢ Access Mechanisms ➢ Implementation ➢ OpenStack and GlusterFS ➢ Resources ➢ Q&A
  • 3. GlusterFS Deployment ➢ Global namespace ➢ Scale-out storage building blocks ➢ Supports thousands of clients ➢ Access using GlusterFS native, NFS, SMB and HTTP protocols ➢ Runs on commodity hardware
  • 4. GlusterFS concepts
  • 5. GlusterFS concepts – Trusted Storage Pool ➢ A Trusted Storage Pool (cluster) is a collection of storage servers. ➢ The Trusted Storage Pool is formed by invitation – you “probe” a new member from the cluster, not vice versa. ➢ It is the logical partition for all data and management operations. ➢ Membership information is used for determining quorum. ➢ Members can be dynamically added to and removed from the pool.
  • 6. GlusterFS concepts – Trusted Storage Pool (diagram: Node1 probes Node2; once the probe is accepted, Node1 and Node2 are peers in a trusted storage pool)
  • 7. GlusterFS concepts – Trusted Storage Pool (diagram: Node3 is detached from the trusted storage pool formed by Node1, Node2 and Node3; see the peer commands below)
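Probing and detaching peers is done with the gluster CLI. A minimal sketch of the operations shown in the diagrams above (hostnames are illustrative):

    # On node1: invite node2 into the trusted storage pool
    gluster peer probe node2

    # Verify pool membership and peer state
    gluster peer status

    # Remove node3 from the pool (it must not host bricks of any volume)
    gluster peer detach node3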
  • 8. GlusterFS concepts – Bricks ➢ A brick is the combination of a node and an export directory, e.g. hostname:/dir ➢ Each brick inherits the limits of the underlying filesystem ➢ There is no limit on the number of bricks per node ➢ Ideally, each brick in a cluster should be of the same size (diagram: three storage nodes exporting 3, 5 and 3 bricks respectively, e.g. /export1 through /export5)
  • 9. GlusterFS concepts – Volumes ➢ A volume is a logical collection of bricks. ➢ A volume is identified by an administrator-provided name. ➢ A volume is a mountable entity, and the volume name is provided at the time of mounting. ➢ mount -t glusterfs server1:/<volname> /my/mnt/point ➢ Bricks from the same node can be part of different volumes
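Creating, starting and mounting a volume from the CLI; a minimal sketch (volume, host and brick names are illustrative):

    # Create a volume named "videos" from two bricks on different nodes
    gluster volume create videos node1:/export/brick1 node2:/export/brick1

    # A volume must be started before it can be mounted
    gluster volume start videos

    # Mount it on a client via the native protocol
    mount -t glusterfs node1:/videos /mnt/videos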
  • 10. GlusterFS concepts – Volumes (diagram: three nodes, each exporting /export/brick1 and /export/brick2; bricks across the nodes are grouped into two volumes, “videos” and “music”)
  • 11. Volume Types ➢ The type of a volume is specified at the time of volume creation ➢ The volume type determines how and where data is placed ➢ The following volume types are supported in GlusterFS: a) Distribute b) Stripe c) Replicate d) Distributed Replicate e) Striped Replicate f) Distributed Striped Replicate
  • 12. Distributed Volume ➢ Distributes files across the bricks of the volume. ➢ Directories are present on all bricks of the volume. ➢ A single brick failure results in loss of data availability. ➢ Removes the need for an external metadata server.
  • 13. How does a distributed volume work? ➢ Distribute works on the basis of a distributed hash table (DHT) algorithm. ➢ A 32-bit hash space is divided into N ranges for N bricks. ➢ At the time of directory creation, a range is assigned to each directory. ➢ During file creation or retrieval, a hash is computed on the file name; this hash value is used to place or locate the file. ➢ Different directories on the same brick end up with different hash ranges.
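The hash range assigned to a directory is recorded as an extended attribute on the corresponding directory of each brick. One way to inspect it on a storage node; a minimal sketch (the brick path is illustrative, and the exact attribute encoding varies by version):

    # Show the DHT layout (hash range) this brick holds for a directory
    getfattr -n trusted.glusterfs.dht -e hex /export/brick1/videos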
  • 14. How does a distributed volume work?
  • 15. How does a distributed volume work?
  • 16. How does a distributed volume work?
  • 17. Replicated Volume ➢ Creates synchronous copies of all directory and file updates. ➢ Provides high availability of data when node failures occur. ➢ Transaction driven for ensuring consistency. ➢ Changelogs are maintained for reconciliation. ➢ Any number of replicas can be configured.
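Creating a two-way replicated volume; a minimal sketch (names are illustrative):

    # Two synchronous replicas, one brick on each node
    gluster volume create repvol replica 2 node1:/export/brick1 node2:/export/brick1
    gluster volume start repvol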
  • 18. How does a replicated volume work?
  • 19. How does a replicated volume work?
  • 20. Distributed Replicated Volume ➢ Distributes files across replicated bricks ➢ The number of bricks must be a multiple of the replica count ➢ The ordering of bricks in the volume definition matters (see the sketch below) ➢ Scaling and high reliability ➢ Improved read performance in most environments ➢ Currently the most widely deployed model
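Brick ordering matters because consecutive bricks in the create command form a replica set. A minimal sketch with replica 2 and four bricks (names are illustrative):

    # (node1,node2) and (node3,node4) become the two replica pairs;
    # files are then distributed across the two pairs
    gluster volume create distrep replica 2 \
        node1:/export/brick1 node2:/export/brick1 \
        node3:/export/brick1 node4:/export/brick1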
  • 21. Distributed Replicated Volume
  • 22. Striped Volume ➢ Files are striped into chunks and placed in various bricks. ➢ Recommended only when very large files, greater than the size of the disks, are present. ➢ Chunks are files with holes – this helps maintain offset consistency. ➢ A brick failure can result in data loss; redundancy with replication is highly recommended.
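Creating a striped volume; a minimal sketch (names are illustrative):

    # Stripe file chunks across two bricks; best reserved for very large
    # files, and combined with replication for redundancy
    gluster volume create stripevol stripe 2 node1:/export/brick1 node2:/export/brick1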
  • 23. Translators in GlusterFS ➢ Building blocks of a GlusterFS process. ➢ Based on translators in GNU Hurd. ➢ Each translator is a functional unit. ➢ Translators can be stacked together to achieve the desired functionality. ➢ Translators are deployment agnostic – they can be loaded in either the client or server stacks.
  • 24. Customizable GlusterFS Client/Server Stack (diagram: on the client, VFS feeds a Gluster client stack of Read Ahead, I/O Cache, Distribute/Stripe and Replicate translators; transport over GigE/10GigE TCP/IP or InfiniBand RDMA; on each Gluster server, a Server translator sits above POSIX and an ext4-backed brick, Brick 1 through Brick n)
  • 25. Dynamic Volume Management – collectively refers to application-transparent operations that can be performed in the storage layer: ➢ Addition of bricks to a volume ➢ Removal of bricks from a volume ➢ Rebalancing of the data spread within a volume ➢ Replacement of a brick in a volume ➢ Adaptive performance/functionality tuning
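Each of these operations maps to a gluster CLI verb; a minimal sketch (exact syntax varies somewhat across GlusterFS versions):

    # Grow a volume, then rebalance existing data onto the new brick
    gluster volume add-brick myvol node3:/export/brick1
    gluster volume rebalance myvol start

    # Shrink a volume; "start" migrates data off the departing brick
    gluster volume remove-brick myvol node3:/export/brick1 start

    # Replace a brick with a new one
    gluster volume replace-brick myvol node3:/export/brick1 node4:/export/brick1 commit force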
  • 26. Access Mechanisms – Gluster volumes can be accessed via the following mechanisms: ➢ FUSE ➢ NFS ➢ SMB ➢ libgfapi ➢ ReST ➢ HDFS
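Besides the native FUSE mount shown earlier, GlusterFS's built-in NFS server exports every started volume over NFSv3; a minimal sketch:

    # Gluster's NFS server speaks NFSv3 over TCP
    mount -t nfs -o vers=3,tcp node1:/videos /mnt/videos-nfs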
  • 27. ReST-based access
  • 28. UFO (diagram: a client issues an HTTP ReST request to the proxy; the Swift account, container and object map to a GlusterFS volume, directory and file respectively, and the same data is also reachable by another client over an NFS or GlusterFS mount) – unified file and object view; entity mapping between file and object building blocks.
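With UFO, ordinary Swift API calls operate directly on files in the volume; a minimal sketch (endpoint, account name and token are illustrative):

    # Upload a file as an object; it lands as a regular file in the
    # "photos" directory of the GlusterFS volume backing this account
    curl -X PUT -T vacation.jpg \
        -H "X-Auth-Token: $TOKEN" \
        http://proxy:8080/v1/AUTH_myvol/photos/vacation.jpg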
  • 29. Interface Possibilities (diagram: file, block and object interfaces – qemu, NFS, SMB, Hadoop, FUSE, Cinder, Swift (UFO) – layered over libgfapi, with IP and RDMA transports and file, BD and DB back ends)
  • 30. OpenStack and GlusterFS – Current Integration (diagram: logical and physical views showing separate pools of KVM compute nodes and GlusterFS storage servers, with Glance images, Cinder data and Swift objects on the storage pool, and Swift objects served through the Swift API) ● Separate compute and storage pools ● GlusterFS directly provides the Swift object service ● Integration with Keystone ● GeoReplication for multi-site support ● Swift data also available via other protocols ● Supports non-OpenStack use in addition to OpenStack use
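Cinder integration of this era used the GlusterFS volume driver; a minimal sketch of the relevant configuration (file paths and the volume name are illustrative):

    # /etc/cinder/cinder.conf – back Cinder volumes with GlusterFS
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares

    # /etc/cinder/glusterfs_shares – one GlusterFS volume per line
    node1:/cinder-volumes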
  • 31. OpenStack and GlusterFS – Future Direction (diagram: Nova compute nodes, each host running a Gluster guest alongside Hadoop and other guests)
  • 32. OpenStack and GlusterFS – Future Direction ● PoC based on the proposed OpenStack FaaS (File as a Service) API ● Cinder-like virtual NAS service ● Tenant-specific file shares ● Hypervisor mediated for security ● Avoids exposing servers to the Quantum tenant network ● Optional multi-site or multi-zone GeoReplication ● FaaS data optionally available to non-OpenStack nodes ● Initial focus on Linux guests ● Windows (SMB) and NFS shares also under consideration
  • 33. OpenStack and GlusterFS – Summary ● Can address storage needs across the stack: ● Block ● Object ● File ● Hadoop/Big Data
  • 34. Resources Mailing lists: gluster-users@gluster.org gluster-devel@nongnu.org IRC: #gluster and #gluster-dev on freenode Links: http://www.gluster.org http://hekafs.org http://forge.gluster.org http://www.gluster.org/community/documentation/index.php/Arch
  • 35. Questions? Thank You!