The Future of GlusterFS and Gluster.org

These are the slides from a webinar I did today (Jan 26, 2012). It's all about where the GlusterFS project stands today and where it's going.

Upload Details

Uploaded via SlideShare as an OpenOffice presentation

Usage Rights

CC Attribution License

Comments
  • Although it's a commonplace that GlusterFS storage capacity scales up to petabytes (a million 'GBs' / a quadrillion bytes**) technically speaking, its upper limit is much higher than that: about 72 brontobytes.

    This is approximately one byte for every single cell in the bodies of every single person in the United States.

    (**quadrillion short-scale, not long-scale)
    Presentation Transcript

    • The Future of GlusterFS and Gluster.org John Mark Walker GlusterFS Community Guy Red Hat, Inc. January 25, 2012
    • The Roots of GlusterFS
        • Distributed storage solutions were difficult to find
        • The founders decided to write their own
        • No filesystem experts – Pro & Con
        • Applied lessons from microkernel architecture
          • GNU Hurd
    • The Roots of GlusterFS
      • All storage solutions were either
        • Too expensive, or…
        • Not scalable, or…
        • Single purpose, or…
        • Didn't support legacy apps, or…
        • Didn't support new apps, or…
        • Did some combination of the above, but not very well
    • The Roots of GlusterFS
      • The challenge:
        • Create a storage system that was…
          • Scalable
          • Seamlessly integrated in the data center
          • Future-proof
      • The solution: GlusterFS
        • Scalable, with DHT
        • POSIX-compliant
        • Stackable
        • User-space
    • GlusterFS Client Architecture
      • Creating a file system in user space
        • Utilizes the FUSE kernel module
          • The kernel routes file operations through FUSE, which hands off to glusterd in user space
      [Diagram: Applications → Linux kernel → FUSE / Ext4 → glusterd]
    • No Centralized Metadata
      [Diagram: Clients A, B, and C talk directly to Servers X, Y, and Z; each server holds files plus extended attributes, with no metadata server in between]
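The "no centralized metadata" design follows from hashing: every client can compute a file's location independently, so there is nothing to look up. A minimal sketch of the idea in Python (GlusterFS's real DHT translator assigns hash ranges per directory via extended attributes; the brick names below are illustrative):

```python
import hashlib

def pick_brick(filename, bricks):
    """Toy hash-based placement: every client hashes the file name the
    same way, so all clients agree on the brick without consulting any
    central metadata server."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return bricks[int(digest, 16) % len(bricks)]

bricks = ["serverX:/export", "serverY:/export", "serverZ:/export"]

# Every client derives the same location independently:
placement = {name: pick_brick(name, bricks) for name in ["a.txt", "b.txt", "c.txt"]}
print(placement)
```

The real translator is more sophisticated (hash ranges survive brick additions via rebalancing), but the core property is the same: placement is a pure function of the name.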
    • What is a Translator?
      • Add/remove layers
      • Reorder layers
      • Move layers between client and server
      • Implement new layers
        • e.g. encryption
      • Replace old layers
        • e.g. replication
      [Diagram: translator stack, top to bottom — FUSE Interface Layer, Performance Layer, Distribution Layer, Replication Layer, Protocol Layer, Local Filesystem Layer]
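The translator stack is expressed in GlusterFS "volfiles", where each volume section names a translator type and the subvolumes it stacks on. A hand-written fragment for illustration (volume names, hosts, and paths are made up; recent releases generate these files automatically from the CLI):

```
volume demo-client-0
    type protocol/client
    option remote-host serverX
    option remote-subvolume /export/brick
end-volume

volume demo-client-1
    type protocol/client
    option remote-host serverY
    option remote-subvolume /export/brick
end-volume

# Replication translator stacked over the two protocol translators
volume demo-replicate-0
    type cluster/replicate
    subvolumes demo-client-0 demo-client-1
end-volume

# Distribution (DHT) translator on top
volume demo-dht
    type cluster/distribute
    subvolumes demo-replicate-0
end-volume
```

Because each layer only names its subvolumes, adding, removing, or reordering translators is an edit to this graph rather than a code change.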
    • Some Features
      • Distributed, replicated and/or striped volumes
      • Global namespace
      • High availability
      • Geo-replication
      • Rebalancing
      • Remove or replace bricks
      • Self healing
      • volume profile and top metrics
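Most of these features are driven from the gluster CLI. A sketch, assuming two servers with bricks at the paths shown (all names are illustrative, and the commands need a running cluster, so they are not runnable here):

```shell
# Distributed-replicated volume: four bricks, 2-way replication
gluster volume create demo replica 2 \
    serverX:/export/brick1 serverY:/export/brick1 \
    serverX:/export/brick2 serverY:/export/brick2
gluster volume start demo

# The "volume profile and top metrics" feature above
gluster volume profile demo start
gluster volume profile demo info
gluster volume top demo read
```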
    • No one ever expects the Red Hat acquisition
    • Red Hat Invests in GlusterFS
      • Unstructured data volume to grow 44x by 2020
      • Cloud and virtualization are driving scale-out storage growth
      • Scale-out storage shipments to exceed 63,000 PB by 2015 (74% CAGR)
      • 40% of core cloud spend related to storage
      • GlusterFS-based solutions up to 50% less than other storage systems
    • Red Hat Invests in GlusterFS
      • GlusterFS adds to the Red Hat stack
        • Complements other Red Hat offerings
        • Many integration points
      • More engineers hacking on GlusterFS than ever before
      [Diagram: GlusterFS Unified Storage spanning RHEL, RHEV, JBoss, bare metal, and clouds]
    • Red Hat Invests in GlusterFS
      • Acceleration of community investment
        • GlusterFS needs to be “bigger than Red Hat”
        • Transformation of GlusterFS from product to project
          • From “open core” to upstream
        • More resources for engineering and community outreach
        • Red Hat's success rests on economies of scale
          • Critical mass of users and developers
    • Join a Winning Team
      • We're hiring hackers and engineers
      • Looking for community collaborators
        • ISVs, students, IT professionals, fans, et al.
      “Join me, and together, we can rule the galaxy...”
    • The Immediate Future
    • The Gluster Community
      • 300,000+ downloads
        • ~35,000/month
        • >300% increase Y/Y
      • 1000+ deployments
        • 45 countries
      • 2,000+ registered users
        • Mailing lists, Forums, etc.
      Global adoption
    • The Gluster Community
      • Why are we changing?
        • Only 1 non-Red Hat core contributor
          • There were two, but Red Hat acquired us
        • Want to be the software standard for distributed storage
        • Want to be more inclusive, more community-driven
      Goal: create global ecosystem that supports ISVs, service providers and more
    • Towards “Real” Open Source
      • GlusterFS, prior to acquisition
        • “Open Core”
        • Tied directly to Gluster products
          • No differentiation
        • Very little outside collaboration
        • Contributors had to assign copyright to Gluster
          • Discouraged would-be contributors
    • Towards “Real” Open Source: “Open Core”
        • All engineering controlled by project/product sponsor
        • No innovation outside of core engineering team
        • All open source features also in commercial product
        • Many features in the commercial product not in the open source code
      [Diagram: open source code as a subset of the commercial product]
    • Towards “Real” Open Source: “Real” Open Source
        • Many points of collaboration and innovation in open source project
        • Engineering team from multiple sources
        • Project and product do not completely overlap
        • Commercial products are hardened, more secure and thoroughly tested
      [Diagram: multiple commercial products drawn from a larger open source code base]
    • Towards “Real” Open Source: “Real” Open Source
        • Enables more innovation on the fringes
        • Engineering team from multiple sources
        • Open source project is “upstream” from commercial product
        • “ Downstream” products are hardened, more secure and thoroughly tested
      [Example: Fedora Linux → RHEL]
    • Towards “Real” Open Source: “Real” Open Source
        • Enables more innovation on the fringes
        • Engineering team from multiple sources
        • Open source project is “upstream” from commercial product
        • “ Downstream” products are hardened, more secure and thoroughly tested
      [Example: GlusterFS → Red Hat Storage]
    • Project Roadmaps
    • What's New in GlusterFS 3.3 (ETA Q2/Q3 2012)
      • New features
        • Unified File & Object access
        • Hadoop / HDFS compatibility
      • New Volume Type
        • Replicated + striped (+ distributed) volumes
      • Enhancements to Distributed volumes (DHT translator)
        • Rebalance can migrate open files
        • Remove-brick can migrate data to remaining bricks
      • Enhancements to Replicated volumes (AFR translator)
        • Change replica count on an active volume, add replication to distribute-only volumes
        • Granular locking – Much faster self-healing for large files
        • Proactive self-heal process starts without FS stat
        • Round-trip reduction for lower latency
        • Quorum enforcement - avoid split brain scenarios
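A sketch of how the remove-brick and replica-count enhancements surface in the CLI, assuming a volume named demo (syntax as of the 3.3 betas; not runnable without a cluster):

```shell
# Drain a brick's data onto the remaining bricks before removing it
gluster volume remove-brick demo serverY:/export/brick2 start
gluster volume remove-brick demo serverY:/export/brick2 status
gluster volume remove-brick demo serverY:/export/brick2 commit

# Add replication to a live distribute-only volume by raising the replica count
gluster volume add-brick demo replica 2 serverZ:/export/brick1
```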
    • File and Object Storage
      • Traditional SAN / NAS support either file or block storage
      • New storage methodologies implement RESTful APIs over HTTP
      • Demand for unifying the storage infrastructure increasing
      • Treats files as objects and volumes as buckets
      • Available now in 3.3 betas
      • Soon to be backported to 3.2.x
      • Contributing to OpenStack project
        • Re-factored Swift API
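Unified File & Object exposes a Swift-compatible REST API over HTTP, so the same data is reachable both ways. A hedged curl sketch (the host, port, token, and volume name are placeholders):

```shell
# PUT an object into a "bucket" (a directory) on the volume
curl -X PUT -H "X-Auth-Token: $TOKEN" \
     -T hello.txt \
     http://gluster-host:8080/v1/AUTH_demo/mybucket/hello.txt

# The same object is then an ordinary file on a mounted client, e.g.:
#   /mnt/demo/mybucket/hello.txt
```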
    • Technology Integrations
      • GlusterFS used as a VM storage system
        • Pause and restart VMs, even on another hypervisor
        • HA and DR for VMs
        • Faster VM deployment
        • vMotion-like capability
      Shared storage ISOs and appliances
        • oVirt / RHEV
        • CloudStack
        • OpenStack
      Goal: become the standard for cloud storage
      [Diagram: OpenStack stack — Imaging Services, Unified File & Object Storage, Compute, API Layer — serving mobile apps, web clients, and enterprise software]
    • HDFS/Hadoop Compatibility
      • HDFS compatibility library
        • Simultaneous file and object access within Hadoop
      • Benefits
        • Legacy app access to MapReduce applications
        • Enables data storage consolidation
      • Simplify and unify storage deployments
      • Provide users with file level access to data
      • Enable legacy applications to access data via NFS
        • Analytic apps can access data without modification
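With the compatibility library, Hadoop is pointed at GlusterFS through its standard FileSystem plugin mechanism. A core-site.xml fragment for illustration (the property names follow the glusterfs-hadoop plugin and may vary by version; the server name is a placeholder):

```xml
<!-- core-site.xml: route Hadoop's FileSystem API to GlusterFS -->
<property>
  <name>fs.glusterfs.impl</name>
  <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>glusterfs://serverX:9000</value>
</property>
```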
    • The Gluster Community
      • What is changing?
        • HekaFS / CloudFS being folded into Gluster project
          • HekaFS == GlusterFS + multi-tenancy and SSL for auth and data encryption
          • HekaFS.org
          • ETA ~9 months
    • What else?
    • GlusterFS Advisory Board
      • Advisory board
        • Consists of industry and community leaders from Facebook, Citrix, Fedora, and OpenStack
          • Richard Wareing, Storage Engineer, Facebook
          • Jeff Darcy, Filesystem Engineer, Red Hat; Founder, HekaFS Project
          • AB Periasamy, Co-Founder, GlusterFS project
          • Ewan Mellor, Xen Engineer, Citrix; Member, OpenStack project
          • David Nalley, CloudStack Community Mgr; Fedora Advisory Board
          • Louis Zuckerman, Sr. System Administrator, Picture Marketing
          • Joe Julian, Sr. System Administrator, Ed Wyse Beauty Products
          • Greg DeKoenigsberg, Community VP, Eucalyptus; co-founder, Fedora
          • John Mark Walker, Gluster.org Community Guy (Chair)
    • Gluster.org Web Site
      • Services for users and developers
        • Developer section with comprehensive docs
        • Collaborative project hosting
        • Continuing development of end user documentation and interactive tools
      • Published roadmaps
        • Transparent feature development
    • GlusterFS Downloads
      • Where's the code?
        • GlusterFS 3.3
          • Simultaneous file + object
          • HDFS compatibility
          • Improved self-healing + VM hosting
            • Granular locking
          • Beta 3 due Feb/Mar 2012
          • http://download.gluster.org/pub/gluster/glusterfs
    • Gluster.org Services
      • Gluster.org
        • Portal into all things GlusterFS
      • Community.gluster.org
        • Self-support site; Q&A; HOWTOs; tutorials
      • Patch review, CI
        • review.gluster.com
      • #gluster
        • IRC channel on Freenode
    • Development Process
      • Source code
        • Hosted at github.com/gluster
      • Bugs and Feature Requests
        • Bugzilla.redhat.com – select GlusterFS from menu
      • Patches
        • Submit via Gerrit at review.gluster.com
      • See Development Work Flow doc:
        • gluster.org/community/documentation/index.php/Development_Work_Flow
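In practice the Gerrit flow looks roughly like this (standard Gerrit conventions; the SSH port and remote path are assumptions, so treat the Development Work Flow doc as authoritative):

```shell
git clone git://github.com/gluster/glusterfs.git
cd glusterfs
git checkout -b my-fix
# ...edit, then commit; Gerrit's commit-msg hook adds a Change-Id footer...
git commit -a -s -m "description of the fix"
git push ssh://username@review.gluster.com:29418/glusterfs HEAD:refs/for/master
```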
    • Thank You
      • GlusterFS contacts
        • Gluster.org/interact/mailinglists
        • @RedHatStorage & @GlusterOrg
        • #gluster on Freenode
      • My contact info
        • [email_address]
        • Twitter & identi.ca: @johnmark