In this webinar, leading storage analyst firm Storage Strategies NOW will discuss the findings from its comprehensive outlook report on the state of the cloud storage market and the storage services layered on top of it. We will review: the definition of cloud storage, requirements, deployment, the market and its trends, APIs, cloud computing initiatives, best practices, and infrastructure providers. Tom Trainer, Director of Product Marketing at Gluster, will provide an overview of Gluster's storage products along with case studies demonstrating the strategic deployment of Gluster storage in both the public and private cloud.
Webinar Sept 22: Gluster Partners with Redapt to Deliver Scale-Out NAS Storage (GlusterFS)
Gluster has partnered with Redapt, Inc., an innovative data center architecture and infrastructure solutions provider, to integrate GlusterFS with hardware, providing customers with highly scalable NAS storage technology for on-premise, virtual, and cloud environments. Gluster's storage technology enables Redapt to offer a comprehensive, cost-effective storage solution delivering the scalability, performance, and reliability that companies need to run their data centers effectively.
This webinar will provide an overview of the partnership and the benefits of the joint solution, and include use cases showing how customers are deploying it today.
This Introduction to GlusterFS webinar provides an introduction and review of the GlusterFS architecture and key functionality. Learn how GlusterFS is deployed in the datacenter, in the cloud, or between the two. We'll also cover a brief update on GlusterFS v3.3, which is currently in beta.
Gluster Webinar: Introduction to GlusterFS (GlusterFS)
GlusterFS is an open source, scale-out network filesystem. It runs on commodity hardware and allows indefinite growth in capacity and performance by simply adding server nodes. Key benefits include flexibility to deploy on any hardware, linearly scalable performance, and superior storage economics compared to traditional storage solutions. GlusterFS uses a distributed hashing technique instead of a metadata server to provide high availability and reliability.
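The metadata-server-free placement described above can be illustrated with a small sketch. This is a toy model only: the brick names and the MD5-based mapping are illustrative assumptions, not GlusterFS's actual algorithm, which assigns ranges of a 32-bit hash space to bricks via directory extended attributes.

```python
import hashlib

def pick_brick(path, bricks):
    """Toy model of hash-based placement: every client hashes the
    file path and maps it onto a brick, so no central metadata
    server needs to be consulted to locate a file."""
    h = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return bricks[h % len(bricks)]

# Hypothetical brick list for illustration
bricks = ["server1:/export/brick1",
          "server2:/export/brick2",
          "server3:/export/brick3"]

# Any client computes the same location independently and
# deterministically from the path alone.
location = pick_brick("/videos/cat.mp4", bricks)
```

Because placement is a pure function of the path, there is no single metadata node to fail or to become a scaling bottleneck, which is the property the abstract credits for GlusterFS's availability and reliability.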
Award-winning scale-up and scale-out storage for Xen (GlusterFS)
This webinar discusses the Gluster Virtual Storage Appliance for Xen, which packages GlusterFS in a virtual machine container optimized for ease of use, with little to no configuration required. The Virtual Appliance integrates seamlessly with existing virtualization environments such as Citrix Xen, allowing you to deploy virtual storage the same way you deploy virtual machines. Deploy on premise to create a private cloud using any certified Xen server hardware platform and certified storage: JBOD, DAS, or SAN.
Ben Golub gives insight into the latest storage trends, including EMC's recent acquisition of Isilon.
http://blog.gluster.com/2010/11/storage-is-sexy-again/
Gluster Webinar: Introduction to GlusterFS v3.3 (GlusterFS)
Looking for a high performance, scale-out NAS file system? Or are you a new user of GlusterFS and want to learn more? This webinar includes an introduction and review of the GlusterFS architecture and key features. Learn how GlusterFS is deployed in the datacenter, in the cloud, or between the two. We’ll also cover a brief update on GlusterFS v3.3 which is currently in beta.
On the agenda:
*Brief intro to Gluster’s History
*Gluster Architecture Design Goals
*Key Technical Differentiators
*Gluster Elastic Hashing Algorithm
*Deployment scenarios
*Use Cases
Introduction to GlusterFS Webinar - September 2011 (GlusterFS)
Looking for a high performance, scale-out NAS file system? Or are you a new user of GlusterFS and want to learn more? This educational monthly webinar provides an introduction and review of the GlusterFS architecture and key functionalities. Learn how GlusterFS is deployed in the datacenter, in the cloud, or between the two.
GlusterFS Architecture - June 30, 2011 Meetup (GlusterFS)
The document discusses GlusterFS, an open source distributed file system. It provides details about GlusterFS architecture, which uses a userspace filesystem design running on top of FUSE. It also summarizes GlusterFS capabilities like elastic scaling, high availability through replication, and support for various volume types including distribute, replicate, and stripe. Benchmark results show GlusterFS achieving high performance with 64 servers and 220 clients connected over InfiniBand.
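The volume types mentioned above (distribute, replicate, stripe) are selected at volume-creation time. The commands below are an illustrative sketch of the 2011-era gluster CLI; the server and brick paths are placeholders, not values from the original document.

```shell
# Distribute: files are spread across bricks by hash (the default)
gluster volume create dist-vol server1:/export/brick1 server2:/export/brick2

# Replicate: each file is mirrored on 2 bricks for high availability
gluster volume create repl-vol replica 2 server1:/export/brick1 server2:/export/brick2

# Stripe: large files are chunked across 2 bricks
gluster volume create stripe-vol stripe 2 server1:/export/brick1 server2:/export/brick2

# A volume must be started before clients can mount it
gluster volume start dist-vol
```

These modes can also be combined (e.g. distributed-replicate) by supplying more bricks than the replica count, which is how large clusters such as the 64-server benchmark setup are typically laid out.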
Red Hat Storage - Introduction to GlusterFS (GlusterFS)
Red Hat Storage introduces GlusterFS, an open source scale-out file system. GlusterFS provides scalable, affordable storage using commodity hardware. It allows linearly scaling performance and capacity by adding servers. GlusterFS has a global namespace and supports various protocols, enabling flexible deployment across private and public clouds. Many enterprises rely on GlusterFS for applications, virtual machines, Hadoop, and hybrid cloud solutions.
This document describes Petascale Cloud Filesystem, a distributed file system designed by Gluster for large-scale cloud storage. It discusses Gluster's architecture advantages like being software-only, fully distributed with no single point of failure, and able to elastically scale out storage. The document also provides examples of Gluster deployments at organizations like Partners Healthcare, Pandora, and Cincinnati Bell Technology Solutions to provide centralized storage services and support private and public cloud environments.
Gluster Webinar May 25: What's New in GlusterFS 3.2 (GlusterFS)
This webinar provides an overview of the latest features introduced in GlusterFS 3.2 including Asynchronous Geo-Replication, Usage Quotas, and Advanced Monitoring Tools.
Hadoop clusters can be provisioned quickly and easily on virtual infrastructure using techniques like linked clones and thin provisioning. This allows Hadoop to leverage capabilities of virtualization like high availability, resource controls, and re-using spare resources. Shared storage like SAN is useful for VM images and metadata, while local disks provide scalable bandwidth for HDFS data. Virtualizing Hadoop simplifies operations and enables flexible, on-demand provisioning of Hadoop clusters.
1) Running Hadoop on VMs provides advantages like easier cluster management, ability to consolidate clusters on spare resources, and more elastic scaling of clusters.
2) Separating Hadoop compute and data nodes into different VMs allows truly elastic scaling of clusters.
3) Hortonworks is working with VMware to provide first class support for running Hadoop on VMs, including high availability features and optimizations for performance.
The document discusses virtualizing Hadoop clusters on VMware vSphere. It describes how Hadoop enables parallel processing of large datasets across clusters using MapReduce. Virtualizing Hadoop provides benefits like simple operations, high availability, and elastic scaling. The document outlines challenges with using Hadoop and how virtualization addresses them. It provides examples of deploying Hadoop clusters on Serengeti and configuring different distributions. Performance results show little overhead from virtualization and benefits of local storage. Joint engineering with Hortonworks adds high availability to Hadoop master daemons using vSphere features.
Virtual machines are a mainstay in the enterprise, yet Apache Hadoop is normally run on bare metal. This talk walks through the convergence of the two and the use of virtual machines for running Apache Hadoop. We describe results from various tests and benchmarks showing that the overhead of using VMs is small, a small price to pay for the advantages virtualization offers. The second half of the talk compares multi-tenancy with VMs versus multi-tenancy with Hadoop's Capacity Scheduler. We follow with a comparison of resource management in vSphere and the finer-grained resource management and scheduling in NextGen MapReduce, which supports a general notion of a container (such as a process, JVM, or virtual machine) in which tasks run. We compare the role of such first-class VM support in Hadoop.
IBM Spectrum Scale 4.2.3 provides comprehensive security capabilities, including:
1) Secure data at rest through encryption and secure deletion capabilities as well as support for NIST algorithms.
2) Secure data in transit with support for Kerberos, SSL/TLS, and configurable security levels for cluster communication.
3) Role-based access control and support for directory services like Active Directory for authentication and authorization.
4) Secure administration through SSH/TLS for commands and REST APIs, role-based access in the GUI, and limited admin nodes.
5) Additional features like file and object access control lists, firewall support, immutability mode for compliance, and audit logging.
Introduction to IBM Spectrum Scale and Its Use in Life Science (Sandeep Patil)
IBM Spectrum Scale is a scalable file system that can be used to support life science research. It provides high scalability, high availability, and a software read cache called Local Read Only Cache (LROC) that uses SSDs to improve performance. The University of Basel uses Spectrum Scale in their scientific computing and storage infrastructure to support various research areas including bioinformatics, structural biology, and hosting reference services. It provides features such as cluster file systems, data migration, hierarchical storage management, encryption, and disaster recovery between two sites using asynchronous file migration.
This document discusses using HBase for geo-based content processing at NAVTEQ. It outlines the problems with their previous system, such as ineffective scaling and high Oracle licensing costs. Their solution was to implement HBase on Hadoop for horizontal scalability and flexible rules-based processing. Some challenges included unstable early versions of HBase, database design issues, and interfacing batch systems with real-time systems. Cloudera support helped address many technical issues and provide best practices for operating Hadoop and HBase at scale.
Architecting Virtualized Infrastructure for Big Data (Richard McDougall)
This document discusses architecting virtualized infrastructure for big data. It notes that data is growing exponentially and that the value of data now exceeds hardware costs. It advocates using virtualization to simplify and optimize big data infrastructure, enabling flexible provisioning of workloads like Hadoop, SQL, and NoSQL clusters on a unified analytics cloud platform. This platform leverages both shared and local storage to optimize performance while reducing costs.
Hadoop World 2011: HDFS Federation - Suresh Srinivas, Hortonworks (Cloudera, Inc.)
Scalability of the NameNode has been a key issue for HDFS clusters. Because the entire file system metadata is stored in memory on a single NameNode, and all metadata operations are processed on this single system, the NameNode both limits the growth in size of the cluster and makes the NameService a bottleneck for the MapReduce framework as demand increases. This presentation will describe the features and implementation of HDFS Federation scheduled for release with Hadoop-0.23.
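Federation relieves the bottleneck described above by letting several independent NameNodes each serve a slice of the namespace over a shared pool of DataNodes. The fragment below is a hedged sketch of the relevant hdfs-site.xml settings; the nameservice IDs and hostnames are placeholders.

```xml
<!-- hdfs-site.xml: two independent NameNodes, each owning part of
     the namespace; DataNodes register with both. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>namenode2.example.com:8020</value>
  </property>
</configuration>
```

Because each NameNode holds only its own slice of the metadata in memory, total namespace capacity and metadata throughput scale with the number of NameNodes rather than being capped by a single machine.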
Spectrum Scale - Diversified analytic solution based on various storage servi... (Wei Gong)
These slides describe diversified analytic solutions based on Spectrum Scale with various deployment modes, such as storage-rich server, shared storage, IBM DeepFlash 150, and Elastic Storage Server. They take a deep dive into several advanced data management features and solutions for BD&A workloads derived from Spectrum Scale.
This document provides an overview and summary of key concepts around virtualization that will be covered in more depth at a technical deep dive session, including:
- Virtualization capabilities for desktops/laptops and servers including workstation virtualization and server consolidation.
- How virtual machines work and the overhead associated with virtualization.
- Properties of virtualization like partitioning, isolation, and encapsulation.
- Benefits of server virtualization like consolidation, simpler management, and automated resource pooling.
- Comparison of "hosted" and vSphere virtualization architectures.
- Technologies used in virtualization like binary translation, hardware assistance from Intel VT/AMD-V.
- Ability to virtualize CPU intensive applications with
Spectrum Scale Unified File and Object with WAN Caching (Sandeep Patil)
This document provides an overview of IBM Spectrum Scale's Active File Management (AFM) capabilities and use cases. AFM uses a home-and-cache model to cache data from a home site at local clusters for low-latency access. It expands GPFS' global namespace across geographical distances and provides automated namespace management. The document discusses AFM caching basics, global sharing, use cases like content distribution and disaster recovery. It also provides details on Spectrum Scale's protocol support, unified file and object access, using AFM with object storage, and configuration.
OpenStorage refers to open source storage software that allows for disaggregated hardware components from different vendors. NexentaStor is a leading OpenStorage solution that runs on standard hardware and provides file and block access using protocols like NFS, CIFS, and iSCSI. It offers storage efficiency features like deduplication, compression, and thin provisioning. NexentaStor can also be used with OpenStack Nova to provision volumes and attach them to virtual machines. Nexenta has contributed code to OpenStack Swift to leverage NexentaStor's self-healing capabilities for object storage. OpenStorage is growing with adoption in cloud computing and more integration with projects like OpenStack.
ApacheCon Europe 2012: Elastic, Multi-tenant Hadoop on Demand (Richard McDougall)
Elastic, Multi-tenant Hadoop on Demand! Richard McDougall, Chief Architect, Application Infrastructure and Big Data, VMware, Inc. (@richardmcdougll), ApacheCon Europe, 2012. The talk broadens the application of Hadoop technology with horizontal and vertical use cases. Hadoop enables highly parallel data processing through the MapReduce programming framework and the Hadoop Distributed File System (HDFS) for distributed data storage. Serengeti automates deployment of Hadoop on virtual platforms in under 30 minutes for multi-tenant, elastic Hadoop as a service.
Big Data and virtualization are two of the most exciting trends in the industry today. In this session you will learn about the components of Big Data systems, and how real-time, interactive, and distributed processing systems like Hadoop integrate with existing applications and databases. The combination of Big Data systems with virtualization gives Hadoop and other Big Data technologies the key benefits of cloud computing: elasticity, multi-tenancy, and high availability. A new open source project that VMware will announce at the Hadoop Summit will make it easy to deploy, configure, and manage Hadoop on a virtualized infrastructure. We will discuss reference architectures for key Hadoop distributions and discuss future directions of this new open source project.
Introducing IBM Spectrum Scale 4.2 and Elastic Storage Server 3.5 (Doug O'Flaherty)
The document discusses IBM Spectrum Scale, a software-defined storage product. It provides a unified file and object storage system with integrated analytics support. New features in versions 4.2 and 3.5 include reducing costs through compression and quality of service policies, accelerating analytics with native HDFS support, and simplifying deployment with new graphical user interfaces.
This document discusses storage as a service and how OpenStack provides storage options. It describes Storage as a Service and the types of storage available in OpenStack, including ephemeral, object, block, and file storage. It provides overviews of the OpenStack Object Storage (Swift) and Block Storage (Cinder) projects and how they work. The document also discusses filesystem storage in OpenStack using protocols like NFS and CIFS.
The Google File System is a scalable distributed file system designed to meet the rapidly growing data storage needs of Google. It provides fault tolerance on inexpensive commodity hardware and high aggregate performance to large numbers of clients. Key aspects of its design include handling frequent component failures as the norm, managing huge files up to multiple gigabytes in size containing many objects, optimizing for file appending and sequential reads of appended data, and co-designing the file system interface to increase flexibility for applications. The largest deployment to date includes over 1,000 storage nodes providing hundreds of terabytes of storage.
This document discusses private cloud storage solutions as an alternative to public cloud services like Dropbox. It introduces ownCloud, an open source file sync and sharing solution that can be deployed on a company's private cloud infrastructure using OpenShift and Red Hat Storage. This provides secure access to files while giving users the same easy experience as consumer file sync services. The document provides an overview of the key components and demonstrates how ownCloud could be deployed on OpenShift along with MySQL and PHP to provide a private, self-hosted file sharing and sync solution.
The Google File System is a scalable distributed file system designed to meet the rapidly growing data storage needs of Google. It provides fault tolerance on inexpensive commodity hardware and high aggregate performance to large numbers of clients. The key design drivers were the assumptions that components often fail, files are huge, writes are append-only, and concurrent appending is important. The system has a single master that manages metadata and assigns chunks to chunkservers, which store replicated file chunks. Clients communicate directly with chunkservers to read and write large, sequentially accessed files in chunks of 64MB.
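Given the fixed 64 MB chunk size described above, a GFS client can compute which chunk holds any byte offset with simple arithmetic before asking the master for that chunk's location. A minimal sketch of the idea (illustrative, not Google's code):

```python
CHUNK_SIZE = 64 * 1024 * 1024  # GFS uses a fixed 64 MB chunk size

def chunk_index(offset: int) -> tuple:
    """Translate a byte offset within a file into (chunk index, offset in chunk).

    The client sends the chunk index to the master, which replies with the
    chunk handle and the chunkservers holding its replicas; the client then
    talks to the chunkservers directly.
    """
    return offset // CHUNK_SIZE, offset % CHUNK_SIZE
```

Because the mapping is pure arithmetic, clients can cache master replies and compute chunk indexes locally for subsequent reads.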
Liberate Your Files with a Private Cloud Storage Solution powered by Open Source – Isaac Christoffersen
Many of today's enterprises are working under a false assumption that there is a trade-off between consumer-centric file sharing and corporate IT policy compliance. This is because most market-leading SaaS solutions for file sync and share are not designed around enterprise IT's needs. They represent growing risks with vendor lock-in, data security, compliance and data ownership.
With a track record of delivering innovative open source solutions, Vizuri has an answer to help enterprises overcome these hurdles. By leveraging Red Hat and ownCloud open source solutions, corporate IT can provide a simple-to-use file sync and share solution for employees. As a result, organizations retain greater control over valuable intellectual property.
Integrating On-premises Enterprise Storage Workloads with AWS (ENT301) | AWS ... – Amazon Web Services
AWS gives designers of enterprise storage systems a completely new set of options. Aimed at enterprise storage specialists and managers of cloud-integration teams, this session gives you the tools and perspective to confidently integrate your storage workloads with AWS. We show working use cases, a thorough TCO model, and detailed customer blueprints. Throughout we analyze how data-tiering options measure up to the design criteria that matter most: performance, efficiency, cost, security, and integration.
This document discusses Red Hat Storage, an open, unified, and extensible scale-out network attached storage software solution. Red Hat Storage provides a highly flexible and scalable platform to address challenges around growing unstructured data volumes. It delivers global data accessibility, support for virtualized and cloud environments, and cost efficiency through standardization and use of commodity hardware.
Red Hat Storage Server Roadmap & Integration With OpenStack – Red_Hat_Storage
"Red Hat Storage Server is an open, software-defined storage product for private, public, and hybrid cloud environments, based on the open source GlusterFS project, a distributed scale-out file system technology.
In this session, you’ll:
Hear about the near- and medium-term Red Hat Storage Server roadmap.
Get deep insight into its integration roadmap with Red Hat Enterprise Linux OpenStack Platform and its feature roadmap for running big data analytics workloads.
Have an opportunity to share your perspectives with senior business and technical leaders from the Red Hat Storage team to help shape the future of Red Hat Storage Server."
Cloud Computing: Making the right choice – IndicThreads
Session Presented @IndicThreads Cloud Computing Conference, Pune, India ( http://u10.indicthreads.com )
------------
The concept of cloud computing is quickly crossing the chasm between hype and reality. Cloud computing is rapidly becoming popular amongst enterprises that realize the benefits of shared infrastructure, lowered costs and minimal management overheads. But not all organizations and applications may benefit from a cloud computing platform. A legacy application ported in a native fashion to a cloud computing platform may not utilize any of the platform’s USPs at all. More importantly, the wrong choice of platform can be disastrous. Deciding the optimal cloud vendor or platform for your requirements is a complex task.
Consider the plethora of choices available in the world of cloud computing:
* Public Cloud or Private Cloud or Hybrid Cloud
* Infrastructure-as-a-Service (IaaS): Amazon AWS, Rackspace Cloud, GoGrid, Terremark
* Platform-as-a-Service (PaaS): Google AppEngine, Microsoft Azure, Heroku
* Software-as-a-Service (SaaS): Salesforce, Netsuite, Google Apps, saas.com
* Should you use IaaS, PaaS or SaaS for your application?
* Which cloud database fits your application? SimpleDB, SQL, RDS, Hadoop?
We will discuss the various business and technology factors to consider, while choosing a cloud vendor. We will explore the pros and cons of various cloud vendors and their offerings. Lastly, we will also discuss some real-life use-cases of applications and servers being migrated to cloud computing and what factors led to selection of a particular cloud vendor.
Takeaways from the session
This talk would serve as an introduction to a wide variety of cloud computing platforms. The audience would be able to answer questions like: “What options are available for cloud computing?”, “What are their pros and cons?”, “Should I consider migrating my application or server to the cloud?”, “Should I use IaaS, PaaS or SaaS?”, “Which is the best cloud vendor for my use-case?”
VMworld 2013
Chris Greer, FedEx
Richard McDougall, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The document provides guidance on cloud architecture best practices for architects. It discusses 7 key lessons: 1) design for failure and nothing fails, 2) loose coupling sets you free, 3) implement elasticity, 4) build security in every layer, 5) don't fear constraints, 6) think parallel, and 7) leverage many storage options. The document uses examples of moving a web architecture to AWS to illustrate applying these lessons around scalability, availability and resilience.
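The first lesson above, "design for failure and nothing fails," is commonly realized as retries with exponential backoff, so a transiently failing dependency is not hammered into a cascading outage. A minimal sketch (illustrative Python, not from the talk):

```python
import random
import time

def call_with_backoff(op, max_tries=5, base_delay=0.1):
    """Retry a flaky operation, doubling the wait (with jitter) after each
    failure so a degraded dependency gets breathing room to recover."""
    for attempt in range(max_tries):
        try:
            return op()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

The jitter (multiplying by a random factor) keeps many clients from retrying in lockstep after a shared outage.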
Beyond Mission Critical: Virtualizing Big Data and Hadoop – Chiou-Nan Chen
Virtualizing big data platforms like Hadoop provides organizations with agility, elasticity, and operational simplicity. It allows clusters to be quickly provisioned on demand, workloads to be independently scaled, and mixed workloads to be consolidated on shared infrastructure. This reduces costs while improving resource utilization for emerging big data use cases across many industries.
Hadoop has traditionally been an on-premises workload, with very few notable implementations in the cloud. With organizations having either jumped on the cloud bandwagon or started planning their expansion into the ecosystem, it is imperative to explore how Hadoop conforms to the cloud paradigm. With the coming of age of some very useful cloud paradigms, and given the highly seasonal workloads typical of Big Data, this is becoming a very common ask from customers. Robust architectures, elastic scale, open platforms, OSS integrations, and addressing complex pain points will all be part of this lively talk. To implement effective solutions for Big Data in the cloud, it is imperative that you understand the core principles and grasp the design principles of how the cloud can enhance the benefits of parallelized analytics. Join this session to understand the nitty-gritty of implementing Big Data in the cloud and the various options therein. Big Data + cloud is definitely a killer combination.
Examining Technical Best Practices for Veritas and AWS Using a Detailed Refer... – Veritas Technologies LLC
This document provides an overview and best practices for using Veritas solutions with AWS. It discusses common use cases and challenges with workload protection and data management in multi-cloud environments. It then outlines best practices for data movement and long-term retention in AWS using Veritas Access, as well as best practices for workload resiliency and migration to AWS using Veritas Resiliency Platform and CloudMobility. The presentation concludes with a discussion of advisory, training, and managed services available from Veritas to help with cloud adoption and migration.
The document discusses Grid computing and the Globus Toolkit. It provides an overview of Grid computing, describing it as the sharing of computer resources and coordinated problem solving across multiple institutions. It then summarizes the Globus Toolkit, describing it as open source software that provides basic components for Grid functionality, including security, execution management, data management, and monitoring. The Globus Toolkit aims to make it easier to build collaborative distributed applications that can exploit shared Grid infrastructure.
Maybe your business has outgrown its file server and you’re thinking of replacing it. Or perhaps your server is dated and not supporting your business like it should, so you’re considering moving to the cloud. It might be that you’re starting a new business and wondering if an in-house server is adequate or if you should adopt cloud technology from the start.
Regardless of why you’re debating an in-house server versus a cloud-based server, it’s a tough decision that will impact your business on a daily basis. We know there’s a lot to think about, and we’re here to help show why you should consolidate your file servers and move your data to the cloud.
In this webinar with Talon Storage Solutions, we covered:
-Challenges of using a physical file server
-Benefits of using a cloud file server
-Current State of the File Server market
-Reference Architecture examples for cloud file servers
-Demo: how to architect a cloud file server with highly-available storage
Learn more at https://www.softnas.com
This document summarizes VMware's Cloud Application Platform and its components. It discusses how VMware focuses on re-thinking end-user computing, modernizing application development, and evolving core infrastructure. It also outlines how vFabric helps build, run, and scale applications in the cloud through frameworks, services, and infrastructure components. Finally, it introduces Cloud Foundry as a platform as a service for deploying and scaling applications in the cloud era.
2. Today’s Speakers
John Kreisa – Vice President, Marketing, Gluster, Inc.
Tom Trainer – Director, Product Marketing, Gluster, Inc.
Deni Connor – Founding Analyst & Business Development Consultant, Storage Strategies NOW
James Bagley – Senior Analyst, Storage Strategies NOW
A Better Way To Do Storage 2
3. Poll Question
Are you using cloud storage today?
– Yes, in a test environment
– Yes, it’s deployed in a production environment
– No, however we are considering it
– Just researching
4. Cloud Storage: Adoption, Practice and Deployment
SSN Report Co-sponsored by
– The Storage Networking Industry Association
– Gluster
– Storage Strategies NOW
133 Respondents
Q1 2011
5. What is Cloud Storage and Cloud Computing?
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Source: www.nist.gov
6. Cloud Storage Attributes
Resource pooling and multi-tenancy
Scalable and elastic
Accessible via standard Internet APIs and communications protocols
Service-based
Pricing is normally based on usage
Shared and collaborative
On-demand self-service
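Of the attributes above, usage-based pricing is the most concrete: a monthly bill is a simple function of consumption. An illustrative sketch (the rate structure and names are made up for the example, not any provider's actual price sheet):

```python
def monthly_storage_bill(gb_months: float, price_per_gb_month: float,
                         requests: int, price_per_10k_requests: float) -> float:
    """Usage-based pricing: pay only for the capacity consumed over the
    month plus the API requests made, rather than for pre-bought hardware."""
    capacity_cost = gb_months * price_per_gb_month
    request_cost = (requests / 10_000) * price_per_10k_requests
    return capacity_cost + request_cost
```

For example, 100 GB-months at $0.10/GB-month plus 20,000 requests at $0.05 per 10,000 requests comes to $10.10.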
7. How is Cloud Storage Being Deployed?
Public cloud
Private cloud
Hybrid cloud
Community cloud
Cooperative Cloud
8. Cloud Storage Requirements
Multi-tenancy
Security and secure transmission channel
Data verification and audit trails
Performance and Quality of Service
Data protection and Availability
Retention, compliance, eDiscovery
Manageability
Data and Metadata Portability
Metering and Billing
17. Cloud Takeaways
Adoption of cloud storage
– Flowing down from C-level to IT operations
– 75% of CIOs/CTOs expect to deploy
– 54% of IT Operations expect to deploy
Most likely data to store in the cloud
1. E-mail
2. Backup data
BC/DR driving cloud storage
– Business Continuity and Disaster Recovery
Modifications to existing applications
19. What is the Gluster File System?
A scale-out file system for
– Network Attached Storage (NAS)
– Object storage – In Beta Now: GlusterFS 3.3
– Highly available storage
– Predictable, linearly scalable performance
GlusterFS provides
– Unified file and object storage
– Flexibility to deploy in ANY environment
– Scalability to Petabytes & beyond
– Superior storage economics
20. Gluster 3.3 Unified Access to Files & Objects
Windows Access
– Improves Windows performance
– Uses HTTP, not slower CIFS
Object Storage
– API
– Internet Protocol (IP)
– RESTful
– Get/Put
– Buckets
– Objects seen as files
Network Attached Storage (NAS)
– NFS / CIFS / GlusterFS
– POSIX compliant
– Access files within objects
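The "objects seen as files" point above can be sketched as a path mapping: an object written via a REST PUT to a bucket shows up as an ordinary file at the corresponding path under the volume mount. This is an illustrative mapping under that assumption, not Gluster's actual code:

```python
from pathlib import PurePosixPath

def object_as_file(mount_point: str, bucket: str, object_name: str) -> str:
    """Map a REST object (bucket + object name) to the file path where the
    same data would be visible over NFS/CIFS in a unified file/object store."""
    return str(PurePosixPath(mount_point) / bucket / object_name)
```

So a PUT of /media/songs/track.mp3 and a read of /mnt/gluster/media/songs/track.mp3 over NFS would touch the same bytes, which is what unified file and object access means in practice.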
21. Key Differentiators
Filesystem runs in user space
Software only
Unified file and object storage
Open source
Modular, stackable storage OS architecture
Data stored in native formats
No metadata server – elastic hashing
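The elastic-hashing differentiator means any client can locate a file by hashing its name rather than asking a central metadata server. A minimal sketch of that idea (GlusterFS actually assigns hash ranges to bricks via directory extended attributes and uses its own hash function, so the modulo mapping here is only illustrative):

```python
import hashlib

def locate_brick(filename: str, bricks: list) -> str:
    """Deterministically map a file name to a storage brick by hashing.

    Every client computes the same answer from the name alone, so no
    central metadata lookup is needed on the read/write path.
    """
    digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
    return bricks[int(digest, 16) % len(bricks)]
```

Because the placement is a pure function of the name, there is no metadata server to become a bottleneck or a single point of failure.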
22. Open Source
200,000+ downloads
Global adoption – ~12,000 downloads/month
500+ registered deployments
– 45 countries
2,500+ registered users
– Mailing lists, Forums, etc.
Active community
– Diverse testing environments
– Bug identification and fixes
– Code contributions
Member of broader ecosystem
– OpenStack, Linux Foundation, Open
Virtualization Alliance
23. A Standard Gluster Deployment
[Diagram] Standard clients running standard apps connect over any standard IP network and access application data (files & folders, VMs, VMDKs) in the Gluster Global Namespace, using a variety of standard protocols (NFS, CIFS, Gluster Native). The data is stored in a commoditized, virtualized, scale-out, centrally managed virtual storage pool (DAS, SAN, NAS, Object).
24. Unifying Public and Private Cloud Storage
[Diagram] Clients/apps connect over an IP network to a single Gluster Global Namespace that spans a private cloud and a public cloud, with replication between the two.
25. 4 Supported Ways to Consume GlusterFS
Virtual Machines
– GlusterFS deployable on the leading virtual machines
Amazon Web Services (AWS)
– GlusterFS deployed within Amazon Machine Image (AMI)
RightScale Cloud Management
– GlusterFS is available within a RightScale ServerTemplate
– Deployable via the RightScale Cloud Management Dashboard
Storage software appliance
– Deployable on bare metal; supports any hardware on the Red Hat Hardware Compatibility List (HCL) of certified servers and storage
26. Pandora Internet Radio
• 1.2 PB of audio served per week
• 13 million files
• Over 50 GB/sec peak traffic
Problem
• Explosive user & title growth
• As many as 12 file formats for each song
• ‘Hot’ content and long tail
Solution
• Three data centers, each with a six-node GlusterFS cluster
• Replication for high availability
• 250+ TB total capacity
Benefits
• Easily scale capacity
• Centralized management; one administrator to manage day-to-day operations
• No changes to application
• Higher reliability
27. Brightcove
• Over 1 PB currently in Gluster
• Separate 4 PB project in the works
Problem
• Cloud-based online video platform
• Explosive customer & title growth
• Massive video in multiple locations
• Costs rising, esp. with HD formats
Solution
• Complete scale-out based on commodity DAS/JBOD
• Replication for high availability
• 1 PB total capacity
Benefits
• Easily scale capacity
• Centralized management; one administrator to manage day-to-day operations
• Higher reliability
• Path to multi-site
28. Cincinnati Bell Technology Solutions
• Large scale VM storage
• Low cost service delivery for enterprise customer
• Drastic reduction in provisioning time
Problem
• Host a dedicated enterprise cloud solution
• Large scale VMware environment
• Need high availability
Solution
• Gluster for VM storage, NFS to clients
• SAS drives on back-end
• Replication for high availability
Benefits
• Storage provisioning from 6 wks to 15 min.
• Vendor agnostic storage
• Low cost of service delivery
• Elastic growth
30. Summary
Cloud Storage: Adoption, Practice and Deployment
GlusterFS – A scale-out NAS and Object file system
Flexible, scalable storage for any cloud environment
Innovative architecture provides a better way to do storage
31. Questions and Answers
Your turn - ask our experts
Start a trial: http://www.gluster.com/trybuy/
Additional resources: http://www.gluster.com/products/resources/
Join the community: http://www.gluster.org/
Follow on twitter: @gluster
Read our blog: http://blog.gluster.com/