Deep Dive into OpenStack Storage, Sean Cohen, Red Hat
 

I invite you to come and listen to my presentation about how OpenStack and Gluster integrate in both Cinder and Swift.
I will give a brief description of the OpenStack storage components (Cinder, Swift, and Glance), followed by an intro to Gluster, and then present the integration points and some preferred topologies and configurations between Gluster and OpenStack.

Presentation Transcript

    • Deep Dive into Red Hat Enterprise Linux OpenStack Storage
      Sean Cohen, Sr. Product Manager, Red Hat
      Dec 9, 2013
    • The Red Hat Way
      ● Red Hat's business model is 100% open source. We have no alternative commercial solutions, and we never will.
    • From Community to Enterprise
      OpenStack (upstream community)
      ● Open source, community-developed (upstream) software
      ● Founded by Rackspace Hosting and NASA
      ● Managed by the OpenStack Foundation
      ● Vibrant group of developers collaborating on open source cloud infrastructure
      ● Software distributed under the Apache 2.0 license
      ● No certifications, no support
      RDO (community distribution)
      ● Latest OpenStack software, packaged in a managed open source community
      ● Facilitated by Red Hat
      ● Aimed at architects and developers who want to create, test, collaborate
      ● Freely available, not for sale
      ● Six-month release cadence mirroring community
      ● No certification, no support
      ● Installs on Red Hat and derivatives
      Red Hat Enterprise Linux OpenStack Platform (enterprise)
      ● Enterprise-hardened OpenStack software
      ● Delivered with an enterprise life cycle
      ● Six-month release cadence offset from community releases to allow testing
      ● Aimed at long-term production deployments
      ● Certified hardware and software through the Red Hat OpenStack Cloud Infrastructure Partner Network
      ● Supported by Red Hat
    • Red Hat Continues to Be a Top Contributor in OpenStack Havana
      ● Projects led by Red Hat
    • What's New in Havana Storage
    • Cinder
    • Block Storage - Cinder
      Encrypted Volumes
      ● Cinder volumes can now be encrypted (see the sketch after this slide)
      ● Data is encrypted and decrypted as needed at read/write time
      ● The process is transparent to guest instances
      ● Encryption is done by Nova using dm-crypt; Cinder is made aware of the encryption keys
      QEMU-Assisted Snapshotting
      ● Provides snapshotting of volumes on backends that store data as QCOW2 files on these volumes
      ● With Nova support, this can also enable quiescing via the QEMU guest agent
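      A hedged sketch of wiring an encrypted volume type from the client, assuming the encryption-type extension shipped with Havana-era python-cinderclient; the type name, cipher, and key size are illustrative choices, not values from the slide:
        # cinder type-create LUKS
        # cinder encryption-type-create LUKS nova.volume.encryptors.luks.LuksEncryptor \
            --cipher aes-xts-plain64 --key_size 512 --control_location front-end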
    • Block Storage - Cinder
      Centralized Mount Options
      ● When connecting to NFS- or GlusterFS-backed volumes, mount options are now taken from Cinder
      ● Previously these had to be set on every Compute node
      Extend Volume
      ● Adds support for extending the size of an existing volume
      ● To resize a volume, you must first detach it from the server
      ● Resize the volume by passing the volume ID and the new size as parameters (using the new cinder extend command), as in the example below
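      For illustration, a minimal session growing a volume to 20 GB (the server and volume IDs are placeholders):
        # nova volume-detach <server-id> <volume-id>
        # cinder extend <volume-id> 20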
    • Block Storage - Cinder
      QoS Support for Volumes
      ● Works across Block Storage drivers to guarantee application performance (IOPS / bandwidth), with settings such as the following (see the client sketch after this slide):
        ● Maximum MB/second (maxBWS)
        ● Maximum IO/second (maxIOPS)
      Volume Host Attaching
      ● Allows a client to attach a volume to a host via the API, not just to an instance: the attach_volume API now accepts 'host_name' as an argument, rather than only 'instance_uuid'
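      A sketch of driving QoS from the client, assuming the Havana qos-* subcommands; the spec keys shown (maxIOPS, maxBWS) are the driver-specific examples from the slide, and the spec name is a placeholder:
        # cinder qos-create gold consumer=back-end maxIOPS=1000 maxBWS=200
        # cinder qos-associate <qos-spec-id> <volume-type-id>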
    • Block Storage - Cinder
      Transfer Ownership of Volumes
      ● Adds support for transferring Cinder volumes from one tenant or project to another
      ● Since both projects can't use the volume at the same time, you create a transfer from one tenant and then accept it from the other:
        # cinder transfer-create <volume_id>                # Tenant A
        # cinder transfer-accept <transfer_id> <auth_key>   # Tenant B
    • Block Storage - Cinder
      Volume Migration
      ● Administrators can migrate a volume to another host, or to an entirely different backend, like so:
        # cinder migrate <volume-id> <target>
      ● Checks whether the storage can migrate the volume itself; if not, creates a new volume
      ● If the original volume is detached, the Cinder server attaches both and runs 'dd'
      ● If the original volume is attached, Nova performs the copy (KVM-only in Havana)
      Hot Swap Attached Volumes
      ● Transparently swap volumes attached to an instance
      ● No reads or writes are lost or discarded
    • Block Storage - Cinder
      Extended Quotas
      ● Quotas are operational limits. For example, the number of gigabytes allowed for each tenant can be controlled so that cloud resources are optimized.
      ● Quotas can be enforced at both the tenant (or project) level and the tenant-user level
      ● Default quota settings can now be edited, e.g. to update a particular quota value so that system capacities are not exhausted without notification
      ● Uses the quota class named 'default' as the default editable quotas
      ● cinderclient command to update a default quota:
        # cinder quota-class-update default <key> <value>
    • Block Storage - Cinder
      Cinder Backup
      ● Starting with the Havana release, users can use an alternative object store to Swift
      ● Backup service improvements to Object Storage, so any driver can take advantage
      ● The generalized backup layer enables backups from any iSCSI device that doesn't have internal optimizations
      ● Added a Ceph driver to the backup service (allowing Ceph as a backup target, with differential backups from Ceph to Ceph)
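      A minimal cinder.conf sketch for the Ceph backup target (the pool and user names are placeholders):
        backup_driver = cinder.backup.drivers.ceph
        backup_ceph_conf = /etc/ceph/ceph.conf
        backup_ceph_user = cinder-backup
        backup_ceph_pool = backups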
    • Block Storage - Cinder
      Scheduler Hints
      Filter Scheduler: Example Flow
      ● Drivers continuously report capabilities and state
      ● Scheduler starts with a list of all back-ends
      ● Filters out unsuitable back-ends:
        ● Insufficient free space
        ● Insufficient capabilities
      ● Sorts the remainder according to weights (e.g., available space)
      ● Returns the best candidate
    • Block Storage - Cinder
      Scheduler Hints
      ● A flexible hint mechanism was introduced in the cinderclient code and the Cinder API, enhancing users' ability to design filters and interact with them
      ● Chooses the back-end to place a new volume on
      ● Configurable scheduler plugins:
        ● Chance
        ● Simple
        ● Filter
      ● The filter scheduler is the most common, with pluggable filters and weights, selected as sketched below
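      A cinder.conf sketch selecting the filter scheduler; the filter and weigher lists shown are the usual defaults, not additions from the slide:
        scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
        scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
        scheduler_default_weighers = CapacityWeigher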
    • Block Storage - Cinder
      iSER Transport Protocol Support
      ● iSCSI over RDMA increases performance compared to iSCSI over TCP (up to 5x higher bandwidth with lower CPU overhead); driven by Mellanox in Havana
      Support for Raw Disks Without LVM
      ● In addition to, or instead of, the base LVM implementation, libvirt uses the local storage as storage for the instance
      ● The instance gets a new disk, usually a /dev/vdX disk
      Rate-Limited Disk Access
      ● QoS parameters extracted from Cinder
      ● Allows rate limiting per volume
      ● Can be enforced by Nova (KVM-only in Havana) or by the storage
    • Block Storage
      ● Added native GlusterFS support
        ● If qemu_allowed_storage_drivers is set to glusterfs in nova.conf, QEMU is configured to access the volume directly using libgfapi instead of via FUSE (see the configuration sketch after this list)
      ● Added support for the following Gluster volume features:
        ● Volume snapshots (QEMU-assisted): create, delete, list
        ● Create volume from snapshot
        ● Volume clones
        ● Extend GlusterFS volume
        ● Volume migration (host-assisted)
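      A minimal configuration sketch for the native GlusterFS path (the host and share names are placeholders):
        # cinder.conf
        volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
        glusterfs_shares_config = /etc/cinder/glusterfs_shares

        # /etc/cinder/glusterfs_shares
        gluster-host:/cinder-volumes

        # nova.conf -- lets QEMU open the volume via libgfapi
        qemu_allowed_storage_drivers = glusterfs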
    • Block Storage
      New Vendor Drivers
      ● Dell EqualLogic volume driver
      ● VMware VMDK Cinder driver
      ● IBM General Parallel File System (GPFS)
      ● Microsoft Windows Storage Server driver
      Major Additions to Existing Drivers
      ● Added an NFS volume driver to support Nexenta storage in Cinder
      ● Added Fibre Channel drivers for Huawei storage systems
      Backup Drivers
      ● Allow Ceph as an option for volume backup
      ● IBM Tivoli Storage Manager (TSM)
    • Block Storage
      New Vendor Certifications in Havana
      ● The following vendors with OpenStack storage drivers are part of our Partner Network, and we are currently working with them to test and certify their products on RHEL OSP 4.0:
        ● Coraid, Dell, EMC, Hitachi, IBM, Inktank, Mellanox, NetApp, SolidFire, Zadara and many more...
      ● Vendors can submit their certification results for review once the GA bits are available
    • Glance Deep Dive
    • Image Service - Glance
      Glance Multi-Locations
      ● Glance now supports adding and removing multiple location entries in an image's metadata; an image may have more than one location within the backend store (see the sketch after this slide)
      ● Enables the image domain object to fetch data from multiple locations, and allows API clients to consume an image from multiple backend stores
      Glance Registry Service Deprecation
      ● Implements a Registry database driver for the registry service, in order to support legacy deployments based on two separate services
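      A hedged sketch of appending a location through the Images v2 JSON-patch API; the endpoint, token, and location URL are placeholders, and the exact patch media type may vary by deployment:
        # curl -X PATCH http://<glance-host>:9292/v2/images/<image-id> \
            -H "X-Auth-Token: <token>" \
            -H "Content-Type: application/openstack-images-v2.1-json-patch" \
            -d '[{"op": "add", "path": "/locations/-",
                  "value": {"url": "rbd://<pool>/<image>", "metadata": {}}}]'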
    • Image Service - Glance
      Total Disk Quota for Glance Users
      ● Added the ability to limit the usage of some basic image-related resources, such as:
        ● The number of images stored
        ● The amount of storage occupied by a set of images
      Direct URL Metadata
      ● Each storage system has a means to return direct-URL-specific metadata to the client when direct_url is enabled, so the direct URL can now carry additional information
      ● For example, with a file:// URL the client may need to know the NFS host that is exporting it, the mount point, and the filesystem type used
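      A glance-api.conf sketch of the related options (the quota value, in bytes, is an arbitrary example):
        # Return backend location details to clients
        show_image_direct_url = True
        # Cap the total storage a user's images may occupy
        user_storage_quota = 10737418240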
    • Swift Deep Dive
    • Object Storage - Swift
      Global Clusters
      ● A globally distributed OpenStack Swift cluster, with replication across the world
      ● A globally replicated cluster is created by deploying storage nodes in each region. Proxy nodes have an affinity to a region and can optimistically write to storage nodes based on the storage nodes' region.
      ● Local reads/writes for performance
      Tiered Zones
      ● Added a region tier above zones. This allows the existing "unique-as-possible" placement strategy to continue to work across a distributed cluster, and ensures that data is as protected from failures as possible. See the ring sketch below.
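      A sketch of placing devices in two regions with the ring builder (the IPs, port, device names, and weights are placeholders); the leading r<N> selects the region tier above the zone:
        # swift-ring-builder object.builder add r1z1-10.1.0.10:6000/sda1 100
        # swift-ring-builder object.builder add r2z1-10.2.0.10:6000/sda1 100
        # swift-ring-builder object.builder rebalance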
    • Object Storage - Swift
      Proxy Affinity (Writes)
      ● In a multi-region scenario, writes are sent to <replica count> servers in the same region as the proxy. This keeps write latency down and allows WAN traffic to be more strictly controlled, e.g. through a separate replication network. See the configuration sketch after this slide.
      Dedicated Replication Network Support
      ● Added support for using a dedicated network for replication traffic, separating it from the client-bound traffic between proxy servers and storage servers and improving replication performance
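      A proxy-server.conf sketch of region affinity (the region numbers and priorities are placeholders):
        [app:proxy-server]
        sorting_method = affinity
        # Prefer reads from region 1, then region 2
        read_affinity = r1=100, r2=200
        # Send writes to nodes in the proxy's region first
        write_affinity = r1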
    • Object Storage - Swift
      Cluster-Wide crossdomain.xml File
      ● Useful for Flash and cross-domain JavaScript:
        <?xml version="1.0"?>
        <cross-domain-policy>
          <allow-access-from domain="*.mirantis.net" />
          <allow-access-from domain="*.mirantis.com" />
        </cross-domain-policy>
      Configuration Directory
      ● Allows a single configuration object to be sourced from multiple files (either via swift.utils.readconf or paste.deploy.appconfig)
    • Object Storage - Swift
      Thread Pools
      ● Use external real threads to allow actual concurrent reads on multiple disks, ensuring that a single slow disk won't end up with all the threads stuck waiting for it
      Performance Improvements
      ● Optimized storage disk operations
      ● Memcache pool of connections (to prevent the connection count from growing without bound)
      ● Faster handoff node selection (replicate to handoffs first)
      ● Cluster-wide crossdomain.xml file to better enable Flash apps reading content directly from a Swift cluster
      ● Configuration directory (conf.d) support to better manage configurations
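      A sketch of the related knobs (the values and file names are illustrative): a per-disk thread pool in the object server, and a conf.d-style configuration directory whose files are merged in sorted order:
        # object-server.conf
        [app:object-server]
        threads_per_disk = 4

        # Configuration directory layout
        /etc/swift/proxy-server.conf.d/00-base.conf
        /etc/swift/proxy-server.conf.d/10-pipeline.conf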
    • Icehouse Storage Roadmap Highlights
    • Features in the Works for Icehouse: Cinder
      ● Volume replication
      ● Multi-attach
      ● Volume retype
      ● ACLs for volumes
      ● Volume export/import
      ● Bare metal volumes
      ● Public volumes
      ● Attachment notifications
      ● Filtering/weighing (as part of placement decision making)
    • Features in the Works for Icehouse: Glance
      ● Image recovery (image-recover)
      ● New download workflow ("Export")
      ● New upload workflow ("Import")
      ● Multi-filesystem store to support NFS servers as a backend
      ● Image location selection strategy (in multi-location)
    • Features in the Works for Icehouse: Swift
      ● Storage policies
      ● Sharding of large containers
      ● Pluggable back-end API (Gluster, Ceph)
      ● Multi-ring servers
      ● Improved object replicator, aka local storage volume (volume in local storage, with incremental snapshots stored in Swift)
      ● Object replicator 'ssync' (an rsync alternative)
      ● Searchable metadata (driven by HP and IBM SoftLayer)
      ● Cluster federation
    • We've built the world's largest ecosystem for commercially supported OpenStack deployments. It's open. It's innovative. And it's all yours.
    • Join the RDO Community
      http://openstack.redhat.com
      http://redhatstack.com