B-8 Sponsor Session Slides: OSNEXUS, Steven Umbehocker (Affirm Business Partners, Inc.)
1. Deploying Hybrid Clouds with QuantaStor SDS + IBM SoftLayer
Steve Umbehocker
CEO, OSNEXUS
steve@osnexus.com
Inquiries within Japan: Products@affirmbp.com
Affirm Business Partners, Inc. (アファーム・ビジネスパートナーズ株式会社)
2. • QuantaStor is the enterprise Software Defined Storage (SDS) platform used by IBM SoftLayer for dedicated SAN/NAS deployments across their datacenters worldwide.
• Multi-protocol with iSCSI/FC/CIFS/NFS support
• Designed to run on all major-brand server hardware (IBM, Dell, HP, Supermicro)
What is QuantaStor SDS?
4. Designed for IT Generalists
QuantaStor is designed to be easy to use, with consideration for Japanese-language environments.
All documentation is published on the wiki.osnexus.com website.
5. • Dedicated, secure SAN/NAS storage appliance for virtual machines
• VMware, Hyper-V, XenServer, KVM
• Scale-up and scale-out archive storage
• Multi-site data replication for DR (disaster-recovery fail-over)
• Including hybrid clouds that replicate data between on-premises locations and SoftLayer datacenters
Common Use Cases
6. • QuantaStor is a Linux-based platform
• Extensible using 3rd-party Linux software (Splunk, CopperEgg, etc.)
• OSNEXUS contributes to the development of open-source storage technologies, including the ZFS file-system
Open I/O Stack Architecture
[Diagram: QuantaStor SDS I/O stack — QuantaStor SDS on top; SAN/NAS protocol driver management (iSCSI, CIFS, NFS, FC, IB); scale-out filesystem management (Gluster/Ceph); filesystem management (ZFS/XFS); hardware management (HBAs & RAID controllers); network interface management; all running on a Linux kernel (Ubuntu Server based).]
7. Scale-out grid management architecture
Easy to use HTML5 web user interface
Thin-provisioning
Compression
Encryption
Online Storage Pool expansion
Instant snapshots & rollback
Remote-replication / Disaster Recovery (DR)
Backup policies
Cloud backup
Network Share Quota Management
SSD caching
Bit-rot protection
Software & hardware RAID management
Call-home alerting, SNMP, PagerDuty integration
Enterprise SAN/NAS Feature Set
Scriptable CLI and REST APIs for automation
Utilization metrics for charge-back accounting
VMware 5 certified
Cloud Metrics integration with Librato
Scale-out NAS w/ NFS/CIFS
iSCSI, FC and Infiniband protocol support
Cascading site-to-site-to-site replication
Rollback from remote-replicas
Storage Tiers for grouping Storage Pools across appliances
Integrated with OpenStack Cinder for automated provisioning of iSCSI volumes
Multi-tenancy resource groups
Role Based Access Controls (RBAC)
Active Directory integration for CIFS access
.. and more ..
8. Use Case: Server Virtualization
[Diagram: Windows Server 2012 Hyper-V, Citrix XenServer, KVM/OpenStack, and VMware hosts connect over a 10GbE/1GbE network via iSCSI/FC to a QuantaStor storage appliance, where volumes are provisioned from a Storage Pool of HDDs and SSDs behind a hardware RAID controller.]
• Data compression supported at pool and volume level
• Optional encryption / SED integration for HIPAA compliance
• Boost IOPS with SSD caching
• Use SAS, SATA HDDs, or SSDs as required to meet the performance demands of the target workloads
9. Use Case: Hybrid Cloud / DR
[Diagram: QuantaStor appliances at on-premises office locations use remote-replication to QuantaStor appliances in the SoftLayer Tokyo and New York datacenters.]
By deploying QuantaStor appliances on-premises and inside SoftLayer datacenters, disaster-recovery replication policies can be set up easily to protect your data.
10. QuantaStor Grid Management Technology
[Diagram: grids of QuantaStor appliances spanning Seattle, New York, and Dallas, with one appliance acting as the grid master, scale-out NAS groups within each site, and DR replication links between sites.]
Grid nodes communicate with the master node to receive events that keep their metadata in sync.
Scale-out NAS:
• Single Namespace
• Highly Available
• Expand Performance & Capacity by adding appliances
• Encryption (SED/SafeStore)
By forming QuantaStor appliances into a grid, all of them can be managed centrally from a single site.
Once the grid is established, data replication is easy to set up.
On-premises appliances and appliances inside SoftLayer datacenters can also be joined into one large QuantaStor grid for easy management.
Note: a hybrid configuration with SoftLayer requires a VPN to enable communication between the QuantaStor appliances.
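The grid behavior described above — nodes receiving events from a master node to keep their metadata in sync — can be sketched in a few lines. This is an illustrative model only, not QuantaStor's actual protocol; all class and event names here are hypothetical.

```python
class MasterNode:
    """Publishes configuration events to every node that joins the grid."""
    def __init__(self):
        self.nodes = []

    def join(self, node):
        self.nodes.append(node)

    def publish(self, event):
        # Push the event to every grid node so local metadata stays in sync.
        for node in self.nodes:
            node.apply(event)


class GridNode:
    """Keeps a local copy of grid metadata by applying events from the master."""
    def __init__(self, name):
        self.name = name
        self.metadata = {}

    def apply(self, event):
        kind, payload = event
        if kind == "pool-created":
            self.metadata[payload["name"]] = payload


master = MasterNode()
seattle, dallas = GridNode("seattle"), GridNode("dallas")
master.join(seattle)
master.join(dallas)

# A change made at any site is published once and reaches every node.
master.publish(("pool-created", {"name": "pool1", "size": "10TB"}))
print(seattle.metadata == dallas.metadata)  # True: all nodes share the same view
```

The design point is that management state fans out from one master, so any appliance in the grid can be administered from a single site.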
11. Use Case: Scale-out NAS
[Diagram: Windows 2012, Linux, and OS X clients access NFS/CIFS shares backed by a single GlusterFS volume; Appliances A–D each run a Gluster service exposing a Gluster brick on a local ZFS pool (POOL-A through POOL-D), and files are distributed across the bricks via the Gluster client.]
QuantaStor integrates open-source enterprise storage filesystem technologies, including GlusterFS, to provide scale-out NAS storage.
The diagram shows how multiple appliances combine to provide scale-out NAS storage for multimedia, archive, and other applications.
17. • Scale-out object storage cluster software designed to scale out to multiple petabytes across dozens of server nodes
• Delivers all three pillars of storage: file, block, and object.
What is Ceph?
[Diagram: the Ceph Object Gateway (objects), Ceph Block Devices (RBDs), and the Ceph File System (files & directories) all sit on top of the Ceph Storage Cluster.]
18. How does Ceph work?
[Diagram: RBDs in POOL A (replica = 2) and POOL B (replica = 3) are striped into objects, which placement groups (PGs) distribute across OSDs on servers in racks 1–3.]
• OSDs hold data
• Placement Groups decide where to place data
• Pools group together RBDs and set redundancy levels
• RBDs (RADOS block devices) are your logical devices used by OpenStack, etc.
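The object → placement group → OSD mapping above can be sketched as follows. This is a greatly simplified illustration: Ceph's real CRUSH algorithm is topology-aware and pseudorandom, and the object name and OSD IDs here are made up for the example.

```python
import hashlib

def pg_for_object(obj_name: str, pg_count: int) -> int:
    """Hash an object name to a placement group (Ceph uses a stable hash similarly)."""
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    return h % pg_count

def osds_for_pg(pg: int, osd_ids: list, replica: int) -> list:
    """Toy stand-in for CRUSH: pick `replica` distinct OSDs to hold copies of a PG."""
    return [osd_ids[(pg + i) % len(osd_ids)] for i in range(replica)]

# POOL A from the slide: replica = 2, with a handful of OSDs across the racks.
osds = [0, 1, 2, 3, 4, 5]
pg = pg_for_object("rbd_data.volume1.chunk42", pg_count=128)
placement = osds_for_pg(pg, osds, replica=2)
print(pg, placement)  # the PG index and the two distinct OSDs holding this object
```

The key idea the slide conveys survives the simplification: clients can compute placement deterministically from the object name, so no central lookup table is needed.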
19. 1. Make sure you’re using a recent kernel
• RBD driver is in the kernel (rbd.ko)
• RBD driver is not yet buildable via dkms
• Both client & server side should be using recent kernel (>= 3.13)
2. Set up the monitors first (at least 3)
• Don’t forget the public network configuration setting, e.g. 10.0.0.0/16
3. The ZFS filesystem is not a good choice yet, but we’re working on it. For now, use XFS.
4. No commands are available to see which filesystem maps to which OSD; just look in /var/lib/ceph/osdN
5. Ceph expects certain configuration files to be in place before the packages are installed. Ceph-deploy puts these files in place and then does a push install.
Ceph Deployment Lessons Learned
Experimenting with or setting up Ceph from scratch? Here are some tips on things to look out for.
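Tip 2 above can be illustrated with a minimal ceph.conf fragment. This is a sketch only: the fsid placeholder, monitor names, and addresses are examples you would substitute for your own cluster.

```ini
[global]
# fsid is generated for you by `ceph-deploy new`
fsid = <cluster-uuid>
mon initial members = mon1, mon2, mon3
mon host = 10.0.1.11, 10.0.1.12, 10.0.1.13
# the easy-to-forget setting from tip 2
public network = 10.0.0.0/16
```

With three monitors listed, the cluster can maintain quorum through the loss of any single monitor node.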
20. Integrating Ceph
[Diagram: QuantaStor storage management layers — SAMBA/NFS, Gluster, Ceph, and iSCSI/FC/IB/Object protocol access on top of a Storage Pool (XFS- or ZFS-based), with Disk Management and Hardware Management (HBAs & RAID Controllers) underneath.]
Ceph technology is integrated into the QuantaStor SDS architecture so that you can leverage the benefits of scale-out storage without the high complexity typically involved in setting it up and maintaining it. QuantaStor also extends the capabilities of Ceph by managing the CRUSH maps and providing traditional block-storage access to Ceph RBDs via iSCSI and FC.