Spectrum Scale 4.1 System Administration
Spectrum Scale
Elastic Storage Server
Spectrum Scale native RAID (GNR)
Hints & Tips
© Copyright IBM Corporation 2015
Unit objectives
After completing this unit, you should be able to:
• Understand all the Elastic Storage Server options
• Understand their value to client business
• Understand Spectrum Scale Native RAID
• Speak to its value and limitations
• Describe the components of GNR and where it is supported
• Describe declustered RAID
• Understand key hints, tips, and best practices.
Introducing the Elastic Storage Server
• IBM Elastic Storage Server
• The IBM® Elastic Storage Server is a high-performance, GPFS™ network storage disk
solution.
• The IBM Elastic Storage Server features multiple hardware platforms and architectures
that create an enterprise-level solution consisting of the following main components:
Platform and storage management console: IBM Power® System S812L (8247-21L)
1. Two basic storage models: GS (small form factor) and GL (large form factor).
Each model has basic architectural and management requirements.
2. Network switches:
 IBM RackSwitch™ G7028 (7120-24L)
 IBM RackSwitch G8052 (7120-48E)
 IBM RackSwitch G8264 (7120-64C)
3. IBM 7042-CR8 Rack-mounted Hardware Management Console (HMC)
4. IBM 7014 Rack Model T42 (enterprise rack)
Introducing the Elastic Storage Server
• IBM Elastic Storage Server
1. IBM 5146 Model GS1 IBM Elastic Storage Server
2. IBM 5146 Model GS2 IBM Elastic Storage Server
3. IBM 5146 Model GS4 IBM Elastic Storage Server
4. IBM 5146 Model GS6 IBM Elastic Storage Server
 IBM Power System S822L (8247-22L)
 IBM 5887 EXP24S SFF Gen2-bay drawer
5. IBM 5146 Model GL2 IBM Elastic Storage Server
6. IBM 5146 Model GL4 IBM Elastic Storage Server
7. IBM 5146 Model GL6 IBM Elastic Storage Server
 IBM Power System S822L (8247-22L)
 IBM System Storage DCS3700 Expansion Unit 1818-80E
• Elastic Storage Server building blocks provide:
– Simplified bundles of hardware that are optimized for field use
– Either performance-optimized or capacity-optimized configurations
– Support for only two enclosure types:
• EXP24S (2U, 24 x 2.5” SSD or SAS drives)
• DCS3700 Expansion (1818-80E) (4U, 60 x 2.5”/3.5” NL-SAS drives)
– GNR as the only supported RAID management
– A finite set of supported drive types for the GS and GL models
– A pair of IO servers with each building block
– The first building block requires an HMC and an EMS (management node)
– CLI and GUI support on each unit for solution management
• * Each storage unit has 2 x SSDs for internal GNR use (not for client access)
* It is not a SONAS replacement, and it is not an all-inclusive appliance
Elastic Storage Server (what it is & what it isn’t)
Elastic Server GS Models
Elastic Server GL Models
A closer look at the GL 6 Components
• IO servers: Power8 P822L GPFS Storage Servers running RH Linux
– GPFS 4.1 + GNR RAID manager
– 20 cores, 128 GB memory, fat networking
• Storage: SAS-connected DCS3700 expansion trays (1818-80E), 60 drives (4U) each
• Management: Power8 RH Linux P821L EMS/xCAT server
and IBM 7042-CR8 rack-mounted Hardware Management Console (HMC)
• IBM 7014 Rack Model T42 (enterprise rack)
• Derated (unofficial) performance:
– 1.4 PB raw, 1 PB usable
– 16 MB block size
– 13.6 GB/s sequential read, 13.4 GB/s sequential write
– 30K x 8 KB read IOPS, 6K x 8 KB write IOPS
Sample Configurations & Reference Architecture
Installation of Elastic Storage Server (High Level)
1. Confirm the private IP range for the HMC DHCP server.
2. Confirm a private service network with (6) IPs and a private xCAT management network with (6) IPs,
• separate networks via switches or VLANs.
3. Confirm public network connections for the HMC and EMS; (2) IPs needed.
4. Confirm host-to-IP mappings for the following (the ESS defaults can be used):
+ HMC
+ EMS
+ IO server 1, IO server 2, IO server 3, IO server 4
+ 10GigE/40GigE hostname-to-IP mappings
5. Set up domain names for the xCAT private network.
6. Set up domain names for the high-speed interconnect.
7. Set up partition and partition profile names.
8. Confirm server names.
9. Confirm 10GigE/40GigE/IB switches are in place and cabled.
10. Determine whether network bonding will be used.
11. Confirm the public network is in place and cabled to the xCAT EMS and HMC (at minimum).
12. Confirm all building block components are in the frame (4 IO servers, EMS, HMC, HMC console, switches).
13. Set up and confirm dual-feed power to the frame components.
14. Set up the HMC console and/or terminal.
15. Prepare the Red Hat 7 ISO or DVD for the install.
16. The client should register the RH license for all ESS servers.
17. Define how many file systems, block sizes, splitting of metadata, and replication (or take the defaults).
18. Confirm all disks are in place; this will be checked with scripts.
19. Confirm all cabling is in place; this will be double-checked by scripts.
20. Confirm WiFi access in the lab to set up a Sametime meeting room (for IBMer work).
21. Confirm the client intends to use standard Spectrum Scale for this ESS install, then follow the 76-page install guide.
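The host-to-IP mappings in step 4 typically end up in /etc/hosts on the EMS. A sketch of such a file; the hostnames and addresses below are illustrative placeholders, not values to rely on:

```
# Private xCAT management network
192.168.45.20   ems1
192.168.45.21   gssio1
192.168.45.22   gssio2
# High-speed (10GigE/40GigE/IB) data network names
172.31.250.21   gssio1-hs
172.31.250.22   gssio2-hs
```

Whatever naming convention is chosen, keep the management-network names and the high-speed-network names clearly distinguishable.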
A look at the Building Block Networking
End cluster result is the sum of the parts
What is GNR and How do I communicate the value?
 Spectrum Scale Native RAID is a software implementation of
storage RAID technologies within Spectrum Scale.
 It requires special licensing
 It is only approved for pre-certified architectures
 (such as GSS, Elastic Storage Server, DDN GRIDScaler)
 Using conventional dual-ported disks in a JBOD configuration,
Spectrum Scale Native RAID implements sophisticated data
placement and error correction algorithms to deliver high levels of
storage reliability, availability, and performance.
 Standard Spectrum Scale file systems are created from the NSDs
defined through Spectrum Scale Native RAID.
No Hardware Based Controller
Petascale argument for stronger RAID codes
• Disk rebuilding is a fact of life at the petascale level
– With 100,000 disks and an MTBF_disk = 600 khrs, a rebuild is triggered about
four times a day
– A 24-hour rebuild implies four concurrent, continuous rebuilds at all times.
• Traditional, 1-fault-tolerant RAID-5 is a non-starter
– A disk hard read error rate of 1-in-10^15 bits implies data loss every ~26th
rebuild
– 10^15 / (8 disks-per-RAID-group x 600-GB disks x 8 bits/byte) ≈ 26
– Or a data loss event every 26/4 = 6.5 days.
• 2-fault-tolerant declustered RAID (8+2P) may not be sufficient
– MTTDL ~ 7 years (simulated, MTTF_disk = 600 khrs, Weibull, 100 PB usable).
• 3-fault-tolerant declustered RAID (8+3P) is 400,000x better
– MTTDL ~ 3x10^6 years (simulated, MTTF_disk = 600 khrs, Weibull, 100 PB usable)
– Guards against unexpected correlated failures.
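The failure arithmetic in these bullets is easy to sanity-check. A quick sketch using the slide's assumed disk size, error rate, and MTBF:

```python
# Back-of-envelope check of the petascale RAID-5 argument.
# Assumptions (from the slide): 600-GB disks, 8 data disks per RAID group,
# hard read error rate of 1 in 10^15 bits, 100,000 disks, MTBF = 600 khrs.

GB = 1e9  # disk vendors use decimal gigabytes

bits_read_per_rebuild = 8 * 600 * GB * 8          # 8 disks x 600 GB x 8 bits/byte
rebuilds_per_data_loss = 1e15 / bits_read_per_rebuild
print(round(rebuilds_per_data_loss))              # ~26 rebuilds per expected loss

rebuilds_per_day = 100_000 * 24 / 600_000         # disks x hours/day / MTBF hours
print(rebuilds_per_day)                           # 4.0 rebuilds triggered per day

days_between_losses = rebuilds_per_data_loss / rebuilds_per_day
print(round(days_between_losses, 1))              # ~6.5 days between losses
```

This reproduces the slide's "~26th rebuild" and "every 6.5 days" figures.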
Features
• Auto rebalancing
• Only a 2% rebuild performance hit
• Reed-Solomon erasure code, “8 data + 3 parity”
• ~10^5-year MTTDL for a 100-PB file system
• End-to-end, disk-to-Spectrum Scale-client data checksums
No hardware storage controller
• Software RAID on the IO servers
– SAS-attached JBOD
– Special JBOD storage drawer for very dense drive packing
– Solid-state drives (SSDs) for metadata storage
[Diagram: NSD servers, each SAS-attached to the JBODs, serve vdisks to
clients across the local area network (LAN).]
Works within the Spectrum Scale (GPFS) Network Shared Disk (NSD) layer
[Diagram: traditional NSD I/O path. On the compute node, the client
application and GPFS NSD client run in user space, exchanging control RPCs
and data RDMA with the IO node. On the IO node, the GPFS NSD server (user
space) passes I/O through the GPFS kernel I/O layer, the OS device driver,
and the HBA device driver to an external disk array controller and its
disks.]
[Diagram: GNR I/O path, traditional vs. GNR-based. Compared with the
traditional path, the hardware disk array controller is removed and a GPFS
software controller is added: a GPFS vdisk layer (PERSEUS) sits in the IO
node's kernel I/O stack between the GPFS kernel I/O layer and the OS/HBA
device drivers, with the JBOD disks attached directly.]
RAID algorithm
• Two types of RAID:
• 3- or 4-way replication
• 8 data strips + 2 or 3 parity strips
• 2-fault and 3-fault tolerant codes (‘RAID-D2’, ‘RAID-D3’)
[Diagram: redundancy codes. 2-fault-tolerant codes: 3-way replication (1+2)
and 8+2p Reed-Solomon. 3-fault-tolerant codes: 4-way replication (1+3) and
8+3p Reed-Solomon. Replication stores 1 strip (a GPFS block) plus 2 or 3
replicated strips; Reed-Solomon stores 8 strips (a GPFS block) plus 2 or 3
redundancy strips.]
Declustered RAID
• Data, parity and spare strips are uniformly and independently
distributed across disk array.
• Supports an arbitrary number of disks per array
–Not restricted to an integral number of RAID track widths.
[Diagram: conventional vs. declustered strip layout across the disk array.]
Lower disk rebuild overhead
• Improved file system performance during rebuild
– Throughput of all operational disks is used for rebuilding
after disk failure, reducing load on client.
– Why: Since Spectrum Scale stripes data across all storage
controllers, without declustering, performance would be gated by
slowest rebuilding controller.
• In large systems, some array is likely always rebuilding
– 25,000 disks * 24 hours / (600,000-hour disk MTBF) = 1 rebuild / day
• Or in a smaller storage array with out-of-spec failure rates
– 1,500 disks * 2% per month failure rate * 1/30 month = 1 rebuild / day
– With declustered GNR RAID:
• Non-critical rebuild overhead typically remains < 3%.
• If risk increases with multiple failures, rebuild priority increases to
reduce the time in exposure.
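Both rebuild-rate estimates above reduce to the same daily failure arithmetic, which can be checked directly:

```python
# "Some array is always rebuilding": expected disk failures per day.

# Large system: 25,000 disks with a 600,000-hour MTBF.
large = 25_000 * 24 / 600_000   # disks x hours/day / MTBF hours
print(large)                    # 1.0 -> about one rebuild running at all times

# Smaller array with out-of-spec drives: 1,500 disks failing at 2% per month.
small = 1_500 * 0.02 / 30       # failures per month, spread over ~30 days
print(small)                    # 1.0 -> the same rebuild pressure
```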
[Diagram: declustered RAID example. A conventional layout uses 7 disks as
3 1-fault-tolerant groups of 2 disks plus a spare disk, holding 21 virtual
tracks (42 strips) at 7 tracks per group (2 strips per track). The
declustered layout spreads the same 49 strips, including 7 spare strips,
uniformly across all 7 disks.]
Declustered RAID example
[Diagram: declustered RAID rebuild, traditional vs. declustered. In the
conventional array, rebuilding the failed disk requires 7 strip-times of
read-write work, with every write funneled to the single spare disk. In the
declustered array, the spare strips are spread across the 6 surviving
disks, so the busiest disk reads and writes only 2 strips — a rebuild
speedup of 7/2 = 3.5.]
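The rebuild comparison above reduces to simple arithmetic. A sketch of the 7-disk example, under the assumption that the busiest disk's strip count gates the rebuild time:

```python
# Rebuild-time comparison for the 7-disk example.
strips_on_failed_disk = 7

# Conventional: all 7 replacement strips are written to one spare disk,
# one strip-time each -> the spare disk is the bottleneck.
conventional_strip_times = strips_on_failed_disk

# Declustered: spare strips are spread over the 6 surviving disks, so the
# busiest disk handles only 2 strips of rebuild work.
declustered_strip_times = 2

print(conventional_strip_times / declustered_strip_times)  # 3.5x speedup
```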
High reliability
• Mean time to data loss with 50,000 disks:
– 3-fault tolerance (8+3P)
• MTTDL ≈ 200 million years
• Annual failure rate (47-disk array) ≈ 4 x 10^-12
– 2-fault tolerance (8+2P)
• MTTDL ≈ 200 years
• Annual failure rate (47-disk array) ≈ 5 x 10^-6
– 1-fault tolerance
• MTTDL ≈ 1 week (due to latent sector errors)
– 10^15 bits / (8 disks * 600-GB disks * 8 bits/byte) = 26 rebuilds / 4 rebuilds/day
Simulation assumptions: disk capacity = 600 GB, MTTF = 600 khrs, hard error
rate = 1-in-10^15 bits, 47-HDD declustered arrays, uncorrelated failures.
Deferred disk maintenance
• With GNR, when disks fail and are restored before another failure,
multiple disks can sequentially fail without data loss.
– For example, RAID-D3 with 2 disks' worth of spare space can handle up to 5
sequential disk failures.
• With RAID-D3, disk maintenance can be deferred with a policy that replaces a disk
after the second disk failure. *Fewer maintenance calls with combined disk
replacements.
– Maintenance interval of a month or longer is possible.
– No more evening panic calls for immediate maintenance on common FRU
replacements.
• This reduces the probability of improper maintenance and/or unintended side effects.
Data integrity manager
• Highest priority: Restore redundancy after disk failure(s)
– Rebuild data stripes in order of 3, 2, and 1 erasures
– Fraction of stripes affected when 3 disks have failed (assuming 8+3p,
47 disks):
• 23% of stripes have 1 erasure (= 11/47)
• 5% of stripes have 2 erasures (= 11/47 * 10/46)
• 1% of stripes have 3 erasures (= 11/47 * 10/46 * 9/45)
• Medium priority: Rebalance spare space after disk install
– Restores uniform declustering of data, parity, and spare strips.
• Low priority: Scrub and repair media faults
– Verifies checksum/consistency of data and parity/mirror.
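The erasure fractions quoted above follow from drawing the 11 strips of each 8+3p stripe from the 47 disks without replacement, and can be verified quickly:

```python
# Fractions of stripes with 1, 2, or 3 erasures after 3 of 47 disks fail,
# for 8+3p stripes (11 strips per stripe, one per disk).
one   = 11 / 47                      # first failed disk holds a strip
two   = 11 / 47 * 10 / 46            # ...and so does the second
three = 11 / 47 * 10 / 46 * 9 / 45   # ...and the third

print(f"{one:.0%} {two:.0%} {three:.0%}")   # 23% 5% 1%
```

Because only ~1% of stripes carry 3 erasures, restoring full redundancy for the critical stripes is fast, which is why rebuild runs in order of 3, 2, then 1 erasures.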
End-to-end checksum
• True end-to-end checksum from disk surface to client’s Spectrum Scale
interface
– Repairs soft/latent read errors
– Repairs lost/missing writes.
• Checksums are maintained on disk and in memory and are transmitted
to/from client.
• Checksum is stored in a 64-byte trailer of 32-KiB buffers
– 8-byte checksum and 56 bytes of ID and version info
– Sequence number used to detect lost/missing writes.
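The trailer layout can be sketched in a few lines. This is an illustration only: the slide specifies an 8-byte checksum plus 56 bytes of ID/version info per 64-byte trailer, but the checksum algorithm, field layout, and `add_trailer` helper below are hypothetical, not GNR's actual on-disk format:

```python
import struct
import zlib

BUF_SIZE = 32 * 1024     # 32-KiB data buffer
TRAILER_SIZE = 64        # 64-byte trailer
CKSUM_SIZE = 8           # 8-byte checksum; remaining 56 bytes are ID/version

def add_trailer(data: bytes, seq: int) -> bytes:
    """Append an illustrative 64-byte trailer to a 32-KiB buffer."""
    assert len(data) == BUF_SIZE
    # 56 bytes of ID/version info; here just a sequence number, zero-padded.
    ident = struct.pack("<Q", seq).ljust(TRAILER_SIZE - CKSUM_SIZE, b"\0")
    # Checksum covers data plus ID, so a lost write (stale seq) is detectable.
    cksum = struct.pack("<Q", zlib.crc32(data + ident))
    return data + ident + cksum

buf = add_trailer(b"\xab" * BUF_SIZE, seq=7)
print(len(buf))   # 32832 = 32-KiB payload + 64-byte trailer
```

Covering the sequence number with the checksum is what lets the reader distinguish a correctly stored old version (lost write) from media corruption.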
[Diagram: a vdisk track of 8 data strips and 3 parity strips, each built
from 32-KiB buffers carrying a 64-byte trailer, plus a ¼- to 2-KiB
terminus.]
IO Node Failover
A minimal configuration has two Spectrum Scale Native RAID servers and one
storage JBOD. Server 1 is the primary controller for the first recovery
group and backup for the second recovery group; server 2 is the primary
controller for the second recovery group and backup for the first. When
server 1 fails, control of the first recovery group is taken over by its
backup, server 2. During the failure of server 1, the load on backup
server 2 increases by 100%, from one to two recovery groups.
Comprehensive Disk and Path Diagnostics
• The asynchronous ‘disk hospital’ design allows careful
problem determination of disk faults
– While a disk is in the disk hospital, reads are parity reconstructed.
– For writes, strips are marked stale and repaired later when disk
leaves.
– I/Os are resumed in under 10 seconds.
• Thorough Fault Determination
– Power-cycling drives to reset them
– Neighbor checking
– Supports multi-disk carriers.
• Disk Enclosure Management
– Uses SES interface for lights, latch locks, disk power, and so on.
• Manages topology and hardware configuration.
Disk Hospital Operations
• Before taking severe actions against a disk, GNR checks
neighboring disks to decide if some systemic problem may be
behind the failure.
• Tests paths using SCSI Test Unit Ready commands.
• Power-cycles disks to try to clear certain errors.
• Reads or writes sectors where an I/O occurred in order to test
for media errors.
• Works with higher levels to rewrite bad sectors.
• Polls disabled paths.
Analysis with predictive actions
to support best practice healing
(almost like a real hospital)
Storage Component Hierarchy (GNR+JBOD)
• A recovery group can have:
– max 512 pdisks
– up to 16 declustered arrays
– at least 1 SSD log vdisk
– max 64 vdisks
• A declustered array:
– can contain up to 128 pdisks
– smallest is 4 pdisks
– at least one must be large (>= 11 pdisks)
– needs 1 or more pdisks' worth of spare space
• Vdisks:
– vdisks are volumes that become NSDs under Spectrum Scale control
– block sizes: 1 MiB, 2 MiB, 4 MiB, 8 MiB, and 16 MiB
[Diagram: pdisks are grouped into left and right recovery groups; each
recovery group contains declustered arrays (DAs), and the vdisks carved
from the DAs become NSDs.]
GNR Commands: pdisks
•mmaddpdisk
– Adds a pdisk to a Spectrum Scale Native RAID recovery group.
•mmdelpdisk
– Deletes Spectrum Scale Native RAID pdisks.
•mmlspdisk
– Lists information for one or more Spectrum Scale Native RAID pdisks.
•mmchcarrier
– Allows Spectrum Scale Native RAID Physical Disks (pdisks) to be
physically removed and replaced.
GNR Commands: Recovery groups
•mmlsrecoverygroup
– Lists information about Spectrum Scale Native RAID recovery groups.
•mmlsrecoverygroupevents
– Displays the Spectrum Scale Native RAID recovery group event log.
•mmchrecoverygroup
– Changes Spectrum Scale Native RAID recovery group and
declustered array attributes.
•mmcrrecoverygroup
– Creates a Spectrum Scale Native RAID recovery group and its
component declustered arrays and pdisks and specifies the servers.
•mmdelrecoverygroup
– Deletes a Spectrum Scale Native RAID recovery group.
GNR Commands: vdisk
•mmdelvdisk
– Deletes vdisks from a declustered array in a Spectrum Scale Native
RAID recovery group.
•mmlsvdisk
– Lists information for one or more Spectrum Scale Native RAID vdisks.
•mmcrvdisk
– Creates a vdisk within a declustered array of a Spectrum Scale native
RAID recovery group.
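mmcrvdisk can take its definitions from a stanza file (via -F). A sketch of one such stanza; the vdisk name, sizes, and pool shown here are illustrative values, not recommendations:

```
%vdisk: vdiskName=rg1_data_8m
  rg=rg1
  da=DA1
  blocksize=8m
  size=250g
  raidCode=8+3p
  diskUsage=dataOnly
  pool=data
```

The resulting vdisks then follow the normal Spectrum Scale flow: they are defined as NSDs and used to build file systems.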
Hints and Tips
With Elastic Storage Server, the client must become a competent administrator of
several technologies: IBM Power8, AIX, Red Hat Enterprise Linux 7, xCAT,
Spectrum Scale 4.1, and Spectrum Scale Native RAID.
* You should always suggest adding services for knowledge transfer, and ensure
that your clients have links and document references to the support information
required to effectively manage their Spectrum Scale or Elastic Storage Server
systems.
With Elastic Storage Server and GNR you probably don't want any 256K
file systems, as GNR only supports data block sizes down to 512K. That would
mean a non-vdisk file system using a 256K block can never have a pool of
vdisk-based storage.
Clients see better large-file sequential performance as they increase the
file system block size, as expected. As they grow, they can update maxblocksize
on all client clusters and test their way up to 16M to find the best fit for
their workloads. However, with a large share of small files they will want to
keep the block size low to prevent subblock waste, because the minimum
capacity file data will consume is 1/32 of the file system block size: a 5K
file will take up 32K in a file system with a 1 MB block size.
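The subblock arithmetic above can be sketched as follows, assuming the fixed 1/32 subblock ratio of Spectrum Scale 4.1:

```python
import math

def space_used(file_size: int, block_size: int) -> int:
    """Space a file's data occupies when allocation is in 1/32-block subblocks."""
    subblock = block_size // 32
    return max(1, math.ceil(file_size / subblock)) * subblock

KiB, MiB = 1024, 1024 * 1024

print(space_used(5 * KiB, 1 * MiB) // KiB)    # 32  -> 5-KiB file takes 32 KiB
print(space_used(5 * KiB, 16 * MiB) // KiB)   # 512 -> same file takes 512 KiB
```

At a 16 MiB block size the same 5K file consumes a 512 KiB subblock, which is why a mostly-small-file workload argues for a smaller block size.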
Hints and Tips
With Elastic Storage Server make sure that Power is redundantly connected
to ensure that power issues do not surprise your clients well into production.
[Diagram: power cabling kept simple (left to right) and fully redundant.]
Review
• Elastic Storage Server is specifically designed to provide simplified,
optimized, scalable building blocks for Spectrum Scale file system
deployments and to allow for the integration of GNR
• Elastic Storage Server has 7 models: 4 GS models (small form factor for
SSD & SAS drives) and 3 GL models (large form factor for NL-SAS drives)
• Elastic Storage Server ships with 1 week of Lab Services for installation,
and installation is generally complicated enough to require that week of
services; however, it is good to plan in additional lab services for
knowledge transfer for clients with a first-time install.
• Spectrum Scale Native RAID (GNR) removes the need for a RAID controller
and optimizes RAID management for Spectrum Scale file system performance
and reliability
• Declustered RAID and Reed-Solomon algorithms allow non-critical rebuild
overhead to typically remain < 3% of a performance impact.
• A well-laid plan is cognizant of sizing the technology to the workloads
and avoids too many baked-in assumptions.
Any Questions on ESS, GNR,
Hints and Tips
Questions
Db2 analytics accelerator on ibm integrated analytics system technical over...Daniel Martin
 
FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...
FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...
FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...ssuserecfcc8
 
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...Red_Hat_Storage
 
High-Density Top-Loading Storage for Cloud Scale Applications
High-Density Top-Loading Storage for Cloud Scale Applications High-Density Top-Loading Storage for Cloud Scale Applications
High-Density Top-Loading Storage for Cloud Scale Applications Rebekah Rodriguez
 
Series 8 RAID Datasheet
Series 8 RAID DatasheetSeries 8 RAID Datasheet
Series 8 RAID DatasheetAdaptec by PMC
 
Mega Launch Recap Slide Deck
Mega Launch Recap Slide DeckMega Launch Recap Slide Deck
Mega Launch Recap Slide DeckVarrow Inc.
 
robust-storage-solution
robust-storage-solutionrobust-storage-solution
robust-storage-solutionTecsun Yeep
 
IBM Power Systems E850C and S824
IBM Power Systems E850C and S824IBM Power Systems E850C and S824
IBM Power Systems E850C and S824David Spurway
 
How Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver ClusterHow Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver ClusterAaron Joue
 
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architectureCeph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architectureCeph Community
 
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA ArchitectureCeph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA ArchitectureDanielle Womboldt
 
Servers Technologies and Enterprise Data Center Trends 2014 - Thailand
Servers Technologies and Enterprise Data Center Trends 2014 - ThailandServers Technologies and Enterprise Data Center Trends 2014 - Thailand
Servers Technologies and Enterprise Data Center Trends 2014 - ThailandAruj Thirawat
 
MT58 High performance graphics for VDI: A technical discussion
MT58 High performance graphics for VDI: A technical discussionMT58 High performance graphics for VDI: A technical discussion
MT58 High performance graphics for VDI: A technical discussionDell EMC World
 
Webinar NETGEAR - ReadyNAS, le novità hardware e software
Webinar NETGEAR - ReadyNAS, le novità hardware e softwareWebinar NETGEAR - ReadyNAS, le novità hardware e software
Webinar NETGEAR - ReadyNAS, le novità hardware e softwareNetgear Italia
 
[3]dell storage spaces c 1
[3]dell storage spaces c 1[3]dell storage spaces c 1
[3]dell storage spaces c 1Megan Warren
 
G108277 ds8000-resiliency-lagos-v1905c
G108277 ds8000-resiliency-lagos-v1905cG108277 ds8000-resiliency-lagos-v1905c
G108277 ds8000-resiliency-lagos-v1905cTony Pearson
 

Similar to Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases-hints-tips (20)

IBM flash systems
IBM flash systems IBM flash systems
IBM flash systems
 
Red Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super StorageRed Hat Storage Day Boston - Supermicro Super Storage
Red Hat Storage Day Boston - Supermicro Super Storage
 
Db2 analytics accelerator on ibm integrated analytics system technical over...
Db2 analytics accelerator on ibm integrated analytics system   technical over...Db2 analytics accelerator on ibm integrated analytics system   technical over...
Db2 analytics accelerator on ibm integrated analytics system technical over...
 
FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...
FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...
FlashSystem 7300 Midrange Enterprise for Hybrid Cloud L2 Sellers Presentation...
 
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
Red Hat Storage Day Seattle: Supermicro Solutions for Red Hat Ceph and Red Ha...
 
High-Density Top-Loading Storage for Cloud Scale Applications
High-Density Top-Loading Storage for Cloud Scale Applications High-Density Top-Loading Storage for Cloud Scale Applications
High-Density Top-Loading Storage for Cloud Scale Applications
 
Series 8 RAID Datasheet
Series 8 RAID DatasheetSeries 8 RAID Datasheet
Series 8 RAID Datasheet
 
Mega Launch Recap Slide Deck
Mega Launch Recap Slide DeckMega Launch Recap Slide Deck
Mega Launch Recap Slide Deck
 
Summit workshop thompto
Summit workshop thomptoSummit workshop thompto
Summit workshop thompto
 
robust-storage-solution
robust-storage-solutionrobust-storage-solution
robust-storage-solution
 
IBM Power Systems E850C and S824
IBM Power Systems E850C and S824IBM Power Systems E850C and S824
IBM Power Systems E850C and S824
 
How Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver ClusterHow Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver Cluster
 
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architectureCeph Day Beijing - Ceph all-flash array design based on NUMA architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
 
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA ArchitectureCeph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
 
Servers Technologies and Enterprise Data Center Trends 2014 - Thailand
Servers Technologies and Enterprise Data Center Trends 2014 - ThailandServers Technologies and Enterprise Data Center Trends 2014 - Thailand
Servers Technologies and Enterprise Data Center Trends 2014 - Thailand
 
MT58 High performance graphics for VDI: A technical discussion
MT58 High performance graphics for VDI: A technical discussionMT58 High performance graphics for VDI: A technical discussion
MT58 High performance graphics for VDI: A technical discussion
 
Webinar NETGEAR - ReadyNAS, le novità hardware e software
Webinar NETGEAR - ReadyNAS, le novità hardware e softwareWebinar NETGEAR - ReadyNAS, le novità hardware e software
Webinar NETGEAR - ReadyNAS, le novità hardware e software
 
[3]dell storage spaces c 1
[3]dell storage spaces c 1[3]dell storage spaces c 1
[3]dell storage spaces c 1
 
IBM FlashSystem 720 and IBM FlashSystem 820
IBM FlashSystem 720 and IBM FlashSystem 820IBM FlashSystem 720 and IBM FlashSystem 820
IBM FlashSystem 720 and IBM FlashSystem 820
 
G108277 ds8000-resiliency-lagos-v1905c
G108277 ds8000-resiliency-lagos-v1905cG108277 ds8000-resiliency-lagos-v1905c
G108277 ds8000-resiliency-lagos-v1905c
 

More from xKinAnx

Engage for success ibm spectrum accelerate 2
Engage for success   ibm spectrum accelerate 2Engage for success   ibm spectrum accelerate 2
Engage for success ibm spectrum accelerate 2xKinAnx
 
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive
Accelerate with ibm storage  ibm spectrum virtualize hyper swap deep diveAccelerate with ibm storage  ibm spectrum virtualize hyper swap deep dive
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep divexKinAnx
 
Software defined storage provisioning using ibm smart cloud
Software defined storage provisioning using ibm smart cloudSoftware defined storage provisioning using ibm smart cloud
Software defined storage provisioning using ibm smart cloudxKinAnx
 
Ibm spectrum virtualize 101
Ibm spectrum virtualize 101 Ibm spectrum virtualize 101
Ibm spectrum virtualize 101 xKinAnx
 
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive dee...
Accelerate with ibm storage  ibm spectrum virtualize hyper swap deep dive dee...Accelerate with ibm storage  ibm spectrum virtualize hyper swap deep dive dee...
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive dee...xKinAnx
 
04 empalis -ibm_spectrum_protect_-_strategy_and_directions
04 empalis -ibm_spectrum_protect_-_strategy_and_directions04 empalis -ibm_spectrum_protect_-_strategy_and_directions
04 empalis -ibm_spectrum_protect_-_strategy_and_directionsxKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...xKinAnx
 
Presentation disaster recovery in virtualization and cloud
Presentation   disaster recovery in virtualization and cloudPresentation   disaster recovery in virtualization and cloud
Presentation disaster recovery in virtualization and cloudxKinAnx
 
Presentation disaster recovery for oracle fusion middleware with the zfs st...
Presentation   disaster recovery for oracle fusion middleware with the zfs st...Presentation   disaster recovery for oracle fusion middleware with the zfs st...
Presentation disaster recovery for oracle fusion middleware with the zfs st...xKinAnx
 
Presentation differentiated virtualization for enterprise clouds, large and...
Presentation   differentiated virtualization for enterprise clouds, large and...Presentation   differentiated virtualization for enterprise clouds, large and...
Presentation differentiated virtualization for enterprise clouds, large and...xKinAnx
 
Presentation desktops for the cloud the view rollout
Presentation   desktops for the cloud the view rolloutPresentation   desktops for the cloud the view rollout
Presentation desktops for the cloud the view rolloutxKinAnx
 
Presentation design - key concepts and approaches for designing your deskto...
Presentation   design - key concepts and approaches for designing your deskto...Presentation   design - key concepts and approaches for designing your deskto...
Presentation design - key concepts and approaches for designing your deskto...xKinAnx
 
Presentation desarrollos cloud con oracle virtualization
Presentation   desarrollos cloud con oracle virtualizationPresentation   desarrollos cloud con oracle virtualization
Presentation desarrollos cloud con oracle virtualizationxKinAnx
 
Presentation deploying cloud based services
Presentation   deploying cloud based servicesPresentation   deploying cloud based services
Presentation deploying cloud based servicesxKinAnx
 
Presentation dell™ power vault™ md3
Presentation   dell™ power vault™ md3Presentation   dell™ power vault™ md3
Presentation dell™ power vault™ md3xKinAnx
 
Presentation defend your company against cyber threats with security solutions
Presentation   defend your company against cyber threats with security solutionsPresentation   defend your company against cyber threats with security solutions
Presentation defend your company against cyber threats with security solutionsxKinAnx
 
Presentation deduplication backup software and system
Presentation   deduplication backup software and systemPresentation   deduplication backup software and system
Presentation deduplication backup software and systemxKinAnx
 
Presentation dc design for small and mid-size data center
Presentation   dc design for small and mid-size data centerPresentation   dc design for small and mid-size data center
Presentation dc design for small and mid-size data centerxKinAnx
 

More from xKinAnx (20)

Engage for success ibm spectrum accelerate 2
Engage for success   ibm spectrum accelerate 2Engage for success   ibm spectrum accelerate 2
Engage for success ibm spectrum accelerate 2
 
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive
Accelerate with ibm storage  ibm spectrum virtualize hyper swap deep diveAccelerate with ibm storage  ibm spectrum virtualize hyper swap deep dive
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive
 
Software defined storage provisioning using ibm smart cloud
Software defined storage provisioning using ibm smart cloudSoftware defined storage provisioning using ibm smart cloud
Software defined storage provisioning using ibm smart cloud
 
Ibm spectrum virtualize 101
Ibm spectrum virtualize 101 Ibm spectrum virtualize 101
Ibm spectrum virtualize 101
 
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive dee...
Accelerate with ibm storage  ibm spectrum virtualize hyper swap deep dive dee...Accelerate with ibm storage  ibm spectrum virtualize hyper swap deep dive dee...
Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive dee...
 
04 empalis -ibm_spectrum_protect_-_strategy_and_directions
04 empalis -ibm_spectrum_protect_-_strategy_and_directions04 empalis -ibm_spectrum_protect_-_strategy_and_directions
04 empalis -ibm_spectrum_protect_-_strategy_and_directions
 
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
 
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
 
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
 
Presentation disaster recovery in virtualization and cloud
Presentation   disaster recovery in virtualization and cloudPresentation   disaster recovery in virtualization and cloud
Presentation disaster recovery in virtualization and cloud
 
Presentation disaster recovery for oracle fusion middleware with the zfs st...
Presentation   disaster recovery for oracle fusion middleware with the zfs st...Presentation   disaster recovery for oracle fusion middleware with the zfs st...
Presentation disaster recovery for oracle fusion middleware with the zfs st...
 
Presentation differentiated virtualization for enterprise clouds, large and...
Presentation   differentiated virtualization for enterprise clouds, large and...Presentation   differentiated virtualization for enterprise clouds, large and...
Presentation differentiated virtualization for enterprise clouds, large and...
 
Presentation desktops for the cloud the view rollout
Presentation   desktops for the cloud the view rolloutPresentation   desktops for the cloud the view rollout
Presentation desktops for the cloud the view rollout
 
Presentation design - key concepts and approaches for designing your deskto...
Presentation   design - key concepts and approaches for designing your deskto...Presentation   design - key concepts and approaches for designing your deskto...
Presentation design - key concepts and approaches for designing your deskto...
 
Presentation desarrollos cloud con oracle virtualization
Presentation   desarrollos cloud con oracle virtualizationPresentation   desarrollos cloud con oracle virtualization
Presentation desarrollos cloud con oracle virtualization
 
Presentation deploying cloud based services
Presentation   deploying cloud based servicesPresentation   deploying cloud based services
Presentation deploying cloud based services
 
Presentation dell™ power vault™ md3
Presentation   dell™ power vault™ md3Presentation   dell™ power vault™ md3
Presentation dell™ power vault™ md3
 
Presentation defend your company against cyber threats with security solutions
Presentation   defend your company against cyber threats with security solutionsPresentation   defend your company against cyber threats with security solutions
Presentation defend your company against cyber threats with security solutions
 
Presentation deduplication backup software and system
Presentation   deduplication backup software and systemPresentation   deduplication backup software and system
Presentation deduplication backup software and system
 
Presentation dc design for small and mid-size data center
Presentation   dc design for small and mid-size data centerPresentation   dc design for small and mid-size data center
Presentation dc design for small and mid-size data center
 

Recently uploaded

Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhisoniya singh
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?XfilesPro
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetEnjoy Anytime
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAndikSusilo4
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 

Recently uploaded (20)

Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & Application
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 

Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases-hints-tips

  • 1. Spectrum Scale 4.1 System Administration
    Spectrum Scale
    Elastic Storage Server
    Spectrum Scale native RAID (GNR)
    Hints & Tips
    © Copyright IBM Corporation 2015
  • 2. Unit objectives
    After completing this unit, you should be able to:
    • Understand all the Elastic Storage Server options
    • Understand their value to client business
    • Understand Spectrum Scale Native RAID
    • Speak to its value and limitations
    • Describe the components of GNR and where it is supported
    • Describe declustered RAID
    • Understand some key tips, hints, and best practices
    © Copyright IBM Corporation 2015
  • 3. Introducing the Elastic Storage Server
    • The IBM® Elastic Storage Server is a high-performance, GPFS™ network storage disk solution.
    • The IBM Elastic Storage Server features multiple hardware platforms and architectures that create an enterprise-level solution consisting of the following main components:
    Platform and storage management console: IBM Power® System S812L (8247-21L)
    1. Two basic storage models: GS (small form factor) and GL (large form factor); each model has basic architectural and management requirements
    2. Network switches:
        IBM RackSwitch™ G7028 (7120-24L)
        IBM RackSwitch G8052 (7120-48E)
        IBM RackSwitch G8264 (7120-64C)
    3. IBM 7042-CR8 rack-mounted Hardware Management Console (HMC)
    4. IBM 7014 Rack Model T42 (enterprise rack)
    © Copyright IBM Corporation 2015
  • 4. Introducing the Elastic Storage Server
    GS models, built on the IBM Power System S822L (8247-22L) with IBM 5887 EXP24S SFF Gen2-bay drawers:
    1. IBM 5146 Model GS1 IBM Elastic Storage Server
    2. IBM 5146 Model GS2 IBM Elastic Storage Server
    3. IBM 5146 Model GS4 IBM Elastic Storage Server
    4. IBM 5146 Model GS6 IBM Elastic Storage Server
    GL models, built on the IBM Power System S822L (8247-22L) with the IBM System Storage DCS3700 Expansion Unit (1818-80E):
    5. IBM 5146 Model GL2 IBM Elastic Storage Server
    6. IBM 5146 Model GL4 IBM Elastic Storage Server
    7. IBM 5146 Model GL6 IBM Elastic Storage Server
    © Copyright IBM Corporation 2015
  • 5. • Elastic Storage Server building blocks provide – Simplified bundles of hardware that are optimized for field use – These are either performance or capacity optimized – They support only two array types • EXP24S (2U, 24 x 2.5” SSD or SAS drives) • DCS3700 Expansion (1818-80E) (4U, 60 x 2.5”/3.5” NL-SAS drives) – They support only GNR RAID management – The GS & GL models support only a finite set of drive types – They include a pair of I/O servers with each building block – The first building block requires an HMC & EMS (management node) – Each unit supports CLI and GUI for solution management • * Each storage unit has 2 x SSDs for internal GNR use (not for client access) * It is not a SONAS replacement & it is not an all-inclusive appliance © Copyright IBM Corporation 2015 Elastic Storage Server (what it is & what it isn’t)
  • 6. Elastic Server GS Models © Copyright IBM Corporation 2015
  • 7. Elastic Server GL Models © Copyright IBM Corporation 2015
  • 8. A closer look at the GL6 components © Copyright IBM Corporation 2015 Power 8 RH Linux S822L GPFS Storage Server GPFS 4.1 + GNR RAID Mgr 20 cores, 128GB memory Fat networking DCS3700 Expansion Tray 60 drives (4U) 1818-80E DCS3700 Expansion Tray 60 drives (4U) 1818-80E SAS-connected storage IBM 7042-CR8 rack-mounted Hardware Management Console (HMC) IBM 7014 Rack Model T42 (enterprise rack) Power 8 RH Linux S812L EMS/xCAT server & IBM 7042-CR8 HMC management console Derated – unofficial: 1.4PB raw, 1PB usable, 16MB blocksize, 13.6GB/s seq read, 13.4GB/s seq write, 30K x 8KB read IOPS, 6K x 8KB write IOPS
  • 9. Sample Configurations & Reference Architecture © Copyright IBM Corporation 2015
  • 10. Installation of Elastic Storage Server (high level) © Copyright IBM Corporation 2015 1. Confirm the private IP range for the HMC DHCP server 2. Confirm a private service network with (6) IPs and a private xCAT management network with (6) IPs • separate networks via switches or VLAN 3. Confirm public network connections for the HMC and EMS – (2) IPs needed 4. Confirm host->IP mappings for the following (the ESS defaults can be used): + HMC + EMS + IO server 1, IO server 2, IO server 3, IO server 4 + 10GigE|40GigE hostname->IP mappings 5. Set up domain names for the xCAT private net 6. Set up domain names for the high-speed interconnect 7. Set up partition & partition profile names 8. Confirm server names 9. Confirm 10GigE/40GigE/IB switches are in place and cabled 10. Confirm whether bonding is being used 11. Confirm the public network is in place and cabled to the xCAT EMS and HMC (at minimum) 12. Confirm all building-block components are in the frame (4 IO servers, EMS, HMC, HMC console?, switches) 13. Set up / confirm dual-feed power to frame components 14. Set up the HMC console and/or terminal 15. Prepare the Red Hat 7 ISO or DVD for install 16. The client should register the RH license for all ESS servers 17. Define how many file systems, block sizes, splitting of metadata, and replication (or just take the defaults?) 18. Confirm all disks are in place (will be checked with scripts) 19. Confirm all cabling is in place (will be double-checked by scripts) 20. Confirm Wi-Fi access in the lab to set up a Sametime meeting room (for IBMer work) 21. Confirm the client intends to use Standard Spectrum Scale for this ESS install – Then follow the 76-page install guide.
  • 11. A look at the Building Block Networking © Copyright IBM Corporation 2015
  • 12. End Cluster Result is a = sum of the parts © Copyright IBM Corporation 2015
  • 13. What is GNR and How do I communicate the value? © Copyright IBM Corporation 2015  Spectrum Scale Native RAID is a software implementation of storage RAID technologies within Spectrum Scale.  It requires special Licensing  It is only approved for pre-certified architectures  (such as GSS, Elastic Storage Server, DDN GRIDScaler)  Using conventional dual-ported disks in a JBOD configuration, Spectrum Scale Native RAID implements sophisticated data placement and error correction algorithms to deliver high levels of storage reliability, availability, and performance.  Standard Spectrum Scale file systems are created from the NSDs defined through Spectrum Scale Native RAID. No Hardware Based Controller
  • 14. Petascale argument for stronger RAID codes • Disk rebuilding is a fact of life at petascale – With 100,000 disks and an MTBF_disk = 600 khrs, a rebuild is triggered about four times a day – A 24-hour rebuild implies four concurrent, continuous rebuilds at all times. • Traditional, 1-fault-tolerant RAID-5 is a non-starter – A disk hard read error rate of 1 in 10^15 bits implies data loss every ~26th rebuild – 10^15 / (8 disks per RAID group x 600-GB disks x 8 bits/byte) – Or a data loss event every 26/4 = 6.5 days. • 2-fault-tolerant declustered RAID (8+2P) may not be sufficient – MTTDL ~ 7 years (simulated, MTTF_disk = 600 khrs, Weibull, 100-PB usable). • 3-fault-tolerant declustered RAID (8+3P) is 400,000x better – MTTDL ~ 3x10^6 years (simulated, MTTF_disk = 600 khrs, Weibull, 100-PB usable) – Guards against unexpected correlated failures. © Copyright IBM Corporation 2015
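The slide's RAID-5 arithmetic can be checked directly. This is a sketch in Python of the stated numbers; the ~26-rebuild and ~6.5-day figures fall out of the quoted error rate, group width, and drive size:

```python
# One unrecoverable read error per 10^15 bits, vs. the bits read
# while rebuilding one 8-disk RAID-5 group of 600-GB drives.
hard_error_interval_bits = 1e15
disks_per_group = 8
disk_capacity_bytes = 600e9
bits_read_per_rebuild = disks_per_group * disk_capacity_bytes * 8

# Rebuilds between data-loss events, and days between them
# at the fleet-wide rate of four rebuilds per day.
rebuilds_per_data_loss = hard_error_interval_bits / bits_read_per_rebuild
rebuilds_per_day = 4
days_per_data_loss = rebuilds_per_data_loss / rebuilds_per_day

print(round(rebuilds_per_data_loss))      # ~26 rebuilds
print(round(days_per_data_loss, 1))       # ~6.5 days
```

The same back-of-envelope model is what makes the jump to 2- and 3-fault-tolerant codes compelling at petascale.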
  • 15. Features • Auto rebalancing • Only 2% rebuild performance hit • Reed-Solomon erasure code, “8 data + 3 parity” • ~10^5-year MTTDL for a 100-PB file system • End-to-end, disk-to-Spectrum Scale-client data checksums • No hardware storage controller • Software RAID on the I/O servers – SAS-attached JBOD – Special JBOD storage drawer for very dense drive packing – Solid-state drives (SSDs) for metadata storage © Copyright IBM Corporation 2015 (Diagram: NSD servers on the local area network (LAN) serving SAS-attached vdisk JBODs)
  • 16. Works within Spectrum Scale (GPFS) Network Shared Disk (NSD) © Copyright IBM Corporation 2015 (Diagram: traditional vs. GNR I/O stacks. Traditional: a compute node runs the GPFS client application and GPFS NSD client in user space, talking over control RPC and data RDMA to an I/O node running the GPFS NSD server, the GPFS kernel I/O layer, and the OS and HBA device drivers, down through a hardware disk array controller to the disks. GNR: the hardware controller is removed and a GPFS software controller – the GPFS vdisk layer (PERSEUS) – is added between the kernel I/O layer and the device drivers, driving the JBOD disks directly.)
  • 17. RAID algorithm • Two types of RAID: • 3- or 4-way replication • 8 + 2 or 8 + 3 parity • 2-fault- and 3-fault-tolerant codes (‘RAID-D2’, ‘RAID-D3’) © Copyright IBM Corporation 2015 (Diagram: 2-fault-tolerant codes – 3-way replication (1+2): 1 strip (GPFS block) plus 2 replicated strips; 8 + 2p Reed-Solomon: 8 strips (GPFS block) plus 2 redundancy strips. 3-fault-tolerant codes – 4-way replication (1+3): 1 strip plus 3 replicated strips; 8 + 3p Reed-Solomon: 8 strips plus 3 redundancy strips.)
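The usable-capacity trade-off between these codes is a one-line calculation. An illustrative Python sketch (the code names map to the replication and Reed-Solomon options on the slide):

```python
def storage_efficiency(data_strips, redundancy_strips):
    """Fraction of raw capacity left for user data under an erasure code."""
    return data_strips / (data_strips + redundancy_strips)

# (data strips, redundancy strips) for each GNR code
codes = {
    "3-way replication (1+2)": (1, 2),
    "4-way replication (1+3)": (1, 3),
    "8+2p Reed-Solomon":       (8, 2),
    "8+3p Reed-Solomon":       (8, 3),
}
for name, (d, p) in codes.items():
    print(f"{name}: {storage_efficiency(d, p):.0%} usable")
```

Replication buys fault tolerance at 25–33% efficiency; the 8+2p and 8+3p codes keep 80% and ~73% of raw capacity while tolerating the same 2 or 3 failures per stripe.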
  • 18. Declustered RAID • Data, parity and spare strips are uniformly and independently distributed across disk array. • Supports an arbitrary number of disks per array –Not restricted to an integral number of RAID track widths. © Copyright IBM Corporation 2015 Conventional Declustered
  • 19. Lower disk rebuild overhead • Improved file system performance during rebuild – The throughput of all operational disks is used for rebuilding after a disk failure, reducing the load on clients. – Why: since Spectrum Scale stripes data across all storage controllers, without declustering performance would be gated by the slowest rebuilding controller. • In large systems, some array is likely always rebuilding – 25,000 disks * 24 hours / (600,000-hour disk MTBF) = 1 rebuild / day • Or in a smaller storage array with out-of-spec failure rates – 1,500 disks * 2%-per-month failure rate * 1/30 month = 1 rebuild / day – With declustered GNR RAID • Non-critical rebuild overhead typically remains < 3%. • If risk increases with multiple failures, rebuild priority increases to reduce the time in exposure. © Copyright IBM Corporation 2015
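Both rebuild-rate estimates above reduce to one line of arithmetic each; a Python sketch of the slide's numbers:

```python
# Large fleet: expected rebuilds per day from disk count and MTBF
disks = 25_000
mtbf_hours = 600_000
rebuilds_per_day = disks * 24 / mtbf_hours
print(rebuilds_per_day)               # 1.0 rebuild / day

# Smaller array with an out-of-spec 2%-per-month failure rate
small_disks = 1_500
monthly_failure_rate = 0.02
small_rebuilds_per_day = small_disks * monthly_failure_rate / 30
print(small_rebuilds_per_day)         # 1.0 rebuild / day
```

Either way the conclusion is the same: at scale, "rebuilding" is the steady state, which is why keeping its overhead below ~3% matters.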
  • 20. Declustered RAID example © Copyright IBM Corporation 2015 (Diagram – Traditional: 3 1-fault-tolerant groups on 6 disks plus 1 spare disk; 7 tracks per group, 2 strips per track. GNR declustered: 7 disks holding the same 21 virtual tracks (42 strips) plus 7 spare strips – 49 strips in total – uniformly spread across all disks.)
  • 21. Declustered RAID rebuild © Copyright IBM Corporation 2015 (Diagram: read/write activity after a disk failure.) Conventional: the failed disk’s 7 strips are read from its mirror partner and written to the dedicated spare, so only 2 disks do the work: RebuildTime = (7 rd + 7 wr) stripTimes / 2 disks. Declustered: the failed disk held 6 data strips, and the 6 reads and 6 writes are spread across all 6 surviving disks: RebuildTime = (6 rd + 6 wr) stripTimes / 6 disks. RebuildSpeedup = (7/2) / (6/6) = 3.5
  • 22. High reliability • Mean time to data loss with 50,000 disks: – 3-fault tolerance (8+3P) • MTTDL ≈ 200 million years • Annual failure rate (47-disk array) ≈ 4 x 10^-12 – 2-fault tolerance (8+2P) • MTTDL ≈ 200 years • Annual failure rate (47-disk array) ≈ 5 x 10^-6 – 1-fault tolerance • MTTDL ≈ 1 week (due to latent sector errors) – 10^15 bits / (8 disks * 600-GB disks * 8 bits/byte) = 26 rebuilds, at 4 rebuilds/day © Copyright IBM Corporation 2015 Simulation assumptions: disk capacity = 600 GB, MTTF = 600 khrs, hard error rate = 1 in 10^15 bits, 47-HDD declustered arrays, uncorrelated failures
  • 23. Deferred disk maintenance • With GNR, when disks fail and are restored before another failure, multiple disks can fail sequentially without data loss. – For example, RAID-D3 with 2 disks’ worth of spare space can handle up to 5 sequential disk failures. • With RAID-D3, disk maintenance can be deferred with a policy that replaces a disk after the second disk failure. * Fewer maintenance calls with combined disk replacements. – A maintenance interval of a month or longer is possible. – No more evening panic calls for immediate maintenance on common FRU replacements. • This reduces the probability of improper maintenance and/or unintended side effects. © Copyright IBM Corporation 2015
  • 24. Data integrity manager • Highest priority: Restore redundancy after disk failure(s) – Rebuild data stripes in order of 3, 2, and 1 erasures – Fraction of stripes affected when 3 disks have failed (assuming 8+3p, 47 disks): • 23% of stripes have 1 erasure (= 11/47) • 5% of stripes have 2 erasures (= 11/47 * 10/46) • 1% of stripes have 3 erasures (= 11/47 * 10/46 * 9/45) • Medium priority: Rebalance spare space after disk install – Restores uniform declustering of data, parity, and spare strips. • Low priority: Scrub and repair media faults – Verifies checksum/consistency of data and parity/mirror. © Copyright IBM Corporation 2015
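The erasure fractions above follow from how an 8+3p stripe (11 strips) lands on a 47-disk array; a short Python sketch, where `erasure_fraction` is a hypothetical helper reproducing the slide's argument:

```python
from math import prod

def erasure_fraction(erasures, disks=47, strips_per_stripe=11):
    """Fraction of stripes with a strip on each of `erasures` failed
    disks, i.e. stripes suffering that many erasures (8+3p on 47 disks)."""
    return prod((strips_per_stripe - i) / (disks - i) for i in range(erasures))

for k in (1, 2, 3):
    print(f"{k} erasure(s): {erasure_fraction(k):.0%} of stripes")
```

This is why rebuilding in order of 3, 2, then 1 erasures works so well: only ~1% of stripes are critically exposed after a triple failure, so redundancy is restored for the most vulnerable data almost immediately.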
  • 25. End-to-end checksum • True end-to-end checksum from disk surface to client’s Spectrum Scale interface – Repairs soft/latent read errors – Repairs lost/missing writes. • Checksums are maintained on disk and in memory and are transmitted to/from client. • Checksum is stored in a 64-byte trailer of 32-KiB buffers – 8-byte checksum and 56 bytes of ID and version info – Sequence number used to detect lost/missing writes. © Copyright IBM Corporation 2015 8 data strips 3 parity strips 32-KiB buffer 64B trailer ¼ to 2-KiB terminus
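A toy model may make the trailer and lost-write detection clearer. This is an illustrative sketch only: the field packing and the use of `zlib.crc32` are assumptions for demonstration, not GNR's actual on-disk format (the slide specifies only an 8-byte checksum, 56 bytes of ID/version info, and a sequence number):

```python
import struct
import zlib

BUFFER_SIZE = 32 * 1024      # 32-KiB I/O buffer, trailer included
TRAILER_SIZE = 64            # 56 bytes ID/version info + 8-byte checksum

def add_trailer(payload: bytes, vdisk_id: int, sequence: int) -> bytes:
    """Append a 64-byte trailer covering the payload and the ID info."""
    assert len(payload) == BUFFER_SIZE - TRAILER_SIZE
    ident = struct.pack("<QQ", vdisk_id, sequence).ljust(56, b"\0")
    checksum = struct.pack("<Q", zlib.crc32(payload + ident))
    return payload + ident + checksum

def verify(buffer: bytes, expected_sequence: int) -> bool:
    """Recompute the checksum and compare the sequence number, catching
    both corrupted reads and lost/missing writes (stale sequence)."""
    payload, ident, stored = buffer[:-64], buffer[-64:-8], buffer[-8:]
    _vdisk_id, sequence = struct.unpack("<QQ", ident[:16])
    if struct.unpack("<Q", stored)[0] != zlib.crc32(payload + ident):
        return False                        # media corruption detected
    return sequence == expected_sequence    # stale sequence => lost write
```

The key idea the sketch captures: a checksum alone cannot detect a lost write (the stale buffer still checksums correctly), which is why the sequence number is needed.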
  • 26. IO Node Failover © Copyright IBM Corporation 2015 Minimal configuration of two Spectrum Scale Native RAID servers and one storage JBOD. Spectrum Scale Native RAID server 1 is the primary controller for the first recovery group and backup for the second recovery group. Spectrum Scale Native RAID server 2 is the primary controller for the second recovery group and backup for the first recovery group. As shown, when server 1 fails, control of the first recovery group is taken over by its backup server 2. During the failure of server 1, the load on backup server 2 increases by 100% from one to two recovery groups.
  • 27. Comprehensive disk and path diagnostics • The asynchronous ‘disk hospital’ design allows careful problem determination of disk faults – While a disk is in the disk hospital, reads are reconstructed from parity. – For writes, strips are marked stale and repaired later when the disk leaves. – I/Os are resumed in under 10 seconds. • Thorough fault determination – Power-cycling drives to reset them – Neighbor checking – Supports multi-disk carriers. • Disk enclosure management – Uses the SES interface for lights, latch locks, disk power, and so on. • Manages topology and hardware configuration. © Copyright IBM Corporation 2015
  • 28. Disk hospital operations • Before taking severe actions against a disk, GNR checks neighboring disks to decide whether some systemic problem may be behind the failure. • Tests paths using SCSI Test Unit Ready commands. • Power-cycles disks to try to clear certain errors. • Reads or writes the sectors where an I/O error occurred in order to test for media errors. • Works with higher levels to rewrite bad sectors. • Polls disabled paths. © Copyright IBM Corporation 2015 Analysis with predictive actions to support best-practice healing (almost like a real hospital)
  • 29. Storage component hierarchy (GNR+JBOD) • A recovery group can have: – max 512 pdisks – max 16 declustered arrays – at least 1 SSD log vdisk – max 64 vdisks • A declustered array: – can contain up to 128 pdisks – smallest is 4 pdisks – each recovery group must have at least one large declustered array (>= 11 pdisks) – needs 1 or more pdisks’ worth of spare space • Vdisks – Vdisks are volumes that become NSDs under Spectrum Scale control. – Block sizes: 1 MiB, 2 MiB, 4 MiB, 8 MiB, and 16 MiB © Copyright IBM Corporation 2015 (Diagram: pdisks grouped into Recovery Group left and Recovery Group right, each holding declustered arrays (DA); the vdisks (VD) carved from the arrays become NSDs.)
  • 30. GNR Commands: pdisks •mmaddpdisk – Adds a pdisk to a Spectrum Scale Native RAID recovery group. •mmdelpdisk – Deletes Spectrum Scale Native RAID pdisks. •mmlspdisk – Lists information for one or more Spectrum Scale Native RAID pdisks. •mmchcarrier – Allows Spectrum Scale Native RAID Physical Disks (pdisks) to be physically removed and replaced. © Copyright IBM Corporation 2015
  • 31. GNR Commands: Recovery groups •mmlsrecoverygroup – Lists information about Spectrum Scale Native RAID recovery groups. •mmlsrecoverygroupevents – Displays the Spectrum Scale Native RAID recovery group event log. •mmchrecoverygroup – Changes Spectrum Scale Native RAID recovery group and declustered array attributes. •mmcrrecoverygroup – Creates a Spectrum Scale Native RAID recovery group and its component declustered arrays and pdisks and specifies the servers. •mmdelrecoverygroup – Deletes a Spectrum Scale Native RAID recovery group. © Copyright IBM Corporation 2015
  • 32. GNR Commands: vdisk •mmdelvdisk – Deletes vdisks from a declustered array in a Spectrum Scale Native RAID recovery group. •mmlsvdisk – Lists information for one or more Spectrum Scale Native RAID vdisks. •mmcrvdisk – Creates a vdisk within a declustered array of a Spectrum Scale native RAID recovery group. © Copyright IBM Corporation 2015
  • 33. Hints and tips © Copyright IBM Corporation 2015 With Elastic Storage Server the client must become a competent administrator of several technologies: IBM Power8, AIX, Red Hat Enterprise Linux 7, xCAT, Spectrum Scale 4.1, and Spectrum Scale Native RAID. * You should always suggest adding services for knowledge transfer and ensure that your clients have links and document references to the support information required to effectively manage their Spectrum Scale or Elastic Storage Server systems. With Elastic Storage Server and GNR you probably don’t want any 256K file systems, as GNR only supports data blocksizes down to 512K; a non-vdisk file system using a 256K block can therefore never have a pool of vdisk-based storage. Clients see better large-file sequential performance as they increase file system block size, as expected. As they grow, they can update maxblocksize on all client clusters and test all the way up to 16M to find the best fit for their workloads. However, with a large share of small files they will want to keep the blocksize low to prevent subblock waste, since the minimum space a file’s data consumes is 1/32 of the file system blocksize: a 5K file will take up 32K in a file system with a 1MB blocksize.
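The subblock-waste rule of thumb is easy to quantify. A sketch with a hypothetical helper, assuming the classic 1/32-of-blocksize subblock granularity described above:

```python
import math

def space_consumed(file_size, block_size, subblocks_per_block=32):
    """Minimum on-disk space for a file's data: whole subblocks,
    where a subblock is block_size / 32."""
    subblock = block_size // subblocks_per_block
    return math.ceil(file_size / subblock) * subblock

MiB = 1024 * 1024
print(space_consumed(5 * 1024, 1 * MiB))    # 5-KiB file in a 1-MiB-block fs
print(space_consumed(5 * 1024, 16 * MiB))   # same file in a 16-MiB-block fs
```

The same 5-KiB file consumes 32 KiB at a 1-MiB blocksize but 512 KiB at 16 MiB, which is why a workload dominated by small files argues for the smaller blocksize despite the sequential-throughput benefits of larger blocks.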
  • 34. Hints and tips © Copyright IBM Corporation 2015 With Elastic Storage Server, make sure power is redundantly connected so that power issues do not surprise your clients well into production. Keep it simple (left to right), fully redundant.
  • 35. Review • Elastic Storage Server is specifically designed to provide simplified, optimized, scalable building blocks for Spectrum Scale file system deployments and to allow for the integration of GNR • Elastic Storage Server has 7 models (4 GS models of small form factor for SSD & SAS drives, and 3 GL models of large form factor for NL-SAS drives) • Elastic Storage Server ships with 1 week of Lab Services for installation, and installation is generally complicated enough to require that week of services; it is also good to plan additional Lab Services for knowledge transfer for a client’s first install • Spectrum Scale Native RAID (GNR) removes the need for a RAID controller and optimizes RAID management for Spectrum Scale file system performance and reliability • Declustered RAID and Reed-Solomon algorithms allow non-critical rebuild overhead to typically remain < 3% of a performance impact • A well-laid plan is cognizant of sizing the technology to the workloads and avoids too many baked-in assumptions. © Copyright IBM Corporation 2015
  • 36. Any Questions on ESS, GNR, Hints and Tips Questions © Copyright IBM Corporation 2015