Find most economic disk setup with BVQ

   Michael Pirker
   Michael.Pirker@SVA.de
   +49 151 180 25 26 00

Bsys2.pptx
Target for this presentation
   • Target
         • A mail environment has to be moved from an oversized 10k RPM 300 GB storage system to a new
           storage environment. The new environment shall use 7.2k RPM 1 TB drives with RAID 6.
         • BVQ is already installed, so we can measure IOPS, R/W distribution, and cache behavior to
           produce the most economical sizing.

         • We analyze the performance of all disks that are involved (a short sanity check of the
           cache arithmetic follows below)
              • We can prove that 2,500 IOPS is what we need
              • It is already known that the R/W distribution is 55% read
              • The diagram shows a constant read cache hit rate of 90%
              • The diagram shows a constant write cache hit rate (overwrite in cache) of 15%.
                SVC puts all write data into the cache, but 15% of this data is overwritten before
                destaging happens, so these IOPS never reach the disks.
              • Safety margins: we see transfer sizes that will reduce the write penalty for RAID 6, and
                we use a conservative IOPS estimate for 7.2k drives. We also do not take into account that
                the back-end storage system may further improve I/O behavior with its own cache.


Page 2
Select Volumes for performance analysis

   • Make use of the BVQ Accounting Package:
        • Create a Volume Group of all volumes that belong to the mail system
        • Analyze the Volume Group
   • Without the Accounting Package, you can select all volumes by search.




Page 3
Facts measured to support calculation
   • Requested
         • 40 TB capacity
         • 2,500 IOPS

   • Facts from chart (4 work days)
         • R/W distribution: 55% read
         • Cache hit read: 90%
         • Cache hit write: 15%

   • Big transfer sizes improve the
     "write penalty" for RAID 6
     (full-stripe writes)

   [Chart panels: IOPS R/W, Transfer Size R/W, Cache Hit Read, Cache Hit Write]



Page 4
Results with Spreadsheet
     •   Target: 2,500 IOPS

     •   R/W distribution: measured before; splits the IOPS into
         reads and writes

     •   Cache hit read: reduces the read IOPS by the measured
         cache hit rate

     •   Cache hit write: reduces the write IOPS (overwrite in
         cache) by the measured cache hit rate

     •   IOPS eff. is the sum of the remaining read IOPS
         and write IOPS

     •   The number of disks is then calculated with this formula
         (shown only as an image in the deck; an illustrative
         reconstruction follows below)

     Result: 48 disks at 7.2k RPM

     A good fit, because we need 40 TB of
     capacity
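     The disk-count formula itself is only visible as an image in the original slides, so the
     sketch below is a hypothetical reconstruction under common sizing assumptions: a RAID 6
     write penalty (up to 6 back-end ops per small random write, lower with the full-stripe
     writes observed here) and a per-drive IOPS rating for 7.2k disks. Both parameters are our
     assumptions, not values from the deck, so this will not necessarily reproduce the 48-disk
     result, which is additionally constrained by the 40 TB capacity requirement.

     ```python
     # Hypothetical reconstruction of the disk-count step; the deck's actual
     # spreadsheet formula is not reproduced here. write_penalty and
     # iops_per_disk are assumed values, not measurements from the slides.
     import math

     def disks_needed(disk_reads, disk_writes, write_penalty, iops_per_disk):
         """Back-end disk operations divided by the rating of a single drive."""
         backend_ops = disk_reads + write_penalty * disk_writes
         return math.ceil(backend_ops / iops_per_disk)

     # Assumed: penalty ~4 (full-stripe writes reduce the worst-case 6),
     # ~75 IOPS as a conservative 7.2k RPM drive rating.
     print(disks_needed(disk_reads=138, disk_writes=956,
                        write_penalty=4, iops_per_disk=75))
     ```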



Page 5
What makes this result particularly valuable?

         • It is calculated from measured facts,
              which dramatically decreases the number of disks needed

         • It is the most economical result

         • It carries a minimum of costly safety margin



             Other scenarios without knowledge of the measured facts
             (reproduced in the sketch below):

             Assume RAID 6, 50% read, and a 70% read cache hit:
               IOPSeff = 1,625  →  72 disks

             Assume RAID 6 and nothing more:
               IOPSeff = 2,500  →  201 disks
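             Both scenario figures fall out of the effective_iops() sketch shown earlier,
             assuming the same simple miss/destage model (the disk counts additionally
             depend on the unshown disk formula):

             ```python
             # Scenario 1: RAID 6, 50% read, 70% read cache hit, no write-cache knowledge
             r, w = effective_iops(2500, read_share=0.50, read_hit=0.70, write_overwrite=0.0)
             print(round(r + w))  # 1625, matching the slide

             # Scenario 2: RAID 6 and nothing more -- no cache knowledge at all
             r, w = effective_iops(2500, read_share=0.50, read_hit=0.0, write_overwrite=0.0)
             print(round(r + w))  # 2500
             ```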



Page 6
BVQ on the web

   • BVQ website
       http://www.bvq-software.com/                  (still German; will change to English by Feb 25, 2013 at the latest)
       http://www.bvq-software.de/                   (German)
       http://bvqwiki.sva.de                         (technical wiki with downloads – international)

   • BVQ videos on YouTube
       http://www.youtube.com/user/SVAGmbH

   • SVA GmbH website
       http://www.sva.de/


   International websites

   • developerWorks documents and presentations
         https://www.ibm.com/developerworks/mydeveloperworks/...
         http://tinyurl.com/BVQ-Documents




Page 7
