SlideShare a Scribd company logo
1 of 15
Take control of storage performance
Have You Ever…


      Had to add more disk drives, even though you
            already have too much capacity?




                                   Decided not to run an application on your storage
                                   system to avoid pushing performance off a cliff?

                 Performance




      Implemented a new storage infrastructure to
     isolate a workload from existing applications?




2
There’s No Way to Manage Storage Performance


    • Performance is configured, not managed
    • Forces you to buy resources you don’t need


                 Disk Drives                             Solid-state                     Tiering
        Size by performance= excess capacity   Size by capacity = excess performance   Adds complexity




                              Excess                 Excess                               Complex
                             Capacity             Performance                           Unpredictable




                                        • Every workload impacts every other workload
                                        • There’s no way to predict performance
                                        • Getting it wrong is expensive and painful


3
Forget What You Know




                     • Guarantee performance for applications with:
                       - Quality of Service
                       - Service Levels


                     • Higher storage efficiency with:
                       - PCIe Solid-state minimizes the performance footprint
                       - Dynamic Data Placement provides best price-performance
                       - Data Reduction reduces system $/GB




4
NexGen n5 Storage System



     • Active-Active for Enterprise High Availability   • Real-time Dynamic Data Placement
     • Balanced Performance & Capacity                  • Inline Data Reduction
       - PCIe Solid-state                               • Performance Quality of Service (QoS)
       - 7.2k RPM MDL SAS                               • Performance Service Levels




                                                                   ALL-IN PRICING


5
Managing Performance Requires QoS
     Configuring SAN performance                                        Quality of Service
     Applications share all performance                                 Set QoS based each application’s need
                                                           TOTAL SYSTEM
                                                           100,000 IOPS
                                                            REMAINING
                                                           100,000IOPS
                                                           70,000 IOPS
                                                           40,000
                                                            45,000

       Capacity    250 GB       500 GB         900 GB                   Capacity        250 GB         500 GB           900 GB
    Performance        Cannot manage performance                  Performance        30,000 IOPS     25,000 IOPS      5,000 IOPS


              Shared resources = contention                                Eliminate resource contention with QoS


                                                                                    30,000 IOPS
                                                                                                    20,000 IOPS
                                                                 Guaranteed
                                                                                                                     5,000 IOPS
Performance                                                       Minimum
                                                                 Performance Mission
                                                                                   Critical
                                                                                   Floor           Business
                                                                                                   Critical
                                                   Floor                                           Floor
                                                                                                                   Non-Critical
                                                                                                                   Floor
                  Unpredictable                                                               Managed
                   Inefficient                                                                Optimized

6
Quality of Service in Action




7
Service Levels For Total Control

    Degraded mode impacts                            Quality of Service
    No control or priority over performance levels   Prioritized performance in degraded mode operation

                      • COMPONENT FAILURE!                               • COMPONENT FAILURE!
                      • SYSTEM UPGRADE!                                  • SYSTEM UPGRADE!
                      • REBUILD PROCESS!                                 • REBUILD PROCESS!

                                                                    Mission       Business      Non
                                                                    Critical       Critical    Critical




                                                      Performance
        Performance




                         Conventional
                           Storage

                        No Priority                                             Prioritized
                        No Control                                             Total Control

8
Service Levels in Action




                               Storage Processor Fail
                                  * Overall Impact -36%




                                Mission Critical Volumes
                                           not Impacted




                                Business Critical Volumes
                                          Impacted -40%




                                    Non Critical Volumes
                                         Impacted -50%




9
PCIe Solid-state Is More Efficient

     Solid-state behind SAS                  Solid-state on PCIe
     Designed for high latency disk drives   Designed for CPU and RAM, extreme low latency




                 Lower Capacity                   Maximum Capacity
              Limited Performance                Maximum Performance

10
Leveraging Solid-State for Every Workload
                                    Tier
                                    • Application sends a block write IOP
                                    • The data block is mirrored
                                       -   Data exists on two PCIe solid-state devices
                                       -   Data is in a highly available state
                                    • The block write IOP is acknowledged


                                    Cache (Writes and Reads)
                                    • The redundant copy is moved to disk
                                    • Original copy in solid-state used for writes/reads



                                                   Processor/PCIe Solid-State Offline
                                                   • Data is rebuilt using redundant copy

           P

                                    Archive
                                    • Original copy evicted from solid-state
                                    • Infrequently accessed blocks stored on disk



11
Dynamic Data Placement For Best Price/Performance

     Automated tiering                                 Dynamic Data Placement
     Good performance at a lower $/GB                  Best price/performance ratio




      Response                                         Response
          Time   ??          ??            ??                     3ms           5 ms
                                                                                20ms                  30ms
                                                           Time
                                                                        Migrate volumes between
                                                                            policies on the fly



                                           Fast Tier                                              PCIe Solid-state




                                       Capacity Tier                                                         Disk


                  Reactive Automation                               Real-Time Decision Factors
                                                                    •   Current performance
                                                                    •   QoS setting
                                                                    •   Dedupe ratio
                                                                    •   Last accessed & frequency


                      After-The-Fact                                    Proactive
                         Complex                                         Simple

12
Data Reduction For Lowest $/GB
     Deduplication                                          Data Reduction
     Designed for backup, forces trade-offs                 Designed for primary storage

     Post Process                                     Fully integrated into the data path
     •    Buy extra capacity                          •   All volumes are 100% deduped at create
     •    Impacts performance




     Inline
     •    Requires resources,                         Inline data reduction
          impacts latency        Latency              •   Pattern matching leverages 48 cores of processing
     •    Not acceptable for      impact
                                                      •   Immediate utilization impact
          primary storage
                                                      •   QoS controlled to eliminate performance impact
                                                                         0000 1111
                                           $50/GB
     All solid-state w/ dedupe                                              ack
     •    Doesn’t Improve $/GB
                                           $10/GB
                                                      Default thin provisioning for all volumes
                                              $1/GB
                                                      •   Improved capacity utilization


              Reduces Performance, or                           No Performance Impacts
               Costs Around $10/GB                                    Lower $/GB

13
NexGen n5 Storage System*
                                                                                                                        *Patents Pending




                                                                                           n5 Storage System

         Quality of      Dynamic Data                Data
          Service         Placement                Reduction           Active-active storage processors
                                                                       Redundant disks, fans, and power supplies
                                                                       48 GB RAM
     Volume QoS       Real-time, not batch   Inline pattern matching   1.28 TB PCI-e Fusion-io solid-state
     Service Levels   Heuristics-based       Volume level dedupe       32 TB 7.2K raw, 22 TB usable
     Reporting        N-Tier architecture    Thin provisioning         4 10 GbE or 16 1 GbE data ports, iSCSI

     Live Policy                                                       Optional performance pack (640 GB solid-state)
                      QoS driven             Variable block ingest
     Migration                                                         Optional capacity pack (32 TB disk)




14
NexGen Storage Introductory Presentation

More Related Content

What's hot

Rapid Deployment of Novell ZENworks Configuration Management
Rapid Deployment of Novell ZENworks Configuration ManagementRapid Deployment of Novell ZENworks Configuration Management
Rapid Deployment of Novell ZENworks Configuration ManagementNovell
 
Novell Success Stories: Endpoint Management in Government
Novell Success Stories: Endpoint Management in GovernmentNovell Success Stories: Endpoint Management in Government
Novell Success Stories: Endpoint Management in GovernmentNovell
 
DB2 for i 7.1 - Whats New?
DB2 for i 7.1 - Whats New?DB2 for i 7.1 - Whats New?
DB2 for i 7.1 - Whats New?COMMON Europe
 
World Class Manufacturing Asset Utilization
World Class Manufacturing Asset UtilizationWorld Class Manufacturing Asset Utilization
World Class Manufacturing Asset Utilizationlksnyder
 
Improve DB2 z/OS Test Data Management
Improve DB2 z/OS Test Data ManagementImprove DB2 z/OS Test Data Management
Improve DB2 z/OS Test Data Managementsoftbasemarketing
 
Tivoli Storage Productivity Center... What’s new in v4.2.2?
Tivoli Storage Productivity Center... What’s new in v4.2.2?Tivoli Storage Productivity Center... What’s new in v4.2.2?
Tivoli Storage Productivity Center... What’s new in v4.2.2?IBM India Smarter Computing
 
Nutanix Always On-Solution-Brief
Nutanix Always On-Solution-BriefNutanix Always On-Solution-Brief
Nutanix Always On-Solution-BriefManny Carral
 
Introducing Novell Conferencing
Introducing Novell ConferencingIntroducing Novell Conferencing
Introducing Novell ConferencingNovell
 
Arrows Group Event 06
Arrows Group Event 06Arrows Group Event 06
Arrows Group Event 06agilekev
 
Developer Conference 1.4 - Customer In Focus- Sammons Financial Group (SFO)
Developer Conference 1.4 - Customer In Focus- Sammons Financial Group (SFO)Developer Conference 1.4 - Customer In Focus- Sammons Financial Group (SFO)
Developer Conference 1.4 - Customer In Focus- Sammons Financial Group (SFO)Micro Focus
 
The CIBER / CA partnership & Why CIBER is moving to Nimsoft Monitor
The CIBER / CA partnership & Why CIBER is moving to Nimsoft MonitorThe CIBER / CA partnership & Why CIBER is moving to Nimsoft Monitor
The CIBER / CA partnership & Why CIBER is moving to Nimsoft Monitor CA Nimsoft
 
Jee performance tuning existing applications
Jee performance tuning existing applicationsJee performance tuning existing applications
Jee performance tuning existing applicationsShivnarayan Varma
 
Vnx series-technical-review-110616214632-phpapp02
Vnx series-technical-review-110616214632-phpapp02Vnx series-technical-review-110616214632-phpapp02
Vnx series-technical-review-110616214632-phpapp02Newlink
 
Automating user provisioning with SAP NW BPM
Automating user provisioning with SAP NW BPMAutomating user provisioning with SAP NW BPM
Automating user provisioning with SAP NW BPMBalakrishnan Bala B
 
OOW 09 EBS Application Change Management Pack
OOW 09 EBS Application Change Management PackOOW 09 EBS Application Change Management Pack
OOW 09 EBS Application Change Management Packjucaab
 
1 emc vs_compellent
1 emc vs_compellent1 emc vs_compellent
1 emc vs_compellentjyoti_j2
 
IBM Cloud Burst postavená na platforme IBM System x
IBM Cloud Burst postavená na platforme IBM System xIBM Cloud Burst postavená na platforme IBM System x
IBM Cloud Burst postavená na platforme IBM System xASBIS SK
 
App Dynamics & SOASTA Testing & Monitoring Converge, March 2012
App Dynamics & SOASTA Testing & Monitoring Converge, March 2012App Dynamics & SOASTA Testing & Monitoring Converge, March 2012
App Dynamics & SOASTA Testing & Monitoring Converge, March 2012SOASTA
 

What's hot (20)

Rapid Deployment of Novell ZENworks Configuration Management
Rapid Deployment of Novell ZENworks Configuration ManagementRapid Deployment of Novell ZENworks Configuration Management
Rapid Deployment of Novell ZENworks Configuration Management
 
Novell Success Stories: Endpoint Management in Government
Novell Success Stories: Endpoint Management in GovernmentNovell Success Stories: Endpoint Management in Government
Novell Success Stories: Endpoint Management in Government
 
DB2 9.7 Overview
DB2 9.7 OverviewDB2 9.7 Overview
DB2 9.7 Overview
 
DB2 for i 7.1 - Whats New?
DB2 for i 7.1 - Whats New?DB2 for i 7.1 - Whats New?
DB2 for i 7.1 - Whats New?
 
World Class Manufacturing Asset Utilization
World Class Manufacturing Asset UtilizationWorld Class Manufacturing Asset Utilization
World Class Manufacturing Asset Utilization
 
Improve DB2 z/OS Test Data Management
Improve DB2 z/OS Test Data ManagementImprove DB2 z/OS Test Data Management
Improve DB2 z/OS Test Data Management
 
Tivoli Storage Productivity Center... What’s new in v4.2.2?
Tivoli Storage Productivity Center... What’s new in v4.2.2?Tivoli Storage Productivity Center... What’s new in v4.2.2?
Tivoli Storage Productivity Center... What’s new in v4.2.2?
 
Nutanix Always On-Solution-Brief
Nutanix Always On-Solution-BriefNutanix Always On-Solution-Brief
Nutanix Always On-Solution-Brief
 
Introducing Novell Conferencing
Introducing Novell ConferencingIntroducing Novell Conferencing
Introducing Novell Conferencing
 
Arrows Group Event 06
Arrows Group Event 06Arrows Group Event 06
Arrows Group Event 06
 
Developer Conference 1.4 - Customer In Focus- Sammons Financial Group (SFO)
Developer Conference 1.4 - Customer In Focus- Sammons Financial Group (SFO)Developer Conference 1.4 - Customer In Focus- Sammons Financial Group (SFO)
Developer Conference 1.4 - Customer In Focus- Sammons Financial Group (SFO)
 
The CIBER / CA partnership & Why CIBER is moving to Nimsoft Monitor
The CIBER / CA partnership & Why CIBER is moving to Nimsoft MonitorThe CIBER / CA partnership & Why CIBER is moving to Nimsoft Monitor
The CIBER / CA partnership & Why CIBER is moving to Nimsoft Monitor
 
Jee performance tuning existing applications
Jee performance tuning existing applicationsJee performance tuning existing applications
Jee performance tuning existing applications
 
Vnx series-technical-review-110616214632-phpapp02
Vnx series-technical-review-110616214632-phpapp02Vnx series-technical-review-110616214632-phpapp02
Vnx series-technical-review-110616214632-phpapp02
 
Automating user provisioning with SAP NW BPM
Automating user provisioning with SAP NW BPMAutomating user provisioning with SAP NW BPM
Automating user provisioning with SAP NW BPM
 
OOW 09 EBS Application Change Management Pack
OOW 09 EBS Application Change Management PackOOW 09 EBS Application Change Management Pack
OOW 09 EBS Application Change Management Pack
 
Cisco Unified Computing System
Cisco Unified Computing SystemCisco Unified Computing System
Cisco Unified Computing System
 
1 emc vs_compellent
1 emc vs_compellent1 emc vs_compellent
1 emc vs_compellent
 
IBM Cloud Burst postavená na platforme IBM System x
IBM Cloud Burst postavená na platforme IBM System xIBM Cloud Burst postavená na platforme IBM System x
IBM Cloud Burst postavená na platforme IBM System x
 
App Dynamics & SOASTA Testing & Monitoring Converge, March 2012
App Dynamics & SOASTA Testing & Monitoring Converge, March 2012App Dynamics & SOASTA Testing & Monitoring Converge, March 2012
App Dynamics & SOASTA Testing & Monitoring Converge, March 2012
 

Similar to NexGen Storage Introductory Presentation

Nevmug Martins Point Health Care J Anuary 2009
Nevmug   Martins Point Health Care   J Anuary 2009Nevmug   Martins Point Health Care   J Anuary 2009
Nevmug Martins Point Health Care J Anuary 2009csharney
 
VMware Technology: Deliver Predictable Application Performance & Improve Infr...
VMware Technology: Deliver Predictable Application Performance & Improve Infr...VMware Technology: Deliver Predictable Application Performance & Improve Infr...
VMware Technology: Deliver Predictable Application Performance & Improve Infr...NetApp
 
Taking Lab Management to the Next Level - QualiSystems & Testwise in a joint PPT
Taking Lab Management to the Next Level - QualiSystems & Testwise in a joint PPTTaking Lab Management to the Next Level - QualiSystems & Testwise in a joint PPT
Taking Lab Management to the Next Level - QualiSystems & Testwise in a joint PPTqualisystems
 
Business Demands Vs Unchecked Standards And Recommendations
Business Demands Vs Unchecked Standards And RecommendationsBusiness Demands Vs Unchecked Standards And Recommendations
Business Demands Vs Unchecked Standards And RecommendationsArturo Saavedra
 
Sun storage tek 2500 series disk array customer presentation
Sun storage tek 2500 series disk array customer presentationSun storage tek 2500 series disk array customer presentation
Sun storage tek 2500 series disk array customer presentationxKinAnx
 
Flex pod and_the_dark_knight_rises_generic
Flex pod and_the_dark_knight_rises_genericFlex pod and_the_dark_knight_rises_generic
Flex pod and_the_dark_knight_rises_genericMaulie Dass
 
Vm Turbo Slide Deck
Vm Turbo Slide DeckVm Turbo Slide Deck
Vm Turbo Slide Deckprattysd12
 
Engineered Systems: Oracle’s Vision for the Future
Engineered Systems: Oracle’s Vision for the FutureEngineered Systems: Oracle’s Vision for the Future
Engineered Systems: Oracle’s Vision for the FutureBob Rhubart
 
Infrastruttura Scalabile Per Applicazioni Aziendali Sun Microsystems - Virt...
Infrastruttura Scalabile Per Applicazioni Aziendali   Sun Microsystems - Virt...Infrastruttura Scalabile Per Applicazioni Aziendali   Sun Microsystems - Virt...
Infrastruttura Scalabile Per Applicazioni Aziendali Sun Microsystems - Virt...Walter Moriconi
 
Vmt Company Overview Draf Tv5.New
Vmt Company Overview Draf Tv5.NewVmt Company Overview Draf Tv5.New
Vmt Company Overview Draf Tv5.Newprattysd12
 
Mission critical computing by intel
Mission critical computing by intelMission critical computing by intel
Mission critical computing by intelHP ESSN Philippines
 
Sql Server 2008 Performance and Scaleability
Sql Server 2008 Performance and ScaleabilitySql Server 2008 Performance and Scaleability
Sql Server 2008 Performance and Scaleabilitydataplex systems limited
 
Presentation sun storage 6780 solution
Presentation   sun storage 6780 solutionPresentation   sun storage 6780 solution
Presentation sun storage 6780 solutionxKinAnx
 
Architecture Best Practices on Windows Azure
Architecture Best Practices on Windows AzureArchitecture Best Practices on Windows Azure
Architecture Best Practices on Windows AzureNuno Godinho
 
Track 3 - next generation computing
Track 3 - next generation computingTrack 3 - next generation computing
Track 3 - next generation computingEMC Forum India
 
Use Cases and Integration Scenarios with SAP Adaptive Computing Virtualization
Use Cases and Integration Scenarios with SAP Adaptive Computing VirtualizationUse Cases and Integration Scenarios with SAP Adaptive Computing Virtualization
Use Cases and Integration Scenarios with SAP Adaptive Computing VirtualizationGunther_01
 
My sqlstrategyroadmap
My sqlstrategyroadmapMy sqlstrategyroadmap
My sqlstrategyroadmapslidethanks
 
MySQL Strategy&Roadmap
MySQL Strategy&RoadmapMySQL Strategy&Roadmap
MySQL Strategy&Roadmapslidethanks
 

Similar to NexGen Storage Introductory Presentation (20)

saurabh soni rac
saurabh soni racsaurabh soni rac
saurabh soni rac
 
Nevmug Martins Point Health Care J Anuary 2009
Nevmug   Martins Point Health Care   J Anuary 2009Nevmug   Martins Point Health Care   J Anuary 2009
Nevmug Martins Point Health Care J Anuary 2009
 
VMware Technology: Deliver Predictable Application Performance & Improve Infr...
VMware Technology: Deliver Predictable Application Performance & Improve Infr...VMware Technology: Deliver Predictable Application Performance & Improve Infr...
VMware Technology: Deliver Predictable Application Performance & Improve Infr...
 
Taking Lab Management to the Next Level - QualiSystems & Testwise in a joint PPT
Taking Lab Management to the Next Level - QualiSystems & Testwise in a joint PPTTaking Lab Management to the Next Level - QualiSystems & Testwise in a joint PPT
Taking Lab Management to the Next Level - QualiSystems & Testwise in a joint PPT
 
Business Demands Vs Unchecked Standards And Recommendations
Business Demands Vs Unchecked Standards And RecommendationsBusiness Demands Vs Unchecked Standards And Recommendations
Business Demands Vs Unchecked Standards And Recommendations
 
Delphix
DelphixDelphix
Delphix
 
Sun storage tek 2500 series disk array customer presentation
Sun storage tek 2500 series disk array customer presentationSun storage tek 2500 series disk array customer presentation
Sun storage tek 2500 series disk array customer presentation
 
Flex pod and_the_dark_knight_rises_generic
Flex pod and_the_dark_knight_rises_genericFlex pod and_the_dark_knight_rises_generic
Flex pod and_the_dark_knight_rises_generic
 
Vm Turbo Slide Deck
Vm Turbo Slide DeckVm Turbo Slide Deck
Vm Turbo Slide Deck
 
Engineered Systems: Oracle’s Vision for the Future
Engineered Systems: Oracle’s Vision for the FutureEngineered Systems: Oracle’s Vision for the Future
Engineered Systems: Oracle’s Vision for the Future
 
Infrastruttura Scalabile Per Applicazioni Aziendali Sun Microsystems - Virt...
Infrastruttura Scalabile Per Applicazioni Aziendali   Sun Microsystems - Virt...Infrastruttura Scalabile Per Applicazioni Aziendali   Sun Microsystems - Virt...
Infrastruttura Scalabile Per Applicazioni Aziendali Sun Microsystems - Virt...
 
Vmt Company Overview Draf Tv5.New
Vmt Company Overview Draf Tv5.NewVmt Company Overview Draf Tv5.New
Vmt Company Overview Draf Tv5.New
 
Mission critical computing by intel
Mission critical computing by intelMission critical computing by intel
Mission critical computing by intel
 
Sql Server 2008 Performance and Scaleability
Sql Server 2008 Performance and ScaleabilitySql Server 2008 Performance and Scaleability
Sql Server 2008 Performance and Scaleability
 
Presentation sun storage 6780 solution
Presentation   sun storage 6780 solutionPresentation   sun storage 6780 solution
Presentation sun storage 6780 solution
 
Architecture Best Practices on Windows Azure
Architecture Best Practices on Windows AzureArchitecture Best Practices on Windows Azure
Architecture Best Practices on Windows Azure
 
Track 3 - next generation computing
Track 3 - next generation computingTrack 3 - next generation computing
Track 3 - next generation computing
 
Use Cases and Integration Scenarios with SAP Adaptive Computing Virtualization
Use Cases and Integration Scenarios with SAP Adaptive Computing VirtualizationUse Cases and Integration Scenarios with SAP Adaptive Computing Virtualization
Use Cases and Integration Scenarios with SAP Adaptive Computing Virtualization
 
My sqlstrategyroadmap
My sqlstrategyroadmapMy sqlstrategyroadmap
My sqlstrategyroadmap
 
MySQL Strategy&Roadmap
MySQL Strategy&RoadmapMySQL Strategy&Roadmap
MySQL Strategy&Roadmap
 

Recently uploaded

Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Wonjun Hwang
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
The Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfThe Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfSeasiaInfotech2
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 

Recently uploaded (20)

Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
The Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfThe Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdf
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 

NexGen Storage Introductory Presentation

  • 1. Take control of storage performance
  • 2. Have You Ever… Had to add more disk drives, even though you already have too much capacity? Decided not to run an application on your storage system to avoid pushing performance off a cliff? Performance Implemented a new storage infrastructure to isolate a workload from existing applications? 2
  • 3. There’s No Way to Manage Storage Performance • Performance is configured, not managed • Forces you to buy resources you don’t need Disk Drives Solid-state Tiering Size by performance= excess capacity Size by capacity = excess performance Adds complexity Excess Excess Complex Capacity Performance Unpredictable • Every workload impacts every other workload • There’s no way to predict performance • Getting it wrong is expensive and painful 3
  • 4. Forget What You Know • Guarantee performance for applications with: - Quality of Service - Service Levels • Higher storage efficiency with: - PCIe Solid-state minimizes the performance footprint - Dynamic Data Placement provides best price-performance - Data Reduction reduces system $/GB 4
  • 5. NexGen n5 Storage System • Active-Active for Enterprise High Availability • Real-time Dynamic Data Placement • Balanced Performance & Capacity • Inline Data Reduction - PCIe Solid-state • Performance Quality of Service (QoS) - 7.2k RPM MDL SAS • Performance Service Levels ALL-IN PRICING 5
  • 6. Managing Performance Requires QoS Configuring SAN performance Quality of Service Applications share all performance Set QoS based each application’s need TOTAL SYSTEM 100,000 IOPS REMAINING 100,000IOPS 70,000 IOPS 40,000 45,000 Capacity 250 GB 500 GB 900 GB Capacity 250 GB 500 GB 900 GB Performance Cannot manage performance Performance 30,000 IOPS 25,000 IOPS 5,000 IOPS Shared resources = contention Eliminate resource contention with QoS 30,000 IOPS 20,000 IOPS Guaranteed 5,000 IOPS Performance Minimum Performance Mission Critical Floor Business Critical Floor Floor Non-Critical Floor Unpredictable Managed Inefficient Optimized 6
  • 7. Quality of Service in Action 7
  • 8. Service Levels For Total Control Degraded mode impacts Quality of Service No control or priority over performance levels Prioritized performance in degraded mode operation • COMPONENT FAILURE! • COMPONENT FAILURE! • SYSTEM UPGRADE! • SYSTEM UPGRADE! • REBUILD PROCESS! • REBUILD PROCESS! Mission Business Non Critical Critical Critical Performance Performance Conventional Storage No Priority Prioritized No Control Total Control 8
  • 9. Service Levels in Action Storage Processor Fail * Overall Impact -36% Mission Critical Volumes not Impacted Business Critical Volumes Impacted -40% Non Critical Volumes Impacted -50% 9
  • 10. PCIe Solid-state Is More Efficient Solid-state behind SAS Solid-state on PCIe Designed for high latency disk drives Designed for CPU and RAM, extreme low latency Lower Capacity Maximum Capacity Limited Performance Maximum Performance 10
  • 11. Leveraging Solid-State for Every Workload Tier • Application sends a block write IOP • The data block is mirrored - Data exists on two PCIe solid-state devices - Data is in a highly available state • The block write IOP is acknowledged Cache (Writes and Reads) • The redundant copy is moved to disk • Original copy in solid-state used for writes/reads Processor/PCIe Solid-State Offline • Data is rebuilt using redundant copy P Archive • Original copy evicted from solid-state • Infrequently accessed blocks stored on disk 11
  • 12. Dynamic Data Placement For Best Price/Performance Automated tiering Dynamic Data Placement Good performance at a lower $/GB Best price/performance ratio Response Response Time ?? ?? ?? 3ms 5 ms 20ms 30ms Time Migrate volumes between policies on the fly Fast Tier PCIe Solid-state Capacity Tier Disk Reactive Automation Real-Time Decision Factors • Current performance • QoS setting • Dedupe ratio • Last accessed & frequency After-The-Fact Proactive Complex Simple 12
  • 13. Data Reduction For Lowest $/GB Deduplication Data Reduction Designed for backup, forces trade-offs Designed for primary storage Post Process Fully integrated into the data path • Buy extra capacity • All volumes are 100% deduped at create • Impacts performance Inline • Requires resources, Inline data reduction impacts latency Latency • Pattern matching leverages 48 cores of processing • Not acceptable for impact • Immediate utilization impact primary storage • QoS controlled to eliminate performance impact 0000 1111 $50/GB All solid-state w/ dedupe ack • Doesn’t Improve $/GB $10/GB Default thin provisioning for all volumes $1/GB • Improved capacity utilization Reduces Performance, or No Performance Impacts Costs Around $10/GB Lower $/GB 13
  • 14. NexGen n5 Storage System* *Patents Pending n5 Storage System Quality of Dynamic Data Data Service Placement Reduction Active-active storage processors Redundant disks, fans, and power supplies 48 GB RAM Volume QoS Real-time, not batch Inline pattern matching 1.28 TB PCI-e Fusion-io solid-state Service Levels Heuristics-based Volume level dedupe 32 TB 7.2K raw, 22 TB usable Reporting N-Tier architecture Thin provisioning 4 10 GbE or 16 1 GbE data ports, iSCSI Live Policy Optional performance pack (640 GB solid-state) QoS driven Variable block ingest Migration Optional capacity pack (32 TB disk) 14

Editor's Notes

  1. 30 minutes of content with no discussions. 45-60 with Q&A.
  2. TIME: 1-15 minutesOBJECTIVE: Make the audience identify with storage performance pain points. Have them talk about them if time allows.Before we get started, I just wanted to ask a few questions. [ADVANCE.1]Have any of you ever had to add more disk drives, even though you had more than enough capacity? [ADVANCE.2]How about deciding not to connect an application to your existing storage system to avoid pushing performance off a cliff? [ADVANCE.3]Or have you ever implemented a new storage system to isolate a workload from impacting existing applications?
  3. TIME: 2 minutesOBJECTIVE: Convince the audience the reason for their pain is due to a lack of the ability to manage performance.The reason you’ve had difficult experience managing storage is the fact that there’s no way to manage storage performance.The storage industry has spent the last few decades focused on managing capacity. By consolidating capacity resources into shared systems, customers saw overall cost per GB decrease along with simplified management. However, sharing capacity also meant sharing performance. It wasn’t an issue until x86 processing power exploded and virtualization allowed multiple applications to run on a single host which concentrated performance workloads and exposed a massive management gap. There is no way to manage shared storage performance.Every SAN or NAS product on the market today forces you to configure performance, rather than manage it. What I mean by this is that you go through a process where you estimate a workload then size your storage system by the number of drives. You’re left with a single pool of performance that every application shares with no way to assign resources or prioritize. So you’re forced to buy resources you don’t need. [ADVANCE.1]Disk drives require you to size a system by estimating the number of disk drives required to hit the performance requirement but you’re almost always left with excess capacity.Solid-state capacity lags performance, so you have to purchase by the amount of capacity required but now you’ll likely have excess performance.And finally tiered systems increase management complexity. By forcing end users to define tiering jobs that reconfigure the data layout based on historical data, management becomes more complex and performance is unpredictable because every time the data layout changes, you don’t know what performance level you’re going to get. [ADVANCE.2]Making things worse for all of these approaches is the fact that in any shared storage system every application workload impacts every other application workload so there’s now way to predict performance. And if you get it wrong, it’s painful and expensive. Because the only option left is to reconfigure the system with additional resources or buy a new system.
  4. TIME: 1-10 minutesOBJECTIVE: Position NexGen as new and different, designed to address the performance management gap in the industry. Provide company overview and R&D team experience if time allows.Let’s just pause for a second, I’d like everyone to close their eyes and forget everything you’ve learned about storage. [ADVANCE.1]At NexGen we’ve designed a system from the ground up to guarantee performance for your applications with Quality of Service and Service Levels. Software innovation is great, but we also recognize that storage systems are ultimately governed by cost. So we wanted to deliver these new features in the most cost effective, efficient footprint as possible. This led us to certain architectural and technology choices. [ADVANCE.2]First, we use PCIe solid-state which minimizes the performance footprint of our systems. PCIe solid-state does not consume disk drive slots and avoids bottlenecks caused by disk drive connections and controllers. That way we get maximum performance while avoiding consuming slots designed for low cost capacity disk drives.But solid-state is expensive, so we’ve implemented a hybrid architecture with Dynamic Data Placement that delivers the best price-performance characteristics with real-time tiering between PCIe solid-state and disk. And finally to reduce overall $/GB, we’ve redesigned data deduplication specifically for primary storage, we call this patent pending technology Phased Data Reduction.
  5. TIME: 2 minutesOBJECTIVE: Provide a quick overview of our product.Let’s take a look at the product. [ADVANCE.1]The NexGen n5 Storage System is built on what we call our ioControl Operating Environment. [ADVANCE.2]At ioControl’s foundation are dual storage processors in an active-active configuration. This ensures that all system resource can participate all the time, as opposed to systems in an Active-Passive configuration that reserve half of the system’s resources in case a failure occurs. [ADVANCE.3]Then we balance performance and capacity resources using a PCIe implementation of solid-state along with low cost, high capacity disk drives. This is where NexGen starts to look very different from typical storage systems. Most discussions around PCIe based solid-state are around accelerating a single, physical application server, which is a fantastic solution for the most performance hungry applications. What about the other 80% of your apps, we think they should have access to high performance as well. So we’ve created a system that allows all applications to share PCIe solid-state performance so you can avoid spending 10’s of 1,000’s of dollars upgrading every single server. This allows centralized management which simplifies scalability. But to really take advantage of solid-state performance, you need new storage management capabilities. [ADVANCE.4]That’s where we’ve invested the bulk of our engineering efforts. Innovative new features like Real-time Dynamic Data Placement, Phased Data Reduction, Performance Quality of Service, and Performance Service Levels give NexGen systems the new capabilities required to harness the performance potential of solid-state without breaking the bank. [ADVANCE.5]All management capabilities are included with every n5 system at no additional cost.
  6. TIME: 5 minutesOBJECTIVE: Shared storage system performance is unpredictable. NexGen’s QoS solves that problem.The problem with today’s shared storage is that performance is configured, it’s not managed. What I mean by that is through some process of understanding the workload your application environment generates, you pick some level of performance to size your system too. This is typically done by the number of drives in the system. The problem is that whatever performance level the system delivers, it is a single pool of resources that every application shares. [ADVANCE.1]Unlike capacity, where you are specifically allocating capacity to each volume independently. When every application uses performance resources from the same pool, all applications are treated equally. This creates resource contention where applications compete over the available resources. [ADVANCE.2]So when one application spikes, all other applications are impacted. In this example, your marketing file shares consume more performance which reduces the performance available for your order data base and business intelligence app. [ADVANCE.3]This approach is unpredictable, because you never know when a workload will spike. And it’s inefficient and expensive because you have to configure the system to handle the peak workload which means resources sit idle during non-peak times. [ADVANCE.4]We’ve architected our system to solve this problem. Our software was designed from the ground up to deliver performance quality of service so you can guarantee performance to each application and isolate workloads from one another on the SAN. We do this by assigning volumes to performance policies. The policy defines how much performance each volume gets. [ADVANCE.5]In this example, the business intelligence app gets 30,000 IOPS the order database gets 25,000 IOPS, and the marketing file shares get 5,000 IOPS. This means that no matter what is going on in the system, each application will get at a minimum the targeted level of performance. By setting these guaranteed minimum levels of performance, NexGen essentially eliminates resource contention within the shared storage system.Now, when one application spikes, you know your critical applications like your order database or your business intelligence app will never drop to unacceptable levels. And because the NexGen system works off of guaranteed minimums, if system resources are available, performance can be much higher than the QoS target you set.We also provide performance monitoring capabilities integrated with the user interface so you can monitor performance over time and adjust QoS targets to ensure you’re always in an optimized configuration.Now you have guaranteed storage performance level and the confidence that your system is optimized to be as cost effective and efficient as possible.
  7. TIME: 1 minuteOBJECTIVE: Non critical workload spikes on a NexGen n5 don’t impact mission critical workloads.Here’s Quality of Service in action. What you’re looking at is a screen shot from the NexGen n5 “Metrics” tab. We are simulating an Exchange workload that we’ve categorized as Mission Critical. Then at about 200 seconds we kick off an SQL query that consumes 70,000 IOPS. Then at 400 seconds we kick off a backup job. All three workloads are now hitting the SAN at the same time. The overall system performance has increased to 100,000 IOPS.But look at your Exchange workload. It’s rock solid – no change. Because we’ve categorized it as a mission critical workload the other workloads, the SQL reports and backup jobs don’t impact it. Performance remains consistent, users continue to have a low latency, good experience.
  8. TIME: 3 minutesOBJECTIVE: There’s no control over shared storage system performance is degraded. NexGen’s Service Levels solve that problem.Maintaining performance levels when everything is working perfectly is relatively easy compared to when things aren’t. That’s why NexGen developed performance Service Levels. Service levels tell the system how important it is to maintain the performance QoS settings. [ADVANCE.1]So in our example, if something happens, like a component failure, you need to upgrade firmware, or the system is rebuilding a disk drive, every application is impacted equally. [ADVANCE.2]The issue of course, is that it’s much more important to keep Exchange performance high than it is to keep the performance of marketing’s file shares up. But you have no way to prioritize or control what happens. This issue exists for every single shared storage system on the planet. Except with NexGen. [ADVANCE.3]NexGen has built 3 service levels into our Quality of Service engine. Mission Critical, Business Critical, and Non Critical. These service levels tell our system how important it is to maintain the Quality of Service targets that you’ve set for your volumes. In our example, we’ve categorized Exchange as Mission Critical, SQL as Business Critical, and File Shares as Non Critical. And you can also see that you’ve already set your different performance targets with QoS. Now, when something happens to impact the overall performance of the system… [ADVANCE.4]The NexGen n5 isolates the impact to the non-critical apps first. Then we’ll minimize the impacts to business critical apps. But we ensure that your mission critical applications are not impacted. So Exchange users don’t skip a beat and everything continues on as nothing has happened.Service Levels give you a way to prioritize and control the performance of your system when it’s in a degraded mode state.
  9. TIME: 1 minuteOBJECTIVE: Losing half of the overall system performance on a NexGen n5 doesn’t impact mission critical workloads.Here’s an example of Service Levels in action. What you’re looking at again is a screen shot from the NexGen n5 “Metrics” tab. We are simulating an Exchange workload that we’ve categorized as Mission Critical. Then at about 650 seconds we shut off one of our storage processors. That means the system loses half of its performance resources.With any other storage system on the planet, you’d lose 50% of performance on every single volume. But, with NexGen, because you’ve categorized your Exchange workload as Mission Critical, your SQL reports as Business Critical, and the Backup Job as Non Critical, the n5 knows exactly what to do before the failure occurs.What happens, is your Exchange chugs away at it’s predefined performance target. The SQL reports run about 38% slower and the backup job takes the biggest hit. This is exactly what you want to happen.There is no other storage system on the planet that offers this level of control over system performance.
  10. TIME: 3 minutesOBJECTIVE: SSD’s behind storage controllers are inefficient. NexGen’s PCIe implementation is more efficient.But storage is a lot more than just software capabilities. Budgets are tight, and $/IOP and $/GB matter. So NexGen strives to deliver the most value for the lowest cost. This is what drove us to implement solid-state via PCIe. But this is a very different approach to most other storage vendors. It would have been really easy to just unplug a disk drive and plug in a solid-state drive. From a vendor perspective, this significantly reduces time to market. But it is a very inefficient implementation. Disk drives connect via a SAS backplane which in turn connects to a storage controller that’s typically plugged into a single PCIe slot. This entire approach was designed around aggregating high latency disk drives. It works fine until you start approaching 1,000’s of disk drives or consider solid-state, which is 1,000’s of times faster than disk. This is what EMC, Compellent, Equallogic, LeftHand, 3PAR, Pure, Nimble, Tintri, SolidFire have all done. [ADVANCE.1]By unplugging high capacity disk drives and plugging in high performance solid-state, you will immediately saturate the SAS backplane. If that doesn’t become a bottleneck, RAID algorithms that actually have timing loops designed to wait for disk drives to respond will. And finally, the controller which manages all I/O to the entire back end is plugged into a single PCIe slot. Which can also become a bottleneck. So not only are you reducing the systems capacity by unplugging a disk drive and plugging in a solid-state drive, you limit the performance potential of solid-state. [ADVANCE.2]At NexGen, we implement PCIe solid-state to avoid these issues. The PCIe bus was designed for extreme low latency transfers of massive amounts of data between CPU and RAM, so we avoid all bottlenecks and maximize performance. [ADVANCE.3]The other thing to take note, is that we give each PCIe solid-state device it’s own entire PCIe slot. That’s because they deliver so much performance, they consume all of the bandwidth of that slot. Contrast that with a legacy approach where the ENTIRE backend is limited to a RAID controller that’s plugged into a single PCIe slot. Because of these issues, we’re seeing the industry shift toward PCIe. EMC VF Cache and NetApp Flash Cache both leverage PCIe but these types of implementations are incomplete because they are only used for read workloads. The n5 is an active-active system so it manages read AND write workloads from solid-state so you get solid-state performance for all workloads, not just reads. Another benefit of this approach is that PCIe solid-state has zero footprint from rack space or disk drive slot perspective. So we can maximize the capacity of the system. Then, of course, we use real-time tiering algorithms to move data between our high performance and high capacity tiers.
  11. TIME: 3 minutesOBJECTIVE: [optional] Describe how our data path works. Emphasize the differences from a “read” cache.An important thing to understand about NexGen is that we deliver SHARED PCIe solid-state in an HA or Active-Active storage architecture. That means we allow all workloads, read and write, random and sequential to take advantage of solid-state performance. Here’s how the system works from a data path perspective. In the first phase, solid-state is used as a tier. That means that data stored in solid-state does not require additional copies stored elsewhere for HA. [ADVANCE.1]When an application sends a write, that write is mirrored between two PCIe solid-state cards located in two different storage processors within the system. Once both copies are stored, our system acknowledges back to the host that the write is complete and the data is stored in an HA configuration. The issue with this configuration is that you have redundant data stored in solid-state, which is very expensive. So once the write is acknowledged, we quickly move the redundant date from solid-state to disk. [ADVANCE.2]Now we have the redundant copy of data stored on low cost disk while reads and writes to the original copy are managed all out of solid-state. [ADVANCE.3]So unlike any other storage system, we manage all writes and modifies out of solid-state and use the redundant copy of data ONLY to rebuild our solid-state tier after a failure. [ADVANCE.4]As opposed to using solid-state as a cache, or worse yet, a read-only cache that accelerates only part of the workload. One way to think about this is that we use disk as an “availability” cache versus using solid-state as a “performance” cache. This allows full utilization of solid-state for any type of workload, while avoiding the capacity utilization impacts of storing redundant data in solid-state that happens in any system that uses solid-state as only a tier or a performance cache. [ADVANCE.5]And finally, if data being stored in solid-state goes stale, or is not being accessed frequently, we evict it to make room for other more active data according to our QoS engine. The decision to evict data is made in real time based on our access patterns, dedupe ratios, current performance levels, QoS settings and other data to ensure applications receive the right amount of performance.
  12. TIME: 5 minutesOBJECTIVE: Explain why automated tiering falls short of expectations and how Dynamic Data Placement addresses the gaps.Dynamic Data Placement is patent pending intellectual property that delivers the best price-performance ratio of any midrange storage system. It sounds a little bit like tiering but it’s very different, let me explain.Legacy vendors like Compellent, 3PAR, and EMC use what we refer to as “Reactive Automation”. [ADVANCE.1]After data is written the systems start to track block access patterns. Over time, some blocks get accessed more frequently than others. Then at some point in the future, a batch process is kicked off which moves data around. [ADVANCE.2]But this approach depends on the assumption that your workload last week will be identical to the workload next week, which is never true. So you’re guaranteed to be out of configuration and have to move things around again, constantly chasing your tail. As blocks that were hot last week are now cold this week. [ADVANCE.3]Don’t get me wrong, this was a step in the right direction but issues remain. First, it’s after the fact. So any spike in workload that you encountered, the system didn’t react in real time so the user experience suffered. Second, this adds management complexity. Products like these force you to define when the movement occurs, how fast things move up or down, what block size you’d like the system to manage, and so on… These are tasks that weren’t required before.And finally, it makes performance even less predictable. Overall system performance is defined by what data is living where, the fact that the system is changing that means your performance characteristics will change. Not to mention that vendors charge an arm and a leg for the software required to do this. All of these issues often cause customers to just turn the capability off over time and go back to managing their system like they used to. [ADVANCE.4]Dynamic Data Placement is different. We’ve studied the tiering algorithms of yesterday and re-designed them to address the shortcomings. The key, fundamental difference is that we use our Quality of Service engine to tell Dynamic Data Placement what to do, in real-time. Because you’ve already provisioned out the performance resource to volumes, Dynamic data placement knows exactly how fast volumes are performing but more importantly, how fast the volume should go so you can avoid all of the management complexity associated with defining batch tiering jobs and constantly refining them over time. Here’s how it works. [ADVANCE.5]NexGen stores a certain % of data in solid-state and a certain % in disk so that the performance QoS target can be met. Application data with higher QoS targets get a higher % of blocks in solid-state than those with lower QoS settings. Then the QoS engine works in lock step with Dynamic Data Placement algorithms to make real-time decisions about where blocks should be stored. [ADVANCE.6]QoS is comparing how fast you want the volume to go with how fast it’s actually going. If it’s not getting enough performance, Dynamic Data Placement immediately migrates data from the slow tier to the fast tier so that the QoS targets are met. [ADVANCE.7]And if you can migrate data in real time, you can start to do things proactively. Say you have a VDI environment and you know approximately when boot storms and virus scans occur. [ADVANCE.8]You can pre-emptively move more data into solid-state for those time periods and address the peak workload, then move data back to original configuration for steady state operation. [ADVANCE.9]That way you avoid designing for peak workloads which results in unused resources for the rest of the time. Converse to all this is the ability NOT to move data. If a non-critical app like your marketing file shares spikes, but the QoS targets are being met, we won’t promote data into solid-state so you’re not wasting the most expensive type of capacity in the system storing non-critical data that’s going fast enough anyway. [ADVANCE.10]Just to recap:You can’t define performance with tiering.Dynamic data placement is proactive and allows you to anticipate issues.No need to manage rules or policies, performance QoS drive data placement and migration.Use the “cruise control” analogy here to emphasize the point – if time permits.
  13. TIME: 2 minutesOBJECTIVE: Differentiate NexGen data reduction from traditional approaches.Deduplication technology was designed for backup, not primary storage. So trying to implement off-the-shelf technologies into a primary storage system forces trade-offs and just doesn’t work very well. There are various approaches that legacy vendors have tried. [ADVANCE.1]NetApp uses a post-process technology. That means data is stored in the system, then at some point in the future a process kicks off that analyzes all data and runs the dedupe algorithm. [ADVANCE.1]The issue with this approach is that the algorithms impact system performance, that’s why you run them at night AND you have to have capacity available – in addition to the capacity used to store the data – so the dedupe algorithm has a “scratch pad”. [ADVANCE.2]EMC/Data Domain have designed their dedupe specifically for the backup process. Because hashing algorithms are being used, this approach increases latency which is not acceptable for primary storage. It works fine for backup but does not work for primary storage. [ADVANCE.3]Then you have new vendors selling all solid-state systems. They claim they get around the latency issues with inline dedupe because all data is stored on solid-state. [ADVANCE.4]The issues is that solid-state is way more expensive than disk. These system will start around $50/GB raw then reduce the cost to about $10/GB, [ADVANCE.5]but that’s about what you pay for an enterprise class disk storage system with 15K rpm disk drives. What you really want is to reduce the $/GB that you’re paying today. [ADVANCE.6]All solid-state systems are not capable of this. So with existing dedupe technologies, you’re always making trade-offs. Either you’re reducing system performance OR you’re not lowering the cost of capacity. [ADVANCE.7]That’s why we’ve redesigned data reduction specifically for a primary storage system. This is not off-the-shelf technology, we’ve invested heavily in proprietary patent pending algorithms to get the advantages of data reduction without impacting performance of a primary storage system. Data reduction is fully integrated into the data path so volumes are 100% deduped when they are created. [ADVANCE.8]We’ve implemented inline data reduction that looks for patterns in data. If, for example, the system sees all 1’s or all 0’s being sent, no data is stored. Then if that data is requested, it’s regenerated out of processor space. And by leveraging 48 cores of processing power, any impacts to latency is minimized. [ADVANCE.9]And all volumes are thin provisioned so that capacity is never allocated until it’s written to maximize capacity utilization and achieve sub $1/GB capacity costs. Also everything is controlled via our QoS engine, you know that application storage performance never suffers.
  14. TIME: 1-5 minutesOBJECTIVE: Describe what customers can buy.Every NexGen n5 comes with the ioControl Operating environment which has all of the software capabilities we’ve just discussed. Quality of Service, Service Levels, Dynamic Data Placement, and Data Reduction.The system itself is an active-active enterprise class iSCSI storage array with redundant components across the board to avoid any single point of failures. There is 48GB of RAM, 1.28 TB of PCIe SSD from Fusion-io, 32 TB raw disk drive capacity, 22 TB usable. You have a choice of either 4 10GbE ports or 16 1GbE ports that connect to your applications via iSCSI. And optional performance and capacity packs for scalability depending on how your workload evolves. MSRP: $88,000 USShow customer case study if time permits.