The Consequences of Infinite Storage Bandwidth
Allen Samuels, Engineering Fellow, Systems and Software Solutions
May 5, 2016
Disclaimer
During the presentation today, we may make forward-looking statements.
Any statement that refers to expectations, projections, or other characterizations of future events or circumstances is a forward-
looking statement, including those relating to industry predictions and trends, future products and their projected availability,
and evolution of product capacities. Actual results may differ materially from those expressed in these forward-looking
statements due to a number of risks and uncertainties, including among others: industry predictions may not occur as expected,
products may not become available as expected, and products may not evolve as expected; and the factors detailed under the
caption “Risk Factors” and elsewhere in the documents we file from time to time with the SEC, including, but not limited to, our
annual report on Form 10-K for the year ended January 3, 2016. This presentation contains information from third parties,
which reflect their projections as of the date of issuance. We undertake no obligation to update these forward-looking
statements, which speak only as of the date hereof or the date of issuance by a third party.
What Do I Mean By Infinite Bandwidth?
Network, Storage and DRAM Trends (log scale)
• Use DRAM bandwidth as a proxy for CPU throughput
• A reasonable approximation for DMA-heavy and/or poor-cache-hit workloads (e.g. storage)
• Note the big difference in slope
[Chart: network, storage and DRAM bandwidth trends over time, log scale]
Data is for informational purposes only and may contain errors
Network, Storage and DRAM Trends (linear scale)
• Same data as the last slide, but for the log-impaired
• Storage bandwidth is not literally infinite
• But the ratio of network and storage bandwidth to CPU throughput is widening very quickly
[Chart: same trends, linear scale]
Data is for informational purposes only and may contain errors
SSDs / CPU Socket
[Chart: SSDs per CPU socket by year, 1990–2025]
Data is for informational purposes only and may contain errors
SSDs / CPU Socket @ 20% Max BW
[Chart: SSDs per CPU socket at 20% of max bandwidth, by year, 1995–2025]
Data is for informational purposes only and may contain errors
What happens as we get closer to the limit?
Let's Get Small!
 New denser server form factors
– Blades
– Sleds
 Good short-term solutions
Effects Of The CPU/DRAM Bottleneck
 Storage cost = media + access + management
 Shared-nothing architecture conflates access and management
 Storage costs will become dominated by management cost
 Storage costs become CPU/DRAM costs
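The cost claim can be made concrete with a toy model. All dollar figures below are invented for illustration: as flash $/TB falls while the CPU/DRAM needed to manage each terabyte stays roughly flat, the management term comes to dominate.

```python
# Toy model of storage cost = media + access + management.
# All dollar figures are invented for illustration only.

def cost_per_tb(media, access, management):
    """Return (total $/TB, management's share of the total)."""
    total = media + access + management
    return total, management / total

# Early on, media dominates...
total1, share1 = cost_per_tb(media=300.0, access=40.0, management=60.0)
# ...then media $/TB falls ~4x while management (CPU/DRAM) stays flat.
total2, share2 = cost_per_tb(media=75.0, access=40.0, management=60.0)

print(f"{share1:.0%} -> {share2:.0%}")  # 15% -> 34%
```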
Embracing The CPU/DRAM Bottleneck
 Move management to upper layers, where CPU can be right-sized by the client
 What kind of media access do I want?
– Simple enough functionality to be done directly in drive hardware – no CPU
– Allow direct access throughout the compute cluster over a network
– Just enough machinery to enable coarse-grained sharing
 In short, you really want a SAN!
– Or, more technically, fabric-connected storage
Not Your Father’s SAN
 Three problems with the current SAN:
– Fibre Channel transport
– SCSI access protocol
– Drive-oriented storage allocation
 All of these want to be updated:
– Fibre Channel is brittle and costly
– SCSI initiators have long code paths catering to seldom-used configurations
– Drive-oriented allocation should give way to robust sub-drive storage allocation
SAN 2.0
 NVMe over Fabrics
 The 1.0 spec is out for review, hopefully done in May
 Simple enough for direct hardware execution of data-path ops
 Minimal initiator code-path lengths improve performance
 Namespaces allow sub-drive allocations
 Not mature enough for enterprise deployment – yet
SAN 2.0
 What storage network?
– Current candidates are FC, InfiniBand and Ethernet
 Ethernet has best economics – if you can make it work
 RoCE is easy on the edge, but hard on the interior
– Only controlled environments have shown multi-switch scalability
– General scalability in a multi-vendor environment likely to be difficult
– Wonderful for intra-rack storage networking
 iWARP is hard on the edge, but easy on the interior
– Scarcity of implementations inhibits deployment
 Storage over IP will see limited cross rack deployment until this is resolved
First Generation Of SAN 2.0
 Implementations using off-the-shelf components are in progress
 Server-side implementations look pretty conventional too
 4–5 MIOPS have been shown
 Seems like 10 MIOPS isn't unreasonable to expect
[Diagram: NIC, CPU, DRAM and SSD connected over PCIe]
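Those MIOPS targets imply a tight per-IO CPU budget, which is why minimal initiator code paths matter. A quick sketch, where the core count and clock speed are illustrative assumptions:

```python
# CPU cycle budget per IO at SAN 2.0 rates. Core count and clock
# are illustrative assumptions.

def cycles_per_io(target_miops, cores, ghz):
    """Cycles available per IO if every core does nothing else."""
    return (cores * ghz * 1e9) / (target_miops * 1e6)

cores, ghz = 24, 2.5  # assumed dual-socket server

print(cycles_per_io(5, cores, ghz))   # 12000.0 cycles/IO at 5 MIOPS
print(cycles_per_io(10, cores, ghz))  # 6000.0 cycles/IO at 10 MIOPS
```

A few thousand cycles per IO leaves little room for long SCSI-style initiator paths, let alone any application work on the same cores.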
Second Generation SAN 2.0
 Soon, NICs will forward NVMe operations to local PCIe devices
 CPU removed from the software part of the data path
 CPU is still needed for the hardware part of the data path
 IOPS improve; BW is unchanged
 Significant CPU freed for application processing
 Getting closer to the wall!
Third Generation SAN 2.0, Imagined
 New generation of combined SSD controller and NIC
– A rethink of interfaces eliminates DRAM buffering
 Network goes right into the drive
 No CPU to be found
 Works well with rack-scale architecture
Let's Get Really Small
 Disaggregated / rack-scale architecture
– Fabric connected
– Independently scale compute, networking and storage
Call To Action
 Fabric-connected storage isn't well managed by existing FOSS
 Lots of upper-layer management software is available
– OpenStack, Ceph, Gluster, Cassandra, MongoDB, SheepDog, etc.
 Lower-layer cluster management is still primitive
What’s It All Mean?
 New form factors are in everybody's future
 The coming avalanche of storage bandwidth wants to be free
– Not imprisoned by a CPU
 Rack Scale Architecture allows new Storage/Compute configs
 Storage will be increasingly “Software Defined” as the HW evolves
Product Pitch!
Software-defined All-Flash Storage: the disaggregated model for scale

Old Model
 Monolithic, large upfront investments, and fork-lift upgrades
 Proprietary storage OS
 Costly: $$$$$

New SD-AFS Model
 Disaggregate storage, compute, and software for better scaling and costs
 Best-in-class solution components
 Open-source software – no vendor lock-in
 Cost-efficient: $
InfiniFlash™ Storage Platform

Scalable Raw Performance
• 2M IOPS, 1–3 ms latency
• 12–15 GB/s throughput

Capacity: 512TB raw, all flash
• 3U JBOD of Flash (JBOF)
• Up to 64 x 8TB SAS drive cards; 4TB cards also available soon

8TB Flash-Card Innovations
• Enterprise-grade, power-fail safe
• Alerts & monitoring
• Latching integrated & monitored
• Directly samples air temperature
• Form factor enables lowest-cost SSD

Operational Efficiency & Resilience
• Hot-swappable architecture, easy FRU
• Hot swappable: fans, SAS expander boards, power supplies, flash cards
• Low power – typical workload 400–500W; 150W (idle) to 750W (max)
• MTBF 1.5+ million hours

Host Connectivity
• Connect up to 8 servers through 8 SAS ports
• Multi-path enabled
InfiniFlash IF500 All-Flash Storage System
Block and Object Storage Powered by Ceph
 Ultra-dense, high-capacity flash storage
– 512TB in 3U; scale-out software for PB-scale capacity
 Highly scalable performance
– Industry-leading IOPS/TB
 Cinder, Glance and Swift storage
– Add/remove servers & capacity on demand
 Enterprise-class storage features
– Automatic rebalancing
– Hot software upgrades
– Snapshots, replication, thin provisioning
– Fully hot-swappable, redundant
 Ceph optimized for SanDisk flash
– Tuned & hardened for InfiniFlash
InfiniFlash SW + HW Advantage

Software tuned for hardware
• Ceph modifications for flash
• Both Ceph and the host OS tuned for InfiniFlash
• SW defects that impact flash identified & mitigated

Hardware configured for software
• Right balance of CPU, RAM and storage
• Rack-level designs for optimal performance & cost

Software designed for all systems does not work well with any system
 Ceph has over 50 tuning parameters that together yield a 5x–6x performance improvement
 Fixed-CPU, fixed-RAM hyperconverged nodes do not work well for all workloads
InfiniFlash for OpenStack with Disaggregation
 Compute & storage disaggregation enables optimal resource utilization
– Allows for the extra CPU required by OSDs under small-block workloads
– Allows for the higher bandwidth provisioning required by large-object workloads
 Independent scaling of compute and storage
– Higher storage capacity needs don't force you to add more compute, and vice versa
– Leads to optimal ROI for PB-scale OpenStack deployments
[Diagram: a compute farm (QEMU/KVM with librbd for Nova/Cinder/Glance LUNs, RGW behind web servers for the Swift object store, krbd behind iSCSI targets for iSCSI LUNs) connected to a storage farm of OSD nodes (HSEB A/B pairs) attached to InfiniFlash over SAS]
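The independent-scaling argument can be sketched numerically. All node specs and workload demands below are hypothetical; the point is that a fixed CPU-to-capacity ratio strands one resource or the other:

```python
# Hypothetical sizing: fixed hyperconverged nodes vs. disaggregated scaling.
# All node specs and workload demands below are invented for illustration.
import math

def hyperconverged_nodes(need_cores, need_tb, cores_per_node=24, tb_per_node=64):
    """Nodes required when CPU and capacity can only be bought together:
    the larger of the two demands decides, and the other is stranded."""
    return max(math.ceil(need_cores / cores_per_node),
               math.ceil(need_tb / tb_per_node))

# Small-block workload: CPU-hungry, modest capacity
print(hyperconverged_nodes(need_cores=480, need_tb=512))   # 20 (CPU-bound)
# Large-object workload: capacity-hungry, modest CPU
print(hyperconverged_nodes(need_cores=96, need_tb=2048))   # 32 (capacity-bound)
```

With disaggregation, the first workload buys 20 nodes' worth of compute but only 8 enclosures' worth of flash, and the second the reverse; that gap is the ROI argument on the slide.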
IF500 – Enhancing Ceph for Enterprise Consumption
IF500 provides usability and performance utilities without sacrificing open-source principles:
• The SanDisk Ceph distro ensures packaging of stable, production-ready code with consistent quality
• All Ceph performance improvements developed by SanDisk are contributed back to the community

With either the SanDisk distribution or the community distribution:
 Out-of-the-box configurations tuned for performance with flash
 Sizing & planning tool
 InfiniFlash drive management integrated into Ceph management (coming soon)
 A Ceph installer built specifically for InfiniFlash
 High-performance iSCSI storage
 Better diagnostics with a log-collection tool
 Enterprise-hardened SW + HW QA
InfiniFlash Performance Advantage
900K random-read IOPS with 384TB of storage

Flash performance unleashed:
• Out-of-the-box configurations tuned for performance with flash
• Read & write data-path changes for flash
• 3x–12x block performance improvement, depending on workload
• Almost linear performance scaling with the addition of InfiniFlash nodes
• Write performance is a work in progress with NV-RAM journals

Measurement notes:
• Measured with 3 InfiniFlash nodes of 128TB each
• Average latency with 4K blocks is ~2ms; 99.9th-percentile latency is under 10ms
• For smaller block sizes, performance is CPU-bound at the storage node
• Maximum bandwidth of 12.2 GB/s measured at 64KB blocks
InfiniFlash Ceph Performance Advantage
 Single InfiniFlash unit performance
– 1 x 512TB InfiniFlash unit connected to 8 nodes
– 4K random-read IOPS: ~1 million – 85% of bare-metal performance
• The corresponding bare-metal IF100 figure is 1.1 million IOPS
– All 8 hosts' CPUs are saturated for 4K random reads
• More performance potential with more CPU cycles
– With 64KB IOs we are able to utilize the full IF150 bandwidth of over 12 GB/s
– librbd and krbd performance are comparable
– Write performance is on a 3x-copy configuration; the more common 2x copy will result in a 33% improvement

Random write (librbd):
IO Profile | IOPS
4K random write | 54k
64K random write | 34k
256K random write | 11.3k

Random read block performance (librbd):
IO Profile | IOPS
4K random read | 1,123,175
64K random read | 349,247
256K random read | 87,369
[Chart: random-read IOPS and bandwidth (GB/s) vs. block size]
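The 3x-vs-2x note is replication write amplification. A minimal sketch, with a hypothetical backend budget chosen so 3x replication lands on the 54k 4K-write figure above (illustration only, not measured data):

```python
# Replication write amplification: each client write becomes `replicas`
# backend writes, so backend-limited client write IOPS scale as 1/replicas.

def client_write_iops(backend_budget, replicas):
    """Client-visible write IOPS for a fixed backend write-IOPS budget."""
    return backend_budget / replicas

# Hypothetical backend budget (3 * 54k), purely for illustration.
budget = 162_000

print(client_write_iops(budget, 3))  # 54000.0 at 3x replication
print(client_write_iops(budget, 2))  # 81000.0 at 2x replication
```

Going from 3 copies to 2 cuts backend writes per client write by a third; depending on what is held fixed, that reads as roughly 33% less backend work or roughly 50% more client write IOPS.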
InfiniFlash Ceph Performance Advantage
 Linear scaling with 2 InfiniFlash units
– 2 x 512TB InfiniFlash units connected to 16 nodes
– 1.8M 4K IOPS – 80% of the bare-metal performance
– Performance scales almost linearly – almost double the performance of a single IF150 with Ceph
– Write performance is 2x with the 16-node cluster compared with the 8-node cluster

Random read (librbd):
IO Profile | IOPS | BW (MB/s)
4K RR | 1800k | 7194
64K RR | 225k | 14412
256K RR | 53k | 13366
InfiniFlash OS – Hardened Enterprise-Class Ceph
 Hardened and tested for hyperscale deployments and workloads
 Platform-focused testing enables us to deliver a complete and hardened storage solution
 Single-vendor support for both hardware & software

Enterprise-level hardening
 9,000 hours of cumulative IO tests
 1,100+ unique test cases
 1,000 hours of cluster-rebalancing tests
 1,000 hours of IO on iSCSI

Testing at scale
 Over 100 server-node clusters
 Over 4PB of flash storage

Failure testing
 2,000-cycle node reboot
 1,000 abrupt node power cycles
 1,000 storage failures
 1,000 network failures
 IO for 250 hours at a stretch
IF500 Reference Configurations

Model | Entry | Mid | High
InfiniFlash | 128TB | 256TB | 512TB
Servers¹ | 2 x Dell R630 2U | 4 x Dell R630 2U | 4 x Dell R630 2U²
Processor per server | Dual-socket Intel Xeon E5-2690 v3 | Dual-socket Intel Xeon E5-2690 v3 | Dual-socket Intel Xeon E5-2690 v3
Memory per server | 128GB RAM | 128GB RAM | 128GB RAM
HBA per server | (1) LSI 9300-8e PCIe 12Gbps | (1) LSI 9300-8e PCIe 12Gbps | (1) LSI 9300-8e PCIe 12Gbps
Network per server | (1) Mellanox ConnectX-3 dual-port 40GbE | (1) Mellanox ConnectX-3 dual-port 40GbE | (1) Mellanox ConnectX-3 dual-port 40GbE
Boot drives per server | (2) SATA 120GB SSD | (2) SATA 120GB SSD | (2) SATA 120GB SSD

1 – For larger-block or less CPU-intensive workloads, the OSD node could use a single-socket server. Dell servers can be substituted with other vendors' servers that match the specs.
2 – For small-block workloads, 8 servers are recommended.
InfiniFlash TCO Advantage
[Chart: 3-year TCO comparison* (TCA + 3-year opex) and total racks for four designs: traditional object store on HDD, IF500 object store with 3 full replicas on flash, IF500 with erasure coding on all flash, and IF500 with flash primary & HDD copies]
 Reduce the replica count, exploiting the higher reliability of flash
– 2 copies on InfiniFlash vs. 3 copies on HDD
 The InfiniFlash disaggregated architecture reduces compute usage, thereby reducing HW & SW costs
– Flash allows the use of an erasure-coded storage pool without performance limitations
– Protection equivalent to 2x storage with only 1.2x storage
 Power, real-estate and maintenance cost savings over a 5-year TCO
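The 1.2x figure is characteristic of an erasure code with ten data chunks and two parity chunks. The slide does not state the actual profile, so the k=10, m=2 choice below is an assumption picked to match:

```python
# Raw-storage overhead: replication vs. erasure coding. The EC profile
# (k=10 data + m=2 parity) is an assumption chosen to match the slide's
# 1.2x figure; the slide does not state the actual profile.

def replication_overhead(copies):
    """Raw TB stored per usable TB with full replication."""
    return float(copies)

def ec_overhead(k, m):
    """Raw TB per usable TB with a k+m erasure code (tolerates m losses)."""
    return (k + m) / k

print(replication_overhead(2))  # 2.0x raw, tolerates 1 loss
print(ec_overhead(10, 2))       # 1.2x raw, tolerates 2 losses
```

So erasure coding both lowers the raw-capacity bill and tolerates more simultaneous failures than 2x replication, at the cost of extra CPU and network work on writes and rebuilds.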
* TCO analysis based on a US customer’s OPEX & Cost data for a 100PB deployment
©2016 SanDisk Corporation. All rights reserved. SanDisk is a trademark of SanDisk Corporation, registered in the
United States and other countries. Other brands mentioned herein are for identification purposes only and may be
the trademarks of their holder(s).
 
#IBMEdge: Flash Storage Session
#IBMEdge: Flash Storage Session#IBMEdge: Flash Storage Session
#IBMEdge: Flash Storage Session
 
Next Generation Software-Defined Storage
Next Generation Software-Defined StorageNext Generation Software-Defined Storage
Next Generation Software-Defined Storage
 
Deploying All-Flash Cloud Infrastructure without Breaking the Bank
Deploying All-Flash Cloud Infrastructure without Breaking the BankDeploying All-Flash Cloud Infrastructure without Breaking the Bank
Deploying All-Flash Cloud Infrastructure without Breaking the Bank
 
Fulcrum Group Storage And Storage Virtualization Presentation
Fulcrum Group Storage And Storage Virtualization PresentationFulcrum Group Storage And Storage Virtualization Presentation
Fulcrum Group Storage And Storage Virtualization Presentation
 
Storage School 2
Storage School 2Storage School 2
Storage School 2
 
Has Your Data Gone Rogue?
Has Your Data Gone Rogue?Has Your Data Gone Rogue?
Has Your Data Gone Rogue?
 

More from OpenStack

Swinburne University of Technology - Shunde Zhang & Kieran Spear, Aptira
Swinburne University of Technology - Shunde Zhang & Kieran Spear, AptiraSwinburne University of Technology - Shunde Zhang & Kieran Spear, Aptira
Swinburne University of Technology - Shunde Zhang & Kieran Spear, AptiraOpenStack
 
Related OSS Projects - Peter Rowe, Flexera Software
Related OSS Projects - Peter Rowe, Flexera SoftwareRelated OSS Projects - Peter Rowe, Flexera Software
Related OSS Projects - Peter Rowe, Flexera SoftwareOpenStack
 
Supercomputing by API: Connecting Modern Web Apps to HPC
Supercomputing by API: Connecting Modern Web Apps to HPCSupercomputing by API: Connecting Modern Web Apps to HPC
Supercomputing by API: Connecting Modern Web Apps to HPCOpenStack
 
Federation and Interoperability in the Nectar Research Cloud
Federation and Interoperability in the Nectar Research CloudFederation and Interoperability in the Nectar Research Cloud
Federation and Interoperability in the Nectar Research CloudOpenStack
 
Simplifying the Move to OpenStack
Simplifying the Move to OpenStackSimplifying the Move to OpenStack
Simplifying the Move to OpenStackOpenStack
 
Hyperconverged Cloud, Not just a toy anymore - Andrew Hatfield, Red Hat
Hyperconverged Cloud, Not just a toy anymore - Andrew Hatfield, Red HatHyperconverged Cloud, Not just a toy anymore - Andrew Hatfield, Red Hat
Hyperconverged Cloud, Not just a toy anymore - Andrew Hatfield, Red HatOpenStack
 
Migrating your infrastructure to OpenStack - Avi Miller, Oracle
Migrating your infrastructure to OpenStack - Avi Miller, OracleMigrating your infrastructure to OpenStack - Avi Miller, Oracle
Migrating your infrastructure to OpenStack - Avi Miller, OracleOpenStack
 
A glimpse into an industry Cloud using Open Source Technologies - Adrian Koh,...
A glimpse into an industry Cloud using Open Source Technologies - Adrian Koh,...A glimpse into an industry Cloud using Open Source Technologies - Adrian Koh,...
A glimpse into an industry Cloud using Open Source Technologies - Adrian Koh,...OpenStack
 
Enabling OpenStack for Enterprise - Tarso Dos Santos, Veritas
Enabling OpenStack for Enterprise - Tarso Dos Santos, VeritasEnabling OpenStack for Enterprise - Tarso Dos Santos, Veritas
Enabling OpenStack for Enterprise - Tarso Dos Santos, VeritasOpenStack
 
Understanding blue store, Ceph's new storage backend - Tim Serong, SUSE
Understanding blue store, Ceph's new storage backend - Tim Serong, SUSEUnderstanding blue store, Ceph's new storage backend - Tim Serong, SUSE
Understanding blue store, Ceph's new storage backend - Tim Serong, SUSEOpenStack
 
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus NetworksOpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus NetworksOpenStack
 
Diving in the desert: A quick overview into OpenStack Sahara capabilities - A...
Diving in the desert: A quick overview into OpenStack Sahara capabilities - A...Diving in the desert: A quick overview into OpenStack Sahara capabilities - A...
Diving in the desert: A quick overview into OpenStack Sahara capabilities - A...OpenStack
 
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...OpenStack
 
OpenStack and Red Hat: How we learned to adapt with our customers in a maturi...
OpenStack and Red Hat: How we learned to adapt with our customers in a maturi...OpenStack and Red Hat: How we learned to adapt with our customers in a maturi...
OpenStack and Red Hat: How we learned to adapt with our customers in a maturi...OpenStack
 
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...OpenStack
 
The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...
The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...
The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...OpenStack
 
Ironically, Infrastructure Doesn't Matter - Quinton Anderson, Commonwealth Ba...
Ironically, Infrastructure Doesn't Matter - Quinton Anderson, Commonwealth Ba...Ironically, Infrastructure Doesn't Matter - Quinton Anderson, Commonwealth Ba...
Ironically, Infrastructure Doesn't Matter - Quinton Anderson, Commonwealth Ba...OpenStack
 
Traditional Enterprise to OpenStack Cloud - An Unexpected Journey
Traditional Enterprise to OpenStack Cloud - An Unexpected JourneyTraditional Enterprise to OpenStack Cloud - An Unexpected Journey
Traditional Enterprise to OpenStack Cloud - An Unexpected JourneyOpenStack
 
Building a GPU-enabled OpenStack Cloud for HPC - Lance Wilson, Monash University
Building a GPU-enabled OpenStack Cloud for HPC - Lance Wilson, Monash UniversityBuilding a GPU-enabled OpenStack Cloud for HPC - Lance Wilson, Monash University
Building a GPU-enabled OpenStack Cloud for HPC - Lance Wilson, Monash UniversityOpenStack
 
Monitoring Uptime on the NeCTAR Research Cloud - Andy Botting, University of ...
Monitoring Uptime on the NeCTAR Research Cloud - Andy Botting, University of ...Monitoring Uptime on the NeCTAR Research Cloud - Andy Botting, University of ...
Monitoring Uptime on the NeCTAR Research Cloud - Andy Botting, University of ...OpenStack
 

More from OpenStack (20)

Swinburne University of Technology - Shunde Zhang & Kieran Spear, Aptira
Swinburne University of Technology - Shunde Zhang & Kieran Spear, AptiraSwinburne University of Technology - Shunde Zhang & Kieran Spear, Aptira
Swinburne University of Technology - Shunde Zhang & Kieran Spear, Aptira
 
Related OSS Projects - Peter Rowe, Flexera Software
Related OSS Projects - Peter Rowe, Flexera SoftwareRelated OSS Projects - Peter Rowe, Flexera Software
Related OSS Projects - Peter Rowe, Flexera Software
 
Supercomputing by API: Connecting Modern Web Apps to HPC
Supercomputing by API: Connecting Modern Web Apps to HPCSupercomputing by API: Connecting Modern Web Apps to HPC
Supercomputing by API: Connecting Modern Web Apps to HPC
 
Federation and Interoperability in the Nectar Research Cloud
Federation and Interoperability in the Nectar Research CloudFederation and Interoperability in the Nectar Research Cloud
Federation and Interoperability in the Nectar Research Cloud
 
Simplifying the Move to OpenStack
Simplifying the Move to OpenStackSimplifying the Move to OpenStack
Simplifying the Move to OpenStack
 
Hyperconverged Cloud, Not just a toy anymore - Andrew Hatfield, Red Hat
Hyperconverged Cloud, Not just a toy anymore - Andrew Hatfield, Red HatHyperconverged Cloud, Not just a toy anymore - Andrew Hatfield, Red Hat
Hyperconverged Cloud, Not just a toy anymore - Andrew Hatfield, Red Hat
 
Migrating your infrastructure to OpenStack - Avi Miller, Oracle
Migrating your infrastructure to OpenStack - Avi Miller, OracleMigrating your infrastructure to OpenStack - Avi Miller, Oracle
Migrating your infrastructure to OpenStack - Avi Miller, Oracle
 
A glimpse into an industry Cloud using Open Source Technologies - Adrian Koh,...
A glimpse into an industry Cloud using Open Source Technologies - Adrian Koh,...A glimpse into an industry Cloud using Open Source Technologies - Adrian Koh,...
A glimpse into an industry Cloud using Open Source Technologies - Adrian Koh,...
 
Enabling OpenStack for Enterprise - Tarso Dos Santos, Veritas
Enabling OpenStack for Enterprise - Tarso Dos Santos, VeritasEnabling OpenStack for Enterprise - Tarso Dos Santos, Veritas
Enabling OpenStack for Enterprise - Tarso Dos Santos, Veritas
 
Understanding blue store, Ceph's new storage backend - Tim Serong, SUSE
Understanding blue store, Ceph's new storage backend - Tim Serong, SUSEUnderstanding blue store, Ceph's new storage backend - Tim Serong, SUSE
Understanding blue store, Ceph's new storage backend - Tim Serong, SUSE
 
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus NetworksOpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
OpenStack Networks the Web-Scale Way - Scott Laffer, Cumulus Networks
 
Diving in the desert: A quick overview into OpenStack Sahara capabilities - A...
Diving in the desert: A quick overview into OpenStack Sahara capabilities - A...Diving in the desert: A quick overview into OpenStack Sahara capabilities - A...
Diving in the desert: A quick overview into OpenStack Sahara capabilities - A...
 
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
 
OpenStack and Red Hat: How we learned to adapt with our customers in a maturi...
OpenStack and Red Hat: How we learned to adapt with our customers in a maturi...OpenStack and Red Hat: How we learned to adapt with our customers in a maturi...
OpenStack and Red Hat: How we learned to adapt with our customers in a maturi...
 
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
Meshing OpenStack and Bare Metal Networks with EVPN - David Iles, Mellanox Te...
 
The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...
The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...
The Why and How of HPC-Cloud Hybrids with OpenStack - Lev Lafayette, Universi...
 
Ironically, Infrastructure Doesn't Matter - Quinton Anderson, Commonwealth Ba...
Ironically, Infrastructure Doesn't Matter - Quinton Anderson, Commonwealth Ba...Ironically, Infrastructure Doesn't Matter - Quinton Anderson, Commonwealth Ba...
Ironically, Infrastructure Doesn't Matter - Quinton Anderson, Commonwealth Ba...
 
Traditional Enterprise to OpenStack Cloud - An Unexpected Journey
Traditional Enterprise to OpenStack Cloud - An Unexpected JourneyTraditional Enterprise to OpenStack Cloud - An Unexpected Journey
Traditional Enterprise to OpenStack Cloud - An Unexpected Journey
 
Building a GPU-enabled OpenStack Cloud for HPC - Lance Wilson, Monash University
Building a GPU-enabled OpenStack Cloud for HPC - Lance Wilson, Monash UniversityBuilding a GPU-enabled OpenStack Cloud for HPC - Lance Wilson, Monash University
Building a GPU-enabled OpenStack Cloud for HPC - Lance Wilson, Monash University
 
Monitoring Uptime on the NeCTAR Research Cloud - Andy Botting, University of ...
Monitoring Uptime on the NeCTAR Research Cloud - Andy Botting, University of ...Monitoring Uptime on the NeCTAR Research Cloud - Andy Botting, University of ...
Monitoring Uptime on the NeCTAR Research Cloud - Andy Botting, University of ...
 

Recently uploaded

Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsHyundai Motor Group
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsPrecisely
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentationphoebematthew05
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Neo4j
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsAndrey Dotsenko
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 

Recently uploaded (20)

Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter RoadsSnow Chain-Integrated Tire for a Safe Drive on Winter Roads
Snow Chain-Integrated Tire for a Safe Drive on Winter Roads
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power Systems
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentation
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 

The Consequences of Infinite Storage Bandwidth: Allen Samuels, SanDisk

  • 1. The Consequences of Infinite Storage Bandwidth
       Allen Samuels, Engineering Fellow, Systems and Software Solutions
       May 5, 2016
  • 2. Disclaimer
       During the presentation today, we may make forward-looking statements. Any statement that refers to expectations, projections, or other characterizations of future events or circumstances is a forward-looking statement, including those relating to industry predictions and trends, future products and their projected availability, and evolution of product capacities. Actual results may differ materially from those expressed in these forward-looking statements due to a number of risks and uncertainties, including among others: industry predictions may not occur as expected, products may not become available as expected, and products may not evolve as expected; and the factors detailed under the caption “Risk Factors” and elsewhere in the documents we file from time to time with the SEC, including, but not limited to, our annual report on Form 10-K for the year ended January 3, 2016. This presentation contains information from third parties, which reflects their projections as of the date of issuance. We undertake no obligation to update these forward-looking statements, which speak only as of the date hereof or the date of issuance by a third party.
  • 3. What do I Mean By Infinite Bandwidth?
  • 4. Network, Storage and DRAM Trends (log scale)
        Use DRAM bandwidth as a proxy for CPU throughput
        Reasonable approximation for DMA-heavy and/or poor cache-hit workloads (e.g. storage)
        Big difference in slope!
       Data is for informational purposes only and may contain errors
  • 5. Network, Storage and DRAM Trends (linear scale)
        Same data as the last slide, but for the log-impaired
        Storage bandwidth is not literally infinite
        But the ratio of network and storage to CPU throughput is widening very quickly
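The widening ratio the slide describes can be sketched numerically. The baselines and growth rates below are illustrative assumptions for the sketch, not figures from the talk:

```python
# Toy model of the storage-vs-CPU bandwidth gap described above.
# Baselines and growth rates are hypothetical, not data from the slides.

def bandwidth(base_gbps, annual_growth, years):
    """Compound a starting bandwidth forward by a yearly growth factor."""
    return base_gbps * (1 + annual_growth) ** years

# Assumed per-server baselines (GB/s) and yearly growth rates.
dram_base, dram_growth = 60.0, 0.20        # DRAM BW as a proxy for CPU throughput
storage_base, storage_growth = 10.0, 0.50  # aggregate flash bandwidth

for year in (0, 4, 8):
    dram = bandwidth(dram_base, dram_growth, year)
    storage = bandwidth(storage_base, storage_growth, year)
    print(f"year +{year}: storage/CPU ratio = {storage / dram:.2f}")
```

Because storage compounds faster, the ratio climbs toward (and past) parity over a few years, which is the same point the linear-scale chart makes.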
  • 6. [Chart: SSDs / CPU Socket by year, 1990–2025]
       Data is for informational purposes only and may contain errors
  • 7. [Chart: SSDs / CPU Socket @ 20% Max BW by year, 1995–2025]
       Data is for informational purposes only and may contain errors
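The arithmetic behind both charts is simple: how many SSDs does it take to saturate the memory bandwidth behind one CPU socket, and how many if each drive is only run at 20% of its maximum? The figures below are assumed for illustration, not the charts' underlying data:

```python
# How many SSDs saturate one CPU socket's memory bandwidth?
# All numbers here are illustrative assumptions, not the charts' data.

def ssds_per_socket(socket_bw_gbps, ssd_bw_gbps, utilization=1.0):
    """SSD count whose combined delivered bandwidth matches the socket."""
    return socket_bw_gbps / (ssd_bw_gbps * utilization)

socket_bw = 60.0   # assumed per-socket DRAM bandwidth, GB/s
ssd_bw = 3.0       # assumed per-SSD maximum bandwidth, GB/s

print(ssds_per_socket(socket_bw, ssd_bw))        # drives run flat out -> 20.0
print(ssds_per_socket(socket_bw, ssd_bw, 0.2))   # drives at 20% of max BW
```

Running drives at 20% of maximum multiplies the count by five, which is why the second chart's curve sits far above the first.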
  • 8. What happens as we get closer to the limit?
  • 9. Let’s Get Small!
        New, denser server form factors
         – Blades
         – Sleds
        Good short-term solutions
  • 10. Effects Of The CPU/DRAM Bottleneck
        Storage cost = media + access + management
        Shared-nothing architecture conflates access and management
        Storage costs will become dominated by management cost
        Storage costs become CPU/DRAM costs
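The cost claim above can be made concrete with a toy model: hold the CPU/DRAM management cost flat while media cost falls, and management's share of the total rises. The dollar figures are invented for illustration:

```python
# Toy model of "storage cost = media + access + management".
# Dollar figures are invented; only the trend matters.

def storage_cost(media, access, management):
    """Return (total cost, management's share of the total)."""
    total = media + access + management
    return total, management / total

# Assumed per-TB costs: today, and after media cost falls 4x while the
# CPU/DRAM cost of managing the storage stays flat.
today = storage_cost(media=200.0, access=50.0, management=100.0)
later = storage_cost(media=50.0, access=50.0, management=100.0)

print(f"management share: today {today[1]:.0%}, later {later[1]:.0%}")
```

Under these assumptions management's share grows from under a third to half of the total, which is the sense in which storage costs become CPU/DRAM costs.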
  • 11. Embracing The CPU/DRAM Bottleneck
        Move management to upper layers, where CPU can be right-sized by the client
        What kind of media access do I want?
         – Simple enough functionality to be done directly in drive hardware – no CPU
         – Allow direct access throughout the compute cluster over a network
         – Just enough machinery to enable coarse-grained sharing
        In short, you really want a SAN! Or, more technically, fabric-connected storage
  • 12. Not Your Father’s SAN
        Three problems with the current SAN
         – The Fibre Channel transport
         – The SCSI access protocol
         – Drive-oriented storage allocation
        All of these want to be updated
         – Fibre Channel is brittle and costly
         – SCSI initiators have long code paths catering to seldom-used configurations
         – Allocation should support robust sub-drive granularity
  • 13. SAN 2.0
        NVMe over Fabrics
        1.0 spec is out for review, hopefully done in May
        Simple enough for direct hardware execution of data-path ops
        Minimal initiator code-path lengths improve performance
        Namespaces allow sub-drive allocations
        Not mature enough for enterprise deployment – yet
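The "namespaces allow sub-drive allocations" point can be sketched as a tiny allocator: a drive's capacity is carved into independently addressable namespaces. This is a model of the concept only, not the NVMe or NVMe-oF API:

```python
# Minimal model of sub-drive allocation via namespaces.
# Illustrates the concept only; this is not the NVMe API.

class Drive:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.free_gb = capacity_gb
        self.namespaces = {}   # nsid -> size in GB
        self._next_nsid = 1

    def create_namespace(self, size_gb):
        """Carve a namespace out of the drive's remaining free capacity."""
        if size_gb > self.free_gb:
            raise ValueError("insufficient free capacity")
        nsid = self._next_nsid
        self._next_nsid += 1
        self.namespaces[nsid] = size_gb
        self.free_gb -= size_gb
        return nsid

drive = Drive(capacity_gb=8000)       # one 8 TB drive
ns_a = drive.create_namespace(2000)   # tenant A gets 2 TB
ns_b = drive.create_namespace(1000)   # tenant B gets 1 TB
print(drive.free_gb)                  # -> 5000
```

Each namespace can then be exported to a different host over the fabric, which is what makes sub-drive sharing possible without a CPU-heavy storage stack in the middle.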
  • 14. SAN 2.0
        What storage network? Current candidates are FC, InfiniBand and Ethernet
        Ethernet has the best economics – if you can make it work
        RoCE is easy on the edge, but hard on the interior
         – Only controlled environments have shown multi-switch scalability
         – General scalability in a multi-vendor environment is likely to be difficult
         – Wonderful for intra-rack storage networking
        iWARP is hard on the edge, but easy on the interior
         – Scarcity of implementations inhibits deployment
        Storage over IP will see limited cross-rack deployment until this is resolved
  • 15. First Generation Of SAN 2.0
        Implementations using off-the-shelf (OTS) parts are in progress
        Server-side implementations look pretty conventional, too
        4–5 MIOPS have been shown
        10 MIOPS doesn’t seem unreasonable to expect
       [Diagram: NIC – CPU – DRAM – SSD, connected over PCIe]
  • 16. Second Generation SAN 2.0
        Soon, NICs will forward NVMe operations to local PCIe devices
        CPU removed from the software part of the data path
        CPU is still needed for the hardware part of the data path
        IOPS improve; BW is unchanged
        Significant CPU freed for application processing
        Getting closer to the wall!
  • 17. Third Generation SAN 2.0, Imagined
        New generation of combined SSD controller and NIC
         – Rethink of interfaces eliminates DRAM buffering
        Network goes right into the drive
        No CPU to be found
        Works well with rack-scale architecture
  • 18. Let’s Get Really Small
        Disaggregated / rack-scale architecture
         – Fabric connected
         – Independently scale compute, networking and storage
  • 19. Call To Action
        Fabric-connected storage isn’t well managed by existing FOSS
        Lots of upper-layer management software is available: OpenStack, Ceph, Gluster, Cassandra, MongoDB, Sheepdog, etc.
        Lower-layer cluster management is still primitive
  • 20. What’s It All Mean?
        New form factors are in everybody’s future
        The coming avalanche of storage bandwidth wants to be free – not imprisoned by a CPU
        Rack-scale architecture allows new storage/compute configurations
        Storage will be increasingly “software-defined” as the HW evolves
  • 21. Product Pitch!
  • 22. Software-Defined All-Flash Storage: the disaggregated model for scale
       Old model:
        Monolithic, large upfront investments, and fork-lift upgrades
        Proprietary storage OS
        Costly: $$$$$
       New SD-AFS model:
        Disaggregate storage, compute, and software for better scaling and costs
        Best-in-class solution components
        Open-source software – no vendor lock-in
        Cost-efficient: $
  • 23. InfiniFlash™ Storage Platform
       Capacity: 512 TB raw, all flash – a 3U JBOD of flash (JBOF) with up to 64 x 8 TB SAS drive cards (4 TB cards also available soon)
       Scalable raw performance: 2M IOPS, 1–3 ms latency, 12–15 GB/s throughput
       8 TB flash-card innovations:
        Enterprise-grade, power-fail safe
        Alerts & monitoring
        Latching integrated & monitored
        Directly samples air temperature
        Form factor enables lowest-cost SSD
       Operational efficiency & resilience:
        Hot-swappable architecture, easy FRU; hot-swappable fans, SAS expander boards, power supplies, and flash cards
        Low power: 400–500 W under a typical workload; 150 W (idle) to 750 W (max)
        MTBF 1.5+ million hours
       Host connectivity: connect up to 8 servers through 8 SAS ports; multipath enabled
       EMS Product Management – SanDisk Confidential
  • 24. InfiniFlash IF500 All-Flash Storage System: Block and Object Storage Powered by Ceph
        Ultra-dense, high-capacity flash storage
         – 512 TB in 3U; scale-out software for PB-scale capacity
        Highly scalable performance
         – Industry-leading IOPS/TB
        Cinder, Glance and Swift storage
         – Add/remove servers & capacity on demand
        Enterprise-class storage features
         – Automatic rebalancing
         – Hot software upgrade
         – Snapshots, replication, thin provisioning
         – Fully hot-swappable, redundant
        Ceph optimized for SanDisk flash
         – Tuned & hardened for InfiniFlash
  • 25. InfiniFlash SW + HW Advantage
Software tuned for hardware
• Ceph modifications for flash
• Both Ceph and the host OS tuned for InfiniFlash
• SW defects that impact flash identified & mitigated
Hardware configured for software
• Right balance of CPU, RAM, and storage
• Rack-level designs for optimal performance & cost
Software designed for all systems does not work well with any system
 Ceph has over 50 tuning parameters that together yield a 5x-6x performance improvement
 Fixed-CPU, fixed-RAM hyperconverged nodes do not work well for all workloads
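To make the tuning claim concrete, the fragment below sketches the kind of `ceph.conf` knobs commonly adjusted for all-flash OSD nodes in Ceph releases of that era. The specific values are placeholders, and this is not SanDisk's actual published tuning; exact option names vary by Ceph release.

```ini
# Illustrative ceph.conf fragment: the sort of settings tuned for flash.
# Values are placeholders, not SanDisk's shipped configuration.
[osd]
osd_op_num_shards = 32                  ; more op parallelism for low-latency media
osd_op_num_threads_per_shard = 2
filestore_op_threads = 16               ; flash sustains far deeper queues than HDD
filestore_max_sync_interval = 10
journal_max_write_entries = 1000
journal_max_write_bytes = 1048576000
ms_dispatch_throttle_bytes = 1048576000
objecter_inflight_ops = 102400
```

The general pattern for flash is to raise sharding, thread counts, and in-flight limits that default to HDD-friendly values.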
  • 26. InfiniFlash for OpenStack with Disaggregation
 Compute & storage disaggregation enables optimal resource utilization
 Allows for the higher CPU usage required by OSDs with small-block workloads
 Allows for the higher bandwidth provisioning required by large-object workloads
 Independent scaling of compute and storage
 Higher storage capacity needs don’t force you to add more compute, and vice versa
 Leads to optimal ROI for PB-scale OpenStack deployments
[Diagram: a compute farm – Nova with Cinder & Glance via librbd/QEMU-KVM, a Swift object store via RGW, and iSCSI storage via krbd/iSCSI target – served over SAS by OSD storage nodes (HSEB A/B pairs) in a separate storage farm]
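The independent-scaling argument above can be sketched numerically. All workload figures below are hypothetical round numbers chosen for illustration, not SanDisk measurements; the point is only that fixed-ratio nodes must satisfy the worse of two constraints, while disaggregated units satisfy each constraint separately.

```python
# Illustrative sketch: hyperconverged vs. disaggregated provisioning.
# Workload and per-node figures are hypothetical, not SanDisk data.
import math

def hyperconverged_nodes(capacity_tb, iops,
                         tb_per_node=64, iops_per_node=100_000):
    """Fixed-ratio nodes must cover the worse of the two constraints."""
    return max(math.ceil(capacity_tb / tb_per_node),
               math.ceil(iops / iops_per_node))

def disaggregated(capacity_tb, iops,
                  tb_per_jbof=512, iops_per_server=100_000):
    """Flash chassis and OSD servers scale against their own targets."""
    return (math.ceil(capacity_tb / tb_per_jbof),   # storage chassis
            math.ceil(iops / iops_per_server))      # compute servers

# A capacity-heavy workload: disaggregation avoids buying idle CPUs.
print(hyperconverged_nodes(2048, 200_000))  # 32 nodes
print(disaggregated(2048, 200_000))         # (4 chassis, 2 servers)
```

For the capacity-heavy example, the fixed-ratio design buys 32 full nodes to reach 2PB, while the disaggregated design buys 4 chassis and only the 2 servers the IOPS target actually needs.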
  • 27. IF500 – Enhancing Ceph for Enterprise Consumption
IF500 provides usability and performance utilities without sacrificing open-source principles
• The SanDisk Ceph distro ensures packaging of stable, production-ready code with consistent quality
• All Ceph performance improvements developed by SanDisk are contributed back to the community
SanDisk Distribution or Community Distribution
 Out-of-the-box configurations tuned for performance with flash
 Sizing & planning tool
 InfiniFlash drive management integrated into Ceph management (coming soon)
 Ceph installer built specifically for InfiniFlash
 High-performance iSCSI storage
 Better diagnostics with a log collection tool
 Enterprise-hardened SW + HW QA
  • 28. InfiniFlash Performance Advantage – 900K random read IOPS with 384TB of storage
Flash performance unleashed
• Out-of-the-box configurations tuned for performance with flash
• Read & write data-path changes for flash
• 3x-12x block performance improvement, depending on workload
• Almost linear performance scaling with the addition of InfiniFlash nodes
• Write performance improvements with NV-RAM journals are work in progress
Test setup and results
• Measured with 3 InfiniFlash nodes of 128TB each
• Average latency with 4K blocks is ~2ms; 99.9th-percentile latency is under 10ms
• For smaller block sizes, performance is CPU-bound at the storage node
• Maximum bandwidth of 12.2GB/s measured at 64KB blocks
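The IOPS and bandwidth figures on this slide are related by block size, which is why small blocks are CPU-bound while large blocks are bandwidth-bound. A minimal conversion sketch (function names are illustrative):

```python
# Converting between IOPS and bandwidth at a given block size.

def iops_to_bandwidth_gbps(iops, block_bytes):
    """Bandwidth in GB/s implied by an IOPS figure at one block size."""
    return iops * block_bytes / 1e9

def bandwidth_to_iops(gbps, block_bytes):
    """IOPS implied by a bandwidth figure at one block size."""
    return gbps * 1e9 / block_bytes

# 900K random reads at 4KiB moves under 4 GB/s, so the bottleneck
# at small blocks is CPU, not the flash or the links:
print(iops_to_bandwidth_gbps(900_000, 4096))   # ~3.7 GB/s
# The quoted 12.2 GB/s peak at 64KiB corresponds to roughly:
print(bandwidth_to_iops(12.2, 65536))          # ~186K IOPS
```

The asymmetry is the whole story: at 4KiB the chassis bandwidth is barely touched, so adding OSD CPU is what raises small-block IOPS.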
  • 29. InfiniFlash Ceph Performance Advantage
 Single InfiniFlash unit performance
– 1 x 512TB InfiniFlash unit connected to 8 nodes
– 4K random read IOPS: ~1 million – 85% of bare-metal performance (the corresponding bare-metal IF100 figure is 1.1 million)
– All 8 host CPUs are saturated for 4K random reads; more performance potential with more CPU cycles
– With 64K IO size, the full IF150 bandwidth of over 12GB/s is utilized
– librbd and krbd performance are comparable
– Write performance is on a 3x-copy configuration; the more common 2x-copy configuration will result in a 33% improvement

Random read performance (librbd): 4k – 1,123,175 IOPS; 64k – 349,247 IOPS; 256k – 87,369 IOPS
Random write performance (librbd): 4k – 54k IOPS; 64k – 34k IOPS; 256k – 11.3k IOPS
  • 30. InfiniFlash Ceph Performance Advantage
 Linear scaling with 2 InfiniFlash units
– 2 x 512TB InfiniFlash units connected to 16 nodes
– 1.8M 4K IOPS – 80% of the bare-metal performance
– Performance scales almost linearly – almost double the performance of a single IF150 with Ceph
– Write performance with the 16-node cluster is 2x that of the 8-node cluster

Random read performance (librbd): 4k – 1800k IOPS (7194 MB/s); 64k – 225k IOPS (14412 MB/s); 256k – 53k IOPS (13366 MB/s)
  • 31. InfiniFlash OS – Hardened Enterprise-Class Ceph
 Hardened and tested for hyperscale deployments and workloads
 Platform-focused testing enables us to deliver a complete and hardened storage solution
 Single-vendor support for both hardware & software
Enterprise-Level Hardening
 9,000 hours of cumulative IO tests
 1,100+ unique test cases
 1,000 hours of cluster rebalancing tests
 1,000 hours of IO on iSCSI
Testing at Scale
 Over 100 server-node clusters
 Over 4PB of flash storage
Failure Testing
 2,000-cycle node reboot
 1,000 abrupt node power cycles
 1,000 storage failures
 1,000 network failures
 IO for 250 hours at a stretch
  • 32. IF500 Reference Configurations
InfiniFlash: 128TB (Entry); 256TB (Mid); 512TB (High)
Servers¹: 2 x Dell R630 2U (Entry); 4 x Dell R630 2U (Mid); 4 x Dell R630 2U² (High)
All tiers – Processor per server: dual-socket Intel Xeon E5-2690 v3
All tiers – Memory per server: 128GB RAM
All tiers – HBA per server: (1) LSI 9300-8e PCIe 12Gbps
All tiers – Network per server: (1) Mellanox ConnectX-3 dual-port 40GbE
All tiers – Boot drives per server: (2) SATA 120GB SSD
1 – For larger-block or less CPU-intensive workloads, the OSD node could use a single-socket server. Dell servers can be substituted with other vendors’ servers that match the specs.
2 – For small-block workloads, 8 servers are recommended.
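Raw capacity in the table is not usable capacity: it must be divided by the data-protection overhead. The sketch below applies the two schemes mentioned elsewhere in the deck (2 copies on flash, or the ~1.2x erasure-coding overhead) to the three reference tiers; the helper name is illustrative.

```python
# Usable-capacity sketch for the three reference configurations,
# under the protection schemes cited in this deck (2x copies, ~1.2x EC).

CONFIGS_RAW_TB = {"Entry": 128, "Mid": 256, "High": 512}

def usable_tb(raw_tb, overhead):
    """Usable capacity given a protection overhead factor
    (2.0 = two full copies; 1.2 = erasure coding at ~1.2x)."""
    return raw_tb / overhead

for name, raw in CONFIGS_RAW_TB.items():
    print(f"{name}: {usable_tb(raw, 2.0):.0f}TB at 2x, "
          f"{usable_tb(raw, 1.2):.0f}TB with EC")
```

For example, the High tier's 512TB raw yields 256TB usable with two copies, but about 427TB usable with 1.2x erasure coding.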
  • 33. InfiniFlash TCO Advantage
[Charts: 3-year TCO (TCA + 3-year opex) and total rack count, comparing a traditional object store on HDD against the IF500 with 3 full replicas on flash, the IF500 with erasure coding on all flash, and the IF500 with a flash primary & HDD copies]
 Reduce the replica count with the higher reliability of flash – 2 copies on InfiniFlash vs. 3 copies on HDD
 InfiniFlash’s disaggregated architecture reduces compute usage, thereby reducing HW & SW costs
– Flash allows the use of an erasure-coded storage pool without performance limitations
– Protection equivalent to 2x storage with only 1.2x storage
 Power, real estate, and maintenance cost savings over a 5-year TCO
* TCO analysis based on a US customer’s OPEX & cost data for a 100PB deployment
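The protection-overhead argument behind the chart can be reduced to one multiplication: for a fixed usable capacity, each scheme dictates how much raw storage must be purchased. The sketch below uses the 100PB deployment size from the slide; the dollar side of the TCO is omitted because the underlying customer cost data is not public.

```python
# Raw storage required per protection scheme for a fixed usable capacity.
# Overhead factors come from the slide; cost data is intentionally omitted.

USABLE_TB = 100_000  # the 100PB deployment from the slide, in TB

def raw_needed(usable_tb, overhead):
    """Raw TB to purchase for a given protection overhead factor."""
    return usable_tb * overhead

for scheme, overhead in [("HDD, 3 full copies", 3.0),
                         ("Flash, 2 copies", 2.0),
                         ("Flash, erasure coded", 1.2)]:
    print(f"{scheme}: {raw_needed(USABLE_TB, overhead):,.0f} TB raw")
```

At 100PB usable, moving from 3 HDD copies to 1.2x erasure-coded flash shrinks the raw purchase from 300PB to 120PB, which is where most of the rack-count and power savings in the chart come from.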
  • 34. May 5, 2016 34 ©2016 SanDisk Corporation. All rights reserved. SanDisk is a trademark of SanDisk Corporation, registered in the United States and other countries. Other brands mentioned herein are for identification purposes only and may be the trademarks of their holder(s).