Just Because You Could, Doesn't Mean You Should:
Lessons Learned in Storage Best Practices (v2.0)
Patrick Carmichael, VMware
STO4791
#STO4791
Just Because You Could, Doesn’t Mean You SHOULD.
 Storage Performance and Technology
 Interconnect vs IOP
 Command Sizing.
 PDU Sizing
 SSD vs Magnetic Disk
 Tiering
 SSD vs Spinning Media, Capacity vs Performance
 Thin Provisioning
 Architecting for Failure
 Planning for failure from the initial design
 Complete Site Failure
 Backup RTO
 DR RTO
 The low-cost DR site
Storage Performance – Interconnect vs IOP
 Significant advances in interconnect performance
 FC 2/4/8/16Gb
 iSCSI 1G/10G/40G
 NFS 1G/10G/40G
 Differences in performance between technologies
• None – NFS, iSCSI and FC are effectively interchangeable
• The IO footprint is the same from the array's perspective.
 Despite advances, performance limit is still hit at the media itself
 90% of storage performance cases seen by GSS that are not config-related are media-related
 Payload (throughput) is fundamentally different from IOP (cmd/s)
 IOP performance is always lower than throughput
Factors That Affect Performance
 Performance versus Capacity
 Disk performance does not scale with drive size
 Larger drives generally equate to lower performance
 IOPS (I/Os per second) is crucial
 How many IOPS does this number of disks provide?
 How many disks are required to achieve a
required number of IOPS?
 More spindles generally equals greater performance
Factors That Affect Performance - I/O Workload and RAID
 Understanding workload is a crucial consideration when designing
for optimal performance
 Workload is characterized by IOPS and write % vs read %
 Design choice is usually a question of (a code sketch follows below):
 How many IOPS can I achieve with a given number of disks?
• Total Raw IOPS = Disk IOPS * Number of disks
• Functional IOPS = (Raw IOPS * Write%)/(Raid Penalty) + (Raw IOPS * Read %)
 How many disks are required to achieve a required IOPS value?
• Disks Required = ((Read IOPS) + (Write IOPS*Raid Penalty))/ Disk IOPS
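The two formulas above translate directly into code. Here is a minimal Python sketch (the function names are illustrative, and the RAID write penalties of 4/6/2 for RAID 5/6/10 are the usual rules of thumb implied by the tables on the next two slides, not vendor specifications):

    # Rule-of-thumb sizing sketch of the formulas above (not a vendor sizing tool).
    RAID_PENALTY = {5: 4, 6: 6, 10: 2}   # write penalty per RAID level

    def functional_iops(disk_iops, disk_count, write_pct, raid_level):
        # Total Raw IOPS = Disk IOPS * Number of disks
        # Functional IOPS = (Raw * Write%) / RAID penalty + (Raw * Read%)
        raw = disk_iops * disk_count
        return raw * write_pct / RAID_PENALTY[raid_level] + raw * (1.0 - write_pct)

    def disks_required(target_iops, disk_iops, write_pct, raid_level):
        # Disks Required = (Read IOPS + Write IOPS * RAID penalty) / Disk IOPS
        read_iops = target_iops * (1.0 - write_pct)
        write_iops = target_iops * write_pct
        return (read_iops + write_iops * RAID_PENALTY[raid_level]) / disk_iops

    # The worked example on the next slide: 8 x 15K SAS (~150 IOPS each), RAID 5, 80% write
    print(functional_iops(150, 8, write_pct=0.8, raid_level=5))   # -> 480.0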
IOPS Calculations – Fixed Number of disks
 Calculating IOPS for a given number of disks
• 8 x 146GB 15K RPM SAS drives
• ~150 IOPS per disk
• RAID 5
• 150 * 8 = 1200 Raw IOPS
• Workload is 80% Write, 20% Read
• (1200*0.8)/4 + (1200*0.2) = 480 Functional IOPS
RAID Level   IOPS (80% Read / 20% Write)   IOPS (20% Read / 80% Write)
5            1020                          480
6            1000                          400
10           1080                          720
IOPS Calculations – Minimum IOPS Requirement
 Calculating number of disks for a required IOPS value
 1200 IOPS required
 15K RPM SAS drives. ~150 IOPS per disk
 Workload is 80% Write, 20% Read
 RAID 5
 Disks Required = (240 + (960*4))/150 IOPS
 27 disks required (27.2, rounded to the nearest whole disk; a sketch that regenerates the table follows below)
RAID Level   Disks (80% Read / 20% Write)   Disks (20% Read / 80% Write)
5            13                             27
6            16                             40
10           10                             14
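The tables on this slide and the previous one can be regenerated with the same arithmetic. A minimal, self-contained sketch (note that the deck rounds disk counts to the nearest whole disk, which is why 27.2 appears as 27; a conservative design would round up):

    # Regenerates the two tables above for RAID 5/6/10 (rule-of-thumb arithmetic only).
    DISK_IOPS = 150                      # ~15K RPM SAS drive
    PENALTY = {5: 4, 6: 6, 10: 2}        # RAID write penalty

    def functional_iops(disks, write_pct, raid):
        raw = DISK_IOPS * disks
        return raw * write_pct / PENALTY[raid] + raw * (1 - write_pct)

    def disks_required(target_iops, write_pct, raid):
        read_iops, write_iops = target_iops * (1 - write_pct), target_iops * write_pct
        return (read_iops + write_iops * PENALTY[raid]) / DISK_IOPS

    for raid in (5, 6, 10):
        print(raid,
              round(functional_iops(8, 0.2, raid)),     # 8 disks, 80% read  -> 1020 / 1000 / 1080
              round(functional_iops(8, 0.8, raid)),     # 8 disks, 80% write ->  480 /  400 /  720
              round(disks_required(1200, 0.2, raid)),   # 1200 IOPS, 80% read  -> 13 / 16 / 10
              round(disks_required(1200, 0.8, raid)))   # 1200 IOPS, 80% write -> 27 / 40 / 14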
Storage Performance – Interconnect vs IOP – contd.
 While still true for 90% of workloads, this is starting to shift
 SCSI commands passed at native sizes
• Windows 2k8/2k12 (8MB copy)
• Linux (native applications, up to 8MB+)
• NoSQL Databases (Especially Key/Value or Document stores)
• Cassandra (128k)
• Mongo (64-256k)
 ESX will gladly pass whatever size command the guest requests.
 New applications are starting to run into protocol limitations (see the sketch below)
• iSCSI PDU (64k/256k)
• NFS Buffer/Request Size
• Write Amplification!
 Offloading these processes offers significant performance
improvements
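As a back-of-the-envelope illustration of those limits (a simplified Python model, not actual initiator behaviour; the 8MB and 64k/256k figures come from the bullets above):

    # Rough sketch: how many commands a large guest I/O turns into when it must be
    # split to fit a protocol/transfer-size cap (simplified; real initiators differ).
    import math

    def commands_needed(io_size_bytes, max_xfer_bytes):
        return math.ceil(io_size_bytes / max_xfer_bytes)

    KB, MB = 1024, 1024 * 1024

    for limit_kb in (64, 256):
        n = commands_needed(8 * MB, limit_kb * KB)
        print(f"8MB guest I/O with a {limit_kb}k limit -> {n} commands instead of 1")
    # 64k limit  -> 128 commands
    # 256k limit ->  32 commands

Every extra command carries its own protocol and processing overhead, which is part of why offloading these processes helps.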
Storage Performance – Interconnect vs IOP – contd.
[Chart: Efficiency/Size - Read. Relative read efficiency (0% to 200%) plotted against command size, from 4k through 8M.]
Tiering
 Very old concept
• Primarily a manual process in the past
• Based on RAID groups / Disk types
• Second-generation arrays offer batch-mode / scheduled auto-movement of blocks under IO pressure
Why Tiering Works, and Matters
Tiering – contd.
 New arrays from several partners now offer near real-time
block movement
• Dot Hill AssuredSAN
• X-IO ISE2
 Or, batch mode
• Compellent
• VNX
 Auto-tiering will get you better performance, faster
• ~2 hours for a 150G working set to be fully moved to SSD.
 However…
Tiering – contd.
 Remember – Just because you could, doesn’t mean you should!
 There are several places where an auto-tiering array may not be
able to help
• Working Set
• Total Data > SSD Space
• Second tier of cache to worry about
• Fully utilized
• May not have spare IOPS with which to move data between layers.
• Rapidly changing datasets
• VDI
• In-memory databases.
• Often hard to predict when they’re fully utilized
• “Never ending” performance bucket.
Tiering – contd.
 These platforms have amazing and very capable technology
 Have the vendor show you, with your workload, what their platform
is capable of
 Workloads differ massively between users and vendors – validate!
SSD vs Spinning Disk
 SSDs offer extremely high performance per single “spindle”
• Exceptionally great for “L2 Cache”, especially on auto-tiering arrays.
• Capable of 1,000,000+ IOPS in pure SSD arrays
• Exceptionally dense consolidation ratios (Excellent for VDI/HPC).
 However…
 Ultra-high performance arrays have other limitations
• Surrounding infrastructure
• Switching (packets/frames a second)
• CPU (packaging frames in software initiators)
• Command sizing
• Some SSDs do not work well outside of certain command sizes
• Architectural limitations
• Overhead loss for scratch space
• Garbage Collect
VMFS/VMDK Updates for the Next Release
 VMFS5 (ESX5) introduced support for LUNs larger than 2TB
• Still limited to 2TB VMDK, regardless of LUN sizing
• 64TB RDM
 Major update in this release
• Max size: 64TB for both VMDK and VMFS volumes.
• Lets you avoid spanning disks inside of a guest, or using RDMs for large
workloads
VMDK Contd.
 VMFS5, in combination with ATS, will
allow consolidation of ever-larger
number of VMs onto single VMFS
datastores
 Alternatively, a couple of very large VMs can occupy a single LUN
• While potentially easier for management, this means that significantly more data is now stored in one location.
• When defining a problem space, you've now greatly expanded the amount of data stored in one place.
 Just because you could, does that
mean you should?
Thin Provisioning
 Thin provisioning offers unique opportunities to manage your storage “after” provisioning
• Workloads that require a certain amount of space, but don’t actually use it
• Workloads that may grow over time, and can be managed/moved if they do
• Providing space for disparate groups that have competing requirements
 Many arrays ~only~ offer Thin Provisioning, by design
 The question is, where should you thin provision, and what are the
ramifications?
Thin Provisioning – VM Level
 VM disk is created at 0 bytes and then grows in increments of the VMFS block size as space is used
 Minor performance penalty for provisioning / zeroing
 Disk cannot currently be shrunk – once grown, it stays grown
• There are some workarounds for this, but they are not guaranteed
 What happens when you finally run out of space?
• All VMs stun until space is created
• Production impacting, but potentially a quick fix (shut down VMs, adjust
memory reservation, etc)
• Extremely rare to see data loss of any kind
Thin Provisioning – LUN Level
 Allows your array to seem bigger than it actually is
 Allows you to share resources between groups (the whole goal
of virtualization)
 Some groups may not use all or much of what they’re allocated,
allowing you to utilize the space they’re not using
 Standard-sized or process-defined LUNs may waste significant amounts of space, and space being wasted is $$ being wasted
 Significant CapEx gains can be seen with thin LUNs
Thin Provisioning – LUN Level - contd
 What happens when you finally run out of space?
• New VAAI T10 primitives, for compatible arrays, will let ESX know that the
underlying storage has no free blocks (related to NULL_UNMAP)
• If VAAI works, and your array is compatible, and you’re on a supported version of
ESX, this will result in the same as a thin VMDK running out of space – All VMs will
stun (that are waiting on blocks). VMs not waiting on blocks will continue as normal.
• Cleanup will require finding additional space on the array, as VSWP files / etc will
already be allocated blocks at the lun level. Depending on your utilization, this may
not be possible, unless UNMAP also works (very limited support at this time).
• If NULL_UNMAP is not available for your environment, or does not work
correctly, then what?
• On a good day, the VMs will simply crash with a write error, or the application inside
will fail (depends on array and how it handles a full filesystem).
• Databases and the like are worst affected, will most likely require rebuild/repair.
• And on a bad day?
Thin Provisioning – LUN Level – contd.
Thin Provisioning
 There are many reasons to use Thin Provisioning, at both the VM
and the LUN level
 Thin provisioning INCREASES the management workload of
maintaining your environment. You cannot just ignore it
Freeing space on thin provisioned LUNs
 ESX 5.0 introduced NULL_UNMAP, a T10 command to mark blocks
as “freed” for an underlying storage device.
• Disabled in P01 due to major performance issues
• Re-enabled in U1, manual only, to prevent sudden spikes in load, or corruption
from array firmware issues.
 vmkfstools -y command; you set the percentage of free space to try to reclaim (for example, vmkfstools -y 60 attempts to reclaim 60% of the datastore’s free space).
 *VALIDATE IF VMFSTOOLS / Auto unmap made it into 5.5*
Just Because You Could, Doesn’t Mean You Should
 Everything we’ve covered so far is based on new technologies
 What about the existing environments?
The ultimate extension of “Just because you could, doesn’t mean
you should,” is what I call “Architecting for Failure”
Architecting for Failure
 The ultimate expression of “Just because you could, doesn’t mean
you should.”
• Many core infrastructure designs are built with tried and true hardware and
software, and people assume that things will always work
• We all know this isn’t true – Murphy’s law
 Architect for the failure.
• Consider all of your physical infrastructure.
• If any component failed, how would you recover?
• If everything failed, how would you recover?
• Consider your backup/DR plan as well
Black Box Testing
 Software engineering concept
• Consider your design, all of the inputs, and all of the expected outputs
• Feed it good entries, bad entries, and extreme entries, find the result, and
make sure it is sane
• If not, make it sane
 This can be applied before, or after, you build your environment
Individual Component Failure
 Black box: consider each step your IO takes, from VM to physical
media
 Physical hardware components are generally easy to compensate
for
• VMware HA and FT both make it possible for a complete system to fail with
little/no downtime to the guests in question
• Multiple hardware components add redundancy and eliminate single points of
failure
• Multiple NICs
• Multiple storage paths
• Traditional hardware (multiple power supplies, etc)
• Even with all of this, many environments are not taking advantage of these
features
 Sometimes, the path that IO takes passes through a single point of
failure that you don’t realize is one
What About a Bigger Problem?
 You’re considering all the different ways to make sure individual
components don’t ruin your day
 What if your problem is bigger?
Temporary Total Site Loss
 Consider your entire infrastructure during a temporary complete
failure
• What would happen if you had to bootstrap it cold?
• This actually happens more often than would be expected
• Hope for the best, prepare for the worst
• Consider what each component relies on – do you have any circular dependencies?
• Also known as the “Chicken and the Egg” problem, these can increase your
RTO significantly
• Example: Storage mounted via DNS, all DNS servers on the same storage
devices. Restoring VC from backup when all networking is via DVS
• What steps are required to bring your environment back to life?
• How long will it take? Is that acceptable?
Permanent Site Loss
 Permanent site loss is not always an “Act of God” type event
• Far more common is a complete loss of a major, critical component
• Site infrastructure (power, networking, etc) may be intact, but your data is not
• Array failures (controller failure, major filesystem corruption, RAID failure)
• Array disaster (thermal event, fire, malice)
• Human error – yes, it happens!
• Multiple recovery options – which do you have?
• Backups
• Tested and verified?
• What’s your RTO for a full failure?
• Array based replication
• Site Recovery Manager
• Manual DR
• How long for a full failover?
• Host based replication
Permanent Site Loss – contd.
 Consider the RTO of your choice of disaster recovery technology
• It equates directly to the amount of time you will be without your virtual
machines
• How long can you, and your business, be without those services?
 A perfectly sound backup strategy is useless if it cannot return you to operation quickly enough
 Architect for the Failure – make sure every portion of your
environment can withstand a total failure, and recovery is possible
in a reasonable amount of time
Let’s Be Honest for a Moment…
 We would all love to have a DR site that is just as capable as our production site.
• I’d also like a horsey. Neither of us has all the budget we’d really like for that.
 There’s a key DR strategy that is often missed: Data existence is
far more important than full operation
• Many places decline to implement a DR strategy due to budget, time, or the belief that they have to build a DR site equal to their production environment.
• For many of these places, having key services ~only~ online, with the rest of
the data being protected at the DR site and present if needed, is more than
enough.
How to Do It On a Budget of Almost $0
 VMware GSS set out to prove that you can build a “protection” site
with a lot less than you’d expect
 Services required were true tier 1 apps:
• Store front end (based on DVD Store)
• Exchange
• SQL
• Oracle RAC
• JVM
Hardware
 8x Dell D630 Laptops (4g, 150G internal drives)
 2x Dell Inspiron 5100 Laptops (8g, 200G internal drives)
 4x Dell Precision 390 Workstations (8G/16G, 5x 1TB internal drives)
 1 Intel Atom Supermicro (2G / 5x1TB Internal Drives)
• Found by the side of the road
• No joke
 1 Dumb Ethernet Switch (Netgear), purchased from Best Buy
 Duct tape, 1 roll (boot drive for one of the 390s is taped to the
chassis)
 4x Broadcom dual-port 1G cards (borrowed from R710 servers)
Software
 ESX 5.1 U1
 SRM 5.1.1
 Ubuntu + knfsd (low performance storage)
• Host based replication target
 Datacore San Symphony V
• 4x Nodes, high performance, array based replication
 Starwind iSCSI
• 2x Nodes, high performance, management software.
 Lefthand VSA
• 4x Nodes, medium/high performance, array based replication
End results
 While we were certainly unable to bring up the full environment on
the DR site, we were able to replicate all 6TB of data successfully.
 Critical services for managing a disaster-caused outage were
online
• 1/5 of the Exchange mailboxes (executives and IT personnel)
• Web front end
• SQL database for minimal customer queries/interactions
• No major processing
• RAC
• Online for queries, no processing
• JVM
• Offline (RAC processing engine).
 This certainly isn’t ideal, but everything was intact, we could
“manage” the situation, and the “company” hadn’t vanished.
You Didn’t Believe Me…
Really!
Conclusion
 You can make a DR site out of almost anything
• Retired servers
• Spare parts
• Out of warranty equipment
 You don’t need high end equipment – hardware a generation or two
old is more than enough for basic, or even partial operation
 Existing software solutions make it very easy to turn servers alone
into extremely capable shared storage solutions
The End.
 Questions?
Other VMware Activities Related to This Session
 HOL:
HOL-SDC-1304
vSphere Performance Optimization
 Group Discussions:
STO1000-GD
Overall Storage with Patrick Carmichael
THANK YOU
Just Because You Could, Doesn't Mean You Should:
Lessons Learned in Storage Best Practices (v2.0)
Patrick Carmichael, VMware
STO4791
#STO4791

More Related Content

What's hot

Tuning DB2 in a Solaris Environment
Tuning DB2 in a Solaris EnvironmentTuning DB2 in a Solaris Environment
Tuning DB2 in a Solaris Environment
Jignesh Shah
 
San Francisco Cassadnra Meetup - March 2014: I/O Performance tuning on AWS fo...
San Francisco Cassadnra Meetup - March 2014: I/O Performance tuning on AWS fo...San Francisco Cassadnra Meetup - March 2014: I/O Performance tuning on AWS fo...
San Francisco Cassadnra Meetup - March 2014: I/O Performance tuning on AWS fo...
DataStax Academy
 
My experience with embedding PostgreSQL
 My experience with embedding PostgreSQL My experience with embedding PostgreSQL
My experience with embedding PostgreSQL
Jignesh Shah
 
How to deploy SQL Server on an Microsoft Azure virtual machines
How to deploy SQL Server on an Microsoft Azure virtual machinesHow to deploy SQL Server on an Microsoft Azure virtual machines
How to deploy SQL Server on an Microsoft Azure virtual machines
SolarWinds
 
Understanding PostgreSQL LW Locks
Understanding PostgreSQL LW LocksUnderstanding PostgreSQL LW Locks
Understanding PostgreSQL LW Locks
Jignesh Shah
 
Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community Talk on High-Performance Solid Sate Ceph Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community
 
Presentazione HPE @ VMUGIT UserCon 2015
Presentazione HPE @ VMUGIT UserCon 2015Presentazione HPE @ VMUGIT UserCon 2015
Presentazione HPE @ VMUGIT UserCon 2015
VMUG IT
 
How to configure SQL Server like a pro
How to configure SQL Server like a proHow to configure SQL Server like a pro
How to configure SQL Server like a pro
SolarWinds
 
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
In-Memory Computing Summit
 
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDSAccelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Ceph Community
 
State of the Art Thin Provisioning
State of the Art Thin ProvisioningState of the Art Thin Provisioning
State of the Art Thin Provisioning
Stephen Foskett
 
VMworld 2013: VMware Virtual SAN Technical Best Practices
VMworld 2013: VMware Virtual SAN Technical Best Practices VMworld 2013: VMware Virtual SAN Technical Best Practices
VMworld 2013: VMware Virtual SAN Technical Best Practices
VMworld
 
TDS-16489U-R2 0215 EN
TDS-16489U-R2 0215 ENTDS-16489U-R2 0215 EN
TDS-16489U-R2 0215 EN
QNAP Systems, Inc.
 
IMCSummit 2015 - Day 1 Developer Session - The Science and Engineering Behind...
IMCSummit 2015 - Day 1 Developer Session - The Science and Engineering Behind...IMCSummit 2015 - Day 1 Developer Session - The Science and Engineering Behind...
IMCSummit 2015 - Day 1 Developer Session - The Science and Engineering Behind...
In-Memory Computing Summit
 
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance BarriersCeph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Community
 
Flash for the Real World – Separate Hype from Reality
Flash for the Real World – Separate Hype from RealityFlash for the Real World – Separate Hype from Reality
Flash for the Real World – Separate Hype from Reality
Hitachi Vantara
 
Deploying ssd in the data center 2014
Deploying ssd in the data center 2014Deploying ssd in the data center 2014
Deploying ssd in the data center 2014
Howard Marks
 
Developing a Ceph Appliance for Secure Environments
Developing a Ceph Appliance for Secure EnvironmentsDeveloping a Ceph Appliance for Secure Environments
Developing a Ceph Appliance for Secure Environments
Ceph Community
 
How to configure SQL Server for SSDs and VMs
How to configure SQL Server for SSDs and VMsHow to configure SQL Server for SSDs and VMs
How to configure SQL Server for SSDs and VMs
SolarWinds
 
NGENSTOR_ODA_P2V_V5
NGENSTOR_ODA_P2V_V5NGENSTOR_ODA_P2V_V5
NGENSTOR_ODA_P2V_V5
UniFabric
 

What's hot (20)

Tuning DB2 in a Solaris Environment
Tuning DB2 in a Solaris EnvironmentTuning DB2 in a Solaris Environment
Tuning DB2 in a Solaris Environment
 
San Francisco Cassadnra Meetup - March 2014: I/O Performance tuning on AWS fo...
San Francisco Cassadnra Meetup - March 2014: I/O Performance tuning on AWS fo...San Francisco Cassadnra Meetup - March 2014: I/O Performance tuning on AWS fo...
San Francisco Cassadnra Meetup - March 2014: I/O Performance tuning on AWS fo...
 
My experience with embedding PostgreSQL
 My experience with embedding PostgreSQL My experience with embedding PostgreSQL
My experience with embedding PostgreSQL
 
How to deploy SQL Server on an Microsoft Azure virtual machines
How to deploy SQL Server on an Microsoft Azure virtual machinesHow to deploy SQL Server on an Microsoft Azure virtual machines
How to deploy SQL Server on an Microsoft Azure virtual machines
 
Understanding PostgreSQL LW Locks
Understanding PostgreSQL LW LocksUnderstanding PostgreSQL LW Locks
Understanding PostgreSQL LW Locks
 
Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community Talk on High-Performance Solid Sate Ceph Ceph Community Talk on High-Performance Solid Sate Ceph
Ceph Community Talk on High-Performance Solid Sate Ceph
 
Presentazione HPE @ VMUGIT UserCon 2015
Presentazione HPE @ VMUGIT UserCon 2015Presentazione HPE @ VMUGIT UserCon 2015
Presentazione HPE @ VMUGIT UserCon 2015
 
How to configure SQL Server like a pro
How to configure SQL Server like a proHow to configure SQL Server like a pro
How to configure SQL Server like a pro
 
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
IMCSummit 2015 - Day 1 Developer Track - Evolution of non-volatile memory exp...
 
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDSAccelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
 
State of the Art Thin Provisioning
State of the Art Thin ProvisioningState of the Art Thin Provisioning
State of the Art Thin Provisioning
 
VMworld 2013: VMware Virtual SAN Technical Best Practices
VMworld 2013: VMware Virtual SAN Technical Best Practices VMworld 2013: VMware Virtual SAN Technical Best Practices
VMworld 2013: VMware Virtual SAN Technical Best Practices
 
TDS-16489U-R2 0215 EN
TDS-16489U-R2 0215 ENTDS-16489U-R2 0215 EN
TDS-16489U-R2 0215 EN
 
IMCSummit 2015 - Day 1 Developer Session - The Science and Engineering Behind...
IMCSummit 2015 - Day 1 Developer Session - The Science and Engineering Behind...IMCSummit 2015 - Day 1 Developer Session - The Science and Engineering Behind...
IMCSummit 2015 - Day 1 Developer Session - The Science and Engineering Behind...
 
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance BarriersCeph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers
 
Flash for the Real World – Separate Hype from Reality
Flash for the Real World – Separate Hype from RealityFlash for the Real World – Separate Hype from Reality
Flash for the Real World – Separate Hype from Reality
 
Deploying ssd in the data center 2014
Deploying ssd in the data center 2014Deploying ssd in the data center 2014
Deploying ssd in the data center 2014
 
Developing a Ceph Appliance for Secure Environments
Developing a Ceph Appliance for Secure EnvironmentsDeveloping a Ceph Appliance for Secure Environments
Developing a Ceph Appliance for Secure Environments
 
How to configure SQL Server for SSDs and VMs
How to configure SQL Server for SSDs and VMsHow to configure SQL Server for SSDs and VMs
How to configure SQL Server for SSDs and VMs
 
NGENSTOR_ODA_P2V_V5
NGENSTOR_ODA_P2V_V5NGENSTOR_ODA_P2V_V5
NGENSTOR_ODA_P2V_V5
 

Similar to VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learned in Storage Best Practices (v2.0)

Storage and performance- Batch processing, Whiptail
Storage and performance- Batch processing, WhiptailStorage and performance- Batch processing, Whiptail
Storage and performance- Batch processing, Whiptail
Internet World
 
How Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver ClusterHow Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver Cluster
Aaron Joue
 
Open Source Data Deduplication
Open Source Data DeduplicationOpen Source Data Deduplication
Open Source Data Deduplication
RedWireServices
 
IaaS for DBAs in Azure
IaaS for DBAs in AzureIaaS for DBAs in Azure
IaaS for DBAs in Azure
Kellyn Pot'Vin-Gorman
 
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Community
 
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use CasesVMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld
 
Vstoragetamsupportday1 110311121032-phpapp02
Vstoragetamsupportday1 110311121032-phpapp02Vstoragetamsupportday1 110311121032-phpapp02
Vstoragetamsupportday1 110311121032-phpapp02
Suresh Kumar
 
Oracle Performance On Linux X86 systems
Oracle  Performance On Linux  X86 systems Oracle  Performance On Linux  X86 systems
Oracle Performance On Linux X86 systems
Baruch Osoveskiy
 
Mysql talk
Mysql talkMysql talk
Mysql talk
LogicMonitor
 
Ambedded - how to build a true no single point of failure ceph cluster
Ambedded - how to build a true no single point of failure ceph cluster Ambedded - how to build a true no single point of failure ceph cluster
Ambedded - how to build a true no single point of failure ceph cluster
inwin stack
 
Linux Huge Pages
Linux Huge PagesLinux Huge Pages
Linux Huge Pages
Geraldo Netto
 
Varrow madness 2013 virtualizing sql presentation
Varrow madness 2013 virtualizing sql presentationVarrow madness 2013 virtualizing sql presentation
Varrow madness 2013 virtualizing sql presentation
pittmantony
 
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best PracticesVMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld
 
ceph-barcelona-v-1.2
ceph-barcelona-v-1.2ceph-barcelona-v-1.2
ceph-barcelona-v-1.2
Ranga Swami Reddy Muthumula
 
Ceph barcelona-v-1.2
Ceph barcelona-v-1.2Ceph barcelona-v-1.2
Ceph barcelona-v-1.2
Ranga Swami Reddy Muthumula
 
Running MySQL on Linux
Running MySQL on LinuxRunning MySQL on Linux
Running MySQL on Linux
Great Wide Open
 
2015 deploying flash in the data center
2015 deploying flash in the data center2015 deploying flash in the data center
2015 deploying flash in the data center
Howard Marks
 
2015 deploying flash in the data center
2015 deploying flash in the data center2015 deploying flash in the data center
2015 deploying flash in the data center
Howard Marks
 
DataStax: Extreme Cassandra Optimization: The Sequel
DataStax: Extreme Cassandra Optimization: The SequelDataStax: Extreme Cassandra Optimization: The Sequel
DataStax: Extreme Cassandra Optimization: The Sequel
DataStax Academy
 
Tuning Linux for your database FLOSSUK 2016
Tuning Linux for your database FLOSSUK 2016Tuning Linux for your database FLOSSUK 2016
Tuning Linux for your database FLOSSUK 2016
Colin Charles
 

Similar to VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learned in Storage Best Practices (v2.0) (20)

Storage and performance- Batch processing, Whiptail
Storage and performance- Batch processing, WhiptailStorage and performance- Batch processing, Whiptail
Storage and performance- Batch processing, Whiptail
 
How Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver ClusterHow Ceph performs on ARM Microserver Cluster
How Ceph performs on ARM Microserver Cluster
 
Open Source Data Deduplication
Open Source Data DeduplicationOpen Source Data Deduplication
Open Source Data Deduplication
 
IaaS for DBAs in Azure
IaaS for DBAs in AzureIaaS for DBAs in Azure
IaaS for DBAs in Azure
 
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
Ceph Day London 2014 - Best Practices for Ceph-powered Implementations of Sto...
 
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use CasesVMworld Europe 2014: Virtual SAN Best Practices and Use Cases
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases
 
Vstoragetamsupportday1 110311121032-phpapp02
Vstoragetamsupportday1 110311121032-phpapp02Vstoragetamsupportday1 110311121032-phpapp02
Vstoragetamsupportday1 110311121032-phpapp02
 
Oracle Performance On Linux X86 systems
Oracle  Performance On Linux  X86 systems Oracle  Performance On Linux  X86 systems
Oracle Performance On Linux X86 systems
 
Mysql talk
Mysql talkMysql talk
Mysql talk
 
Ambedded - how to build a true no single point of failure ceph cluster
Ambedded - how to build a true no single point of failure ceph cluster Ambedded - how to build a true no single point of failure ceph cluster
Ambedded - how to build a true no single point of failure ceph cluster
 
Linux Huge Pages
Linux Huge PagesLinux Huge Pages
Linux Huge Pages
 
Varrow madness 2013 virtualizing sql presentation
Varrow madness 2013 virtualizing sql presentationVarrow madness 2013 virtualizing sql presentation
Varrow madness 2013 virtualizing sql presentation
 
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best PracticesVMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
 
ceph-barcelona-v-1.2
ceph-barcelona-v-1.2ceph-barcelona-v-1.2
ceph-barcelona-v-1.2
 
Ceph barcelona-v-1.2
Ceph barcelona-v-1.2Ceph barcelona-v-1.2
Ceph barcelona-v-1.2
 
Running MySQL on Linux
Running MySQL on LinuxRunning MySQL on Linux
Running MySQL on Linux
 
2015 deploying flash in the data center
2015 deploying flash in the data center2015 deploying flash in the data center
2015 deploying flash in the data center
 
2015 deploying flash in the data center
2015 deploying flash in the data center2015 deploying flash in the data center
2015 deploying flash in the data center
 
DataStax: Extreme Cassandra Optimization: The Sequel
DataStax: Extreme Cassandra Optimization: The SequelDataStax: Extreme Cassandra Optimization: The Sequel
DataStax: Extreme Cassandra Optimization: The Sequel
 
Tuning Linux for your database FLOSSUK 2016
Tuning Linux for your database FLOSSUK 2016Tuning Linux for your database FLOSSUK 2016
Tuning Linux for your database FLOSSUK 2016
 

More from VMworld

VMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2016: vSphere 6.x Host Resource Deep DiveVMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld
 
VMworld 2016: Troubleshooting 101 for Horizon
VMworld 2016: Troubleshooting 101 for HorizonVMworld 2016: Troubleshooting 101 for Horizon
VMworld 2016: Troubleshooting 101 for Horizon
VMworld
 
VMworld 2016: Advanced Network Services with NSX
VMworld 2016: Advanced Network Services with NSXVMworld 2016: Advanced Network Services with NSX
VMworld 2016: Advanced Network Services with NSX
VMworld
 
VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
VMworld 2016: How to Deploy VMware NSX with Cisco InfrastructureVMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
VMworld
 
VMworld 2016: Enforcing a vSphere Cluster Design with PowerCLI Automation
VMworld 2016: Enforcing a vSphere Cluster Design with PowerCLI AutomationVMworld 2016: Enforcing a vSphere Cluster Design with PowerCLI Automation
VMworld 2016: Enforcing a vSphere Cluster Design with PowerCLI Automation
VMworld
 
VMworld 2016: What's New with Horizon 7
VMworld 2016: What's New with Horizon 7VMworld 2016: What's New with Horizon 7
VMworld 2016: What's New with Horizon 7
VMworld
 
VMworld 2016: Virtual Volumes Technical Deep Dive
VMworld 2016: Virtual Volumes Technical Deep DiveVMworld 2016: Virtual Volumes Technical Deep Dive
VMworld 2016: Virtual Volumes Technical Deep Dive
VMworld
 
VMworld 2016: Advances in Remote Display Protocol Technology with VMware Blas...
VMworld 2016: Advances in Remote Display Protocol Technology with VMware Blas...VMworld 2016: Advances in Remote Display Protocol Technology with VMware Blas...
VMworld 2016: Advances in Remote Display Protocol Technology with VMware Blas...
VMworld
 
VMworld 2016: The KISS of vRealize Operations!
VMworld 2016: The KISS of vRealize Operations! VMworld 2016: The KISS of vRealize Operations!
VMworld 2016: The KISS of vRealize Operations!
VMworld
 
VMworld 2016: Getting Started with PowerShell and PowerCLI for Your VMware En...
VMworld 2016: Getting Started with PowerShell and PowerCLI for Your VMware En...VMworld 2016: Getting Started with PowerShell and PowerCLI for Your VMware En...
VMworld 2016: Getting Started with PowerShell and PowerCLI for Your VMware En...
VMworld
 
VMworld 2016: Ask the vCenter Server Exerts Panel
VMworld 2016: Ask the vCenter Server Exerts PanelVMworld 2016: Ask the vCenter Server Exerts Panel
VMworld 2016: Ask the vCenter Server Exerts Panel
VMworld
 
VMworld 2016: Virtualize Active Directory, the Right Way!
VMworld 2016: Virtualize Active Directory, the Right Way! VMworld 2016: Virtualize Active Directory, the Right Way!
VMworld 2016: Virtualize Active Directory, the Right Way!
VMworld
 
VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...
VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...
VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...
VMworld
 
VMworld 2015: Troubleshooting for vSphere 6
VMworld 2015: Troubleshooting for vSphere 6VMworld 2015: Troubleshooting for vSphere 6
VMworld 2015: Troubleshooting for vSphere 6
VMworld
 
VMworld 2015: Monitoring and Managing Applications with vRealize Operations 6...
VMworld 2015: Monitoring and Managing Applications with vRealize Operations 6...VMworld 2015: Monitoring and Managing Applications with vRealize Operations 6...
VMworld 2015: Monitoring and Managing Applications with vRealize Operations 6...
VMworld
 
VMworld 2015: Advanced SQL Server on vSphere
VMworld 2015: Advanced SQL Server on vSphereVMworld 2015: Advanced SQL Server on vSphere
VMworld 2015: Advanced SQL Server on vSphere
VMworld
 
VMworld 2015: Virtualize Active Directory, the Right Way!
VMworld 2015: Virtualize Active Directory, the Right Way!VMworld 2015: Virtualize Active Directory, the Right Way!
VMworld 2015: Virtualize Active Directory, the Right Way!
VMworld
 
VMworld 2015: Site Recovery Manager and Policy Based DR Deep Dive with Engine...
VMworld 2015: Site Recovery Manager and Policy Based DR Deep Dive with Engine...VMworld 2015: Site Recovery Manager and Policy Based DR Deep Dive with Engine...
VMworld 2015: Site Recovery Manager and Policy Based DR Deep Dive with Engine...
VMworld
 
VMworld 2015: Building a Business Case for Virtual SAN
VMworld 2015: Building a Business Case for Virtual SANVMworld 2015: Building a Business Case for Virtual SAN
VMworld 2015: Building a Business Case for Virtual SAN
VMworld
 
VMworld 2015: Explaining Advanced Virtual Volumes Configurations
VMworld 2015: Explaining Advanced Virtual Volumes ConfigurationsVMworld 2015: Explaining Advanced Virtual Volumes Configurations
VMworld 2015: Explaining Advanced Virtual Volumes Configurations
VMworld
 

More from VMworld (20)

VMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2016: vSphere 6.x Host Resource Deep DiveVMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2016: vSphere 6.x Host Resource Deep Dive
 
VMworld 2016: Troubleshooting 101 for Horizon
VMworld 2016: Troubleshooting 101 for HorizonVMworld 2016: Troubleshooting 101 for Horizon
VMworld 2016: Troubleshooting 101 for Horizon
 
VMworld 2016: Advanced Network Services with NSX
VMworld 2016: Advanced Network Services with NSXVMworld 2016: Advanced Network Services with NSX
VMworld 2016: Advanced Network Services with NSX
 
VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
VMworld 2016: How to Deploy VMware NSX with Cisco InfrastructureVMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
 
VMworld 2016: Enforcing a vSphere Cluster Design with PowerCLI Automation
VMworld 2016: Enforcing a vSphere Cluster Design with PowerCLI AutomationVMworld 2016: Enforcing a vSphere Cluster Design with PowerCLI Automation
VMworld 2016: Enforcing a vSphere Cluster Design with PowerCLI Automation
 
VMworld 2016: What's New with Horizon 7
VMworld 2016: What's New with Horizon 7VMworld 2016: What's New with Horizon 7
VMworld 2016: What's New with Horizon 7
 
VMworld 2016: Virtual Volumes Technical Deep Dive
VMworld 2016: Virtual Volumes Technical Deep DiveVMworld 2016: Virtual Volumes Technical Deep Dive
VMworld 2016: Virtual Volumes Technical Deep Dive
 
VMworld 2016: Advances in Remote Display Protocol Technology with VMware Blas...
VMworld 2016: Advances in Remote Display Protocol Technology with VMware Blas...VMworld 2016: Advances in Remote Display Protocol Technology with VMware Blas...
VMworld 2016: Advances in Remote Display Protocol Technology with VMware Blas...
 
VMworld 2016: The KISS of vRealize Operations!
VMworld 2016: The KISS of vRealize Operations! VMworld 2016: The KISS of vRealize Operations!
VMworld 2016: The KISS of vRealize Operations!
 
VMworld 2016: Getting Started with PowerShell and PowerCLI for Your VMware En...
VMworld 2016: Getting Started with PowerShell and PowerCLI for Your VMware En...VMworld 2016: Getting Started with PowerShell and PowerCLI for Your VMware En...
VMworld 2016: Getting Started with PowerShell and PowerCLI for Your VMware En...
 
VMworld 2016: Ask the vCenter Server Exerts Panel
VMworld 2016: Ask the vCenter Server Exerts PanelVMworld 2016: Ask the vCenter Server Exerts Panel
VMworld 2016: Ask the vCenter Server Exerts Panel
 
VMworld 2016: Virtualize Active Directory, the Right Way!
VMworld 2016: Virtualize Active Directory, the Right Way! VMworld 2016: Virtualize Active Directory, the Right Way!
VMworld 2016: Virtualize Active Directory, the Right Way!
 
VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...
VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...
VMworld 2016: Migrating from a hardware based firewall to NSX to improve perf...
 
VMworld 2015: Troubleshooting for vSphere 6
VMworld 2015: Troubleshooting for vSphere 6VMworld 2015: Troubleshooting for vSphere 6
VMworld 2015: Troubleshooting for vSphere 6
 
VMworld 2015: Monitoring and Managing Applications with vRealize Operations 6...
VMworld 2015: Monitoring and Managing Applications with vRealize Operations 6...VMworld 2015: Monitoring and Managing Applications with vRealize Operations 6...
VMworld 2015: Monitoring and Managing Applications with vRealize Operations 6...
 
VMworld 2015: Advanced SQL Server on vSphere
VMworld 2015: Advanced SQL Server on vSphereVMworld 2015: Advanced SQL Server on vSphere
VMworld 2015: Advanced SQL Server on vSphere
 
VMworld 2015: Virtualize Active Directory, the Right Way!
VMworld 2015: Virtualize Active Directory, the Right Way!VMworld 2015: Virtualize Active Directory, the Right Way!
VMworld 2015: Virtualize Active Directory, the Right Way!
 
VMworld 2015: Site Recovery Manager and Policy Based DR Deep Dive with Engine...
VMworld 2015: Site Recovery Manager and Policy Based DR Deep Dive with Engine...VMworld 2015: Site Recovery Manager and Policy Based DR Deep Dive with Engine...
VMworld 2015: Site Recovery Manager and Policy Based DR Deep Dive with Engine...
 
VMworld 2015: Building a Business Case for Virtual SAN
VMworld 2015: Building a Business Case for Virtual SANVMworld 2015: Building a Business Case for Virtual SAN
VMworld 2015: Building a Business Case for Virtual SAN
 
VMworld 2015: Explaining Advanced Virtual Volumes Configurations
VMworld 2015: Explaining Advanced Virtual Volumes ConfigurationsVMworld 2015: Explaining Advanced Virtual Volumes Configurations
VMworld 2015: Explaining Advanced Virtual Volumes Configurations
 

Recently uploaded

Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Speck&Tech
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
Claudio Di Ciccio
 
Serial Arm Control in Real Time Presentation
Serial Arm Control in Real Time PresentationSerial Arm Control in Real Time Presentation
Serial Arm Control in Real Time Presentation
tolgahangng
 
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc
 
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfUnlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Malak Abu Hammad
 
Infrastructure Challenges in Scaling RAG with Custom AI models
Infrastructure Challenges in Scaling RAG with Custom AI modelsInfrastructure Challenges in Scaling RAG with Custom AI models
Infrastructure Challenges in Scaling RAG with Custom AI models
Zilliz
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
Neo4j
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
Neo4j
 
UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6
DianaGray10
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
Safe Software
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems S.M.S.A.
 
Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1
DianaGray10
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
danishmna97
 
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
SOFTTECHHUB
 
Programming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup SlidesProgramming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup Slides
Zilliz
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
mikeeftimakis1
 
Mind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AIMind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AI
Kumud Singh
 
GraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracyGraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracy
Tomaz Bratanic
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
Neo4j
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
Adtran
 

Recently uploaded (20)

Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
 
Serial Arm Control in Real Time Presentation
Serial Arm Control in Real Time PresentationSerial Arm Control in Real Time Presentation
Serial Arm Control in Real Time Presentation
 
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy Survey
 
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfUnlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
 
Infrastructure Challenges in Scaling RAG with Custom AI models
Infrastructure Challenges in Scaling RAG with Custom AI modelsInfrastructure Challenges in Scaling RAG with Custom AI models
Infrastructure Challenges in Scaling RAG with Custom AI models
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
 
UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
 
Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1Communications Mining Series - Zero to Hero - Session 1
Communications Mining Series - Zero to Hero - Session 1
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
 
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
 
Programming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup SlidesProgramming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup Slides
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
Mind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AIMind map of terminologies used in context of Generative AI
Mind map of terminologies used in context of Generative AI
 
GraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracyGraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracy
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
 

VMworld 2013: Just Because You Could, Doesn't Mean You Should: Lessons Learned in Storage Best Practices (v2.0)

  • 1. Just Because You Could, Doesn't Mean You Should: Lessons Learned in Storage Best Practices (v2.0) Patrick Carmichael, VMware STO4791 #STO4791
  • 2. 22 Just Because You Could, Doesn’t Mean You SHOULD.  Storage Performance and Technology  Interconnect vs IOP  Command Sizing.  PDU Sizing  SSD vs Magnetic Disk  Tiering  SSD vs Spinning Media, Capacity vs Performance  Thin Provisioning  Architecting for Failure  Planning for the failure from the initial design  Complete Site Failure  Backup RTO  DR RTO  The low-cost DR site
  • 3. 33 Storage Performance – Interconnect vs IOP  Significant advances in interconnect performance  FC 2/4/8/16GB  iSCSI 1G/10G/40G  NFS 1G/10G/40G  Differences in performance between technologies • None – NFS, iSCSI and FC are effectively interchangeable • IO Footprint from array perspective.  Despite advances, performance limit is still hit at the media itself  90% of storage performance cases seen by GSS that are not config related, are media related  Payload (throughput) is fundamentally different from IOP (cmd/s)  IOP performance is always lower than throughput
  • 4. 44 Factors That Affect Performance  Performance versus Capacity  Disk performance does not scale with drive size  Larger drives generally equate lower performance  IOPS(I/Os per second) is crucial  How many IOPS does this number of disks provide?  How many disks are required to achieve a required number of IOPS?  More spindles generally equals greater performance
  • 5. 55 Factors That Affect Performance - I/O Workload and RAID  Understanding workload is a crucial consideration when designing for optimal performance  Workload is characterized by IOPS and write % vs read %  Design choice is usually a question of:  How many IOPs can I achieve with a given number of disks? • Total Raw IOPS = Disk IOPS * Number of disks • Functional IOPS = (Raw IOPS * Write%)/(Raid Penalty) + (Raw IOPS * Read %)  How many disks are required to achieve a required IOPS value? • Disks Required = ((Read IOPS) + (Write IOPS*Raid Penalty))/ Disk IOPS
  • 6. 66 IOPS Calculations – Fixed Number of disks  Calculating IOPS for a given number of disks • 8 x 146GB 15K RPM SAS drives • ~150 IOPS per disk • RAID 5 • 150 * 8 = 1200 Raw IOPS • Workload is 80% Write, 20% Read • (1200*0.8)/4 + (1200*0.2) = 480 Functional IOPS Raid Level IOPS(80%Read 20%Write) IOPS(20%Read 80%Write) 5 1020 480 6 1000 400 10 1080 720
  • 7. 77 IOPS Calculations – Minimum IOPS Requirement  Calculating number of disks for a required IOPS value  1200 IOPS required  15K RPM SAS drives. ~150 IOPS per disk  Workload is 80% Write, 20% Read  RAID 5  Disks Required = (240 + (960*4))/150 IOPS  27 Disks required Raid Level Disks(80%Read 20%Write) Disks (20%Read 80%Write) 5 13 27 6 16 40 10 10 14
  • 8. 88 Storage Performance – Interconnect vs IOP – contd.  While still true for 90% of workloads, this is starting to shift  SCSI commands passed at native sizes • Windows 2k8/2k12 (8MB copy) • Linux (native applications, up to 8MB+) • NoSQL Databases (Especially Key/Value or Document stores) • Cassandra (128k) • Mongo (64-256k)  ESX will gladly pass whatever size command the guest requests.  New applications are starting to run into some protocol limitations • iSCSI PDU (64k/256k) • NFS Buffer/Request Size • Write Amplification!  Offloading these processes offers significant performance improvements
  • 9. 99 Storage Performance – Interconnect vs IOP – contd. 0.00% 20.00% 40.00% 60.00% 80.00% 100.00% 120.00% 140.00% 160.00% 180.00% 200.00% 4k 8k 16k 32k 128k 1M 2M 4M 8M Efficiency/Size - Read Efficiency/Size - Read
  • 10. 1010 Tiering  Very old concept • Primarily a manual process in the past • Based on RAID groups / Disk types • Second generation arrays offer batch mode / scheduled auto-movement system of blocks under IO pressure
  • 11. 1111 Why Tiering Works, and Matters
  • 12. 1212 Tiering – contd.  New arrays from several partners now offer near real-time block movement • Dot Hill AssuredSAN • X-IO ISE2  Or, batch mode • Compellent • VNX  Auto-tiering will get you better performance, faster • ~2 hours for 150G working set to be fully moved to SSD.  However…
  • 13. 1313 Tiering – contd.  Remember – Just because you could, doesn’t mean you should!  There are several places where an auto-tiering array may not be able to help • Working Set • Total Data > SSD Space • Second tier of cache to worry about • Fully utilized • May not have spare IOPS with which to move data between layers. • Rapidly changing datasets • VDI • In-memory databases. • Often hard to predict when they’re fully utilized • “Never ending” performance bucket.
  • 14. 1414 Tiering – contd.  These platforms have amazing and very capable technology  Have the vendor show you, with your workload, what their platform is capable of  Workloads differ massively between users and vendors – validate!
  • 15. 1515 SSD vs Spinning Disk  SSDs offer extremely high performance per single “spindle” • Exceptionally great for “L2 Cache”, especially on auto-tiering arrays. • Capable of 1,000,000+ IOPS in pure SSD arrays • Exceptionally dense consolidation ratios (Excellent for VDI/HPC).  However…  Ultra-high performance arrays have other limitations • Surrounding infrastructure • Switching (packets/frames a second) • CPU (packaging frames in software initiators) • Command sizing • Some SSDs do not work well outside of certain command sizes • Architectural limitations • Overhead loss for scratch space • Garbage Collect
  • 16. 1616 VMFS/VMDK Updates for the Next Release  VMFS5 (ESX5) introduced support for LUNs larger than 2TB • Still limited to 2TB VMDK, regardless of LUN sizing • 64TB RDM  Major update in this release • Max size: 64TB for both VMDK and VMFS volumes. • Lets you avoid spanning disks inside of a guest, or using RDMs for large workloads
  • 17. 1717 VMDK Contd.  VMFS5, in combination with ATS, will allow consolidation of an ever-larger number of VMs onto single VMFS datastores  Alternatively, a couple of very large VMs can occupy a single LUN • While potentially easier for management, this means that significantly more data is now stored in one location. • When defining a problem space, you’ve now greatly expanded the amount of data stored in one place.  Just because you could, does that mean you should?
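One way to make that expanded problem space concrete is to multiply it out: VMs per datastore times average VM size is the amount of data a single datastore-level problem now touches. The numbers in this sketch are purely illustrative.

```python
# Illustrative: the amount of data a single datastore problem touches grows
# with consolidation. All numbers here are made up for the example.
def data_at_risk_tb(vms_per_datastore, avg_vm_gb):
    return vms_per_datastore * avg_vm_gb / 1024

for vms in (15, 60, 200):
    print(f"{vms:3d} VMs x 100 GB each -> {data_at_risk_tb(vms, 100):.1f} TB in one failure domain")
# 15 -> 1.5 TB, 60 -> 5.9 TB, 200 -> 19.5 TB
```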
  • 18. 1818 Thin Provisioning  Thin provisioning offers unique opportunities to manage your storage “after” provisioning • Workloads that require a certain amount of space, but don’t actually use it • Workloads that may grow over time, and can be managed/moved if they do • Providing space for disparate groups that have competing requirements  Many arrays ~only~ offer thin provisioning, by design  The question is, where should you thin provision, and what are the ramifications?
  • 19. 1919 Thin Provisioning – VM Level  A thin VM disk is created at 0 bytes and grows in VMFS block-size increments as data is written  Minor performance penalty for provisioning / zeroing  Disk cannot currently be shrunk – once grown, it stays grown • There are some workarounds for this, but they are not guaranteed  What happens when you finally run out of space? • All VMs stun until space is created • Production impacting, but potentially a quick fix (shut down VMs, adjust memory reservation, etc.) • Extremely rare to see data loss of any kind
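A toy model of the growth behavior described above, assuming the unified 1MB VMFS5 block size: allocation follows the high-water mark of guest writes and never shrinks on its own.

```python
# Toy model of thin-VMDK growth: allocation is claimed in VMFS block-size
# increments as the guest writes, and is never handed back automatically.
import math

VMFS_BLOCK = 1 * 1024 * 1024          # assumed 1MB unified VMFS5 block size

class ThinVmdk:
    def __init__(self):
        self.allocated = 0             # created at 0 bytes

    def write(self, offset, length):
        needed = math.ceil((offset + length) / VMFS_BLOCK) * VMFS_BLOCK
        # grows in block-size increments; deleting data in the guest
        # does not shrink it again
        self.allocated = max(self.allocated, needed)
        return self.allocated

vmdk = ThinVmdk()
print(vmdk.write(0, 10_500_000))   # 11534336 (~11MB allocated for ~10MB written)
print(vmdk.write(0, 1_000_000))    # 11534336 (once grown, it stays grown)
```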
  • 20. 2020 Thin Provisioning – LUN Level  Allows your array to seem bigger than it actually is  Allows you to share resources between groups (the whole goal of virtualization)  Some groups may not use all or much of what they’re allocated, allowing you to utilize the space they’re not using  Standard-sized or process-defined LUNs may waste significant amounts of space, and space being wasted is $$ being wasted  Significant CapEx gains can be seen with thin LUNs
  • 21. 2121 Thin Provisioning – LUN Level – contd.  What happens when you finally run out of space? • New VAAI T10 primitives, for compatible arrays, will let ESX know that the underlying storage has no free blocks (related to NULL_UNMAP) • If VAAI works, your array is compatible, and you’re on a supported version of ESX, this results in the same behavior as a thin VMDK running out of space – all VMs waiting on blocks will stun; VMs not waiting on blocks will continue as normal. • Cleanup will require finding additional space on the array, as VSWP files etc. will already have been allocated blocks at the LUN level. Depending on your utilization, this may not be possible unless UNMAP also works (very limited support at this time). • If NULL_UNMAP is not available for your environment, or does not work correctly, then what? • On a good day, the VMs will simply crash with a write error, or the application inside will fail (depends on the array and how it handles a full filesystem). • Databases and the like are worst affected and will most likely require rebuild/repair. • And on a bad day?
  • 22. 2222 Thin Provisioning LUN Level – contd.
  • 23. 2323 Thin Provisioning  There are many reasons to use Thin Provisioning, at both the VM and the LUN level  Thin provisioning INCREASES the management workload of maintaining your environment. You cannot just ignore it
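The monitoring that thin-provisioned LUNs force on you is simple to express, even if the discipline isn't. This is a generic sketch with made-up numbers, not any particular array's API: track oversubscription and estimate time until the pool is genuinely full.

```python
# Generic sketch of the monitoring thin-provisioned LUNs require: track
# oversubscription and estimate time until the pool is genuinely full.
def thin_pool_report(physical_tb, allocated_tb, consumed_tb, growth_tb_per_day):
    oversub   = allocated_tb / physical_tb
    free_tb   = physical_tb - consumed_tb
    days_left = free_tb / growth_tb_per_day if growth_tb_per_day > 0 else float("inf")
    return oversub, free_tb, days_left

oversub, free_tb, days_left = thin_pool_report(
    physical_tb=50, allocated_tb=120, consumed_tb=42, growth_tb_per_day=0.4)
print(f"{oversub:.1f}:1 oversubscribed, {free_tb:.0f} TB free, ~{days_left:.0f} days to full")
# -> 2.4:1 oversubscribed, 8 TB free, ~20 days to full
```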
  • 24. 2424 Freeing Space on Thin Provisioned LUNs  ESX 5.0 introduced NULL_UNMAP, a T10 command to mark blocks as “freed” for an underlying storage device. • Disabled in P01 due to major performance issues • Re-enabled in U1, manual only, to prevent sudden spikes in load, or corruption from array firmware issues.  vmkfstools -y command; set the amount of space to try to reclaim.  *VALIDATE whether vmkfstools -y / auto unmap made it into 5.5*
  • 25. 2525 Just Because You Could, Doesn’t Mean You Should  Everything we’ve covered so far is based on new technologies  What about the existing environments? The ultimate extension of “Just because you could, doesn’t mean you should,” is what I call “Architecting for Failure”
  • 26. 2626 Architecting for Failure  The ultimate expression of “Just because you could, doesn’t mean you should.” • Many core infrastructure designs are built with tried and true hardware and software, and people assume that things will always work • We all know this isn’t true – Murphy’s law  Architect for the failure. • Consider all of your physical infrastructure. • If any component failed, how would you recover? • If everything failed, how would you recover? • Consider your backup/DR plan as well
  • 27. 2727 Black Box Testing  Software engineering concept • Consider your design, all of the inputs, and all of the expected outputs • Feed it good entries, bad entries, and extreme entries, find the result, and make sure it is sane • If not, make it sane  This can be applied before, or after, you build your environment
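As a concrete example, the disk-sizing formula from earlier in the deck can itself be black-box tested: feed it good, bad, and extreme entries and check that the output is sane. The sketch below restates the formula inline so it stands alone.

```python
# Black-box testing the earlier disks-required formula: good, bad and extreme
# entries, checking only that the outputs are sane.
def disks_required(target_iops, disk_iops, write_pct, penalty):
    return (target_iops * (1 - write_pct) + target_iops * write_pct * penalty) / disk_iops

# good entry: matches the worked example (~27 disks for 1200 IOPS at 80% write)
assert round(disks_required(1200, 150, 0.8, 4)) == 27

# extreme entry: an all-read workload pays no write penalty at all
assert disks_required(1200, 150, 0.0, 4) == 8

# bad entry: zero-IOPS disks should fail loudly, not return nonsense
try:
    disks_required(1200, 0, 0.8, 4)
    raise AssertionError("expected a failure for 0-IOPS disks")
except ZeroDivisionError:
    pass

print("black-box checks passed")
```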
  • 28. 2828 Individual Component Failure  Black box: consider each step your IO takes, from VM to physical media  Physical hardware components are generally easy to compensate for • VMware HA and FT both make it possible for a complete system to fail with little/no downtime to the guests in question • Multiple hardware components add redundancy and eliminate single points of failure • Multiple NICs • Multiple storage paths • Traditional hardware (multiple power supplies, etc) • Even with all of this, many environments are not taking advantage of these features  Sometimes, the path that IO takes passes through a single point of failure that you don’t realize is one
  • 29. 2929 What About a Bigger Problem?  You’re considering all the different ways to make sure individual components don’t ruin your day  What if your problem is bigger?
  • 30. 3030 Temporary Total Site Loss  Consider your entire infrastructure during a temporary complete failure • What would happen if you had to bootstrap it cold? • This actually happens more often than would be expected • Hope for the best, prepare for the worst • Consider what each component relies on – do you have any circular dependencies? • Also known as the “Chicken and the Egg” problem, these can increase your RTO significantly • Examples: storage mounted via DNS, with all DNS servers on that same storage; restoring VC from backup when all networking is via DVS • What steps are required to bring your environment back to life? • How long will it take? Is that acceptable?
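The chicken-and-the-egg check is mechanical enough to script: model what each component needs at cold boot and look for cycles. The component names below are taken from the slide's examples; the dependency graph itself is hypothetical.

```python
# Tiny cycle check for cold-start dependencies. The component names are the
# slide's examples; the dependency graph itself is hypothetical.
boot_deps = {
    "esxi-hosts": ["storage"],
    "storage":    ["dns"],          # array mounted by DNS name
    "dns":        ["esxi-hosts"],   # DNS servers are VMs on that storage
    "vcenter":    ["storage"],
    "vm-network": ["vcenter"],      # DVS config lives in vCenter
}

def find_cycle(deps, node, path=()):
    if node in path:
        return path[path.index(node):] + (node,)
    for dep in deps.get(node, []):
        cycle = find_cycle(deps, dep, path + (node,))
        if cycle:
            return cycle
    return None

for component in boot_deps:
    cycle = find_cycle(boot_deps, component)
    if cycle:
        print(" -> ".join(cycle))   # esxi-hosts -> storage -> dns -> esxi-hosts
        break
```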
  • 31. 3131 Permanent Site Loss  Permanent site loss is not always an “Act of God” type event • Far more common is a complete loss of a major, critical component • Site infrastructure (power, networking, etc) may be intact, but your data is not • Array failures (controller failure, major filesystem corruption, RAID failure) • Array disaster (thermal event, fire, malice) • Human error – yes, it happens! • Multiple recovery options – which do you have? • Backups • Tested and verified? • What’s your RTO for a full failure? • Array based replication • Site Recovery Manager • Manual DR • How long for a full failover? • Host based replication
  • 32. 3232 Permanent Site Loss – contd.  Consider the RTO of your choice of disaster recovery technology • It equates directly to the amount of time you will be without your virtual machines • How long can you, and your business, be without those services? • A perfectly sound backup strategy is useless, if it cannot return you to operation quickly enough  Architect for the Failure – make sure every portion of your environment can withstand a total failure, and recovery is possible in a reasonable amount of time
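To make "how long can you be without those services" concrete, a first-order RTO estimate is just data size over restore throughput plus the fixed time to rebuild and verify. Every number in this sketch is a placeholder for rates you would measure in your own restore tests.

```python
# First-order RTO estimate for a full restore. Every number is a placeholder;
# substitute restore rates measured in your own tests.
def estimated_rto_hours(data_tb, restore_mb_per_s, fixed_hours):
    restore_seconds = data_tb * 1024 * 1024 / restore_mb_per_s
    return restore_seconds / 3600 + fixed_hours

# 6 TB of data, ~300 MB/s sustained restore, plus 4 hours to re-register,
# boot and verify services
print(f"~{estimated_rto_hours(6, 300, 4):.1f} hours")   # ~9.8 hours
```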
  • 33. 3333 Let’s Be Honest for a Moment…  We would all love to have a DR site that is as capable as the production site. • I’d also like a horsey. Neither of us has all the budget we’d really like for that.  There’s a key DR strategy that is often missed: Data existence is far more important than full operation • Many places decline to implement a DR strategy due to budget, time, or believing that they have to build a DR site equal to their production environment. • For many of these places, having key services ~only~ online, with the rest of the data being protected at the DR site and present if needed, is more than enough.
  • 34. 3434 How to Do It On a Budget of Almost $0  VMware GSS set out to prove that you can build a “protection” site with a lot less than you’d expect  Services required were true tier 1 apps: • Store front end (based on DVD Store) • Exchange • SQL • Oracle RAC • JVM
  • 35. 3535 Hardware  8x Dell D630 Laptops (4g, 150G internal drives)  2x Dell Inspiron 5100 Laptops (8g, 200G internal drives)  4x Dell Precision 390 Workstations (8G/16G, 5x 1TB internal drives)  1 Intel Atom Supermicro (2G / 5x1TB Internal Drives) • Found by the side of the road • No joke  1 Dumb Ethernet Switch (Netgear), purchased from Best Buy  Duct tape, 1 roll (boot drive for one of the 390s is taped to the chassis)  4x Broadcom dual-port 1G cards (borrowed from R710 servers)
  • 36. 3636 Software  ESX 5.1 U1  SRM 5.1.1  Ubuntu + knfsd (low performance storage) • Host based replication target  Datacore San Symphony V • 4x Nodes, high performance, array based replication  Starwind iSCSI • 2x Nodes, high performance, management software.  Lefthand VSA • 4x Nodes, medium/high performance, array based replication
  • 37. 3737 End results  While we were certainly unable to bring up the full environment on the DR site, we were able to replicate all 6TB of data successfully.  Critical services for managing a disaster-caused outage were online • 1/5 of the Exchange mailboxes (executives and IT personnel) • Web front end • SQL database for minimal customer queries/interactions • No major processing • RAC • Online for queries, no processing • JVM • Offline (RAC processing engine).  This certainly isn’t ideal, but everything was intact, we could “manage” the situation, and the “company” hadn’t vanished.
  • 40. 4040 Conclusion  You can make a DR site out of almost anything • Retired servers • Spare parts • Out of warranty equipment  You don’t need high end equipment – hardware a generation or two old is more than enough for basic, or even partial operation  Existing software solutions make it very easy to turn servers alone into extremely capable shared storage solutions
  • 42. 4242 Other VMware Activities Related to This Session  HOL: HOL-SDC-1304 vSphere Performance Optimization  Group Discussions: STO1000-GD Overall Storage with Patrick Carmichael
  • 45. Just Because You Could, Doesn't Mean You Should: Lessons Learned in Storage Best Practices (v2.0) Patrick Carmichael, VMware STO4791 #STO4791