Eliminating the I/O Blender
Realizing “World Class” Storage Performance in
Virtual Server Environments
Welcome and Introduction
• Storage has been the
Achilles' heel of server
virtualization for many
years
• Many techniques have
been tried to minimize the
problem
• A successful approach must
address the root cause: the
I/O Blender
Where It All Began
Less HW
to Manage
Dynamic
Provisioning
Fewer Staff
Required
Improved
Resource
Utilization
Lower IT
Costs
Lower
Energy
Bills
More
Up-Time
Fun at
Parties!
Early Challenges
Applications vary in terms of
resource and performance
requirements…
So, server configurations differ.
[Diagram: differing Storage and LAN
connections per server]
The Highest Common Denominator
To realize the benefits touted by the
hypervisor vendors (VMotion, automatic
server failover, migration for efficiency),
EVERY potential host must be enabled
with connectivity required by the most
demanding application…
Increasing complexity, cost and energy
consumption…
NICs / HBAs
Maybe It’s The Storage Itself…or Not
SERVERS
STORAGE
High CPU
Processing
Cycles…
Short
Storage I/O
Queues…
Workarounds
• 2010: vStorage API for Array
Integration (VAAI) introduced in
vSphere 4 – 9 non-standard
primitives enabling unapproved SCSI
commands to offload storage chores
from inefficient ESX servers
• 2011: “Enhanced” and Reissued in
vSphere 5, expanding support for thin
provisioning and NAS
• 2011: VMware vSphere Storage
Appliance – neither a SAN nor NAS,
but a repository for VMDK
• 2014: Virtual SAN
[Diagram: Virtual Servers connecting to
NAS/iSCSI and FC/SAS SANs]
Enter the I/O Blender Effect
STORAGE I/O: Okay.
RAW I/O: Okay.
Problem must be ahead of the
disk/storage interconnect
I/O Blender in a Nutshell
[Diagram: writes A1–A6 leave the application
in order and reach the disk in the same order]
RANDOM WRITES
TRADITIONAL I/O PATH
APP -> SERVER -> HBA -> DISK
I/O Blender in a Nutshell
RANDOM WRITES
VIRTUAL SERVER I/O PATH
VMs -> HYPERVISOR -> DISK
[Diagram: writes A1–A3, B1–B5, C1–C4 from
three VMs arrive at the HYPERVISOR interleaved
and reach the disk as a randomized stream]
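The blending effect above can be sketched in a few lines of Python. This is a deliberately simplified model (three VMs with three writes each, strict round-robin scheduling); a real hypervisor interleaves far less predictably:

```python
# Hypothetical sketch of the I/O blender: each VM writes sequentially to
# its own virtual disk, but the hypervisor multiplexes all streams onto
# one physical device, so the merged stream is no longer sequential.
import itertools

def vm_stream(name, blocks):
    """Sequential writes within one VM: A1, A2, A3, ..."""
    return [f"{name}{i}" for i in range(1, blocks + 1)]

streams = [vm_stream(v, 3) for v in ("A", "B", "C")]

# Round-robin multiplexing, a crude stand-in for hypervisor scheduling.
blended = [io for batch in itertools.zip_longest(*streams)
           for io in batch if io is not None]

print(blended)  # ['A1', 'B1', 'C1', 'A2', 'B2', 'C2', 'A3', 'B3', 'C3']
```

Each VM's own stream was perfectly sequential, but the disk now sees consecutive writes aimed at three unrelated address ranges, which is what turns sequential workloads into random I/O.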
Simple Flash Caching Not a Fix
RANDOM WRITES
ADD FLASH
VMs -> HYPERVISOR -> FLASH CACHE <-> DISK
[Diagram: the same interleaved writes A1–A3,
B1–B5, C1–C4 pass through the HYPERVISOR and a
flash cache, but still hit disk in random order]
Smart Caching Provides an Answer
RANDOM WRITES
ADD FLASH
VMs -> HYPERVISOR -> SMART CACHE <-> DISK
HYPERVISOR
[Diagram: interleaved writes A1, B3, A2, C1,
A4, B1, C2, B2 enter the LSFS smart cache and
leave as one SEQUENTIAL WRITE to disk]
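The smart-caching idea can be illustrated with a toy model (this is a sketch of the general log-structured technique, not StarWind's actual implementation; segment size and block names are invented for illustration):

```python
# Hypothetical sketch: random writes from many VMs are absorbed in a
# cache and flushed to disk as large sequential segments, so the disk
# performs a few sequential writes instead of many random ones.
blended = ["A1", "B3", "A2", "C1", "A4", "B1", "C2", "B2"]  # post-blender stream

cache, segment_size, disk_ops = [], 4, []

for block in blended:
    cache.append(block)                # absorb the random write in the cache
    if len(cache) == segment_size:     # segment full: one sequential flush
        disk_ops.append(list(cache))
        cache.clear()

print(disk_ops)  # [['A1', 'B3', 'A2', 'C1'], ['A4', 'B1', 'C2', 'B2']]
```

Eight random writes become two sequential segment writes; the blocks are still mixed across VMs, but the disk no longer seeks between them.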
StarWind Software LSFS Paves the Way
[Diagram: three nodes, each with its own
Direct Attached Storage (DAS)]
Hypervisor
Agnostic
Write Anywhere
File Layout (WAFL)
& RAID-DP
Cache Accelerated
Sequential Layout (CASL™)
& RAID 6
Log-Structured
File System (LSFS)
& Virtual SAN
Any RAID Level
Any hardware
Key Benefits
• StarWind Software’s Log-Structured File System
brings unique features and functionality to any
primary VM-centric storage system
– Significant performance boost and elimination of the
hypervisor I/O bottleneck
– Support for all RAID parity and striping schemes
– Improved functionality for data protection
– Flash friendly
Without Further Ado..
• Let’s talk to the folks who invented the
StarWind Log Structured File System (LSFS)
– Get the fine points of the StarWind approach
– Explore the way that StarWind LSFS integrates
with the hypervisor stack
– Learn why it is better to be
“complementary” than
competitive with the leading
hypervisor vendor…
Let’s Talk to StarWind Software
Jon Toigo
Managing Partner
Toigo Partners International
Chairman, Data Management Institute
Max Kolomyeytsev
Product Manager
StarWind Software
StarWind Software Background
• Founded: 2003
• Users: 30,000+, including Fortune 500
• Headquarters: Wakefield, MA, USA
• Office Locations: 2 - North America and Europe
• Channel Partners: 270+
• Technology Partners: Microsoft, VMware, HP, IBM,
Dell
What Is a Log-Structured File System?
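The core mechanism of a log-structured file system can be sketched under simplifying assumptions (a single in-memory segment, no crash recovery, no garbage collection; the class and names below are invented for illustration):

```python
# Hypothetical minimal log-structured store: every write, whatever its
# logical address, is appended at the tail of one log, so the backing
# device only ever sees sequential writes. An in-memory index maps
# logical addresses to log offsets so reads can find the latest copy.
class LogStore:
    def __init__(self):
        self.log = []     # append-only log (one segment, for simplicity)
        self.index = {}   # logical address -> offset of the live copy

    def write(self, addr, data):
        self.index[addr] = len(self.log)  # newest copy wins
        self.log.append(data)             # sequential append, never in-place

    def read(self, addr):
        return self.log[self.index[addr]]

store = LogStore()
for addr, data in [("A1", "x"), ("C3", "y"), ("B2", "z"), ("A1", "x2")]:
    store.write(addr, data)

print(store.read("A1"))   # 'x2' - the overwrite is the live copy
print(len(store.log))     # 4 - the stale 'A1' copy still occupies space
```

Note that the overwrite of `A1` does not touch the old record; it simply appends a newer one. That is what keeps writes sequential, and also why log-structured systems need garbage collection.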
Implementing LSFS in
Virtual Server Environment
[Diagram: slow vs. fast I/O paths]
Implementing LSFS in
Virtual Server Environment
Complementary, not Competitive
Any trade-offs?
• Disk space overhead is necessary – 1 TB used is
not always 1 TB consumed, because stale copies
remain in the log until garbage collection
• RAM use is higher – LSFS keeps the metadata
that tracks the log structure in RAM
• Sequential reads can get slower – logically
sequential data may be physically scattered
across the log
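The space-overhead trade-off can be made concrete with a small sketch (invented numbers; the 1.5x figure is just what this toy workload produces, not a claim about LSFS):

```python
# Hypothetical illustration of log-structured space overhead: overwrites
# leave stale copies in the log until compaction rewrites only the live
# records, so "1 TB used" by the guest can consume more on disk.
log = []     # (addr, data) records, append-only
index = {}   # addr -> position of the live copy in the log

def write(addr, data):
    index[addr] = len(log)
    log.append((addr, data))

for i in range(100):   # 100 logical blocks written once...
    write(i, f"v0-{i}")
for i in range(50):    # ...then half of them overwritten
    write(i, f"v1-{i}")

print(len(index), len(log))   # 100 live blocks, 150 log records (1.5x)

# Compaction (garbage collection) copies only live records to a new log.
live = [(a, log[p][1]) for a, p in sorted(index.items())]
print(len(live))              # 100 - stale space reclaimed
```

This is also why RAM use grows: the `index` that locates each live copy must stay resident, and why a later "sequential" read of blocks 0–99 would actually jump between positions 100–149 and 50–99 of the log.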
Q&A
Jon Toigo
Managing Partner
Toigo Partners International
Chairman, Data Management Institute
Max Kolomyeytsev
Product Manager
StarWind Software
Thank you.
• And for more information
– www.starwindsoftware.com
– info@starwindsoftware.com
– https://twitter.com/starwindsan
– https://www.facebook.com/StarWind.Software
– Get StarWind Virtual SAN trial here:
– http://www.starwindsoftware.com/registration-
starwind-virtual-san
