Using flash on the server side

  1. Using Flash on the Server Side
     Or: How This Flash Makes Storage Like This Flash
  2. Flash/SSD Form Factors
     • 2.5" SSD
       – SATA for laptops, good for servers
       – SAS: dual ports for dual-controller arrays
     • PCIe
       – Lower latency, higher bandwidth
     • SATA Express
       – 2.5" PCIe, frequently with NVMe
     • Memory-channel flash (SanDisk UltraDIMM)
       – Write latency as low as 3 µs
       – Requires BIOS support
  3. Anatomy of an SSD
     • Flash controller
       – Provides the external interface: SATA, SAS, or PCIe
       – Wear leveling (sketched below)
       – Error correction
     • DRAM
       – Write buffer
       – Metadata
     • Ultra- or other capacitor
       – Dumps DRAM to flash on power failure
       – Enterprise SSDs only
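Wear leveling is the controller's core job, so a minimal sketch of the idea may help. This is purely illustrative, not any controller's actual flash translation layer: on each write, the logical block is remapped to whichever free physical block has seen the fewest erases. Real FTLs also handle garbage collection, static wear leveling, and ECC.

```python
import heapq

# Illustrative wear-leveling sketch: map each written logical block to the
# least-erased free physical block. Not a real FTL.
class WearLeveler:
    def __init__(self, num_physical_blocks):
        # Min-heap of (erase_count, physical_block) for free blocks
        self.free = [(0, pb) for pb in range(num_physical_blocks)]
        heapq.heapify(self.free)
        self.mapping = {}  # logical block -> (erase_count, physical_block)

    def write(self, logical_block):
        if logical_block in self.mapping:
            # The old physical copy is erased and returned to the free pool
            erases, pb = self.mapping[logical_block]
            heapq.heappush(self.free, (erases + 1, pb))
        # Place the new data on the least-worn free block
        self.mapping[logical_block] = heapq.heappop(self.free)
        return self.mapping[logical_block][1]
```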
  4. Flash in the Server
     • Minimizes latency & maximizes bandwidth
       – No SAN latency/congestion
       – Dedicated controller
     • PCIe flash example
       – 1.6 GB/s bandwidth
       – <50 µs read, <20 µs write latency
     • But servers are unreliable
       – Data on a server SSD is captive
  5. Server-Side Deployment Models
     • As direct storage
       – Very high performance but limited resiliency
       – Resiliency is the responsibility of applications:
         • HPC-like checkpointing
         • Web 2.0 app data distribution
         • SQL Server AlwaysOn
     • As pooled/replicated storage
       – Virident software
       – Replicates and pools via RDMA
  6. Server Flash Caching Advantages
     • Takes advantage of lower latency
       – Especially w/ a PCIe flash card/SSD
     • Data is written to the back-end array
       – So it's not captive in a failure scenario
     • Works with any array
       – Or DAS, for that matter
     • Allows focused use of flash
       – Put your dollars just where needed
       – Match SSD performance to the application
     • Politics: a server-team solution, not a storage-team one
  7. Server-Side Caching Suppliers
     • Independent software vendors
       – Proximal Data
       – PernixData
     • Array vendors
       – EMC
       – NetApp
     • Server vendors
       – HP
       – Dell
     • QLogic
     • SSD vendors
       – Intel (Nevex)
       – Samsung (Nvelo)
       – SanDisk (FlashSoft)
       – Western Digital (sTec EnhanceIO, VeloBit HyperCache)
       – OCZ
       – Virident
  8. Use Cases
     • Database servers
       – Good fit with shared-nothing clusters
       – Look for OS and SSD support
     • Server virtualization
       – Higher random I/O rates
       – Look for:
         • Dynamic cache assignment
         • Live migration support
     • VDI
       – Write intensive, but captive data is less of an issue
  9. Caching Boosts Performance!
     [Chart: published TPC-C results for baseline, PCIe SSD cache, and low-end SSD cache]
  10. Write Through and Write Back
      [Chart: TPC-C IOPS for baseline, write-through, and write-back configurations]
      • 100 GB cache
      • Dataset grows from 330 GB to 450 GB over a 3-hour test
  11. Architectures
      • Caching software in the OS
        – File or block filter driver
      • Caching software in the hypervisor
        – File or block filter
        – Vendor may need custom hooks
      • Virtual storage appliance
      • Hardware cache device
        – RAID controller for DAS
        – HBA/CNA
  12. Software Caching Locations
  13. Basic Cache Types (see the sketch below)
      • Read
      • Write-through
        – Cache populated on write
        – Data written to the back end before the ack
      • Write-back
        – Data acknowledged on write to cache
        – Written to back-end storage asynchronously
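To make the three modes concrete, here is a minimal sketch in Python. The `flash` (a dict-like fast local store) and `backend` (the shared array, the source of truth) objects are hypothetical stand-ins, not any vendor's API; real products implement this at the block-driver layer, not in application code.

```python
# Minimal sketch of read, write-through, and write-back caching.
class BlockCache:
    def __init__(self, flash, backend, write_back=False):
        self.flash = flash
        self.backend = backend
        self.write_back = write_back
        self.dirty = set()  # blocks acknowledged but not yet on the array

    def read(self, lba):
        data = self.flash.get(lba)
        if data is None:                    # cache miss
            data = self.backend.read(lba)
            self.flash[lba] = data          # populate on read
        return data

    def write(self, lba, data):
        self.flash[lba] = data              # cache populated on write
        if self.write_back:
            self.dirty.add(lba)             # ack now, destage later
        else:
            self.backend.write(lba, data)   # write-through: array first, then ack

    def flush(self):
        # Write-back destage: normally asynchronous, shown synchronously here
        for lba in sorted(self.dirty):
            self.backend.write(lba, self.flash[lba])
        self.dirty.clear()
```

Write-back acknowledges as soon as the data is in flash, which is both where the write-back gains on slide 10 come from and where the imprisoned-data risk on slide 14 begins.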
  14. Write Back?
      • Write-back caches speed up writes too
      • Data must be protected
        – Flash is non-volatile
        – Servers are unreliable
        – A server crash leads to imprisoned data
      • Move the SSD to a new server w/ cache software
      • Flush the cache before resuming (recovery sketch below)
      • Array snapshot issues
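A hedged sketch of the recovery path this slide describes, reusing the `BlockCache` above; `load_dirty_set()` is a hypothetical call standing in for dirty-block metadata that the cache software persists on the flash device.

```python
# Crash recovery with write-back caching: move the SSD to a new server
# running the cache software, rebuild the dirty set, and flush before the
# application resumes or the array is snapshotted.
cache = BlockCache(flash, backend, write_back=True)
cache.dirty = load_dirty_set(flash)  # hypothetical: metadata persisted on flash
cache.flush()                        # destage imprisoned data to the array
```

Until that flush completes, array-side snapshots and replicas see stale data, which is the snapshot issue the slide warns about.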
  15. Distributed Cache
      • Duplicate cached writes across n servers (sketch below)
      • Eliminates imprisoned data
      • Allows caching for servers w/o SSDs
      • Products:
        – PernixData
        – Virident
        – Others soon
      • QLogic FabricCache: a caching HBA that acts as both target and initiator
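A sketch of the idea behind these distributed write-back caches; the replication interface is invented for illustration (the `peers` are hypothetical RDMA/RPC stubs with a `put()` call, not a product API). Each cached write is mirrored to n peers before it is acknowledged, so losing one server no longer imprisons data.

```python
# Write-back cache that mirrors every cached write to peer servers
# before acknowledging it.
class ReplicatedWriteBackCache(BlockCache):
    def __init__(self, flash, backend, peers):
        super().__init__(flash, backend, write_back=True)
        self.peers = peers

    def write(self, lba, data):
        self.flash[lba] = data
        for peer in self.peers:   # duplicate across n servers
            peer.put(lba, data)   # must complete before we ack the write
        self.dirty.add(lba)       # safe to ack; destage to the array later
```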
  16. Live Migration Issues
      • Does the cache allow migration through the standard workflow,
        to allow automation like DRS?
      • Is the cache cold after migration?
      • Cache coherency issues
      • Guest cache
        – Cache LUN locks the VM to a server
        – Can automate, but breaks the workflow
      • Hypervisor cache
        – Must prepare and warm the cache at the destination
  17. Copy Cache During Migration (sketch below)
      • Migration includes cache contents
      • More data extends migration time
      • Requires a new workflow or hypervisor support
      • Now available from Proximal Data
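A sketch contrasting the two approaches from slides 16 and 17. The host objects and the `hot_lbas()` working-set call are invented for illustration; real products hook the hypervisor's migration workflow instead.

```python
# Two illustrative ways to avoid a cold cache after live migration.

def warm_at_destination(vm, src, dst):
    for lba in src.cache.hot_lbas(vm):  # working set observed on the source
        dst.cache.read(lba)             # re-fetch from the array, warming dst
    src.migrate(vm, dst)

def copy_cache_during_migration(vm, src, dst):
    src.cache.flush()                   # destage dirty blocks first
    for lba in src.cache.hot_lbas(vm):
        dst.cache.flash[lba] = src.cache.flash[lba]  # ship warm blocks directly
    src.migrate(vm, dst)                # more data to move, but warm on arrival
```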
  18. Questions and Contact
      • Contact info:
        – Hmarks@deepstorage.net
        – @DeepStoragenet on Twitter
