Varrow – Datacenter Storage Today and Tomorrow

Presentation given to ITPSC in Columbia, SC on 10/16/2012

Transcript

  • 1. Datacenter Storage Today and Tomorrow
    October 16, 2012
    Varrow
    Tony Pittman, Technical Consultant
    t: @pittmantony
    w: tpittman.wordpress.com
  • 2. Housekeeping
    Continue the conversation: @pittmantony and tpittman.wordpress.com
    Limited time today; reach out to me for deeper discussion of any of these points
  • 3. Agenda
    Datacenter Storage Concepts
    Typical Datacenters Today
    New Technologies and Concepts on the Horizon
  • 4. Types & Use Cases
    DAS – Direct Attached Storage
      – Boot / OS volumes
      – Non-critical, low-performing data
    SAN – Storage Area Networks
      – Critical and/or high-performing data
      – Shared storage for clusters (RAC, MS Failover Clustering, VMware)
      – Boot From SAN – enables replicated OS volumes and statelessness
      – Array-based replication
    NAS – Network Attached Storage
      – Unstructured data (files and folders)
      – VMware and Hyper-V 2012 datastores can use NAS
      – Database backup destination
      – Array-based replication
  • 5. DAS – Direct Attached Storage
    Simplest type of datacenter storage
    Includes spinning hard drives and flash
    Connected by SAS, SATA, USB, PCIe (also IDE, SCSI)
    Limited by number of devices, performance, availability
  • 6. SAN – Storage Area Network
    Composed of:
      – Storage arrays
      – Host bus adapters – I/O cards in hosts allowing SAN connectivity
      – SAN switches – connect all the pieces together; purpose-built for storage connectivity
  • 7. SAN – Storage Area Network
    Storage Arrays:
      – Purpose-built
      – Manage large amounts of storage
      – Presented to multiple hosts
      – Performance improvements built in
        • Tiering across multiple drive types to maximize performance and capacity for a given budget (sketched below)
        • Read/write DRAM cache and caching algorithms
      – Full redundancy – data, connectivity, management
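A rough sketch of the tiering idea above, in Python: count accesses per block and let the hottest blocks earn the small fast tier. Everything here (the capacity, the workload, the function names) is invented for illustration; real arrays use far more sophisticated placement heuristics.

```python
# Toy model of sub-LUN tiering: the most frequently accessed blocks are
# promoted to a small, fast tier (flash); everything else stays on the
# large, slow tier (spinning disk). All names and sizes are hypothetical.
from collections import Counter

FLASH_CAPACITY_BLOCKS = 4     # pretend the fast tier holds only 4 blocks
access_counts = Counter()     # per-block access frequency ("heat")

def record_io(block_id: int) -> None:
    """Count every read/write so blocks can be ranked by heat."""
    access_counts[block_id] += 1

def retier() -> set[int]:
    """Return the block IDs that currently deserve the flash tier."""
    hottest = access_counts.most_common(FLASH_CAPACITY_BLOCKS)
    return {block_id for block_id, _ in hottest}

# Simulate a skewed workload: a few blocks receive most of the I/O.
for block in [1, 1, 1, 2, 2, 3, 7, 7, 7, 7, 9]:
    record_io(block)

print(sorted(retier()))   # [1, 2, 3, 7] – the hot set earns the fast tier
```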
  • 8. SAN – Storage Area Network
    SAN Switches
      – FC, iSCSI, or FCoE – The Great Debate
      – Must be compatible with the storage array, i.e. some arrays won't do some protocols
      – FC (Fibre Channel) – purpose-built for storage; mostly implemented at 8 Gb/s, but some 16 Gb/s models are available
      – iSCSI – rides on TCP/IP, *not lossless*; depends on retransmits for packets dropped during heavy load periods. Network design is crucial; recommend isolating it from other network traffic. 10 Gb Ethernet is getting pretty common. (Is it the future?)
      – FCoE – rides directly on Ethernet, not TCP/IP. Lossless; uses Data Center Bridging (DCB)
  • 9. SAN – Storage Area Network
    SAN Switches – Analogy
    FC – similar to railways: purpose-built, connected to predetermined, specific endpoints
    iSCSI – similar to highways: can be more flexible, but traffic can be a problem
  • 10. SAN – Storage Area Network
    Array-based replication:
      – Moves replication CPU overhead off of the host
      – Can improve RPO by maintaining a journal of writes, allowing rollback to a specific point in time (sketched below)
      – Simplifies management vs. separate replication for each database, filesystem, or drive
      – Can be used to populate a test environment or backup server, duplicating the real production environment
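A minimal sketch, assuming a toy model, of the write journal that makes point-in-time rollback possible: every write is appended with a timestamp, and replaying the journal up to a chosen time reconstructs the volume as it looked at that moment. The names are invented; real journaled-replication products are far more involved.

```python
# Toy write journal for point-in-time rollback. Entries are appended in
# time order; rolling back means replaying writes up to the chosen time.
from dataclasses import dataclass

@dataclass
class JournalEntry:
    timestamp: float   # when the write landed
    block: int         # logical block address
    data: bytes        # new contents of the block

journal: list[JournalEntry] = []

def write(ts: float, block: int, data: bytes) -> None:
    journal.append(JournalEntry(ts, block, data))

def rollback_to(point_in_time: float) -> dict[int, bytes]:
    """Rebuild the volume as it looked at point_in_time."""
    volume: dict[int, bytes] = {}
    for entry in journal:
        if entry.timestamp > point_in_time:
            break          # journal is append-ordered by time
        volume[entry.block] = entry.data
    return volume

write(1.0, 0, b"AAAA")
write(2.0, 0, b"BBBB")     # overwrites block 0
write(3.0, 1, b"CCCC")
print(rollback_to(2.5))    # {0: b'BBBB'} – the state before the 3.0 write
```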
  • 11. SAN – Storage Area Network
    Array-based replication:
      – Application integration
      – Usually required for geographically dispersed clustering
  • 12. NAS – Network Attached Storage
    NAS appliance – usually a purpose-built device running a flavor of Linux and serving up file shares and NFS exports from internal drives
    Usually connects to the existing server LAN
    Operates via CIFS (SMB v2 and v3) and NFS
  • 13. NAS – Network Attached Storage
    Backups via NDMP, potentially reducing backup times for filesystems with large numbers of files
    Read/writeable checkpoints
    Application integration
  • 14. What's Next?
    DAS
    NAS
    SAN
    Drive Types & RAID
    Cloud / Hybrid Storage
  • 15. Changes to DAS
    PCIe Flash – Fusion-io, VFCache, etc.
      – Local storage, integrated with SAN
      – Very low response time
    VMware Distributed Storage
      – Aggregates local storage from vSphere hosts in a cluster and presents that storage to all hosts in the cluster as a datastore
      – Quality of local storage could become more important in the overall design
  • 16. NAS
    Hypervisor running on the NAS appliance
      – VMware vSphere running on Isilon
      – Very high-bandwidth access to storage
    SMB v3
      – Not supported on every NAS appliance yet
      – Usable by Hyper-V 2012 to store VMs
      – Usable by MS SQL to store database files
    Windows VM as NAS? VMware VADP Changed Block Tracking (CBT) = fast backups (sketched below)
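To make the Changed Block Tracking point concrete, here is a hand-rolled toy model (not the actual VADP API): the hypervisor marks blocks dirty as the VM writes them, so a backup only copies the dirty blocks instead of scanning the whole disk.

```python
# Toy Changed Block Tracking: a dirty-block set updated on every write
# lets an incremental backup copy only what changed since the last run.
disk: list[bytes] = [b"\x00" * 4] * 8   # 8 tiny blocks of "VM disk"
dirty: set[int] = set()                 # the changed-block map

def vm_write(block: int, data: bytes) -> None:
    disk[block] = data
    dirty.add(block)                    # remember the change for the next backup

def incremental_backup(backup: dict[int, bytes]) -> None:
    """Copy only blocks changed since the last backup, then reset the map."""
    for block in sorted(dirty):
        backup[block] = disk[block]
    dirty.clear()

backup_store: dict[int, bytes] = {}
vm_write(2, b"beef")
vm_write(5, b"cafe")
incremental_backup(backup_store)
print(backup_store)   # {2: b'beef', 5: b'cafe'} – only 2 of 8 blocks copied
```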
  • 17. SAN
    InfiniBand becoming more common
      – New (and existing) array technologies using InfiniBand for internal communication: XtremIO, XIV, etc.
      – New array technologies using InfiniBand for a "Cache Area Network" – read/write cache shared between clustered hosts (Oracle RAC and SAP use cases)
  • 18. SAN
    16 or 32 Gb FC and 40 Gb or 100 Gb Ethernet (iSCSI)
      – FC and Ethernet will continue to leapfrog. Emulex already has an FCoE card that will do 40 GbE + 16 Gb FC
    Multi-hop FCoE
    New startup companies shaking things up
      – All-flash arrays and hybrid arrays
      – Next year should see acquisitions
  • 19. Drive Architecture Changes
    Enterprise-grade MLC flash
      – Less expensive per GB
      – SLC will probably stick around for write performance
    Smaller drives going away
      – Like the 72 GB drives of yesteryear, today's 300 GB and 1 TB drives will be phased out. 600 GB+ and 2 TB+ will become the standard for spinning drives
  • 20. Drive Architecture Changes
    RAID may no longer be the standard
      – RAID was designed for spinning drives. Workloads that specify a RAID type are usually considering head location and locality of reference. RAID is still needed for spinning drives.
      – Flash-based arrays are doing inline dedupe, pointer-based block maps, and redirect-on-first-access instead of copy-on-write (sketched below).
      – Caching algorithms traditionally sequentialize incoming I/O requests to work better with spinning drives. That is no longer necessary.
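A minimal illustration (with invented names) of the inline dedupe and pointer-based block map described above: each unique block is stored once, keyed by a content hash, and the logical block map holds only pointers to those hashes.

```python
# Toy inline dedupe: identical blocks are stored once; the block map is
# just pointers (content hashes), so duplicate writes cost no extra space.
import hashlib

block_store: dict[str, bytes] = {}   # fingerprint -> unique block data
block_map: dict[int, str] = {}       # logical block -> fingerprint (pointer)

def dedup_write(lba: int, data: bytes) -> None:
    fp = hashlib.sha256(data).hexdigest()
    block_store.setdefault(fp, data)  # store the payload only once
    block_map[lba] = fp               # the map entry is just a pointer

def read(lba: int) -> bytes:
    return block_store[block_map[lba]]

for lba in range(100):
    dedup_write(lba, b"same 4 KB of zeros")   # 100 logical writes...

print(len(block_store))   # 1 – ...but only one physical copy is kept
```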
  • 21. Cloud-Based Storage
    Lots of clouds: PaaS, IaaS, SaaS, DBaaS, BaaS, DRaaS
      – Most solutions don't require you to know the nuts and bolts of the underlying storage...
    ...BUT we could soon see solutions involving all-flash arrays on premises, connected to slower cloud-based storage (sketched below).
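A hedged sketch of that last point, assuming a toy model in which on-premises flash fronts a slower cloud tier: writes land on flash and destage to the cloud, and reads fall back to the cloud on a cache miss. The dicts stand in for a real flash device and an object-store API.

```python
# Toy hybrid tier: fast local flash cache in front of slow cloud storage.
flash_cache: dict[str, bytes] = {}   # small, fast, on premises
cloud: dict[str, bytes] = {}         # large, slow, remote

def put(key: str, data: bytes) -> None:
    flash_cache[key] = data          # land writes on flash first
    cloud[key] = data                # then destage to the cloud tier

def get(key: str) -> bytes:
    if key in flash_cache:
        return flash_cache[key]      # fast path: local flash hit
    data = cloud[key]                # slow path: fetch from the cloud
    flash_cache[key] = data          # warm the cache for next time
    return data

put("vm-disk-001", b"hot data")
flash_cache.clear()                  # simulate cache eviction
print(get("vm-disk-001"))            # b'hot data' – refetched from the cloud
```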
  • 22. Questions?
    My blog: http://tpittman.wordpress.com
    Twitter: @pittmantony
    My email: tpittman@varrow.com
    Varrow bloggers: http://www.varrowblogs.com
