Datacenter Storage Today and Tomorrow
October 16, 2012
Varrow
Tony Pittman, Technical Consultant
t: @pittmantony
w: tpittman.wordpress.com
Housekeeping
– Continue the conversation: @pittmantony and tpittman.wordpress.com
– Limited time today; reach out to me for deeper discussion of any of these points
Agenda
– Datacenter Storage Concepts
– Typical Datacenters Today
– New Technologies and Concepts on the Horizon
Types & Use Cases
DAS – Direct Attached Storage
– Boot / OS volumes
– Non-critical, low-performance data
SAN – Storage Area Networks
– Critical and/or high-performance data
– Shared storage for clusters (RAC, MS Failover Clustering, VMware)
– Boot from SAN – enables replicated OS volumes and statelessness
– Array-based replication
NAS – Network Attached Storage
– Unstructured data (files and folders)
– VMware and Hyper-V 2012 datastores can use NAS
– Database backup destination
– Array-based replication
DAS – Direct Attached Storage
– Simplest type of datacenter storage
– Includes spinning hard drives and flash
– Connected by SAS, SATA, USB, PCIe (also IDE, SCSI)
– Limited in number of devices, performance, and availability
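On a Linux host, the DAS devices described above are what the kernel exposes as local block devices. A minimal sketch of enumerating them (Linux-specific; the `/sys/block` path and the prefixes filtered out are assumptions that won't apply on other platforms):

```python
import os

def local_block_devices(sys_block="/sys/block"):
    """List locally attached block devices on Linux, skipping
    virtual devices such as loopbacks, RAM disks, and device-mapper."""
    if not os.path.isdir(sys_block):
        return []  # not Linux, or /sys not mounted
    return sorted(
        dev for dev in os.listdir(sys_block)
        if not dev.startswith(("loop", "ram", "dm-"))
    )

print(local_block_devices())  # e.g. ['sda', 'sdb'] on a typical server
```

Note that from the host's point of view, SAN LUNs also show up in this list once presented; the difference is where the device physically lives, not how the OS addresses it.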
SAN – Storage Area Network
Composed of:
– Storage arrays
– Host bus adapters – I/O cards in hosts allowing SAN connectivity
– SAN switches – connect all the pieces together; purpose-built for storage connectivity
SAN – Storage Area Network
Storage arrays:
– Purpose-built
– Manage large amounts of storage
– Presented to multiple hosts
– Performance improvements built in
 • Tiering across multiple drive types to maximize performance and capacity for a given budget
 • Read/write DRAM cache and caching algorithms
– Full redundancy – data, connectivity, management
SAN – Storage Area Network
SAN switches:
– FC, iSCSI, or FCoE – the great debate
– Must be compatible with the storage array, i.e. some arrays won't do some protocols
– FC (Fibre Channel) – purpose-built for storage; mostly deployed at 8 Gb/s, with some 16 Gb/s models available
– iSCSI – rides on TCP/IP, *not lossless*; depends on retransmits for packets dropped during heavy load. Network design is crucial; recommend isolating it from other network traffic. 10 Gb Ethernet is getting pretty common. (Is it the future?)
– FCoE – rides directly on Ethernet, not TCP/IP; lossless, uses Data Center Bridging
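The headline speeds above aren't directly comparable, because the link technologies spend different fractions of the wire on encoding and headers. A rough back-of-the-envelope comparison (8 Gb FC runs 8b/10b encoding at 8.5 GBd; 10 GbE runs 64b/66b at 10.3125 GBd; the 5% allowance for TCP/IP + iSCSI headers is an assumed round number, and real overhead varies with MTU):

```python
def payload_gbps(line_rate_gbd, encoding_eff, protocol_overhead=0.0):
    """Rough usable bandwidth: line rate x line-encoding efficiency,
    minus a fractional allowance for protocol headers."""
    return line_rate_gbd * encoding_eff * (1 - protocol_overhead)

# 8 Gb Fibre Channel: 8.5 GBd line rate, 8b/10b encoding
fc8 = payload_gbps(8.5, 8 / 10)
# 10 GbE iSCSI: 10.3125 GBd, 64b/66b, ~5% assumed TCP/IP + iSCSI header cost
iscsi10 = payload_gbps(10.3125, 64 / 66, protocol_overhead=0.05)

print(f"8GFC      ~{fc8:.1f} Gb/s payload")    # ~6.8 Gb/s
print(f"10G iSCSI ~{iscsi10:.1f} Gb/s payload")  # ~9.5 Gb/s
```

The takeaway: "8 Gb" FC delivers well under 8 Gb/s of payload, while 10 GbE's 64b/66b encoding is far more efficient, so raw marketing numbers understate how close the two really are.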
SAN – Storage Area Network
SAN switches – an analogy:
– FC is like a railway: purpose-built, connecting predetermined, specific endpoints
– iSCSI is like a highway: more flexible, but traffic can be a problem
SAN – Storage Area Network
Array-based replication:
– Moves replication CPU overhead off the host
– Can improve RPO by maintaining a journal of writes, allowing rollback to a specific point in time
– Simplifies management vs. separate replication for each database, filesystem, or drive
– Can be used to populate a test environment or backup server, duplicating the real production environment
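The write-journal idea behind the RPO improvement can be sketched in a few lines: record the old value before each write, and rollback is just replaying the journal backwards to the chosen sequence number. This is a toy model of the concept, not any array's actual implementation:

```python
class JournaledReplica:
    """Toy replica volume with a write journal: every write records the
    previous value, so the volume can be rolled back to any earlier
    point in time -- the idea behind journal-based replication."""
    def __init__(self):
        self.volume = {}    # block address -> data
        self.journal = []   # (sequence number, address, previous value)

    def write(self, addr, data):
        self.journal.append((len(self.journal), addr, self.volume.get(addr)))
        self.volume[addr] = data

    def rollback_to(self, seq):
        """Undo writes until only the first `seq` writes remain applied."""
        while len(self.journal) > seq:
            _, addr, old = self.journal.pop()
            if old is None:
                self.volume.pop(addr, None)  # block didn't exist before
            else:
                self.volume[addr] = old

r = JournaledReplica()
r.write(0, "A")        # seq 0
r.write(1, "B")        # seq 1
r.write(0, "corrupt")  # seq 2 -- e.g. a bad write we want to undo
r.rollback_to(2)       # roll back to just before the corruption
print(r.volume)        # {0: 'A', 1: 'B'}
```

The journal is why this beats plain mirroring for RPO: a mirror faithfully replicates the corruption too, while the journal lets you land on any point in time before it.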
SAN – Storage Area Network
Array-based replication (continued):
– Application integration
– Usually required for geographically dispersed clustering
NAS – Network Attached Storage
– NAS appliance – usually a purpose-built device running a flavor of Linux and serving file shares and NFS exports from internal drives
– Usually connects to the existing server LAN
– Operates via CIFS (SMB v2 and v3) and NFS
NAS – Network Attached Storage
– Backups via NDMP, potentially reducing backup times for filesystems with large numbers of files
– Read/writeable checkpoints
– Application integration
Changes to DAS
PCIe flash – Fusion-io, VFCache, etc.
– Local storage, integrated with SAN
– Very low response time
VMware Distributed Storage
– Aggregates local storage from vSphere hosts in a cluster and presents that storage to all hosts in the cluster as a datastore
– Quality of local storage could become more important in the overall design
NAS
Hypervisor running on the NAS appliance
– VMware vSphere running on Isilon
– Very high-bandwidth access to storage
SMB v3
– Not supported on every NAS appliance yet
– Usable by Hyper-V 2012 to store VMs
– Usable by MS SQL to store database files
Windows VM as NAS?
– VMware VADP Changed Block Tracking (CBT) = fast backups
SAN
InfiniBand becoming more common
– New (and existing) array technologies using InfiniBand for internal communication: XtremIO, XIV, etc.
– New array technologies using InfiniBand for a "cache area network" – read/write cache shared between clustered hosts (Oracle RAC and SAP use cases)
SAN
16 or 32 Gb FC and 40 or 100 Gb Ethernet (iSCSI)
– FC and Ethernet will continue to leapfrog; Emulex already has an FCoE card that will do 40 GbE + 16 Gb FC
Multi-hop FCoE
New startup companies shaking things up
– All-flash arrays and hybrid arrays
– Next year should see acquisitions
Drive Architecture Changes
Enterprise-grade MLC flash
– Less expensive per GB
– SLC will probably stick around for write performance
Smaller drives going away
– Like the 72 GB drives of yesteryear, today's 300 GB and 1 TB drives will be phased out; 600 GB+ and 2 TB+ will become the standard for spinning drives
Drive Architecture Changes
RAID may no longer be the standard
– RAID was designed for spinning drives. Workloads that specify a RAID type are usually considering head location and locality of reference. RAID is still needed for spinning drives.
– Flash-based arrays are doing inline dedupe, pointer-based blockmaps, and redirect-on-first-access instead of copy-on-write.
– Caching algorithms traditionally sequentialize incoming I/O requests to work better with spinning drives; with flash this is no longer necessary.
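The pointer-based blockmap idea above can be sketched briefly: a snapshot copies only the map of pointers, and a new write redirects to a fresh block rather than copying the old data first (as copy-on-write would). A toy illustration of the concept, not any vendor's on-disk format:

```python
class PointerSnapshotVolume:
    """Toy pointer-based blockmap with redirect-style writes: snapshots
    copy only pointers, and writes go to fresh physical blocks instead
    of copying old data first (contrast with copy-on-write)."""
    def __init__(self):
        self.blocks = {}   # physical block id -> data
        self.map = {}      # logical address -> physical block id
        self.next_id = 0

    def write(self, addr, data):
        self.blocks[self.next_id] = data  # redirect: always a fresh block
        self.map[addr] = self.next_id
        self.next_id += 1

    def snapshot(self):
        return dict(self.map)             # cheap: copies pointers, not data

    def read(self, addr, snap=None):
        blockmap = snap if snap is not None else self.map
        return self.blocks[blockmap[addr]]

v = PointerSnapshotVolume()
v.write(0, "v1")
snap = v.snapshot()                 # instant: just the pointer map
v.write(0, "v2")                    # does not disturb the snapshot
print(v.read(0), v.read(0, snap))   # v2 v1
```

Because old blocks are never overwritten in place, snapshots are nearly free and there is no copy-on-write penalty on the first write after a snapshot, which is part of why these designs pair so well with flash.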
Cloud-Based Storage
Lots of clouds: PaaS, IaaS, SaaS, DBaaS, BaaS, DRaaS
– Most solutions don't require you to know the nuts and bolts of the underlying storage…
– …but we could soon see solutions involving all-flash arrays on premises, connected to slower cloud-based storage
Questions?
My blog: http://tpittman.wordpress.com
Twitter: @pittmantony
My email: email@example.com
Varrow bloggers: http://www.varrowblogs.com