Elements of SAN Capacity Planning
Elements of SAN Capacity Planning Presentation Transcript

  • 1. Elements of SAN capacity planning Mark Friedman VP, Storage Technology [email_address] (941) 261-8945
  • 2. DataCore Software Corporation
    • Founded 1998 – storage networking software
    • 170+ employees, privately held – over $45M raised
      • Top venture firms – NEA, OneLiberty
      • Funds – VanWagoner, Bank of America, etc.
      • Intel Business and Technical collaboration agreement
    • Exec. Team
      • Proven Storage expertise
      • Proven Software company experience
      • Operating systems, high-availability, Caching, networking
      • Enterprise level support and training
    • Worldwide: Ft. Lauderdale HQ, Silicon Valley, Canada, France, Germany, U.K., Japan
  • 3. Overview
    • How do we take what we know about storage processor performance and apply it to emerging SAN technology?
    • What is a SAN?
    • Planning for SANs:
      • SAN performance characteristics
      • Backup and replication performance
  • 4. Evolution of Disk Storage Subsystems
    [Diagram: evolution from spindles, to disk strings & farms, to write-thru cached subsystems, to cached storage processors]
    See: Dr. Alexandre Brandwajn, “A study of cached RAID 5 I/O,” CMG Proceedings, 1994.
  • 5. What Is A SAN?
    • Storage Area Networks are designed to exploit Fibre Channel plumbing
    • Approaches to simplified networked storage:
      • SAN appliances
      • SAN Metadata Controllers (“out of band”)
      • SAN storage managers (“in band”)
  • 6. The Difference Between NAS and SAN
    • Storage Area Networks (SANs), designed to exploit Fibre Channel plumbing, require a new infrastructure.
    • Network Attached Storage (NAS) devices plug into the existing networking infrastructure.
      • Networked file access protocols (NFS, SMB, CIFS)
      • TCP/IP stack
    [Diagram: TCP/IP protocol stack – Application (HTTP, RPC), Host-to-Host (TCP, UDP), Internet Protocol (IP), Media Access (Ethernet, FDDI) – with a packet passing down the layers]
  • 7. The Difference Between NAS and SAN
    • NAS devices plug into existing TCP/IP networking support.
    • Performance considerations:
      • 1500 byte Ethernet MTU
      • TCP requires acknowledgement of each packet, limiting performance.
    [Diagram: Windows networking stack – user-mode application interfaces (RPC, DCOM, Winsock, NetBIOS, Named Pipes) over the kernel-mode redirector and server, TDI, TCP/UDP, IP (ARP, ICMP, IGMP, filtering, forwarding, packet scheduler), and the NDIS wrapper, NDIS miniport, and NIC device driver]
  • 8. The Difference Between NAS and SAN
    • Performance considerations:
      • With the 1.5 KB Ethernet MTU, the host must field roughly 80,000 interrupts/sec to sustain 1 Gb/sec (a quick check of this arithmetic follows below)
      • Jumbo frames cut the interrupt rate, but also require installing a new infrastructure
      • This is why Fibre Channel was designed the way it is!
    Source: Alteon Computers, 1999.
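
A quick back-of-the-envelope check of the interrupt figure above, as a sketch; the one-interrupt-per-frame assumption (i.e., no interrupt coalescing) is ours:

```python
# Back-of-the-envelope check of the interrupt-rate claim above.
# Assumes one host interrupt per received frame (no interrupt coalescing).

def frames_per_sec(link_gbps: float, frame_bytes: int) -> float:
    """Maximum full-size frames per second on the link."""
    return (link_gbps * 1e9) / (frame_bytes * 8)

# Standard 1500-byte Ethernet MTU at 1 Gb/sec:
print(f"{frames_per_sec(1.0, 1500):,.0f} frames/sec")  # ~83,333 raw;
                                                        # the slide rounds to 80,000
# 9000-byte jumbo frames cut the rate by 6x:
print(f"{frames_per_sec(1.0, 9000):,.0f} frames/sec")  # ~13,889
```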
  • 9. Competing Network File System Protocols
    • Universal data sharing is developing ad hoc on top of de facto industry standards designed for network access.
      • Sun NFS
      • HTTP, FTP
      • Microsoft CIFS (and DFS)
        • also known as SMB
        • CIFS-compatible data is the largest and fastest growing category of data
  • 10. CIFS Data Flow
    • Session-oriented: e.g., callbacks
    [Diagram: CIFS data flow – MS Word on the client issues SMB requests through the redirector and client system cache, across the network interfaces to the file server and its system cache]
  • 11. What About Performance?
    [Diagram: NFS data flow – a user process on the NFS client issues a Remote Procedure Call (RPC) through the TCP/IP driver and network stack to the NFSD daemon on the NFS server, which returns the response data]
  • 12. What About Performance?
    • Network-attached storage yields a fraction of the performance of direct-attached drives when the block size does not match the frame size.
    • See ftp://ftp.research.microsoft.com/pub/tr/tr-2000-55.pdf
    [Diagram: the CIFS client/server data flow layered over the TCP/IP protocol stack]
  • 13. What About Modeling?
    • Add a network delay component to interconnect two Central Server models and iterate (a minimal sketch follows).
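
One way to realize that suggestion, as a minimal sketch: an exact Mean Value Analysis (MVA) loop over a closed network with client and server queueing stations joined by a pure network delay. All parameter values (service demands, delay, population, think time) are illustrative assumptions, not measurements.

```python
# Minimal sketch: two central-server stations joined by a network delay,
# solved with exact Mean Value Analysis (MVA) for a closed workload.
# All parameter values below are illustrative assumptions.

def mva(n_users, think, demands, delay):
    """Iterate population 1..n_users; return response time and throughput."""
    q = [0.0] * len(demands)          # mean queue length at each station
    for n in range(1, n_users + 1):
        # Response time at each queueing station: service demand inflated
        # by the customers already queued there (arrival theorem).
        r = [d * (1.0 + qi) for d, qi in zip(demands, q)]
        r_total = sum(r) + delay      # the delay adds latency but no queueing
        x = n / (r_total + think)     # system throughput (Little's law)
        q = [x * ri for ri in r]
    return r_total, x

# Client demand 2 ms, server demand 5 ms, 2 ms round-trip network delay,
# 20 users with 0.5 sec think time (all assumed):
resp, tput = mva(20, 0.5, [0.002, 0.005], 0.002)
print(f"response time {resp * 1000:.2f} ms, throughput {tput:.1f} req/sec")
```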
  • 14. The Holy Grail!
    • Storage Area Networks
    • Uses low latency, high performance Fibre Channel switching technology (plumbing)
    • 100 MB/sec full-duplex serial protocol over copper or optical fiber
    • Extended distance using fiber
    • Three topologies:
      • Point-to-Point
      • Arbitrated Loop : 127 addresses, but can be bridged
      • Fabric : 16 million addresses (a 24-bit address space)
  • 15. The Holy Grail!
    • Storage Area Networks
    • FC delivers SCSI commands, but Fibre Channel exploitation requires new infrastructure and driver support
    • Objectives:
      • Extended addressing of shared storage pools
      • Dynamic, hot-pluggable interfaces
      • Redundancy, replication & failover
      • Security administration
      • Storage resource virtualization
  • 16. Distributed Storage & Centralized Administration
    • Traditional tethered vs untethered SAN storage
    • Untethered storage can (hopefully) be pooled for centralized administration
    • Disk space pooling (virtualization)
      • Currently, using LUN virtualization
      • In the future, implementing dynamic virtual:real address mapping (e.g., the IBM Storage Tank)
    • Centralized back-up
      • SAN LAN-free backup
  • 17. Storage Area Networks
    • FC is packet-oriented (designed for routing).
    • FC pushes many networking functions down into the hardware layer.
      • e.g., packet fragmentation
      • Routing
    [Diagram: Fibre Channel layer stack – FC-4: Upper Level Protocols (SCSI, IPI-3, HIPPI, IP); FC-3: Common Services; FC-2: Framing Protocol/Flow Control; FC-1: 8B/10B Encode/Decode; FC-0: 100 MB/sec Physical Layer]
  • 18. Storage Area Networks
    • FC is designed to work with optical fiber and lasers consistent with Gigabit Ethernet hardware
      • 100 MB/sec interfaces
      • 200 MB/sec interfaces
    • This creates a new class of hardware that you must budget for: FC hubs and switches.
  • 19. Storage Area Networks
    • Performance characteristics of FC switches:
      • Extremely low latency (≈1 µsec), except when cascaded switches require frame routing
      • Deliver dedicated 100 MB/sec point-to-point virtual circuit bandwidth
      • Measured 80 MB/sec effective data transfer rates per 100 MB/sec port
  • 20. Storage Area Networks
    • When will IP and SCSI co-exist on the same network fabric?
      • iSCSI
      • Nishan
      • Others?
    [Diagram: the same Fibre Channel layer stack, with IP and SCSI side by side among the FC-4 upper level protocols]
  • 21. Storage Area Networks
    • FC zoning is used to control access to resources (security)
    • Two approaches to SAN management:
      • Management functions must migrate to the switch, storage processor, or….
      • OS must be extended to support FC topologies.
  • 22. Approaches to building SANs
    • Fibre Channel-based Storage Area Networks (SANs)
      • SAN appliances
      • SAN Metadata Controllers
      • SAN Storage Managers
    • Architecture (and performance) considerations
  • 23. Approaches to building SANs
    • Where does the logical device:physical device mapping run?
      • Out-of-band : on the client
      • In-band : inside the SAN appliance, transparent to the client
    • Many industry analysts have focused on this relatively unimportant distinction.
  • 24. SAN appliances
    • Conventional storage processors with Fibre Channel interfaces
    • Fibre Channel support:
      • FC Fabric
      • Zoning
      • LUN virtualization
  • 25. SAN Appliance Performance
    • Same as before, except faster Fibre Channel interfaces
      • Commodity processors, internal buses, disks, front-end and back-end interfaces
      • Proprietary storage processor architecture considerations
    [Diagram: storage processor internals – host interfaces and FC interfaces attached to an internal bus shared by multiple processors, cache memory, and FC disks]
  • 26. SAN appliances
    • SAN and NAS convergence?
      • Adding Fibre Channel interfaces and Fibre Channel support to a NAS box
      • SAN-NAS hybrids when SAN appliances are connected via TCP/IP.
    • Current Issues:
      • Managing multiple boxes
      • Proprietary management platforms
  • 27. SAN Metadata Controller
    • SAN clients acquire an access token from the Metadata Controller (out-of-band)
    • SAN clients then access disks directly using proprietary distributed file system
    [Diagram: (1) a SAN client requests access from the SAN Metadata Controller, (2) the MDC returns a token, (3) the client accesses the pooled storage resources directly over Fibre Channel]
  • 28. SAN Metadata Controller
    • Performance considerations:
      • MDC latency (low access rate assumed)
      • Additional latency to map client file system request to the distributed file system
    • Other administrative considerations:
      • Requirement for client-side software is a burden!
  • 29. SAN Storage Manager
    • Requires that all access to pooled disks pass through the SAN Storage Manager (in-band)!
    [Diagram: SAN clients reach the pooled storage resources through Storage Domain Servers on the Fibre Channel fabric]
  • 30. SAN Storage Manager
    • SAN Storage Manager adds latency to every I/O request
    • How much latency is involved?
    • Can this latency be reduced using traditional disk caching strategies?
    [Diagram: SAN clients, Storage Domain Servers, and pooled storage resources on the Fibre Channel fabric]
  • 31. Architecture of a Storage Domain Server
    • Runs on an ordinary Win2K Intel server
    • The SDS intercepts SAN I/O requests, impersonating a SCSI disk
    • Leverages:
      • Native Device drivers
      • Disk management
      • Security
      • Native CIFS support
    [Diagram: SANsymphony Storage Domain Server stack – client I/O enters through FC adaptor polling threads and initiator/target emulation, passes the data cache and optional fault tolerance, then flows through the native W2K I/O Manager, disk driver, diskperf (measurement), SCSI miniport driver, and Fibre Channel HBA driver, with native security throughout]
  • 32. Sizing the SAN Storage Manager server
    • In-band latency is a function of Intel server front-end bandwidth:
      • Processor speed
      • Number of processors
      • PCI bus bandwidth
      • Number of HBAs
    • and of the performance of the back-end disk configuration (a rough sizing sketch follows)
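
A rough way to turn that list into a number, as a sketch: treat the server's front-end throughput as the minimum of its independent resource ceilings. The example figures are assumptions drawn loosely from the benchmark configuration discussed later, not vendor specifications.

```python
# Rough sizing sketch: the in-band server's front-end throughput is bounded
# by whichever resource saturates first. All example figures are assumptions.

def front_end_mb_per_sec(n_hbas, hba_eff_mb, n_pci, pci_mb, cpu_iops, io_kb):
    hba_limit = n_hbas * hba_eff_mb        # effective MB/sec per HBA
    pci_limit = n_pci * pci_mb             # aggregate PCI bandwidth
    cpu_limit = cpu_iops * io_kb / 1024.0  # CPU-bound IOPS expressed as MB/sec
    return min(hba_limit, pci_limit, cpu_limit)

# 4 HBAs at 80 MB/sec effective, 3 PCI buses at 132 MB/sec,
# a 40,000 IOPS CPU ceiling, 16 KB per I/O:
print(front_end_mb_per_sec(4, 80, 3, 132, 40_000, 16), "MB/sec")  # HBA-bound: 320
```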
  • 33. SAN Storage Manager
    • Can SAN Storage Manager in-band latency be reduced using traditional disk caching strategies?
      • Read hits
      • Read misses
        • Disk I/O + (2 * data transfer)
      • Fast Writes to cache (with mirrored caches)
        • 2 * data transfer
        • Write performance is ultimately determined by the disk configuration (worked timings below)
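
A worked version of the list above, as a sketch. The 140 µsec hit time and 100 MB/sec link speed come from the surrounding slides; the disk service time and hit ratios are illustrative assumptions.

```python
# Expected I/O service time for the caching cases above.
# Hit time and link speed come from nearby slides; the disk time
# and hit ratios are illustrative assumptions.

XFER_16K = 16 * 1024 / 100e6   # 16 KB over a 100 MB/sec link ~164 µsec
HIT_TIME = 140e-6              # measured read hit (slide 34)
DISK_IO  = 8e-3                # average disk service time, assumed

def read_time(hit_ratio):
    miss = DISK_IO + 2 * XFER_16K          # disk I/O + (2 * data transfer)
    return hit_ratio * HIT_TIME + (1.0 - hit_ratio) * miss

fast_write = 2 * XFER_16K                  # mirrored write to a second cache

for h in (0.50, 0.80, 0.95):
    print(f"hit ratio {h:.0%}: expected read {read_time(h) * 1e6:,.0f} µsec")
print(f"fast write: {fast_write * 1e6:,.0f} µsec")
```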
  • 34. SAN Storage Manager
    • Read hits (16 KB block):
    • Timings from an FC hardware monitor
    • 1 Gbit/sec interfaces
    • No bus arbitration delays!
    [Trace: SCSI Read command (Length = 4000), 16 × 1024-byte data frames, status frame – intervals of 140 µsec and 27 µsec marked]
  • 35. Read vs. Write Hits (16 KB block)
    [Chart: Fibre Channel latency for 16 KB blocks – SCSI command, write setup, data frames, SCSI status]
  • 36. Decomposing SAN In-band Latency
    • How is time being spent inside the server?
    • PCI bus?
    • Host bus adaptor?
    • Device polling?
    • Software stack?
    [Chart: latency components – SCSI command, write setup, data frames, SCSI status]
  • 37. Benchmark Configuration
    • 4-way 550 MHz PC
      • Maximum of three FC interface polling threads
    • 3 PCI buses (528 MB/sec total)
    • 1, 4, or 8 QLogic 2200 HBAs
    [Diagram: memory bus connecting 4 × 550 MHz XEON processors to one 64bit/33MHz PCI bus and two 32bit/33MHz PCI buses]
  • 38. Decomposing SAN In-band Latency
    • How is time being spent inside the SDS?
    • PCI bus?
    • Host bus adaptor?
    • Device polling:
      • 1 CPU is capable of 375,000 unproductive polls/sec
      • ≈2.66 µsec per poll
    • Software stack:
      • 3 CPUs are capable of fielding 40,000 read I/Os per second from cache
      • ≈73 µsec per 512-byte I/O (this arithmetic is checked in the sketch below)
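
The per-operation costs fall straight out of the aggregate rates; a quick check:

```python
# Deriving the per-operation costs above from the aggregate rates.

polls_per_sec = 375_000              # unproductive polls issued by 1 CPU
io_per_sec    = 40_000               # cached 512-byte reads fielded by 3 CPUs

poll_cost = 1.0 / polls_per_sec      # CPU-seconds per poll
io_cost   = 3.0 / io_per_sec         # 3 CPU-seconds spread over the I/Os

print(f"per poll: {poll_cost * 1e6:.2f} µsec")  # ≈2.67 µsec
print(f"per I/O:  {io_cost * 1e6:.0f} µsec")    # ≈75 µsec raw, close to the
                                                # slide's reported 73 µsec
```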
  • 39. Decomposing SAN In-band Latency
    [Chart: SANsymphony in-band latency for 16 KB blocks, broken into SDS, FC interface, and data transfer components]
  • 40. Impact Of New Technologies
    • Front-end bandwidth:
      • Different speed Processors
      • Different number of processors
      • Faster PCI Bus
      • Faster HBAs
    • e.g., next-generation server:
      • 2 GHz processors (4× benchmark system)
      • 200 MB/sec FC interfaces (2× benchmark system)
      • 4 × 800 MB/sec PCI buses (6× benchmark system)
    • ...
  • 41. Impact of New Technologies
    [Chart: projected throughput – today, with 2 GHz CPUs & new HBAs, and with 2 GHz CPUs, new HBAs, and 2 Gbit switching]
  • 42. Sizing the SAN Storage Manager
    • Scalability
      • Processor speed
      • Number of processors
      • PCI bus bandwidth (derived in the sketch below):
        • 32bit/33MHz: 132 MB/sec
        • 64bit/33MHz: 267 MB/sec
        • 64bit/66MHz: 528 MB/sec
        • 64bit/100MHz: 800 MB/sec (PCI-X)
      • InfiniBand technology???
      • Number of HBAs
        • 200 MB/sec FC interfaces feature faster internal processors
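
The PCI figures above follow directly from bus width and clock rate; a sketch (the real clocks are 33.33/66.66 MHz, so published figures round differently, which is why 64bit/33MHz appears as 267 MB/sec):

```python
# Theoretical PCI bandwidth: (bus width in bytes) x (clock rate).
# Real clocks are 33.33/66.66 MHz, so published figures round differently.

def pci_mb_per_sec(width_bits, clock_mhz):
    return width_bits / 8.0 * clock_mhz

for width, clock, label in [(32, 33.33, "PCI"), (64, 33.33, "PCI"),
                            (64, 66.66, "PCI"), (64, 100.0, "PCI-X")]:
    print(f"{width}bit/{clock:.0f}MHz {label}: "
          f"{pci_mb_per_sec(width, clock):.0f} MB/sec")
```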
  • 43. Sizing the SAN Storage Manager
    • Entry level system:
      • Dual Processor, single PCI bus, 1 GB RAM
    • Mid-level departmental system:
      • Dual Processor, dual PCI bus, 2 GB RAM
    • Enterprise-class system:
      • Quad Processor, triple PCI bus, 4 GB RAM
  • 44. SAN Storage Manager PC Scalability
    [Chart: throughput scaling across PC configurations]
  • 45. SAN Storage Manager PC Scalability
    [Chart: entry-level, departmental SAN, and enterprise-class configurations]
  • 46. SANsymphony Performance
    • Conclusions
      • FC switches provide virtually unlimited bandwidth with exceptionally low latency so long as you do not cascade switches
      • General purpose Intel PCs are a great source of inexpensive MIPS.
      • In-band SAN management is not a CPU-bound process.
      • PCI bandwidth is the most significant bottleneck in the Intel architecture.
      • FC interface card speeds and feeds are also very significant
  • 47. SAN Storage Manager – Next Steps
    • Cacheability of Unix and NT workloads
      • Domino, MS Exchange
      • Oracle, SQL Server, Apache, IIS
    • Given mirrored writes, what is the effect of different physical disk configurations?
      • JBOD
      • RAID 0 disk striping
      • RAID 5 write penalty (the arithmetic is sketched below)
    • Asynchronous disk mirroring over long distances
    • Backup and Replication (snapshot)
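
As a preview of the disk-configuration question, here is the classic back-end I/O arithmetic for these layouts, as a sketch; RAID 1 is included since the slide raises mirrored writes, and the read/write mix is an illustrative assumption.

```python
# Classic back-end I/O counts per logical operation for the layouts above.
# RAID 5's small-write penalty: read old data, read old parity,
# write new data, write new parity = 4 disk I/Os per logical write.

def backend_ios(reads, writes, layout):
    if layout in ("JBOD", "RAID0"):   # one disk I/O per logical I/O
        return reads + writes
    if layout == "RAID1":             # mirrored: each write hits both copies
        return reads + 2 * writes
    if layout == "RAID5":             # small-write penalty
        return reads + 4 * writes
    raise ValueError(layout)

# A 70/30 read/write mix of 1,000 logical I/Os (assumed):
for layout in ("JBOD", "RAID0", "RAID1", "RAID5"):
    print(f"{layout}: {backend_ios(700, 300, layout)} disk I/Os")
```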
  • 48. Questions?
  • 49. www.datacore.com