Storage Networking
 

Presentation Transcript

  • Storage Networking 101 SAN Solutions David J. Mossinghoff Director, Storage Solutions Forsythe Solutions Group June, 2006
  • Overview
    • The focus of this session is architecting networked storage for the enterprise and will include tips on implementing a tiered network storage infrastructure for both local and remote access…
    • In this session you will learn:
    • The current state of networked storage protocols, how they relate to disk technologies and where these SNW technologies will merge and diverge.
    • How to best utilize file or block based storage for Fibre Channel, IP protocols and for your storage network.
    • What are the areas of “storage intelligence” in networked storage today, and what might the future hold?
    • Factors to consider when building a total cost of ownership comparison between the different technologies and protocols.
  • Agenda
    • Information Management: the Big Picture
    • Today’s Storage Challenges
    • Introduction to Storage Networking Architectures
    • The “6 Levels of Storage Networking”
    • Virtualization – with focus on SAN
    • Q & A
  • The Big Picture: the main focus today is storage networking technology
  • Today’s Storage Challenges
    • Selecting the right mix of storage technology
    • Streamlining the process of managing resources
    • Improving availability of data, with adequate protection/security
    • Providing for your company’s current and future storage needs
    • Delivering cost effective solutions that meet the business needs
  • Value of Storage Networking
    • Key enabler for server/storage consolidation, “tiering”, and virtualization
    • More effective and efficient data availability
      • High operational availability
      • Enhanced Backup & Recovery
      • Enhanced Disaster Recovery & Restart
    • Improved scalability, provisioning, utilization of storage capacity
    • Improved application performance
    • Improved data/file sharing
    • Enabler for significant TCO savings for data & storage management
  • Proof points
    • Top reasons for deploying a SAN*
      • Improved back-up & recovery 46%
      • Server and storage consolidation 40%
      • On-going demands for additional capacity 37%
      • Improved application performance 31%
      • Improved disaster recovery 27%
      • New project or application deployment 23%
    * Source: IDC IT Management Survey - 2005
  • Networked Storage Technologies
    • SAN (storage area network) – Transport: Fibre Channel (FCP, FICON) or IP (iSCSI, FCIP, iFCP); Data type: block; Key requirement: high operational availability, deterministic performance; Typical applications: OLTP, data warehousing, ERP
    • NAS (network-attached storage) – Transport: IP (front end), optional FC (back end); Data type: file; Key requirement: multi-protocol file sharing; Typical applications: software and product development, file server consolidation
    • CAS (content-addressed storage) – Transport: IP; Data type: object, fixed content; Key requirement: long-term retention, integrity assurance; Typical applications: content management, compliant storage and retrieval
  • Storage & SNW Technology Alternatives
    • DAS
      • High cost of ownership (utilization, management)
      • Inflexible
      • “Sneaker-net” management
    • SAN
      • Transmission optimized for block I/O data movement
      • Separates LAN and SAN
      • FC SAN is mature
      • iSCSI is “emerging” and viable
    • NAS/CAS
      • Transmission optimized for file or “object-oriented” transactions
      • I/O traffic travels over Ethernet
      • NAS may also use a gateway into an FC SAN
    [Diagram: DAS – application server with its own file system attached via SCSI, FC, or ATA to JBOD or basic disk; NAS/CAS – application servers reaching a RAID array with its own file system through a TCP/IP Ethernet switch; SAN – application servers reaching RAID storage through an FCP switch/director or an iSCSI/IP Ethernet switch]
  • SAN Characteristics
    • Service level “enablers”
      • Operational availability
      • Reliability and serviceability
      • Performance (response time and throughput)
      • Scalability (with performance)
      • Provisioning ability for new ports/connections
  • SAN Characteristics
    • Related key characteristics
      • Viability of manufacturer/market share
      • Quality of partnership with your company
      • Quality of service and support
      • Certified support of required servers/OS levels
      • Efficient and effective SAN manageability
    • Total cost sensitivity (Price = Cost)
  • Key Priorities: An example
  • Economics of Storage Connectivity [Chart: includes director, switch, HBA, and cabling costs over 3 years]
  • 6 Levels of Storage Networking [Chart: Level 1 – Direct Attached Storage (DAS); axes: levels of data availability vs. scalability with performance]
  • Problem: Stranded Storage
    • Poor use of disk capacity
    • Inadequate data protection–may lead to artificial server growth
    • Minimal-to-no disk storage management
    • Difficult to share data between applications
    • Major inhibitor: cost of FC SAN connectivity may be higher than server cost!
    [Diagram: servers with direct-attached storage on a Gigabit/100 Mb Ethernet/IP network]
  • 6 Levels of Storage Networking [Chart: DAS → FC SAN, switch based (local); axes: levels of data availability vs. scalability with performance]
  • FC SAN Switch Design [Diagram: servers with single or dual FC HBAs (node/N-ports) attached to two 1/2/4 Gb FC switches via fabric (F) ports; Fibre Channel carries storage traffic, Gigabit/100 Mb Ethernet/IP carries LAN traffic]
  • FC SAN with FC Switches
    • Simple design
    • Low cost relative to other FC SAN alternatives
    • Scales well from a few to 100 usable ports
    • Simple to manage
    • Universally supported / certified
    • Multiple manufacturer / HBA provider options
    • Larger environments may have multiple “SAN Islands”
  • FC SAN Switch Design, Mesh [Diagram: servers with single or dual FC HBAs attached to meshed 1/2/4 Gb switches; E-ports link the switches to create ISLs (“hops”); scales up to 100 usable ports]
  • 6 Levels of Storage Networking [Chart: DAS → FC SAN, switch based (local) → IP storage networks, local or distance (NAS, iSCSI); axes: levels of data availability vs. scalability with performance. Source: IDC, 2006]
  • IP Storage – Why the “Buzz?”
    • Data growth, sharing, and server proliferation can be expensive and cumbersome to manage
    • Scalability, data availability, and sharing can be a major problem with a DAS environment
    • Over-provisioning and data protection complexity make DAS increasingly expensive
    • Applications may require a block based (DAS or a SAN) solution
    • Other applications may benefit from a file based (NAS) solution
    • Cost, complexity and lack of expertise can prohibit traditional Fibre Channel SAN implementation
    • IP storage networking (iSCSI & NAS) can address these challenges
  • Advantages of IP Networking
    • Common and well proven technology
      • Low acquisition costs
      • Standards-based solutions
      • Commodity economics
      • Ethernet in every corporation
    • Low management costs
      • Familiar network technology and management tools
      • Proven reliable/interoperable transport infrastructure
    • Local area and wide area network connectivity
      • WAN enables remote data replication and disaster recovery
    • Long-term viability
      • Large R&D investment profile, strong roadmap
      • 10Gb Ethernet emerging – significant for IP storage
  • Why iSCSI is Important
    • NAS has proven IP storage networking viability
    • iSCSI software initiators included with major operating systems ease deployment of IP SANs
    • Networking capabilities can simplify IP SAN management
    • Lower cost infrastructure broadens reach of IP SAN solutions
    • Leveraging IP networking investments and knowledge base lowers total cost of ownership
    • iSCSI is a viable IP-SAN solution today !
      • (for the right applications)
  • iSCSI Building Blocks
    • iSCSI is SCSI-3 command frames encapsulated in IP packets (Typically over GbE)
      • IETF standard, documented in RFC 3720
    • HOST/INITIATOR
      • iSCSI Software Initiator (NIC)
      • TCP Off-load Engine (TOE)
      • iSCSI Host Bus Adapter (HBA) (some support remote boot)
    • DISK ARRAY/TARGET
      • Handled by iSCSI compatible storage array
        • iSCSI software target driver
        • Standard NIC connectivity
        • iSCSI to FC-SAN bridges available from multiple manufacturers
  • Solution—iSCSI Integration
    • Connects servers via iSCSI to existing fibre channel SAN
      • Low cost per server connection
    • Leverages existing IP network/skills
    • Improved usage and flexibility of storage assets to applications
    • Improved ability for centralized data protection
    [Diagram: storage consolidated on an FC SAN (fabrics F1/F2); servers connect to the SAN via iSCSI through an iSCSI-to-FC-SAN bridge on the Gigabit/100 Mb Ethernet/IP network; an iSCSI disk array is shown, and iSCSI tape is also possible]
  • Proof Points – Performance
    • Enterprise Strategy Group validation study (4/04)
      • http://www.netapp.com/tech_library/ftp/analyst/ar1023.pdf
  • 6 Levels of Storage Networking [Chart: DAS → FC SAN, switch based (local) → IP storage networks (NAS, iSCSI) → FC SAN, director based, dual fabric (local); axes: levels of data availability vs. scalability with performance]
  • FC Director Design Options [Diagram: 64-, 140-, and 256-port directors in dual fabrics (F1/F2) linked by ISLs/hops with 2 and/or 4 Gb FC link trunking; tape and/or virtual tape subsystems attached; connectivity required to each director and fabric]
  • 6 Levels of Storage Networking [Chart: DAS → FC SAN, switch based → IP storage networks (NAS, iSCSI) → FC SAN, director based, dual fabric → FC SAN “core<>edge” or “collapsed core,” multi-fabric (local); axes: levels of data availability vs. scalability with performance]
  • Core<>Edge Design Options [Diagram: 140- and 256-port core directors (fabrics F1/F2) with 1 Gb and 2/4 Gb edge switches connected by ISLs/hops and 2 and/or 4 Gb FC link trunking; tape/virtual tape subsystems attached to the core; connectivity required to each director and fabric]
  • Large SAN Connectivity Concern
    • Inter-switch links (ISLs) used to link switches and/or directors to build larger SANs
    • Multiple ISLs are typically required for performance
    • ISLs consume ports, so usable server/storage port count goes down and the effective price per port goes up
    • ISL traffic is static; links may be under-utilized
    [Diagram: static routing leaves individual ISLs between servers and storage at 10%, 30%, 60%, and 90% utilization]
  • Solution—Trunking ( with Automatic Load Balancing)
    • Fabric-data traffic more evenly distributed among ISLs
    • All ISLs share bandwidth
    • Overall bandwidth improved
    • Network design and administration is simplified
    [Diagram: with trunking, the same traffic is spread so each ISL in the trunk runs at roughly 50% utilization]
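As a toy model of the load-balancing claim (Python for illustration; the per-server traffic figures mirror the 10/30/60/90% picture, and 1 Gb/s links are assumed):

```python
# Hypothetical per-server traffic (Gb/s) and the ISL each server is
# statically pinned to in a non-trunked fabric.
traffic = [(0.1, 0), (0.3, 1), (0.6, 2), (0.9, 3)]
isl_count = 4

# Static routing: each server's traffic rides only its pinned ISL.
static = [0.0] * isl_count
for gbps, link in traffic:
    static[link] += gbps

# Trunking: the trunk group shares the aggregate load evenly.
total = sum(gbps for gbps, _ in traffic)
trunked = [total / isl_count] * isl_count

print("static :", static)   # uneven: one ISL near saturation, one nearly idle
print("trunked:", trunked)  # each ISL at ~0.475 Gb/s, i.e. ~50% of a 1 Gb link
```

The model ignores frame-level scheduling; real trunking balances at the frame or exchange level, but the aggregate effect is the same.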
  • What is a “Fan-in-Ratio?”
    • When using a “core<>edge” design, it is important to consider the “fan in ratio”
    • Measure of the relative incoming “max bandwidth” to the available ISL bandwidth into the core
    • Example:
      • Servers have 1 Gb FC HBA’s coming into 32-port FC switches (2Gb capable)
      • Switches are connected via ISL’s to core directors (which are also 2 Gb capable -presume ISL Trunking is enabled)
      • If (3) 2Gb ISL’s per switch are used, the “Fan-in-Ratio” is:
        • 32 ports − 3 ISL ports = 29 host ports × 1 Gb each = 29 Gb
        • (3) ISLs × 2 Gb = 6 Gb
        • 29 Gb / 6 Gb ≈ 5:1 “fan-in ratio” (a good rule of thumb)
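The arithmetic above can be packaged as a small helper (Python for illustration; the port counts and link speeds are the slide's example values, not fixed constants):

```python
def fan_in_ratio(switch_ports, isl_count, host_gbps, isl_gbps):
    """Approximate fan-in ratio for a core<>edge FC SAN.

    switch_ports: total ports on the edge switch
    isl_count:    ports consumed by ISLs into the core
    host_gbps:    per-host link speed (Gb/s)
    isl_gbps:     per-ISL link speed (Gb/s)
    """
    host_bw = (switch_ports - isl_count) * host_gbps  # incoming max bandwidth
    isl_bw = isl_count * isl_gbps                     # bandwidth into the core
    return host_bw / isl_bw

# The slide's example: 32-port switch, 3 x 2 Gb ISLs, 1 Gb host HBAs
ratio = fan_in_ratio(32, 3, 1, 2)
print(f"fan-in ratio ~ {ratio:.1f}:1")  # 29 Gb / 6 Gb ~ 4.8:1
```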
  • Director / Switch Scalability [Diagram: up to 64 hosts served by switches alone; 64 to 256 hosts by a director with edge switches; 256 to more than 1,000 hosts by multiple directors with edge switches; trade-offs span least complexity, highest availability, and lowest acquisition cost]
  • Advantages of Core:Edge
    • Scalability - Up to (16) switch domains can be attached to each director fabric
      • Director “E-Ports” are auto-sensing
    • Scalability example of usable ports (non-ISL)
      • (2) 64-port directors+edge switches = 650 u-ports
      • (2) 140-port directors+edge switches = 1,100 u-ports
      • (2) 256-port directors+edge switches = 1,326 u-ports
    • The effective cost per port is reduced vs. an all director solution
      • Due to lower cost per port of FC Switches
      • Scalability within a fabric is increased more economically
    • If designed properly
      • No single points of failure in the SAN (for dual path servers)
      • Performance scales with port count
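The usable-port arithmetic above can be sketched generically (Python for illustration; the slide's 650/1,100/1,326 figures come from specific dual-director configurations not reproduced here, so the numbers below are purely illustrative):

```python
def usable_ports(director_ports, switch_count, switch_ports, isls_per_switch):
    """Usable (non-ISL) ports in one core<>edge fabric.

    Each ISL consumes one port on the edge switch and one on the core
    director. Storage, tape, and inter-director links would reduce the
    count further in a real design.
    """
    isl_total = switch_count * isls_per_switch
    edge_usable = switch_count * (switch_ports - isls_per_switch)
    core_usable = director_ports - isl_total
    return edge_usable + core_usable

# Hypothetical fabric: one 64-port director, eight 16-port edge
# switches, two ISLs per switch.
print(usable_ports(64, 8, 16, 2))  # 160 usable ports
```

Adding edge switches grows usable ports at switch-port prices, which is the cost argument the slide makes against an all-director design.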
  • 6 Levels of Storage Networking [Chart: DAS → FC SAN, switch based → IP storage networks (NAS, iSCSI) → FC SAN, director based, dual fabric → FC SAN “core<>edge” or “collapsed core,” multi-fabric → SAN over distance (replication, remote tape, extended SAN) via Gigaman, OCx, DWDM, iFCP, FCiP bridging/routing; axes: levels of data availability vs. scalability with performance]
  • Extending the SAN over Distance
    • Gateway-to-gateway protocol
      • either FCP, FCIP or iFCP
    • Supports direct Fibre Channel connection to storage
    • Provide either server<> Storage and/or Array<>Array connectivity
    [Diagram: a Fibre Channel director/switch at each site connects through an FCP gateway, which terminates the E_Port; IP routing carries traffic across the MAN/WAN between the gateways]
  • FCP
    • High “metro optical network” bandwidth (DWDM)
    • SAN extension through ISLs creates large set of fabrics
    • Propagation of faults across entire fabric
    • Service disruptions from fabric changes impact all fabrics
    • Custom network configurations supported with SONET or ATM
    [Diagram: existing SAN (F1/F2) connected through an FCP/DWDM gateway over a metro optical network (dark fibre) DWDM link to an FCP/DWDM gateway at the remote/replication site SAN (F1/F2)]
  • FCIP
    • SAN extension through ISLs creates large set of fabrics
    • Propagation of faults across entire fabric
    • Service disruptions from fabric changes impact all fabrics
    • Custom network configurations supported with SONET or ATM
    • Optional data compression and fast write features can result in higher throughput and lower network costs
    [Diagram: existing SAN (F1/F2) connected through an FCiP gateway over a Gigabit Ethernet/IP network to an FCiP gateway at the remote/replication site SAN (F1/F2)]
  • iFCP Solution
    • iFCP protocol provides “fabric isolation” between sites
    • Prevents fault propagation / zone definition isolation
    • Optional data compression and fast write features can result in higher throughput and lower network costs
    [Diagram: existing SAN (F1/F2) connected through an iFCP gateway over a Gigabit Ethernet/IP network to an iFCP gateway at the replication site SAN (F3/F4)]
  • SAN Cabling Considerations (speed and distance matter)
    • 50 micron multi-mode: 500 m at 1 Gb/s, 300 m at 2 Gb/s, 150 m at 4 Gb/s
    • 62.5 micron multi-mode: 300 m at 1 Gb/s, 150 m at 2 Gb/s, 70 m at 4 Gb/s
    • 9 micron single-mode: 10 km at 1 Gb/s, 10 km at 2 Gb/s, 2 km at 4 Gb/s
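The cabling table above can be wrapped in a small lookup helper (Python for illustration; the distances are transcribed from the slide, and the fiber-type key strings are invented for this sketch):

```python
# Maximum operating distance (meters) by fiber type and FC port speed
# (Gb/s), transcribed from the cabling table above.
MAX_DISTANCE_M = {
    ("50 micron multi-mode", 1): 500,
    ("50 micron multi-mode", 2): 300,
    ("50 micron multi-mode", 4): 150,
    ("62.5 micron multi-mode", 1): 300,
    ("62.5 micron multi-mode", 2): 150,
    ("62.5 micron multi-mode", 4): 70,
    ("9 micron single-mode", 1): 10_000,
    ("9 micron single-mode", 2): 10_000,
    ("9 micron single-mode", 4): 2_000,
}

def max_run_m(fiber, gbps):
    """Return the maximum supported cable run in meters, or None if the
    fiber/speed combination is not in the table."""
    return MAX_DISTANCE_M.get((fiber, gbps))

print(max_run_m("62.5 micron multi-mode", 4))  # 70
```

Note the pattern the table encodes: doubling port speed roughly halves the supported multi-mode distance, while single-mode fiber holds 10 km until 4 Gb/s.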
  • 6 Levels of Storage Networking [Chart: DAS → FC SAN, switch based → IP storage networks (NAS, iSCSI) → FC SAN, director based, dual fabric → FC SAN “core<>edge” or “collapsed core,” multi-fabric → SAN over distance (Gigaman, OCx, DWDM, FCP, iFCP, FCiP bridging/routing) → Virtualization; axes: levels of data availability vs. scalability with performance]
  • Storage and SAN Virtualization
    • Environment profile where this applies
      • Multiple, heterogeneous storage arrays
      • Multiple “SAN islands”
      • Volatile environment, rapid growth rates
      • “ Multi-tenancy” (and potentially formal chargeback) is important
      • Need for improved QoS for storage resources
      • Total cost of ownership sensitivity
    • SAN is required for heterogeneous storage virtualization
      • And the “storage-V” intelligence may exist in the SAN
    • The SAN itself may be virtualized
      • VSAN’s (logical), SAN partitions (physical)
  • Virtualization – What Problems are we solving?
    • Enables lower TCO and higher effectiveness and efficiency of the enterprise storage resources
      • Standardize and simplify the “storage operating environments”
      • Common local replication
      • Common remote replication
      • Simplifies and speeds provisioning
      • May provide “concurrent data movement” in the storage hierarchy (key for implementing ILM and tiered storage)
      • Single pane of glass storage management
      • SAN and storage “multi-tenancy”
      • Improved QoS and chargeback
  • Advanced STORAGE Virtualization
    • Where can “Storage Virtualization” occur?
      • Host server
      • Server Appliance
      • SAN Appliance/Blade
      • Storage Array Controller
    • Focus in this presentation is SAN based virtualization
  • SAN-Based Storage Virtualization [Diagram: FC directors with 2 and/or 4 Gb link trunking and an ISL; a SAN virtualization engine (blade or appliance) sits in the fabric, with a virtualization control workstation holding metadata over Ethernet; all “virtualized” I/O passes through the virtualization engine before going to disk; connectivity required to each director and fabric]
  • SAN Virtualization
    • Dynamic Partitioning:
    • Partitioning enables a “virtual director”
    • Each director supports four partitions (V-directors)
    • Partitions own a subset of the ports on the system
    • Partitions are managed independently and remain isolated from other partitions
    • Common SAN management console for all partitions
    [Diagram: separate partitions dedicated to web services, a financial application, and ERP]
  • Virtual SANs (VSANs)
    • Similar concept to partitioning
      • Defined logically (less physical segregation)
      • Up to (4) VSAN’s/director are logically defined
    • Inter-VSAN Routing
      • Allows sharing of centralized storage services, such as tape libraries and disks, across VSANs
      • Distributed, scalable, and highly resilient architecture
      • Transparent to third-party switches
    • Quality-of-Service (QoS) Advanced Traffic Management
      • Example: Prioritizing latency-sensitive OLTP transactions over throughput-intensive data-warehousing or B/U traffic
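The QoS bullet can be illustrated with a toy priority scheduler (Python; the frame names and two traffic classes are invented for the example, and real directors enforce QoS in hardware per VSAN):

```python
import heapq

OLTP, BACKUP = 0, 1  # lower value = higher priority

def drain(queue):
    """Pop every frame in priority order (FIFO within a class)."""
    order = []
    while queue:
        _, _, name = heapq.heappop(queue)
        order.append(name)
    return order

# Frames arrive interleaved; the sequence number keeps ordering stable
# within a priority class.
q = []
arrivals = [(BACKUP, "b1"), (OLTP, "t1"), (BACKUP, "b2"), (OLTP, "t2")]
for seq, (prio, name) in enumerate(arrivals):
    heapq.heappush(q, (prio, seq, name))

order = drain(q)
print(order)  # latency-sensitive OLTP frames drain before backup frames
```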
  • Conclusion:
    • The focus of this session was architecting networked storage for the enterprise and included tips on implementing a tiered network storage infrastructure for both local and remote access…
    • In this session, (hopefully) you learned:
    • The current state of networked storage protocols, how they relate to disk technologies and where these SNW technologies will merge and diverge.
    • How to best utilize file or block based storage for Fibre Channel, IP protocols and for your storage network.
    • What are the areas of “storage intelligence” in networked storage today, and what might the future hold?
    • Factors to consider when building a total cost of ownership comparison between the different technologies and protocols.
  • Questions? David J. Mossinghoff Forsythe Solutions Group (913) 323-6857 [email_address]