Presentation to dm as november 2007 with dynamic provisioning information

  • On the previous Hitachi Universal Storage Platform™ model, all of the disks attached to a back-end director PCB were on eight shared loops. A transfer to the last disk on a loop (which could be 32 or 48 disks per loop) had to pass through bypass logic on all of the disks in front of it. The Universal Storage Platform VM introduces a switched back end, where the drives are still on arbitrated loops (all Fibre Channel disks use Fibre Channel Arbitrated Loop (FC-AL), not FC-SW fabrics), but the switch logic bypasses all targets on that loop except for the one disk being addressed. This reduces the propagation time by a few microseconds (not noticeable in practice), but the primary effect of this change is improved access to individual disks. Each HDU has four such FSW switches, two on the front side (32 disks) and two more on the back (32 disks). Each pair of switches is either an "upper" or a "lower" switch as seen in the figure above.
  • Note: On the graphic above, CHA stands for channel adapter
  • Cache memory (CM) details, NSC55 vs. USP VM:
    NSC55: CM is installed on the BASE-PCB; it is used only for the rack model and is not compatible with the FC model.
    • 4G CM: DIMMs populated with 512 Mbit DDR-SDRAM; CM capacity min. 4 GB to max. 32 GB, added in units of 4 GB (4 DIMMs)
    • 8G CM: DIMMs populated with 1 Gbit DDR-SDRAM; CM capacity min. 8 GB to max. 64 GB, added in units of 8 GB (4 DIMMs)
    USP VM: CM is installed on CMA PCBs; the following two types of CM are supported.
    • DKC-F610I-C4G: DIMMs populated with 512 Mbit DDR2-SDRAM; CM capacity min. 4 GB to max. 32 GB, added in units of 4 GB (4 DIMMs)
    • DKC-D610I-C8G: DIMMs populated with 1 Gbit DDR2-SDRAM; CM capacity min. 8 GB to max. 64 GB, added in units of 8 GB (4 DIMMs)
    • Memory intermingling: mixing the two DIMM types within one CMA PCB is not possible
    CM DIMMs are installed on the CMA PCB; one set (2 CMA PCBs per set) can be installed in the subsystem. Up to 16 CM DIMMs can be mounted on one CMA PCB, for up to 32 GB per PCB (16 GB when using C4G). The CMA is not preinstalled in the DKC as a standard component, so it must be ordered explicitly. (A short sketch of this capacity arithmetic appears after these notes.)
  • The ST9985V delivers an updated storage platform as the follow-on to the ST9985. Because it takes on some of the new technology in the ST9990V, it has been consolidated into the Sun ST9900 family. It introduces support for 750 GB SATA II HDDs as a new, lower-cost storage tier, ideal for Tier 2 storage such as large file-based data or a short-term tape backup cache: 11.25 TB of raw storage in a single disk drive enclosure, which Sun ST9900 Tiered Storage Manager takes advantage of. The ST9985V's built-in RAID 6 ensures reliability, and it runs 4 Gb/s FC end to end with a switched FC architecture on the back end, making it 15-20% faster than the ST9985 (4 Gb/s architecture, faster processors, etc.). Sun ST9900 Dynamic Provisioning software ("thin provisioning") was introduced on the ST9990V in May; the ST9985V likewise supports thin provisioning of internal storage, and the ST9985V with Sun ST9900 Dynamic Provisioning also introduces thin provisioning for external storage. Sun ST9900 is the first to deliver this flexibility to maximize storage resources while simplifying storage management. The ST9985V also provides "Day 1" support for VMware ESX on internal storage as well as virtualized external storage for true end-to-end IT virtualization; again, Sun ST9900 is the first to deliver this capability to help maximize resources. VMware ESX Server support combined with Sun ST9900 Dynamic Provisioning gives IT environments the ultimate flexibility to align resources to business needs. Simplified software packaging is discussed in more detail shortly. ST9985V configuration flexibility: a choice of either a storage controller plus high-performance storage for Tier 1 applications, or a storage controller only for virtualization services. It is the replacement for, and follow-on to, the ST9985.
  • All internal processors deliver twice the performance of those in the USP. At the core of the USP V is the third-generation, non-blocking, switched architecture: the Universal Star Network, a fully fault-tolerant, high-performance, non-blocking, switched architecture. Each USP V model uses the same Universal Star Network architecture but will have different performance characteristics based on the installed components. The data cache system has the same path speeds and counts. The shared memory system has been significantly upgraded over the USP version, with 256 paths (up from 192) operating at 150 MB/s (up from 83 MB/s). On the USP, the CHA PCBs had 8 shared memory paths and the DKAs had 16; now all PCBs have 16 shared memory paths. Cache memory contains only user data blocks, whereas shared memory holds all of the metadata about the internal array groups, LDEVs, external LDEVs, and runtime tables for various software products. There can be up to 256 GB of cache and 32 GB of shared memory. BED switched loop details: on the previous USP model, all of the disks attached to a BED PCB were on eight shared loops, so a transfer to the last disk on a loop (which could be 32 or 48 disks per loop) had to pass through bypass logic on all of the disks in front of it. The USP V introduces a switched back end, where the drives are still on arbitrated loops (all FC disks use FC-AL, not FC-SW fabrics), but the switch logic bypasses all targets on that loop except for the one disk being addressed. This reduces the propagation time by a few microseconds (not noticeable in practice), but the primary effect of this change is improved access to individual disks. Each HDU has four such FSW switches, two on the front side (32 disks) and two more on the back (32 disks). Each pair of switches is either an "upper" or a "lower" switch as seen in Figure 21. (A minimal sketch contrasting the shared-loop and switched-loop paths appears after these notes.)
  • These are very recent analyst quotes from Bob Passmore of Gartner; he can no longer ignore Hitachi's storage virtualization solutions. Passmore even states that Hitachi has "no competition".
  • Self-explanatory
  • Competitive notes on migration solutions (detailed annotations for the comparison table on slide 44):
    • TDMF is non-disruptive for mainframe only.
    • VxVM requires additional paths and HBAs for a host-level mirror if performance or redundancy is not to be impacted (most of the time).
    • TDMF is a host-based software migration solution; IBM SVC and EMC Invista are appliance/switch-based migration solutions; VxVM is an LVM host-based solution.
    • HiCommand Tiered Storage Manager offers the deepest selection criteria and tiered storage user interface.
    • IBM Multiple Device Manager and the SVC GUI provide a user interface; however, data migration is enabled by CLI only at this time.
    • HiCommand Tiered Storage Manager is integrated with the HiCommand Device Manager database.
    • IBM Multiple Device Manager supports virtualization (SVC), device management (array configuration), and replication (FC, PPRC, etc.).
    • HiCommand Tiered Storage Manager supports deep selection criteria, one of which is LDEVs by host/application.
    • TDMF provides an ISPF dialog interface and extensive selection criteria (e.g., VOLSER, UCB device address).
    • IBM Multiple Device Manager and the SVC GUI support visualization of the host; however, data migration is enabled by CLI only.
    • VxVM provides a single host-based interface only.
    • HiCommand Tiered Storage Manager offers the most extensive selection criteria, enabled by the base USP/UVM/HDvM architecture.
    • TDMF scales for mainframe only, as TDMF Open Systems Edition scales only at the host level, TDMF being host-based.
    • HiCommand Tiered Storage Manager does not support mainframe environments today; Volume Migration does, however.
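The switched back-end behavior described in the notes above can be illustrated with a toy model. The following minimal Python sketch (the function names are hypothetical; this is not controller or FSW firmware logic) simply counts how many devices a frame traverses to reach a target disk on a shared arbitrated loop versus a switched loop where only the addressed disk is on the path.

    # Toy model of back-end disk access paths: shared loop vs. switched loop.
    # Hypothetical illustration only -- not actual controller or FSW firmware logic.

    def shared_loop_hops(target_index: int) -> int:
        """On a shared arbitrated loop, a frame passes through the bypass
        logic of every disk ahead of the target, so the hop count grows
        with the target's position on the loop."""
        return target_index + 1  # every preceding disk plus the target itself

    def switched_loop_hops(target_index: int) -> int:
        """With a switched back end (FSW), the switch bypasses all targets
        on the loop except the addressed disk, so the path length is constant."""
        return 1  # the switch forwards the frame directly to the addressed disk

    if __name__ == "__main__":
        disks_per_loop = 32  # loops of 32 or 48 disks, per the notes above
        last = disks_per_loop - 1
        print("last disk, shared loop  :", shared_loop_hops(last), "devices traversed")
        print("last disk, switched loop:", switched_loop_hops(last), "device traversed")

Run for the last disk on a 32-disk loop, the shared-loop count grows with the disk's position while the switched path stays constant, which is the improved per-disk access and isolation the notes describe.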
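The cache-capacity figures in the cache memory note follow simple arithmetic: DIMMs are added four at a time, each group of four adds 4 GB (C4G, 512 Mbit DDR2) or 8 GB (C8G, 1 Gbit DDR2), and up to 16 DIMMs fit on one CMA PCB. A minimal sketch of that arithmetic is shown below; the helper name is illustrative only.

    # Cache capacity arithmetic implied by the USP VM cache note above (hypothetical helper).
    GB_PER_4DIMM = {"C4G": 4, "C8G": 8}   # 512 Mbit vs. 1 Gbit DDR2 DIMM groups
    MAX_DIMMS_PER_PCB = 16                # per the note: up to 16 CM DIMMs per CMA PCB

    def cache_capacity_gb(dimm_type: str, dimms: int) -> int:
        """Capacity for a given DIMM type and count; DIMMs are added in groups of 4."""
        if dimms % 4 or not (4 <= dimms <= MAX_DIMMS_PER_PCB):
            raise ValueError("DIMMs are installed 4 at a time, up to 16 per PCB")
        return (dimms // 4) * GB_PER_4DIMM[dimm_type]

    # One fully populated PCB: 16 GB with C4G, 32 GB with C8G;
    # two CMA PCBs per set give the 32 GB / 64 GB maximums quoted in the note.
    assert cache_capacity_gb("C4G", 16) == 16
    assert cache_capacity_gb("C8G", 16) == 32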

    1. ST9900 Program DMA Update - November 14, 2007, Broomfield, CO (modified June 2, 2008)
       • Ken Ow-Wing, ST9900 Product Line Manager
    2. Roadmap
    3. Microcode Releases
       • V02: 12-3-07
       • V02 +1: end of December 2007 (means January)
       • V02 +1, +2, +3 (figure this means 3QFY07)
       • V03: mid-2008
    4. Dynamic Provisioning Support
       • Dynamic Provisioning support for external storage: targeted for V02 or V02 +1
       • Tiered Storage Manager: February 2008
       • TrueCopy: V02 +3
       • Universal Replicator: V02 +3 (P-Vol and S-Vol; no disk journal)
       • Universal Volume Manager: V02 (Pool-Vol)
       • Copy-on-Write: V02 +2 (P-Vol)
    5. Dynamic Provisioning Support
       • Volume Migration: V02 +1
       • Volume Shredder: V02
       • For more completeness and detail: Token 510266, ST9990V Starter Kit; scroll down to the 15th file for the HDP Compatibility Chart
    6. Disk
       • 400 GB 10K FC HDD: for ST9985V, ST9990V, ST9990, ST9985; released
       • 1 TB FC-SATA HDD: RR = 6-24-08, GA = 6-17-08; for ST9990V and ST9985V, NOT for ST9990 or ST9985
    7. Future - Data Cache
       • Increase data cache to 512 GB with 16 GB data cache boards; RR = 6-24-08, GA = 6-17-08
    8. Mainframe Releases of Interest
       • Universal Replicator 3DC 4x4x4 cascade: 10-15-07
       • Business Continuity Manager 5.3 supports 4x4x4 multi-target: 10-15-07
       • Business Continuity Manager 5.6 will support 4x4x4 cascade: February 2008
       • HyperPAV: target Sun GA of 12-4-07 (last-minute release by HDS driven by a specific customer)
    9. Program Notes
    10. Tools Now Available
       • Hi Stat: topology discovery
       • Hi Cast: cache sizing
       • Hi Past: access density and array design
       • Weight and Power
       • E-mail [email_address] and we will give you a user name and password
       • Internal HDS website; the process does not scale, so the tools are being made available to DMAs only, for proliferation
    11. Pre-Sales Performance Alias
       • [email_address]
       • Eight volunteers worldwide; we will expand this
       • We have asked those requesting assistance to state their configuration in clear terms to help our volunteers
    12. Appendix
    13. ST9990V SPC-1 Performance Results
       October 1, 2007 news release: the ST9990V platform achieves the highest SPC-1 benchmark result in enterprise storage history, eclipsing the results of every single enterprise storage system ever tested, with 200,245 SPC-1 IOPS. This nearly doubles the performance levels achieved by competing enterprise storage systems with a single controller. It leads the industry with 3.5M peak cache IOPS, a 75%-300% advantage over other enterprise systems in the market. #1 in Performance.
    14. ST9990V SPC-1 Performance Results
       What this new SPC-1 benchmark result means for customers:
       • Greater productivity: sustain significantly more business transactions than any other enterprise storage system in the market, boosting sales and profitability
       • Increased efficiency: greatly improve application response time to single-digit-millisecond levels and support more users, more applications, and more capacity on a centrally managed platform that leverages years of mature and reliable microcode
       • Lower TCO: run a multitude of applications, such as Microsoft Exchange and Oracle Database 11g, concurrently on a single storage controller
       • Reduced risk: if your applications don't perform well, your business suffers; performance definitively matters to business success
    15. End - Thank You
    16. Appendix
    17. New/Interesting
    18. ST9985V: What Is New/Interesting
       • Switched FC-AL (same as the ST9990V)
         • The FSW contains metadata that allows only the target HDD to be addressed
         • Diagnosability is improved via isolation
         • Performance is improved by avoiding multiple targets
       • Multitasking on the Front-End Director (FED) and Back-End Director (BED)
         • Under-utilized MPs (microprocessors) can take workload from other MPs (e.g., task scheduling offloaded to the BED)
         • MPs moved from 400 MHz to 800 MHz
         • Combined with end-to-end 4 Gb/s, this means better performance
    19. ST9985V Switched Loop Details
       The ST9985V introduces a switched back end, where the drives are still on arbitrated loops but the switch logic bypasses all targets on that loop except for the one disk being addressed
    20. ST9985V: What Is New/Interesting
       • Control bandwidth: 37% greater (4.8 GB/s now vs. 3.8 GB/s before)
       • Shared memory: 260% greater (16 GB now vs. 6 GB before); data bandwidth remains the same
       • Takeaway: the investment was in improving control bandwidth and memory to support sophisticated new applications such as Dynamic Provisioning; data bandwidth was seen as more than sufficient for customer requirements
    21. ST9985V Without Disks
       • Positioned as a virtualization engine for heterogeneous environments
         • Compete as a virtualization engine against Invista and SVC
         • Go in with a light configuration and grow the system as customers add more functionality
       • Some questions we always get:
         • Is diskless really diskless? Yes, it is really diskless; you can operate without any disk. When diskless first came out it required some disk, but this has changed.
         • Is diskless a completely different product? No, it's just another configuration.
         • Is a BED required even if there are no disks? Yes, a BED is really needed even if there are no disks; workload (such as task scheduling) is distributed from Front-End Directors to Back-End Directors to improve performance.
         • Does DP work with diskless? Not at this time; look toward 4QCY07.
         • Is UR supported with diskless? Yes; performance measurement activity is in the queue.
    22. Architecture
    23. Hardware Comparison of ST9985V vs. ST9985 - Hi-Star Net (Data)
       • ST9985: data path bandwidth of the crossbar switch architecture, net: 8.5 GB/s
       • ST9985V: data path bandwidth of the crossbar switch architecture, net: 8.5 GB/s
    24. Hardware Comparison of Crossbar Switch Architecture (Control)
       • ST9985: control path bandwidth, net: 3.6 GB/s; the number of paths from CHA/MIX to each SM is 12; shared memory is installed on the BASE PCB
       • ST9985V: control path bandwidth, net: 4.8 GB/s; the number of paths from channel adapter/disk adapter to each SMA is increased from 12 to 16; shared memory is installed on the SMA PCB
    25. Hardware Comparison - Cache Memory
       To improve access performance to cache memory, the ST9985V CMA raises the transfer rate of the memory path and uses an advanced memory control method, achieving twice the hardware performance of the Network Storage Controller model ST9985.
    26. ST9990V Updates: SPC-1 Performance, Enhancements
    27. Competition
    28. ST9985V/90V vs. Competition: EMC DMX-4 & IBM DS 8xxx/68xx
       Comparison table; columns: ST9990V | ST9985V | EMC DMX-4 950 | IBM DS 8xxx-68xx (the per-model check marks for the feature rows were graphics and are not reproduced here):
       • Tier 1 FC SAN (FC & SATA)
       • FC-SAN, storage virtualization controller only: IBM offers the SVC, EMC offers Invista!?
       • Max HDDs: 1152 | 240 | 360 | 1024/128
       • FICON connectivity: 112 | 16 | 48 | 128 ports
       • ESCON connectivity: 112 | 16 | 64 | 64 ports
       • Tier 1 Storage & Virtualization
       • Thin Provisioning / Internal Storage
       • Thin Provisioning / External Storage
       • VMware ESX Server / Internal Storage
       • VMware ESX Server / External Storage
       • VMware ESX Server / Thin Provisioning
    29. ST9990V SPC-1 Performance Results
       October 1, 2007 news release: the ST9990V platform achieves the highest SPC-1 benchmark result in enterprise storage history, eclipsing the results of every single enterprise storage system ever tested, with 200,245 SPC-1 IOPS. This nearly doubles the performance levels achieved by competing enterprise storage systems with a single controller. It leads the industry with 3.5M peak cache IOPS, a 75%-300% advantage over other enterprise systems in the market. #1 in Performance.
    30. ST9990V SPC-1 Performance Results
       What this new SPC-1 benchmark result means for customers:
       • Greater productivity: sustain significantly more business transactions than any other enterprise storage system in the market, boosting sales and profitability
       • Increased efficiency: greatly improve application response time to single-digit-millisecond levels and support more users, more applications, and more capacity on a centrally managed platform that leverages years of mature and reliable microcode
       • Lower TCO: run a multitude of applications, such as Microsoft Exchange and Oracle Database 11g, concurrently on a single storage controller
       • Reduced risk: if your applications don't perform well, your business suffers; performance definitively matters to business success
    31. Program Notes
    32. Tool Release Expanded
       • Tools: Hi Stat (topology discovery), Hi Cast (cache sizing), Hi Past (access density and array design), Weight and Power
       • Request access and support: [email_address]; include last name, first name, e-mail address, and geo (APAC, Americas, EMEA)
       • Go easy; these guys are NOT field support
    33. Performance Peer Review Alias
       • Size ST9900s from a pre-sales perspective
       • ST_9900_Performance_Pre-Sales-Peer_Review@sun.com
       • Please provide your current configuration; help these volunteers help you
       • Please be concise; we have about 8 volunteers worldwide (Asia, US, Europe) who are volunteering their time for this
       • The number of volunteers will be increased
    34. Thank You - Questions?
    35. Competition
    37. Links to Original Data: Taneja Group
       • Article: http://www.infostor.com/articles/article_display.cfm?Section=ARTCL&C=Feat&ARTICLE_ID=282032&KEYWORDS=virtualization&p=23
       • Presentation: http://www.infoworld.com/event/virtualization/docs/Mon GB 11.45 Taneja.ppt
    38. EMC
    39. ST9985 Advantage: EMC and IBM
       • Partitions: ST9985 - 8 partitions; ST9990 - 32 partitions
         • Match cache in the ST9900 and external storage
         • Dedicate resources when consolidating
       • 106 GB/s aggregate bandwidth: point to point, non-contention
       • Scalability integrated in a single device: ST9985 - 96 PB; ST9990 - 247 PB
       • Avoid appliance sprawl
    40. The Sun StorageTek ST9990V Platform High Availability Architecture
       • Cluster of 128 processors connected by a crossbar switch sharing a global cache
       • 4 Gb/s end-to-end internal architecture including front end, disks, and back end
    41. IBM
    42. Recent Analyst Quotes - re: Hitachi Storage Virtualization
       • "When Hitachi decided to offer a heterogeneous storage virtualization product, it made a smart move. Unlike competitors that started from scratch to write new controller software, it chose instead to add external storage support to its most robust, most highly featured, most market proven storage array."
       • "Hitachi's Universal Volume Manager brings external disk support to its high-end storage array. As a result, it has no direct competition for its integrated heterogeneous virtualization features in the high-end storage array market."
       • "IBM does offer the SVC, which like the USP/NSC family is a storage controller that supports external disk, but the SVC is a midrange product."
       Source: Gartner research report, "Storage Virtualization: HDS Universal Volume Manager"; publication date: 22 December 2006; ID Number: G00141799; author: Robert Passmore
    43. Virtualization and Tiered Storage - Sales Tips
       • "Intelligent" tiered storage
         • Ask your customers to challenge the scalability of competitive virtualization solutions' non-disruptive volume migration capabilities
         • The resulting feedback should include:
           • No support for application-group volume selection, only per-volume selection capability
           • Application volume selection is a manual process as opposed to an automated process
           • Limited concurrent volume migration capability
           • The Universal Storage Platform's 128 concurrent volume migrations outscale all competitors
       Don't sell product; understand customers' business problems and architect solutions
    44. The Competition: How They Stack Up! (see notes page for detailed annotations)
       Columns: HiCommand Tiered Storage Manager | Softek TDMF | IBM SVC Migration | EMC Invista Dynamic Volume Mobility | VERITAS VxVM
       • Non-disruptive: YES | YES #1 | YES | YES | YES #2
       • Storage-controller-based migration (based on virtualization architecture): YES | NO #3 | NO #4 | NO #4 | NO #5
       • Tiered storage support (mgmt. interface for data volume migration): YES #6 | NO | YES #7 | ? | NO
       • Integrated with configuration manager (supports deep selection criteria): YES #8 | NO | YES #9 | ? | NO
       • Application/host-level support (required for deep selection criteria): YES #10 | YES #11 | YES #12 | ? | YES #13
       • Extensive selection criteria (enabled by device mgmt. integration): YES #14 | YES | NO | ? | NO
       • Leverage virtualization platform (required for efficient data volume migration): YES | NO | YES | YES | NO
       • Heterogeneous support: YES | YES | YES | YES | YES
       • Open Systems support: YES | YES | YES | YES | YES
       • MF support: YES #16 | YES | NO | NO | NO
       • Scalability: YES | YES #15 | NO | NO | NO
    45. IBM SAN Volume Controller Migration - How It Works
       • IBM SVC Migration copies data from source to target volumes (between vdisks and/or vdisks to native)
       • SVC mapping table entries are dynamically swapped; as a result, the host/application is not disrupted (a generic sketch of this copy-then-swap pattern appears after the transcript)
       • Volumes to be migrated are identified by vdisk number or name; there is no logical grouping support or any other robust metadata selection criteria
    46. IBM SAN Volume Controller Migration - Strengths
       • 100% non-disruptive for the host/application
       • Does not consume host resources
       • Storage-vendor agnostic
    47. IBM SAN Volume Controller Migration - Weaknesses
       • Issue with scalability: resources are already constrained, and adding a large migration would have serious impact; constrained resources include cache (16 GB per I/O group), ports (4), and overall throughput
       • SVC Migration is a migration tool, not a tiered storage management solution
       • An SVC I/O group (a 2-node SVC pair) supports only 8 concurrent volume migrations (versus 64 on USP/NSC, going to 128)
       • Only rudimentary selection criteria (vdisk name or number); no robust metadata selection criteria
       • Additional hardware is required; when additional SVC I/O groups need to be added, vdisk migrations between I/O groups require that applications be quiesced
       • No MF support
    49. EMC Invista Dynamic Volume Mobility - How It Works
       • Invista initiates the copying of data from source to target in the background
       • Invista mapping table entries are dynamically swapped; as a result, the host/application is not disrupted
       • Volumes to be migrated are identified by virtual disk number or name; there is no logical grouping support or any other robust metadata selection criteria
    50. EMC Invista Dynamic Volume Mobility - Strengths
       • 100% non-disruptive for the host/application
       • Does not consume host resources
       • Storage-vendor agnostic
       • Should probably scale better than in-band appliance or smart-switch solutions, due to its out-of-band architecture
    51. EMC Invista Dynamic Volume Mobility - Weaknesses
       • Although EMC Invista will be more scalable than in-band appliance or switch solutions, there are still many moving parts compared to the Hitachi solution
       • Only rudimentary selection criteria (virtual disk name or number); no robust metadata selection criteria
       • It is a migration tool, not a tiered storage management solution
       • Additional hardware is required
       • No MF support
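The SVC and Invista migration slides above both describe the same pattern: copy data to the target in the background, then dynamically swap the mapping table entry so the host never notices. The sketch below is a generic, hypothetical Python illustration of that copy-then-swap idea (the class and method names are invented, and it ignores writes arriving during the copy); it is not IBM or EMC code.

    # Generic copy-then-swap pattern behind virtualized, non-disruptive volume
    # migration, as described on the SVC/Invista slides above.
    # Hypothetical sketch -- not vendor code. A real controller must also
    # mirror host writes that arrive while the background copy is running.
    import threading

    class VirtualVolume:
        """Hosts address the virtual volume; the controller maps it to a backing LUN."""
        def __init__(self, backing):
            self.backing = backing            # current physical backing store
            self._lock = threading.Lock()

        def read(self, block):
            with self._lock:                  # host I/O always goes to the current mapping
                return self.backing[block]

        def migrate_to(self, new_backing):
            # 1. Background copy: replicate data while hosts keep doing I/O.
            for block, data in list(self.backing.items()):
                new_backing[block] = data
            # 2. Atomic swap of the mapping table entry; the host keeps addressing
            #    the same virtual volume, so the migration is non-disruptive.
            with self._lock:
                self.backing = new_backing

    # Usage: move a volume to new backing storage without host disruption.
    old_lun = {0: b"data0", 1: b"data1"}
    vvol = VirtualVolume(old_lun)
    vvol.migrate_to({})
    print(vvol.read(1))                       # still returns b"data1" after migration

The non-disruptive property comes entirely from the atomic swap of the mapping: hosts keep addressing the same virtual volume while its backing store changes underneath.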
