This document provides updates on the ST9900 product line and program from Ken Ow-Wing. It includes details on microcode releases, new features like dynamic provisioning support, and performance enhancements. It also discusses tools now available, pre-sales performance assistance, and competition from EMC and IBM storage arrays.
DB2 Design for High Availability and Scalability – Surekha Parekh
Are you overwhelmed by the growing amount of data in your environment? Are you maximizing application availability? As the number of tables with billions of rows continues to grow, so do the management challenges. In this session, we will discuss the challenges and solutions for optimum availability and performance, with techniques to efficiently and effectively manage very large amounts of data.
Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workst... – Principled Technologies
A workstation that runs coolly and uses less power is a great asset to workers and the companies they work for. In our tests, both when idle and when under load, the Lenovo ThinkStation P500 generally ran at lower surface temperatures and used less power than the HP Z440 Workstation. These findings show that the Lenovo ThinkStation P500 could meet the needs of those who want to provide a reliable, comfortable work environment while using less power.
IBM SAN Volume Controller Performance Analysis – brettallison
Introduction
Storage Problems and Limitations with Native Storage
SVC Overview
SVC Physical and Logical Overview
Performance and Scalability Implications
Types of Problems
Performance Analysis Techniques
Performance Analysis Tools for SVC
Performance Analysis Metrics for SVC
Online Banking Example
Why Hitachi Virtual Storage Platform does so well in a mainframe environment ... – Hitachi Vantara
Hitachi VSP is a new paradigm in enterprise array performance. In this session we will discuss how the architecture of VSP enhances its box-wide performance. The results of performance testing with synthetic host I/O generators and the PAI/O driver will also be presented.
IBM Power Systems E870 and E880 technical overview and introduction – Diego Alberto Tamayo
This IBM® Redpaper™ publication is a comprehensive guide covering the IBM Power System E870 (9119-MME) and IBM Power System E880 (9119-MHE) servers that support IBM AIX®, IBM i, and Linux operating systems. The objective of this paper is to introduce the major innovative Power E870 and Power E880 offerings and their relevant functions:
In the Principled Technologies labs, the space-efficient FX2 solution enabled with SanDisk DAS Cache supported over four times as many VMs as the Dell PowerEdge R820 with CacheCade supported. Because each VM delivered greater performance, this FX2 solution delivered up to 43 times the total performance of a Dell PowerEdge R820 server.
Consolidating your Dell PowerEdge R820 servers onto a new Dell PowerEdge FX2 enclosure with an FC830 server, powered by the Intel Xeon processor E5-4600 v3, and FD332 storage blocks using SanDisk DAS Cache can give you a significant performance boost while saving precious data center space. A company can reclaim that space by replacing older servers with the Dell PowerEdge FX2 converged architecture, which takes up just 2U, and simultaneously achieve greater VM performance.
As our tests show, investing in the powerful new Dell PowerEdge R920 running Oracle Database 12c pluggable databases achieves cost savings without compromising performance. In our testing, a single Dell PowerEdge R920 was able to do nine times the work of a single HP ProLiant DL385 G6 server, while power and cooling costs dropped by 64 percent compared to the nine servers it could replace. Three-year software licensing costs were 17 percent lower; the savings were dramatic enough to pay back the new server costs in just six months, and over three years could total just under $300,000.
As organizations deploy converged infrastructure environments, entry costs play a significant role in hardware selection. Choosing a solution that provides easy upgrade paths when increased performance and capacity are necessary is another important factor. However, as our analysis demonstrates, it is equally important to consider the future costs associated with those upgrades. Selecting hardware based solely on initial acquisition costs can lead to substantially higher costs for future bandwidth increases.
We compare the total list pricing for each tier of the Cisco UCS solution and the IBM Flex System solution to highlight the differences in the cost of bandwidth between each environment. Not only does the Cisco UCS solution have a 22.3 percent lower initial investment cost, but the costs to increase bandwidth above the baseline configuration are significantly lower than doing so on the IBM Flex System.
Avoiding Chaos: Methodology for Managing Performance in a Shared Storage A... – brettallison
Scope - The primary focus of this presentation is the methodology we use for managing performance in a very large shared Storage Area Network environment, with a primary focus on distributed systems and the IBM Enterprise Storage Server. The focus of this presentation is methodology, NOT measurement; there are numerous excellent presentations already out there on measurement. However, there are several references to measurement tools in the back of the presentation.
A Step-By-Step Disaster Recovery Blueprint & Best Practices for Your NetBacku... – Symantec
In this technical session we will share a few customer-tested blueprints for implementing DR strategies with NetBackup appliances, showing support for onsite and offsite disaster recovery. This includes the architecture design with Symantec best practices, down to execution of the wizards and command lines needed to implement the solution.
Watch the recording of this Google+ Hangout: http://bit.ly/13oTjvp
This presentation provides an introduction to the current activities leading to software architectures and methodologies for new NVM technologies, including the activities of the SNIA Non-Volatile Memory (NVM) Technical Working Group. This session includes a review and discussion of the impacts of the SNIA NVM Programming Model (NPM). We will preview the current work on new technologies, including remote access, high availability, clustering, atomic transactions, error management, and current methodologies for dealing with NVM.
Flash for the Real World – Separate Hype from Reality – Hitachi Vantara
Join us for a live webcast and hear Hu Yoshida, Chief Technology Officer of Hitachi Data Systems, discuss the real world criteria for making an effective decision when evaluating flash storage. With all the noise in the market it can be difficult to separate fact from fiction in order to evaluate the performance, efficiency and economic trade-offs for flash storage.
Specifically, you’ll learn how to determine if flash storage will help you:
Actually achieve the performance you need as you compare technology options.
Realize efficiency gains that extend beyond the promise of flash performance.
Make the economic case for real-world business decisions before taking the leap.
The flash market started out monolithically. Flash was a single media type (high performance, high endurance SLC flash). Flash systems also had a single purpose of accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high performance flash or highly dense, medium performance flash systems. At the same time, high capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
Big Lab Problems Solved with Spectrum Scale: Innovations for the Coral Program – inside-BigData.com
In this video from the DDN User Group at SC16, Sven Oehme Chief Research Strategist, IBM, presents "Big Lab Problems Solved with Spectrum Scale: Innovations for the Coral Program."
Watch the video presentation: http://wp.me/p3RLHQ-g52
Sign up for our insideHPC Newsletter: http://wp.me/p3RLHQ-g52
Deep Dive On Intel Optane SSDs And New Server Platforms – NEXTtour
As enterprises embrace software-defined and hyperconverged infrastructure, the original methods for defining infrastructure ingredients become more complex. Maintaining a balanced platform with a diverse set of workloads is required to maximize TCO. Defining configurations at the solution level helps ease the challenges of implementing HCI while optimizing TCO. Come to this session to learn Intel's view on how HCI configurations will use technologies like Optane SSDs, the newest server platforms, and new SSD form factors to continue HCI TCO scaling.
NetApp enterprise All Flash Storage
This presentation provides the key messages and differentiation, value propositions, and promotional programs for AFF.
13. ST9990V SPC-1 Performance Results. October 1, 2007 News Release: The ST9990V platform achieves the highest SPC-1 benchmark result in enterprise storage history, eclipsing the results of every single enterprise storage system ever tested, with 200,245 SPC-1 IOPS. This nearly doubles the performance levels achieved by competing enterprise storage systems with a single controller. Leads the industry with 3.5M peak cache IOPS, a 75%-300% advantage over other enterprise systems in the market. #1 in Performance.
14. ST9990V SPC-1 Performance Results. What this new SPC-1 benchmark result means for customers:
Greater Productivity: sustain significantly more business transactions than any other enterprise storage system in the market, boosting sales and profitability.
Increased Efficiency: greatly improve application response time to single-digit millisecond levels and support more users, more applications, and more capacity on a centrally managed platform that leverages years of mature and reliable microcode.
Lower TCO: run a multitude of applications, such as Microsoft Exchange and Oracle Database 11g, concurrently on a single storage controller.
Reduced Risk: if your applications don't perform well, your business suffers; performance definitively matters to business success.
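The headline numbers above can be sanity-checked with simple arithmetic. A minimal sketch follows; the 200,245 SPC-1 IOPS figure is from the slide, but the competitor results below are hypothetical placeholders chosen only to illustrate how a "75%-300% advantage" is computed:

```python
# Sketch of the arithmetic behind the SPC-1 advantage claims.
# ST9990V_IOPS is from the slide; competitor figures are ASSUMED,
# not published results.

ST9990V_IOPS = 200_245  # published SPC-1 result (from the slide)

def advantage_pct(ours: float, theirs: float) -> float:
    """Percent advantage of `ours` over `theirs`."""
    return (ours / theirs - 1.0) * 100.0

# Hypothetical competitor results (NOT from the slide):
competitors = {
    "competitor A (assumed)": 114_400,
    "competitor B (assumed)": 50_000,
}

for name, iops in competitors.items():
    print(f"{name}: {advantage_pct(ST9990V_IOPS, iops):.0f}% advantage")

# "Nearly doubles" implies the best competing single-controller result
# was around 200_245 / 2, i.e. roughly 100k SPC-1 IOPS.
```

The 75% and 300% endpoints of the slide's claimed range correspond to competitors at roughly 114k and 50k SPC-1 IOPS under this formula.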
19. ST9985V Switched Loop Details. The ST9985V introduces a switched back-end, where the drives are still on arbitrated loops but the switch logic bypasses all targets on that loop except for the one disk being addressed.
23. Hardware Comparison of ST9985V vs. ST9985 – Hi-Star Net (Data)
ST9985: data path bandwidth of the crossbar switch architecture net: 8.5GB/s.
ST9985V: data path bandwidth of the crossbar switch architecture net: 8.5GB/s.
24. Hardware Comparison of Crossbar Switch Architecture Net (Control)
ST9985: control path bandwidth of the crossbar switch architecture net: 3.6GB/s. The number of paths from CHA/MIX to each SM is 12. Shared memory is installed on the BASE PCB.
ST9985V: control path bandwidth of the crossbar switch architecture net: 4.8GB/s. The number of paths from channel adapter/disk adapter to each SMA is increased from 12 to 16. Shared memory is installed on the SMA PCB.
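The two control-path figures above are consistent with a fixed per-path rate: the bandwidth grows only because the path count grows from 12 to 16. A quick cross-check (the 300MB/s per-path figure is derived from 3.6GB/s / 12, not stated on the slide):

```python
# Cross-check of the control-path bandwidth figures above.
# Assumption: aggregate bandwidth = path count * per-path rate.
# The per-path rate is INFERRED (3600 MB/s / 12 paths), not stated.

PER_PATH_MB_S = 300  # inferred per-path control rate

def control_bandwidth_gb_s(paths: int, per_path_mb_s: float = PER_PATH_MB_S) -> float:
    """Aggregate control-path bandwidth in GB/s for a given path count."""
    return paths * per_path_mb_s / 1000.0

print(control_bandwidth_gb_s(12))  # ST9985:  3.6 GB/s, matching the slide
print(control_bandwidth_gb_s(16))  # ST9985V: 4.8 GB/s, matching the slide
```

Under this model the 33% bandwidth gain comes entirely from the four additional paths per SMA.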
25. Hardware Comparison – Cache Memory. To improve access performance to cache memory (CM), the ST9985V raised the transfer rate of the memory path and advanced the memory control method, achieving twice the hardware performance of the Network Storage Controller Model ST9985.
44. The Competition: How They Stack Up! (see notes page for detailed annotations) The slide compares five migration offerings: HiCommand Tiered Storage Manager Migration Function, Softek TDMF, IBM SVC Migration, VERITAS VxVM, and EMC InVista Dynamic Volume Mobility, against these criteria: non-disruptive migration; storage controller based migration (based on virtualization architecture); tiered storage support (mgmt interface for data volume migration); integration with a configuration manager (supports deep selection criteria); application/host level support (required for deep selection criteria); extensive selection criteria (enabled by device mgmt. integration); leveraging a virtualization platform (required for efficient data volume migration); heterogeneous support; Open Systems support; mainframe support; and scalability.
On the previous Hitachi Universal Storage Platform™ model, all of the disks attached to a back-end director PCB were on eight shared loops. A transfer to the last disk on a loop (could be 32 or 48 disks per loop) would have to pass through bypass logic on all of the disks in front of it. The Universal Storage Platform VM introduces a switched back-end, where the drives are still on arbitrated loops (all Fibre Channel disks use Fibre Channel Arbitrated Loops (FC-AL, not FC-SW fabrics), but the switch logic bypasses all targets on that loop except for the one disk being addressed. This reduces the propagation time by some microseconds (not noticeable), but the primary effect of this change is improved accessibility to individual disks. Each HDU has four such FSW switches, two on the front side (32 disks) and two more on the back (32 disks). Each pair of switches is either an “upper” or a “lower” switch as seen in the figure above.
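The access-time difference described above can be illustrated with a toy latency model. This is a sketch only: the per-stage delay values are made-up illustrative numbers, not Hitachi specifications.

```python
# Toy model of FC-AL disk access: plain shared loop vs. switched loop (FSW).
# Delay constants are ASSUMED for illustration, not hardware specs.

BYPASS_DELAY_NS = 50   # assumed per-disk bypass-logic delay on a plain loop
SWITCH_DELAY_NS = 50   # assumed one-time delay through the FSW switch

def plain_loop_delay(target_index: int) -> int:
    """On a shared loop, a frame passes through the bypass logic of every
    disk ahead of the target, so delay grows with loop position."""
    return target_index * BYPASS_DELAY_NS

def switched_loop_delay(target_index: int) -> int:
    """With the FSW, all non-addressed targets are bypassed by the switch:
    one hop regardless of where the target sits on the loop."""
    return SWITCH_DELAY_NS

# Last disk on a 48-disk loop:
print(plain_loop_delay(47))     # grows linearly with loop position
print(switched_loop_delay(47))  # constant, a few microseconds saved
```

The saving is on the order of microseconds, matching the note above that the propagation-time gain itself is not noticeable; the real benefit is that each disk becomes individually addressable without contention from upstream bypass stages.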
On to customer successes...
Note: On the graphic above, CHA stands for channel adapter
NSC55: Cache memory (CM) is installed on the BASE PCB. It is used only for the rack model and is not compatible with the FC model.
• 4G CM: DIMMs with 512Mbit DDR-SDRAM mounted. CM capacity: min. 4GB – max. 32GB; addition unit: 4GB/4DIMM.
• 8G CM: DIMMs with 1Gbit DDR-SDRAM mounted. CM capacity: min. 8GB – max. 64GB; addition unit: 8GB/4DIMM.
USP VM: Cache memory (CM) is installed on the CMA PCB. The following two types of CM are supported:
• DKC-F610I-C4G: DIMMs with 512Mbit DDR2-SDRAM. CM capacity: min. 4GB – max. 32GB; addition unit: 4GB/4DIMM.
• DKC-D610I-C8G: DIMMs with 1Gbit DDR2-SDRAM. CM capacity: min. 8GB – max. 64GB; addition unit: 8GB/4DIMM.
• Memory intermingling: intermingling within a CMA PCB is not possible.
CM (CM-DIMM) is installed on the CMA PCB. One set (2 CMA PCBs per set) can be installed in the subsystem. Up to 16 CM-DIMMs can be mounted on one CMA PCB, and up to 32GB (16GB when using C4G) can be installed on one PCB. The CMA is not preinstalled in the DKC as a standard component, so it must be ordered separately.
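The capacity figures above can be cross-checked arithmetically. A minimal sketch, assuming the per-DIMM size is implied by the addition unit (8GB per 4 DIMMs means 2GB per DIMM; this is inferred, not stated directly):

```python
# Cross-check of the USP VM cache-memory figures above.
# Per-DIMM capacity is INFERRED from the "addition unit" spec.

DIMMS_PER_ADDITION = 4
C8G_ADDITION_GB = 8            # DKC-D610I-C8G (1Gbit DDR2-SDRAM DIMMs)
C4G_ADDITION_GB = 4            # DKC-F610I-C4G (512Mbit DDR2-SDRAM DIMMs)
DIMM_SLOTS_PER_CMA_PCB = 16    # "up to 16 CM-DIMM ... on 1 CMA PCB"

gb_per_c8g_dimm = C8G_ADDITION_GB / DIMMS_PER_ADDITION   # 2 GB per DIMM
gb_per_c4g_dimm = C4G_ADDITION_GB / DIMMS_PER_ADDITION   # 1 GB per DIMM

max_c8g_per_pcb = gb_per_c8g_dimm * DIMM_SLOTS_PER_CMA_PCB
max_c4g_per_pcb = gb_per_c4g_dimm * DIMM_SLOTS_PER_CMA_PCB

print(max_c8g_per_pcb, max_c4g_per_pcb)
```

The results agree with the stated per-PCB maximum of 32GB (16GB when using C4G), so the DIMM-count and addition-unit figures are mutually consistent.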
ST9985V delivers an updated storage platform that is a follow-on to the ST9985. Because it takes on some of the new technology within the ST9990V, it has been consolidated into the SUN ST9900 family.
Introduces support for 750GB SATA II HDDs as a new lower-cost storage tier, ideal for Tier 2 storage such as file (large) based data or short-term tape backup cache: 11.25TB raw storage in a single disk drive enclosure. SUN ST9900 Tiered Storage Manager takes advantage of the capacity, and ST9985V built-in RAID 6 ensures reliability.
4Gb/s FC 'end-to-end', with a switched FC architecture on the back-end: 15-20% faster than the ST9985 (4Gb/s architecture, CPU, etc.).
SUN ST9900 Dynamic Provisioning software ("thin provisioning") was introduced on the ST9990V in May. The ST9985V also provides thin provisioning of internal storage, and the ST9985V with SUN ST9900 Dynamic Provisioning introduces thin provisioning for external storage as well. SUN ST9900 is the first vendor to deliver this flexibility to maximize storage resources while simplifying storage management.
ST9985V also provides 'Day 1' support for VMware ESX on internal storage as well as virtualized external storage, for true 'end-to-end' IT virtualization. Again, SUN ST9900 is the first company to deliver this capability to help maximize resources. VMware ESX Server support combined with SUN ST9900 Dynamic Provisioning gives IT environments the ultimate flexibility to align resources to business needs.
Simplified software packaging is discussed in more detail shortly.
ST9985V configuration flexibility, a choice of either: a storage controller with high-performance storage for Tier 1 applications, or a storage controller only for virtualization services.
Replacement / follow-on to the ST9985.
All internal processors are 2x the performance of the USP. At the core of the USP V is the third-generation, non-blocking, switched architecture. The Universal Star Network is a fully fault-tolerant, high-performance, non-blocking, switched architecture. Each USP V model uses the same Universal Star Network architecture, but will have different performance characteristics based on the installed components.

The data cache system has the same path speeds and counts. The shared memory system has been significantly upgraded over the USP version, with 256 (up from 192) paths operating at 150MB/s (up from 83MB/s). On the USP, the CHA PCBs had 8 shared memory paths and the DKAs had 16; now, all PCBs have 16 shared memory paths. The cache memory contains only user data blocks, whereas the shared memory system holds all of the metadata about the internal array groups, LDEVs, external LDEVs, and runtime tables for various software products. There can be up to 256GB of cache and 32GB of shared memory.

BED Switched Loop Details: On the previous USP model, all of the disks attached to a BED PCB were on eight shared loops. A transfer to the last disk on a loop (which could hold 32 or 48 disks) would have to pass through bypass logic on all of the disks in front of it. The USP V introduces a switched back end, where the drives are still on arbitrated loops (all FC disks use FC-AL, not FC-SW fabrics), but the switch logic bypasses all targets on that loop except for the one disk being addressed. This reduces the propagation time by some microseconds (not noticeable), but the primary effect of this change is improved accessibility to individual disks. Each HDU has four such FSW switches, two on the front side (32 disks) and two more on the back (32 disks). Each pair of switches is either an "upper" or a "lower" switch as seen in Figure 21.
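The shared-memory upgrade described above can be quantified. A sketch, under the simplifying assumption that aggregate bandwidth is just path count times per-path rate (real sustained figures would be lower):

```python
# Aggregate shared-memory bandwidth, USP vs. USP V, assuming
# aggregate = paths * per-path rate (a simplification, not a spec).

usp_paths, usp_mb_s   = 192, 83    # USP:   192 paths at 83 MB/s
uspv_paths, uspv_mb_s = 256, 150   # USP V: 256 paths at 150 MB/s

usp_total  = usp_paths * usp_mb_s      # aggregate MB/s on the USP
uspv_total = uspv_paths * uspv_mb_s    # aggregate MB/s on the USP V

print(usp_total, uspv_total, round(uspv_total / usp_total, 1))
```

Under this model the shared-memory system gains roughly 2.4x aggregate bandwidth, from both the extra paths and the faster per-path rate.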
These are very recent analyst quotes from Bob Passmore of Gartner Group; he can no longer ignore Hitachi's storage virtualization solutions. Passmore even states that Hitachi has "no competition".
Self-explanatory.
TDMF is non-disruptive for mainframe only.
VxVM requires additional paths and HBAs for a host-level mirror if performance or redundancy is not to be impacted (most of the time).
TDMF is a host-based software migration solution; IBM SVC and EMC InVista are appliance/switch-based migration solutions; VxVM is an LVM host-based solution.
HiCommand Tiered Storage Manager offers the deepest selection criteria and a tiered storage user interface.
IBM Multiple Device Manager and the SVC GUI provide a user interface; however, data migration is enabled by CLI only at this time.
HiCommand Tiered Storage Manager is integrated with the HiCommand Device Manager database.
IBM Multiple Device Manager supports virtualization (SVC), device management (array config), and replication (FC, PPRC, etc.).
HiCommand Tiered Storage Manager supports deep selection criteria, one of which is LDEVs by host/application.
TDMF provides an ISPF dialog interface and extensive selection criteria (i.e. VOLSER, UCB device addr, etc.).
IBM Multiple Device Manager and the SVC GUI support visualization of the host; however, data migration is enabled by CLI only.
VxVM provides a single host-based interface only.
HiCommand Tiered Storage Manager offers the most extensive selection criteria, enabled by the base USP/UVM/HDvM architecture.
TDMF scales for mainframe only, as TDMF Open Systems Edition scales only at the host level, TDMF being host-based.
HiCommand Tiered Storage Manager does not support mainframe environments today; Volume Migration does, however.