IBM Global Technology Services
December 2007




Softek TDMF z/OS extended distance migration.
For a data center relocation or consolidation




Contents
Introduction
TDMF z/OS capabilities
Provision of “guaranteed” data integrity
TDMF extended distance data migration
A typical data center relocation
Alternate method — a combination of local and remote migrations
Alternate method — use of TDMF TCP/IP option
Alternate method — use LDMF software to isolate an application for relocation
Other Softek TDMF solutions
Other Softek migration solutions
TDMF definitions
Summary

Introduction
The purpose of this paper is to give a brief overview of Softek™ TDMF™ (Transparent Data Migration Facility) z/OS®, outline some of the benefits a client may derive from using the product and show how, when combined with available channel extension technology, TDMF may be used for migrating data over virtually unlimited distances.

In April of 1997, Softek announced the availability of a software-based data migration solution called TDMF that allows users to transparently migrate data in an IBM Multiple Virtual Storage (MVS) environment to a designated target disk from a source disk that is in use by other applications. With TDMF software, two types of migrations may be performed.

The first type is a swap, in which the VOLSER initially associated with the source disk is dynamically switched to the target disk. Applications using this VOLSER may be left running throughout the migration and, upon completion of the swap, are unaware that the data they are accessing is located on a different physical disk.

The second type of migration is a point-in-time migration. In this scenario, TDMF software may be used to create a copy of a source volume onto a target disk while the source disk is still in use by other applications. A point-in-time migration does not perform dynamic VOLSER switching. Upon completion of the point-in-time copy, applications continue to access data from the initial source disk.




The relocation of workloads to a new location involves a disruption in service to end users. To reduce the impact on business, the operational switch from one data center to the other must take place in the shortest possible time with minimal outage to revenue-generating applications. The desired reduced impact cannot be accomplished if operational data has to be backed up to tape, physically transported and then restored onto the remote system at the alternate location.

Highlights: When using the right software, data transfer from the local to the remote site can take place without disruption during normal system operations.

Using TDMF software, the service disruption is limited to the time required to close the application on the local site, switch network access from the local to the remote site and then re-IPL (initial program load) the system on the remote site. Data transfer from the local to the remote site can take place without disruption during normal system operations.

Highlights: When migrating data with TDMF software, the master LPAR does the actual migration of data from a source to a target disk.

To migrate data using TDMF software, it is necessary to establish a session. A TDMF session may be created by running a copy of TDMF software in each of the logical partitions (LPARs) that have access to the source and target Direct Access Storage Device (DASD) subsystems involved in the migrations. If multiple LPARs are involved, TDMF software in one of the LPARs will be designated as the master and the other LPARs will be designated as agents. The master and its associated agents communicate and coordinate migration activities using a common SYSCOM data set. The master does the actual migration of data from a source to a target disk. If only one LPAR has access to the source and target volumes, it runs as a master with no associated agents. A session consists of one master and up to 31 agents, each running in a different LPAR and all sharing the same SYSCOM data set. An interactive system productivity facility (ISPF)-based TDMF monitor may be brought up to view and communicate with TDMF sessions in progress. The monitor may also be used to look at performance data from both current and past sessions via the SYSCOM data set.
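To make the session structure concrete, the following sketch models the master/agent relationship described above in Python. It is an illustrative model only, not TDMF code; the class and field names are hypothetical, while the one-master, up-to-31-agents, shared-SYSCOM rule comes from the text.

```python
# Illustrative model of a TDMF session; class and field names are hypothetical, not TDMF code.
from dataclasses import dataclass, field
from typing import List

MAX_AGENTS = 31  # a session consists of one master and up to 31 agents

@dataclass
class TdmfSession:
    master_lpar: str        # LPAR running the master, which performs the actual data movement
    syscom_dataset: str     # common SYSCOM data set shared by the master and its agents
    agent_lpars: List[str] = field(default_factory=list)

    def add_agent(self, lpar: str) -> None:
        """Register an agent LPAR that also has access to the source and target volumes."""
        if len(self.agent_lpars) >= MAX_AGENTS:
            raise ValueError("a session supports one master and at most 31 agents")
        self.agent_lpars.append(lpar)

# A single-LPAR session runs as a master with no agents.
single = TdmfSession(master_lpar="LPAR1", syscom_dataset="TDMF.SYSCOM")

# A multi-LPAR session: one master plus agents, all sharing the same SYSCOM data set.
multi = TdmfSession(master_lpar="LPAR2", syscom_dataset="TDMF.SYSCOM")
multi.add_agent("LPAR1")
```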




TDMF capabilities

Highlights: Softek TDMF software is simple to use, enables access to data during migration and is not vendor specific.

Design
Softek TDMF z/OS was designed as a fully integrated product that is simple to install, simple to use and simple to maintain.

Transparency
One of the most important requirements of data processing today is continuous availability of data. Traditionally, one of the downsides of the backup or migration of data has been that it requires an interruption of the data’s availability to the end user. TDMF software’s ability to transfer data transparently can greatly reduce and, in many cases, eliminate periods of data unavailability. This results in increased productivity and reduced operational costs. In addition to its transparent use, TDMF software may be installed dynamically, requiring no IPLs.

Non-vendor specific
Because TDMF software is completely MVS-based, it does not require special microcode or hardware features in the DASD control unit. This means that it may be used in a multivendor environment. TDMF software supports all extended count key data (ECKD) control units. Device support includes standard size 3380 and 3390 images as well as hyper volumes.

Tuning parameters
To adjust the performance impact on applications during a migration, TDMF software provides the ability to control the migration rate using the following parameters (a brief sketch follows the list):

• FASTCOPY — copies only allocated tracks on a volume
• FULLSPEED — doubles buffer I/O blocks, nearly doubling TDMF performance
• CONCURRENT — establishes the number of concurrent volume migrations




• SYNCgoal — in a swap migration, establishes how long the swap is allowed to take
• PACING — establishes dynamic or fixed, increased or decreased size of the I/O TDMF transfers
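The sketch below gathers the tuning parameters listed above into a simple Python structure, with the documented meaning of each noted as a comment. It is only an illustration of how the knobs relate to one another; the field names mirror the parameter names, but the default values and the structure itself are hypothetical, not actual TDMF control statements.

```python
# Hypothetical illustration of the tuning parameters above; values and syntax are not
# actual TDMF control statements.
from dataclasses import dataclass

@dataclass
class MigrationTuning:
    fastcopy: bool = True       # FASTCOPY: copy only the allocated tracks on a volume
    fullspeed: bool = False     # FULLSPEED: double buffer I/O blocks, nearly doubling performance
    concurrent: int = 4         # CONCURRENT: number of concurrent volume migrations
    sync_goal_seconds: int = 5  # SYNCGOAL: how long a swap is allowed to take (swap migrations)
    pacing: str = "DYNAMIC"     # PACING: dynamic or fixed sizing of the I/O TDMF transfers

# Example: throttle a session that shares extended-distance links with production work.
gentle = MigrationTuning(concurrent=2, fullspeed=False, pacing="DYNAMIC")
print(gentle)
```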

Highlights: TDMF software provides the ability to control the data migration rate.

Migration groups
TDMF software supports the definition and concurrent migration of online volumes. Because migration projects may involve several hundred volumes supporting a range of application types, TDMF software provides the ability to logically group volumes for efficient operational control.

Many group migration parameters can be independently configured and controlled to best suit the business applications supported.

Highlights: TDMF software enables logical grouping of volumes for better operational control. And it provides a view into the entire migration process.

Monitoring features
TDMF software provides a Time Sharing Option (TSO) ISPF monitor for managing and viewing a migration process from start to finish. Statistical information includes details such as elapsed time, copy rate, percentage complete and so forth.

Shared DASD
TDMF software works in a shared DASD environment. The DASD can be shared by individual LPARs running on multiple physical CPUs, shared within a sysplex or shared across multiple sysplexes.

Provision of “guaranteed” data integrity
Softek TDMF software was designed to maintain physical data integrity. Source volumes remain untouched by the Softek TDMF session and current data is always available up to the point of the workload transfer when a swap migration is initiated. For a point-in-time copy, the source volume is never touched.




Highlights: The integrity of the data on the source volumes can be maintained even in the rare event of an aborted migration.

Using the DYNAMIC SUSPEND feature of Softek TDMF software, a migration may be temporarily stopped in the event of a hardware or network failure. The Softek TDMF software will continue to monitor the source volume for updates until the migration can be continued from where it was suspended. If, for whatever reason, the migration is aborted or terminated, the integrity of the data on the source volume(s) is maintained. Unlike other migration tools, Softek TDMF software allows a migration used to transfer workloads to be terminated at any time.

TDMF extended distance data migration
TDMF software may be used with existing channel extension technology to deliver the product’s benefits between data centers separated by thousands of miles. Common uses include remote backup, disaster recovery and data center relocations. Essentially, the use of TDMF software remains unchanged, with the exception that some kind of extended distance network connection is set up between the data centers containing the source and target DASD subsystems involved in the migrations. The connection may consist of one or more point-to-point leased lines, such as DS-3 (T3) links, or may be achieved by connecting to a tariff-based packet switching network such as an asynchronous transfer mode (ATM) “cloud.”

In addition to channel extenders or tariff-based packet switching, the latest release of TDMF software also supports the migration of data via TCP/IP. This white paper does not cover data movement over TCP/IP.

TDMF software can be set up to perform either a push or a pull operation. The terms “push” and “pull” define the direction data flows across the network with respect to the TDMF master application.




In a push operation, the TDMF master writes data across the network out to the target subsystem(s) residing in the remote data center. In this environment, the master resides on an LPAR in the local data center along with the source subsystem(s).

In a pull operation, the TDMF master resides on an LPAR running at the remote data center and reads data across the network from the source subsystem(s) residing in the local data center.
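As a small illustration of this terminology, the hypothetical helper below (not part of TDMF) derives the direction of data flow from where the master runs: a master co-located with the source pushes data out to the remote target, while a master at the remote site pulls data in from the local source.

```python
# Hypothetical helper illustrating the push/pull terminology; not part of TDMF.
def migration_direction(master_site: str, source_site: str) -> str:
    """'push' when the master runs alongside the source subsystem and writes out to the
    remote target; 'pull' when the master runs at the remote site and reads the source."""
    return "push" if master_site == source_site else "pull"

print(migration_direction(master_site="local", source_site="local"))   # push (Diagram 1)
print(migration_direction(master_site="remote", source_site="local"))  # pull (Diagram 2)
```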

Highlights: The use of TDMF software remains essentially unchanged by distance.

Diagram 1 illustrates an example of a push operation for a single LPAR environment existing in the local data center. The extended distance link used is a DS-3 (T3) point-to-point leased line. A push operation is useful when target DASD subsystems are running in a remote data center that does not yet have an operational MVS LPAR.

Highlights: Softek can be set up to either push or pull data across the network. When pushing, the master resides locally with a source subsystem.

[Diagram 1: Single LPAR “PUSH” environment. The TDMF master in LPAR “1” at the local data center, which also holds the SYSCOM data set, migrates data from the local source subsystem to the target subsystem in the remote data center through channel extenders connected by a DS-3 (T3) point-to-point leased line.]




Highlights: When pulling, the master resides in the remote data center.

Diagram 2 illustrates an example of a pull operation for a multiple LPAR environment. The extended distance link used is a DS-3 (T3) point-to-point leased line. In this example, the local and the remote data centers each have an operational MVS LPAR.

[Diagram 2: Multiple LPAR “PULL” environment. LPAR “1” in the local data center runs a TDMF agent and LPAR “2” in the remote data center runs the TDMF master; data is migrated from the local source subsystem to the remote target subsystem through channel extenders connected by a DS-3 (T3) point-to-point leased line, with each LPAR having channel paths to both subsystems.]




Highlights: When relocating a data center, certain operations can reduce both the time and cost of relocation.

When relocating from an existing local data center with an established workload to a new remote data center being brought up for the first time, certain things can be done to reduce both the time and the cost of the relocation. Setting up a pull operation means the TDMF master can take full advantage of the CPU resources installed in the remote data center without impacting the application workload still running in the local data center. This will reduce the amount of time required to perform the relocation.

If the relocation is coordinated by application, it is possible to reduce the network bandwidth needed to move the data. When point-in-time migrations are performed, genning and connecting the local LPAR(s) to the remote target storage (CHPIDs 20 and 21) is not required. The removal of this requirement can result in less traffic between the data centers, further reducing the network bandwidth needed to migrate the data.

Note: Although not recommended, for swap-type migrations each of the target subsystems must be genned and connected to each of the agent LPARs involved in a session.

Diagram 3 shows a pull operation involving multiple source and target subsystems connected to a pair of DS-3 (T3) links. In this example, the migration will be coordinated by application to reduce the number of DS-3 links required. Volumes related by application will be moved as a group migration, requiring a prompt to signal TDMF software of the point-in-time.




Highlights: Softek simplifies relocations, which can result in less traffic between data centers. This can reduce the network bandwidth needed for migrating data.

[Diagram 3: Multiple LPAR “PULL” environment (group “point-in-time” backup with prompt, coordinated by application). Applications A, B and C run alongside the TDMF agent in LPAR “1” at the local data center, while LPAR “2” at the remote data center runs the TDMF master with slots A, B and C reserved for the relocated applications. Each application’s volumes on source subsystems S1 and S2 are migrated to the corresponding target subsystems over two high-speed DS-3 (T3) links with compression.]




Highlights: Group point-in-time migration begins with source volumes used by the first application.

With applications APPL A, APPL B, and APPL C running on LPAR 1 in the local data center, a group point-in-time migration session will be started for the source volumes used by APPL A (depicted as A in source subsystems S1 and S2) out to the target volumes chosen to receive them (shown as A on subsystems T1 and T2). When the master reaches the point where it is ready to end the group A migrations, it will issue a prompt to the TDMF monitor. At this point, the user would bring down APPL A on LPAR 1, then respond to the prompt to complete the migrations for all of the group A volumes.

Highlights: Re-clipping target volumes to the original VOLSERs and varying the source and target volumes are necessary before moving to the next application for migration.

Before bringing up APPL A in SLOT A of LPAR 2, it will be necessary to vary the source volumes offline to LPAR 2, re-clip the target volumes to the original source VOLSERs, vary the target volumes online to LPAR 2 and catalog data sets on the target volumes as needed. The same procedure would be repeated for migrating APPL B and APPL C. Since Diagram 3 is set up for point-in-time migrations, it is not necessary to gen or connect the target subsystems (T1 and T2) in the remote data center to LPAR 1 running the TDMF agent in the local data center.

Highlights: Using TDMF swap migrations can eliminate offline/online and re-clip procedures of point-in-time procedures.

If the extended distance network is set up so that source and target volumes are accessible to hosts running in both data centers, as in Diagram 2, the relocation of volumes coordinated by application could be carried out using TDMF swap migrations. This would eliminate the vary offline/online and re-clip steps described for the point-in-time based procedure, since the target volumes would already be online with the correct VOLSERs to the remote host following completion of the swap migrations. Catalog issues for LPAR(s) in the remote data center would still need to be addressed.




Highlights: With TDMF and the right circumstances, it’s also possible to migrate database volumes with no further application interruptions.

If the application’s load libraries were copied prior to moving its database volumes, and the appropriate VTAM and/or TCP/IP network(s) were in place, it would be possible to temporarily run the application out of the remote data center accessing database volumes in the local data center and then migrate the database volumes over to the remote data center with no further interruption to the application.

Highlights: Swap migrations over channel extenders can degrade performance, so point-in-time migrations are recommended. Channel extenders compress data flowing across a link.

It must be noted that, when doing a swap migration over channel extenders, the user could be exposed to server performance degradation after the swap of a volume. It is for this reason that Softek always recommends doing point-in-time migrations when channel extenders are involved.

A DS-3 (T3) link has a raw (actual) bandwidth of 44.736 Mbps (megabits per second). The DS-3 rate divided by eight yields approximately 5.5 MBps (megabytes per second), as compared to the effective bandwidth of 17.5 MBps for ESCON channels. The channel extender boxes attached to either end of a link are able to compress the data flowing across the link. The compression ratio achieved is dependent upon the data patterns being moved and will determine the effective bandwidth of the link. Effective bandwidths yielded through compression are typically two to three times the raw bandwidth of a link. For DASD applications, it is usually safer to use a 2-to-1 compression ratio for estimating throughput.

The key to efficiently migrating data across high speed DS-3 links is setting up an environment that allows the capacity of the link(s) to be saturated as nearly as possible. The percent utilization of a link’s capacity can be monitored via the channel extender or derived by dividing the actual data rate across the link by the link’s effective bandwidth (taking compression into account).




Highlights: To efficiently migrate data across high-speed DS-3 links, you need to use 90 to 95 percent of a link’s capacity.

In a favorable environment, between 90 and 95 percent of a link’s capacity can be utilized. If the links depicted in Diagram 3 could be made to sustain a 91 percent average utilization factor, the actual data rate across both links could be estimated as follows (the sketch after the list generalizes this calculation):

• Link utilization factor = 0.91
• Raw bandwidth/DS-3 link = 5.5 MBps/link
• Compression ratio = 2/1
• Number of DS-3 links = 2
• 0.91 x (5.5 MBps/link) x (2/1) x (2 links) = 20 MBps
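The arithmetic above can be generalized. The short sketch below, an estimate only that carries over the paper's 2:1 compression and 91 percent utilization assumptions, reproduces the 20 MBps figure and can be rerun with other link counts or utilization factors.

```python
# Rough effective-throughput estimate for channel-extended links (an estimate, not a guarantee).
def effective_mbytes_per_second(raw_mbytes_per_link: float,
                                links: int,
                                compression_ratio: float = 2.0,
                                utilization: float = 0.91) -> float:
    """Sustained MBps across all links, given compression and average link utilization."""
    return utilization * raw_mbytes_per_link * compression_ratio * links

# Two DS-3 (T3) links at roughly 5.5 MBps raw, 2:1 compression, 91 percent utilization.
print(round(effective_mbytes_per_second(5.5, links=2), 1))  # ~20 MBps
```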

Many factors can cause the percent utilization of a link’s capacity to change over time. By monitoring the link(s), it is possible to tune the migrations being performed to help sustain a high percentage of link utilization over the course of one or more migration sessions. Leased point-to-point lines such as DS-3 circuits are most cost-effective when they are being utilized at near capacity. The user pays the same amount to lease the line whether it is actually in use or not. These circuits are ideal in situations such as data center relocations, where large amounts of data are being moved within a fixed period of time.

Highlights: Multiplexed T1 circuits may be used for smaller data migrations. Additional hardware may be needed if data is compressed over T1 links.

In situations where smaller amounts of data are being migrated, multiplexed T1 circuits may be used. If data compression is to occur in the channel extenders for data being sent across T1 links, additional hardware in the form of inverse multiplexors may be required. The raw bandwidth of a T1 link is 1.544 Mbps. Approximately 28 T1 links equal the raw bandwidth of a single DS-3 (T3) circuit (44.736 Mbps).




Diagram 4 shows the use of n T1 circuits being multiplexed into a high-speed serial interface (HSSI) for the purpose of transporting compressed data packets. Compression ratio estimates for T1 and T3 circuits carrying DASD data are comparable (typically 2:1). If compression is not required, the inverse multiplexors are not needed, and the T1 circuits may be connected to the channel extenders via modem using a V.35 interface.

Highlights: Diagram 4 shows compressed data packets that are being transported via a multiplexed high-speed serial interface.

[Diagram 4: Multiple LPAR “PULL” environment (multiplexed “T1” links using data compression). The TDMF agent in LPAR “1” at the local data center and the TDMF master in LPAR “2” at the remote data center are connected through channel extenders and inverse multiplexors; n T1 point-to-point leased lines feed an HSSI port on each side to carry the migrated data from the source subsystem to the target subsystem.]




Diagram 5 illustrates a solution for users who have the need to migrate varying amounts of data over extended distances on an ongoing basis. This may be achieved by connecting the data centers involved to a tariff-based packet switching network referred to as an ATM “cloud.” The cost of migrating data depends on the number and size of data packets sent. During periods of little or no migration activity, the user does not incur the high cost of leased line(s). Another advantage of the ATM cloud network is that it can be used to connect more than two data centers together.

Highlights: Diagram 5 depicts an ATM cloud network that can be useful for users who need to migrate varying amounts of data over distances on an ongoing basis.

[Diagram 5: Multiple LPAR “PULL” environment (ATM “cloud” packet switching network). The TDMF agent in LPAR “1” at the local data center and the TDMF master in LPAR “2” at the remote data center are connected through channel extenders to an ATM packet switching network, which carries the migrated data from the source subsystem to the target subsystem.]




ATM clouds may be implemented on SONET (Synchronous Optical NETwork) fiber networks capable of running at 155 Mbps. The throughput yielded from connecting to an ATM network will depend on the backplane capacity of the channel extenders being used. User data throughput across an ATM cloud implemented on a SONET fiber network typically ranges from 80 Mbps to 135 Mbps (10 MBps to 17 MBps). Consult the channel extender vendor for data rate(s) supported by the channel extender equipment to be used.

Sample estimate timings

Highlights: Achieving a 2:1 compression ratio with 85 percent efficiency doubles the gigabytes transferred.

Table 1 shows the number of gigabytes that may be moved with no compression (1:1 ratio) and an efficiency rate of 85 percent.

Bandwidth   Number of links   Gigabytes/1 hr   Gigabytes/12 hrs   Gigabytes/24 hrs
E3          1                 13               156                312
E3          2                 22               265                530
E3          3                 33               397                795
E3          4                 44               530                1061
DS3         1                 16               210                403
DS3         2                 28               343                686
DS3         3                 42               515                1030
DS3         4                 57               686                1373
OC3         1                 59               711                1422
OC3         2                 118              1422               2845
OC3         3                 177              2134               4268
OC3         4                 237              2845               5691

Please note that DS3 is the formatted digital signal used on T3 lines.




If it is possible to achieve a 2:1 compression ratio with 85 percent efficiency, then the gigabytes transferred in the table above can be doubled.

                                           For specific estimates on the transfer rate that can be achieved in an environ-
                                           ment using channel extenders, please contact the hardware vendor.
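The single-link rows of Table 1 can be approximated with the short sketch below. It is an estimate only: the raw circuit bandwidths are standard figures assumed here rather than taken from the paper, and the table's multi-link entries for E3 and DS3 are somewhat lower than a straight multiple of the single-link values, so treat the output as a rough upper bound rather than a reproduction of the table.

```python
# Rough single-link estimate of gigabytes moved at 85 percent efficiency and 1:1 compression.
# The raw circuit bandwidths below are standard figures (assumed, not quoted from the paper);
# the published table applies its own rounding and overhead, so results differ slightly.
RAW_MBPS = {"E3": 34.368, "DS3": 44.736, "OC3": 155.52}

def gigabytes_moved(circuit: str, hours: float, links: int = 1,
                    efficiency: float = 0.85, compression: float = 1.0) -> float:
    mbytes_per_second = RAW_MBPS[circuit] * efficiency * compression / 8
    return mbytes_per_second * 3600 * hours * links / 1000  # decimal gigabytes

for circuit in ("E3", "DS3", "OC3"):
    print(circuit,
          round(gigabytes_moved(circuit, hours=1)), "GB in 1 hr,",
          round(gigabytes_moved(circuit, hours=24)), "GB in 24 hrs")
# Passing compression=2.0 doubles the estimates, matching the 2:1 note above.
```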

A typical data center relocation

Highlights: This extended distance migration example allows relocation with minimal downtime.

The following is an extended distance data migration scenario using TDMF software. It would allow the user to relocate a data center and minimize downtime to the amount of time required to bring down LPAR(s) operating in the local (originating) data center, switch over the network and IPL LPAR(s) in the remote (destination) data center.

The actual migration of data would be performed while the user’s production workload is running. The key to this relocation technique is providing sufficient extended channel bandwidth to enable the user to run production applications in the remote data center that temporarily access data residing in the local data center.

Highlights: The TDMF master would be run in the remote data center. The Stage 1 environment can be set up without disrupting data processing at the local center.

Essentially, the user’s data would be migrated in two stages:

• Stage 1 would be to migrate static data (system packs and application loadlibs and parameters) using TDMF software’s point-in-time option.
• Stage 2 would be to migrate dynamic data (application data) using TDMF software’s swap option.

The TDMF master would be run in the remote data center. The reason for this is that point-in-time migrations do not require target DASD subsystems to be genned to LPARs running agent copies of TDMF software.




This allows the environment for Stage 1 of the relocation to be set up without disrupting data processing in the local data center, because a new gen would not be required for the production LPAR(s).

Outlined below are the steps required to perform the data center relocation:

Highlights: The first four steps for performing a data center relocation include setting up the remote data center environment, performing a freeze on static data in the local center, performing Stage 1 of the relocation, and preparing the production environment in the remote data center.

1. Set up the initial remote data center environment. This includes:
• Creating a skeleton MVS in the remote data center capable of running the TDMF master.
• Installing target DASD subsystems in the remote data center intended to receive the data.
• Genning source DASD in the local data center and target DASD in the remote data center to the skeleton MVS.
• Establishing extended channel connectivity between the skeleton MVS in the remote data center and the source DASD subsystems in the local data center.

2. Perform a freeze on update activity to static data in the local data center that will be needed to set up a production environment in the remote data center. This data includes system packs as well as volumes containing application loadlibs and parameters.

3. Perform Stage 1 of the relocation (use TDMF software’s point-in-time option to migrate static data from the local (original) data center to the remote data center).

4. Prepare a production environment in the remote (destination) data center using the static data migrated in step 3. This includes:
• Setting up system definitions for production LPAR(s) to run in the remote data center. Cataloging source VOLSERs that will be required by applications that are to run in remote data center LPAR(s).




• Testing the functionality of applications brought over from the local data center.
• Adding the target DASD subsystems to the I/O gen(s) of production LPAR(s) intended to run in the remote (destination) data center.
• Setting up network access to applications to be brought up in the remote data center.

Highlights: The last two steps include bringing down production LPAR(s) in the local data center and performing Stage 2 of the relocation using TDMF software’s swap option.

5. Bring down production LPAR(s) in the local (original) data center, switch over the network, and IPL production LPAR(s) in the remote (destination) data center. Note: At this point, the user would be running production applications in the remote data center accessing “dynamic” data across extended channel connections in the local (original) data center. It is important that the user has set up sufficient extended channel bandwidth to prevent a bottleneck between data centers.

6. Perform Stage 2 of the relocation [use TDMF software’s swap option to migrate dynamic application data from source DASD subsystems in the local (original) data center to target DASD subsystems in the remote (destination) data center]. Because these migrations are transparent to applications using the data, they may be broken up into smaller groups of volumes to reduce the use of extended channel bandwidth being shared with other applications. When all of the data has migrated, extended channel connections to the local (original) data center may be removed, thus completing the data center relocation.

Data center relocation considerations

Highlights: Watch the network requirements between the two locations and keep the communication links to a minimum of T3/E3 links.

Below are some considerations to keep in mind when planning a data center relocation using TDMF software:

1. Network requirements between the current and new location.




• There are really only a few players in the channel extension business today. The average lead time for building the extenders is approximately two months, based on requirements. The same channel extenders do not support parallel channels. Usually the minimum contract is two months if the user is planning on leasing.
• Communication links should be, at the very minimum, T3/E3 links. T1 links are too slow. Using existing networks within the user’s environment is not recommended, as the traffic could impact production applications. There could be a lead time associated with the lease of these links. Additionally, the term of the lease could come into play.

Highlights: Ensure that you have sufficient CPU memory for Softek TDMF point-in-time copies.

2. CPU memory requirements to complete the Softek TDMF point-in-time copies to the new location.
• The total amount of memory necessary for a migration depends on the number of LPARs participating in the sessions and the number of volumes to be moved. The intent is to move the largest number of volumes using the least amount of memory. To this end, it is important to consider whether or not any LPARs (such as test LPARs) might be shut down during the migration. Also, it is suggested that a review of all volumes be done to identify any volumes that need not be moved, such as page packs, work packs, etc.
• It may be that there is not enough memory available in the production environment. Having the test LPAR(s) shut down would make that storage potentially available, either through DRM if all storage is dynamically re-configurable, or a power-on reset (POR) may be necessary in order to access that storage.
• If the total memory requirement is more than any one LPAR can handle, the load can be balanced across the participating LPARs so that no one partition is saturated. This would result in more than one LPAR running master sessions in a push scenario, as laid out in a worksheet.

     Make certain to use the point-in-time function rather than the
     swap function.

                                        3. Swap migration versus point-in-time copy.
                                        •  It is recommended that the point-in-time copy function be used rather than
                                           the swap function. The point-in-time function provides a fallback position
                                           in the event of a problem. Additionally, it provides the ability to ensure
                                           that system and application features and functions work at the new site
                                           without impacting production at the current location (assuming net-new or
                                           asset-swap equipment).

     When a long-term session occurs, keep in mind that there is no function
     within the product to pick up where it left off.

                                        4. Long-term sessions.
                                        •  Determining the duration of the migration is dependent on the bandwidth
                                           available on the links, the channel configuration over the channel extenders
                                           and the amount of data being transferred. Using Softek TDMF software
                                           will result in only allocated tracks being transferred; that is, if only 70
                                           percent of a volume is allocated, then only 70 percent of the volume needs
                                           to be migrated. (A rough sizing sketch follows this list of considerations.)
                                        •  Keep in mind that during the data migration, Softek TDMF software will
                                           not survive an IPL of one of the participating systems; all sessions
                                           affected would have to be started from the beginning. There is no function
                                           within the product to pick up where it was left off.

     Determine whether you want a push or a pull process.

                                        5. Push versus pull process.
                                        •  A push operation is where all the sessions are at the current location and
                                           the master sessions are pushing the data to the new location. No other
                                           LPARs participate outside of the current location.
                                           –  The benefit of a push operation is that there are no extra systems to be
                                              included in a session.
                                           –  The target devices would be defined via channel extension to the LPAR(s)
                                              executing the master sessions.
                                           –  Agent sessions would be run on all LPARs participating in the migration.
                                           –  If the communications link(s) fail for whatever reason, the sessions may be
                                              suspended and then continued when the links are reestablished.

                                        •  A pull operation is where the master session(s) are executed in the new
                                           location, and the LPARs at the current location participate as agent systems.
                                           –  This assumes that there are new mainframe server(s) at the new location
                                              with nothing executing on them outside of the LPARs executing the Softek
                                              TDMF sessions and, therefore, there would be sufficient memory available
                                              for the master sessions.
                                           –  The benefit would be that the current production LPARs would only be
                                              acting as agent systems, which would significantly reduce the memory
                                              requirements on these LPARs.
                                           –  The target devices would be defined only to the new LPARs. The source
                                              devices would be defined via channel extension to this environment. The
                                              agent systems are not required to have the target volume defined in a
                                              point-in-time situation.
                                           –  If the communication link(s) fail, a 15-minute limitation would come into
                                              play, as the master system(s) would not be able to communicate with the
                                              communications data set (COMMDS). If the link(s) have not been
                                              reestablished within the 15-minute limit, the sessions will fail.
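The sizing sketch referred to in consideration 4 above is shown here. It is a minimal back-of-the-envelope estimate of the copy phase for different link types, since only allocated tracks are transferred; the installed capacity, allocation percentage, number of parallel links, 60 percent effective utilization and nominal line rates are illustrative assumptions, not TDMF measurements.

# Rough sizing sketch only: the capacity, allocation percentage, utilization
# factor and line rates below are illustrative assumptions, not TDMF figures.

LINK_MBPS = {"T1": 1.544, "E3": 34.368, "T3": 44.736}   # nominal line rates

def copy_days(installed_tb, allocated_pct, link_mbps, parallel_links=1,
              utilization=0.6):
    """Days to move the allocated portion of the DASD farm over the link(s)."""
    bits_to_move = installed_tb * 1e12 * 8 * allocated_pct
    effective_bps = link_mbps * 1e6 * parallel_links * utilization
    return bits_to_move / effective_bps / 86_400

for name, mbps in LINK_MBPS.items():
    days = copy_days(installed_tb=10, allocated_pct=0.70, link_mbps=mbps)
    print(f"{name}: about {days:,.0f} days for 10 TB installed, 70% allocated")

Under these assumptions, a single T3 needs on the order of three to four weeks for 10 TB at 70 percent allocation, an E3 slightly longer, and a T1 the better part of two years, which is why T3/E3 links are treated as the practical floor; parallel links and lower allocation shift the numbers proportionally.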
     A multi-hop migration with loaner storage equipment may be more costly
     but can allow users to use a vendor's remote copy facility. Benefits of
     multi-hop migration include exercising the new environment prior to
     cutting over to it.

                                        Alternate method — a combination of local and remote migrations
                                        Although this may be a more costly solution, organizations have used a
                                        combination of TDMF local swap migrations along with hardware controller-based
                                        remote replication to accomplish a data center relocation. Some have referred to
                                        this as a multi-hop migration using loaner equipment (storage) at the current site.

                                        The basic reason for using TDMF software in this scenario is to place the data
                                        on a storage subsystem that is compatible with a remote storage subsystem.
                                        Users now have the ability to use a vendor's remote copy facility such as IBM's
                                        Peer-to-Peer Remote Copy (PPRC), as shown in Diagram 6.


                                                                 MVS/TDMF                              MVS

                                                                 Local TDMF              Remote PPRC

                                                      Vendor B                Vendor A                 Vendor A




                                                                  Dallas                               Chicago



                                        There are some advantages with this type of migration. Assuming that the new
                                        storage is identical at both the current and new locations, there is an oppor-
                                        tunity to exercise the new storage with the production environment prior to
                                        cutting over to the new remote site. That is, do a TDMF swap migration to the
                                        new local disk and allow PPRC to do the remote copy to the new location. After
                                        the local migration is complete, the user can continue to run production while
                                        monitoring the new subsystem at the current site for both functionality and per-
                                        formance. When satisfied that everything is working fine, the user can schedule
                                        a shutdown at the local site and bring up production at the new remote site.

     Another variation might use TDMF software's perpetual point-in-time option.

                                        A variation of the above could be to use TDMF software's perpetual point-
                                        in-time option. In this scenario, it is possible to create a local replication of
                                        the production DASD using perpetual point-in-time and allow PPRC to copy
                                        the data to the remote site. At some point, a controlled point-in-time mirror
                                        image of the production environment can be created at the new location. Stop
                                        the PPRC session and test the relocation procedure at the remote site. When
                                        satisfied that everything will work, start a fresh PPRC session and create a
                                        new point-in-time mirror image for the final migration. Using perpetual point-
                                        in-time, it is not necessary to recopy all the DASD from the start but just copy
                                        the delta changes since the last perpetual point-in-time copy was created. This
                                        testing can be repeated an unlimited number of times.
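The appeal of the perpetual point-in-time approach is that each test cycle only re-sends changed data. The sketch below compares a full recopy with a delta-only refresh; the two parallel T3 links, 2 percent daily change rate and other figures are assumptions chosen purely for illustration, not TDMF measurements.

# Illustrative comparison of a full recopy versus a delta-only refresh between
# relocation test cycles; every figure below is an assumption for illustration.

def copy_hours(data_tb, link_mbps=44.736, parallel_links=2, utilization=0.6):
    """Hours to move data_tb terabytes over the assumed effective bandwidth."""
    bits = data_tb * 1e12 * 8
    effective_bps = link_mbps * 1e6 * parallel_links * utilization
    return bits / effective_bps / 3600

allocated_tb = 7.0            # 10 TB installed, 70 percent allocated (assumed)
daily_change_rate = 0.02      # assumed: 2 percent of allocated data changes per day
days_between_tests = 7

full_recopy   = copy_hours(allocated_tb)
delta_refresh = copy_hours(allocated_tb * daily_change_rate * days_between_tests)

print(f"Full recopy after each test:   {full_recopy:6.1f} hours")
print(f"Perpetual point-in-time delta: {delta_refresh:6.1f} hours per test cycle")

With these assumed figures, the delta refresh is roughly an order of magnitude cheaper than repeating the full copy, which is what makes repeated relocation rehearsals practical.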

                                         Alternate method — use of TDMF TCP/IP option
     Using the TDMF TCP/IP option may be a cost-effective method of moving data.

                                        In some environments, the use of the TDMF TCP/IP option may be a cost-
                                        effective method of doing a data center relocation. The cost savings is gained
                                        by not having to acquire expensive channel extenders or establish costly
                                        hardware replication features such as EMC's SRDF.

                                        It should be noted that, generally speaking, the transmission bandwidth is
                                        much lower and parallel data transmission is more difficult to achieve, thereby
                                        extending the length of time a migration will take. Also, if the TCP/IP lines
                                        are shared with other functions such as online communications, then the
                                        overall time it takes for a TDMF migration will be affected. Diagram 7
                                        illustrates this configuration.



                                                                  MVS/TDMF              MVS/TDMF
                                                                               TDMF
                                                                               TCP/IP

                                                                    Vendor B             Vendor A




                                                                     Dallas              Chicago

                                        With a TCP/IP replication, it is not possible to do a swap migration because, as
                                        a volume completes the copy/synchronization phase, it is not possible (nor
                                        desirable) to physically swap that volume to the new remote location while
                                        production continues to run at the current local site. Using TCP/IP therefore
                                        requires a point-in-time replication.

     With this type of replication, no hardware is available to allow the local
     or remote operating system access to the DASD.

                                        Note that with this type of replication there is no hardware in place for either
                                        the local or remote operating system to have access to the DASD at the other
                                        location. The way TDMF software accomplishes this replication is by having a
                                        TDMF master read the data off the local source volume and then pass the data
                                        to a TDMF master running at the remote location. The remote TDMF master
                                        will, in turn, write the data to the target volume.
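Conceptually, this is a reader at the source site feeding a writer at the target site over the network. The sketch below is only a schematic illustration of that master-to-master flow, with a queue standing in for the TCP/IP link and hypothetical read/write helpers standing in for the I/O TDMF actually performs; it is not the TDMF protocol or its implementation.

# Schematic only: models the local master reading source tracks and the remote
# master writing them, with the network link represented by a bounded queue.
# read_source_track() and write_target_track() are hypothetical placeholders.
from queue import Queue
from threading import Thread

def read_source_track(track_no):
    return f"track-{track_no}-image".encode()         # placeholder source read

def write_target_track(track_no, image):
    pass                                               # placeholder target write

def local_master(link, allocated_tracks):
    for track in allocated_tracks:                     # send only allocated tracks
        link.put((track, read_source_track(track)))
    link.put(None)                                     # end-of-copy marker

def remote_master(link):
    while (item := link.get()) is not None:
        track_no, image = item
        write_target_track(track_no, image)

link = Queue(maxsize=64)                               # stands in for the TCP/IP link
Thread(target=remote_master, args=(link,)).start()
local_master(link, allocated_tracks=range(1, 101))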

     Shutdowns would be necessary to create the final TDMF copies of the
     source DASD.

                                        When all required volumes are replicated to the remote site, it is necessary
                                        to shut down all applications, create the final TDMF point-in-time copy of
                                        the source DASD onto the target DASD and then shut down MVS at both
                                        locations. The shutdown is required because it is necessary to clip, or re-label,
                                        the volumes at the new remote location to the VOLID of the original source
                                        volumes. TDMF software can create the ICKDSF control statements to re-label
                                        the volumes.
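TDMF generates these re-label statements itself; the sketch below only illustrates the general shape of the task as a small generator that emits one ICKDSF REFORMAT statement per migration pair, verifying the current serial and assigning the original source VOLSER. The unit addresses and serials are invented examples, and the statements TDMF actually produces should be used in practice rather than this illustration.

# Illustrative generator for volume re-label statements; the unit addresses and
# VOLSERs below are invented examples, and real jobs should use the control
# statements TDMF itself produces.
migration_pairs = [
    # (target unit address at the new site, target VOLSER today, original source VOLSER)
    ("8A01", "TGT001", "PRD001"),
    ("8A02", "TGT002", "PRD002"),
]

def reformat_statements(pairs):
    """Yield one ICKDSF REFORMAT statement per target volume to be re-labeled."""
    for unit, current_volser, source_volser in pairs:
        yield f"  REFORMAT UNIT({unit}) VERIFY({current_volser}) VOLID({source_volser})"

for stmt in reformat_statements(migration_pairs):
    print(stmt)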

                                        One of the drawbacks to this type of relocation is that in order to test the
                                        relocation effort, it is necessary to terminate the TDMF session, clip the volumes
                                        at the new location, run the test, and then restart the TDMF relocation effort
                                        from the beginning. Production at the current location does continue to run at
                                        all times other than a short outage to synchronize all local and remote storage
                                        just prior to terminating the TDMF session.

     It's possible to use a hardware mirror facility to create a second copy of
     the DASD, allowing continued production.

                                        There is an option that could help facilitate the testing effort without restarting
                                        the entire migration again from the beginning. It is possible to use a
                                        hardware mirror facility such as IBM's PPRC to create a second copy of the
                                        DASD at the remote location. This would allow the user to continue production
                                        in the original local data center, continue to keep replicating current
                                        source volumes to the remote target volumes, and test the relocation effort
                                        using the second PPRC copy. Note that it would still be necessary to quiesce
                                        the applications in the production environment to create a consistent copy
                                        of the test volumes. To further assist in this effort, one method is to use the
                                        optional perpetual point-in-time feature. This would allow the creation of as
                                        many point-in-time copies of the source DASD as required, all of which can
                                        be accomplished without restarting the TDMF replication from the beginning
                                        but rather just sending the updates that occurred since the last point-in-time,
                                        as shown in Diagram 8.


                                                                              LPAR A                LPAR B
                                                 MVS/TDMF                    MVS/TDMF            MVS/TDMF/TEST
                                                                 TDMF                                   Optional
                                                                 TCP/IP                                IBM PPRC

                                                   Vendor B                   Vendor A                  Vendor C




                                                    Dallas                    Chicago


                                         When satisfied that the testing went well, the user can execute the procedures
                                         as outlined above and complete the final TDMF TCP/IP relocation copy. Again,
                                         this could all be accomplished by starting the TDMF replication once and then
                                         just sending the new source updates to the target when required between testing
                                         periods and the final relocation cutover.

     When only an application needs to be relocated, difficulties arise because
     the application resides intermixed with applications that don't need to
     be relocated.

                                            Alternate method — use LDMF software to isolate an application for relocation
                                            In some cases, relocating a data center is not granular enough when all that is
                                            really required is an application or two to be relocated. The issue most of the time
                                            is that the application(s) is not nicely located on its own volumes but, rather,
                                            intermixed with many other applications that are not being relocated. Some
                                            examples of this could be a business unit that is being relocated or sold, or an
                                            application that needs to be moved from a test/development environment to a
                                            production environment located in a different data center.

     Using Softek LDMF and TDMF software can help relocate an application
     with little effort and virtually no outage impact.

                                            Using a combination of Softek Logical Data Migration Facility (LDMF™) and
                                            TDMF software can accomplish the relocation with little effort and no outage
                                            impact to the application while it continues to run production until the final
                                            TDMF cutover. At cutover, the outage will be only for a very short period.

                                            Once the files of the application have been identified, LDMF software can be used
                                            to migrate those datasets onto dedicated volumes. (See the white paper
                                            Implementing LDMF z/OS for simple, effective, and nondisruptive data migrations.)
                                            Now that the application(s) is isolated on its own volumes, there are multiple
                                            options for relocating the application. One way would be to use the traditional tape
                                            dump/restore, but that would require an extended outage. Another choice is to
                                            use TDMF software as discussed previously to relocate the application volumes to
                                            a new remote location with only a minimal application outage required.
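Identifying the application's datasets and sizing the dedicated volumes they will need is the first planning step. The sketch below illustrates one simple heuristic, grouping candidate datasets by high-level qualifier and estimating the number of dedicated volumes; the dataset names, sizes, qualifier and per-volume capacity are invented examples, and this is planning arithmetic rather than part of LDMF or TDMF.

# Planning sketch only: group datasets by high-level qualifier (HLQ) and size the
# number of dedicated volumes the isolated application would occupy.
# Dataset names, sizes and the per-volume capacity are invented examples.
import math
from collections import defaultdict

datasets = {                      # dataset name -> allocated space in gigabytes
    "PAYROLL.PROD.MASTER":  420.0,
    "PAYROLL.PROD.HISTORY": 310.0,
    "PAYROLL.PROD.LOADLIB":   2.5,
    "BILLING.PROD.MASTER":  150.0,
}
volume_capacity_gb = 54.0         # roughly a large 3390-class volume (assumed)

by_hlq = defaultdict(list)
for name, size_gb in datasets.items():
    by_hlq[name.split(".")[0]].append((name, size_gb))

app_hlq = "PAYROLL"               # the application being isolated (assumed)
app_gb = sum(size for _, size in by_hlq[app_hlq])
volumes_needed = math.ceil(app_gb / volume_capacity_gb)

print(f"{app_hlq}: {len(by_hlq[app_hlq])} datasets, {app_gb:.1f} GB,"
      f" about {volumes_needed} dedicated volumes")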

                                             Other Softek TDMF solutions
                                             Softek TDMF z/OS technology is designed for volume-level migrations involving
                                             movement over local or remote distances (also known as “global migrations”).
                                             Data can be moved across a TCP/IP network (LAN or WAN) or channel extenders.

     A benefit of using Softek TDMF z/OS technology is its ability to reduce
     out-of-service database backup time.

     TDMF software can perform backups while the database is active and
     available to users.

                                        Reduction of routine database backup time
                                        One of the benefits offered by TDMF software is its ability to significantly
                                        reduce routine out-of-service database backup time. TDMF software's ability
                                        to perform a point-in-time backup with a prompt, coupled with its ability to
                                        handle a group of related volumes as a single entity, enables it to perform the
                                        majority of backups while the database region is active and available to users.
                                        The region would have to be down for only a small percentage of time at the
                                        end of the backup, because database transactions are buffered in CPU memory
                                        before being written out to disk. When TDMF software signals with a prompt
                                        that it is ready to complete the migrations for a group of related volumes, the
                                        database region may be brought down for a short period, which will allow any
                                        remaining buffered updates to be written out to disk. This ensures that the
                                        logical and physical images of each of the source volumes match.

                                         Once the region is down, TDMF software’s prompt may be responded to, allow-
                                         ing TDMF software to finish backing up the relatively few remaining updates
                                         contained on the source volumes. When the backup is complete, the region may
                                         be brought up and put back into service. The point-in-time copy of the database
                                         created on the set of target volumes may then be written to tape using the user’s
                                         conventional disk-to-tape package.
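The sequence above is essentially a short piece of orchestration around TDMF's prompt. The sketch below lays it out as pseudo-orchestration in Python; every helper it calls (checking the session, stopping and starting the database region, replying to the prompt, dumping the target volumes to tape) is a hypothetical placeholder for whatever operational tooling a site actually uses, not a real TDMF or database API.

# Pseudo-orchestration of the reduced-outage backup flow described above.
# Every helper below is a hypothetical placeholder, not a real TDMF or DBMS API.
import time

def tdmf_group_ready_to_complete(group):   # would poll the TDMF session/monitor
    return True

def stop_database_region(region):          # flushes remaining buffered updates to disk
    print(f"stopping {region}")

def reply_to_tdmf_prompt(group):           # lets TDMF copy the final updates
    print(f"replying to prompt for group {group}")

def wait_for_group_complete(group):        # would watch for the end-of-group signal
    print(f"waiting for TDMF to finish group {group}")

def start_database_region(region):
    print(f"starting {region}")

def dump_targets_to_tape(group):           # conventional disk-to-tape package
    print(f"dumping point-in-time copy of {group} to tape")

def run_backup(region="DBPROD", group="DBVOLS"):
    while not tdmf_group_ready_to_complete(group):
        time.sleep(60)                     # bulk of the copy runs with the region up
    stop_database_region(region)           # short outage begins here
    reply_to_tdmf_prompt(group)
    wait_for_group_complete(group)
    start_database_region(region)          # region back in service
    dump_targets_to_tape(group)            # tape copy runs from the target volumes

run_backup()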

                                         Maintenance
                                         Users may use TDMF software to migrate off of packs that have not yet failed
                                         but are giving indications that maintenance is required.
     TDMF software has multiple other uses, including migrations for packs that
     need maintenance and for rebalancing workloads for better performance.

                                        Performance tuning
                                        As the dynamics of various DASD subsystems within a shop change, TDMF
                                        software can be used to nondisruptively rebalance workloads for better overall
                                        performance.

                                        Lease considerations
                                        For users who lease DASD, TDMF software can be used to help manage and
                                        significantly reduce lease overlap between old and new equipment.

     The software can also be used to help reduce lease overlap and to create
     copies of production data for various reasons.

                                        Application testing
                                        TDMF software may be used to create copies of production data that application
                                        programmers may use to develop modifications required for processing data.

                                        Other Softek migration solutions
                                        Softek LDMF software
                                        Softek LDMF z/OS has the ability to nondisruptively switch files from a source
                                        volume to a target volume. LDMF software moves the applications dynamically
                                        onto new storage. This switchover feature, which occurs under user control,
                                        results in redirection of an application's I/O function (e.g., from the original
                                        source to the target). This occurs without any disruption to the application.
                                        Softek LDMF software is designed and optimized specifically for local data
                                        migrations at the file extent level. Softek LDMF and TDMF software can both
                                        be used in the same environment to address different migration project
                                        requirements. LDMF and TDMF software provide a very fast, easy, optimized
                                        migration solution for data migrations.

     Softek LDMF software can assist with data migration as well. And Softek
     TDMF for open systems works with several platforms including IBM AIX,
     Windows NT and other environments.

                                        Softek TDMF software for open systems
                                        For open systems, Softek also offers TDMF software for the open systems
                                        platforms. The platforms include IBM AIX®, HP-UX, Sun Solaris, Linux® and
                                        Microsoft® Windows® NT®, 2000, 2003.

                                        Softek Replicator for open systems
                                        Softek Replicator is versatile multiplatform data replication software that
                                        enables local and offsite disaster recovery and eliminates backup windows.

     Softek TDMF software helps with your data migration needs while
     maintaining application availability.

                                        Summary
                                        Today's business-critical applications must be available 24x7, with no downtime
                                        window for data migration or relocation. Softek TDMF software, the
                                        standard in data migration, gives users the freedom and the power to move
                                        data from any storage to any storage, on any platform, over any distance, at
                                        anytime — with no interruption to active applications.

                                        Softek TDMF z/OS is easy to use, flexible and transparent to applications.
                                        Whether the need is to upgrade to new storage and/or new server hardware
                                        (from any vendor, to any vendor), consolidate or relocate a data center, or find
                                        a more effective way to migrate data based on cost/performance, TDMF
                                        software provides the solution, all while maintaining application availability.

                                             TDMF definitions
                                             The following terms are commonly used when discussing a migration strategy
                                             for transferring workloads:

                                             TDMF session: A migration group(s) and/or migration pairs to be processed
                                             in a single Softek TDMF execution.

                                             Master: The Softek TDMF system running as an MVS batch job that is
                                             responsible for the data copy function. There can be only one master system
                                             in a Softek TDMF session.

                                             Agent: An associated Softek TDMF MVS image running in a shared storage
                                             environment with the master. To ensure data integrity, any MVS LPAR that
                                             has access to the source or target volumes must be running the TDMF master
                                             or one of the agent systems. The master and associated agent systems commu-
                                             nicate via a shared system communications data set (COMMDS).

     These are some of the terms used when discussing a data migration strategy.

                                    Source: The originating DASD volume(s) containing the data to be migrated.

                                    Target: The destination DASD volume(s) receiving the migrated data.

                                    Migration pair: Source and target volumes for a single migration.

                                    Migration group: A group of volume pairs with the same group name.

                                   Synchronization: The collection of the final set of updates from all source
                                   volumes in a group or session applied to the target volumes. For a point-in-
                                   time copy, synchronization involves quiescing any source application systems
                                   to ensure that all buffers are flushed.

                                   Swap migration: A migration session in which the source VOLSER is
                                   switched to the target volume at the end of the session.

                                   Point-in-time migration: A migration session in which the VOLSER on the
                                   source volume is not switched, and remains on-line to the application on the
                                   original volume.

                                   Local: Equivalent to source in an extended distance migration.

                                   Remote: Equivalent to target in an extended distance migration.

                                   Push migration: An extended distance migration in which the Softek TDMF
                                   master runs on the local system and makes point-in-time copies to remote vol-
                                   umes using channel extenders and communication links. The remote volumes
                                   do not have to be attached to any processor.
Pull migration: An extended distance migration in which the Softek TDMF
master runs on the remote system and makes point-in-time copies from the
local volumes to the remote volumes using channel extenders and communication
links. The local system(s) run only a Softek TDMF agent.

Gen/Genning: To generate or produce something according to an algorithm
or program or set of rules (the opposite of parse).

Mbps: megabits per second

MBps: megabytes per second

Loadlibs: load libraries

For more information
For more information about TDMF z/OS, visit:

ibm.com/services/storage


© Copyright IBM Corporation 2007

IBM Global Services
Route 100
Somers, NY 10589
U.S.A.

Produced in the United States of America
12-07
All Rights Reserved

IBM, the IBM logo, AIX, LDMF, Softek, TDMF and z/OS are trademarks or registered
trademarks of International Business Machines Corporation in the United States, other
countries, or both.

Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the
United States, other countries, or both.

Linux is a registered trademark of Linus Torvalds in the United States, other countries,
or both.

Other company, product and service names may be trademarks or service marks of others.

References in this publication to IBM products or services do not imply that IBM intends
to make them available in all countries in which IBM operates.

Performance/capacity results or other technical statistics appearing in this document are
provided by the author solely for the purposes of illustrating specific technical concepts
relating to the products discussed herein. The performance/capacity results or other
technical statistics published herein do not constitute or represent a warranty as to
merchantability, operation, or fitness of any product for any particular purpose.

GTW01276-USEN-01

More Related Content

What's hot

Hacking Telco equipment: The HLR/HSS, by Laurent Ghigonis
Hacking Telco equipment: The HLR/HSS, by Laurent GhigonisHacking Telco equipment: The HLR/HSS, by Laurent Ghigonis
Hacking Telco equipment: The HLR/HSS, by Laurent GhigonisP1Security
 
LTE1406 Extended VoLTE Talk Time.pptx
LTE1406 Extended VoLTE Talk Time.pptxLTE1406 Extended VoLTE Talk Time.pptx
LTE1406 Extended VoLTE Talk Time.pptxkuldeep288490
 
SANS Windows Artifact Analysis 2012
SANS Windows Artifact Analysis 2012SANS Windows Artifact Analysis 2012
SANS Windows Artifact Analysis 2012Rian Yulian
 
EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OSAJAL A J
 
Base station-subsystem-introduction-to-gprs-egprs-130915133623-phpapp01
Base station-subsystem-introduction-to-gprs-egprs-130915133623-phpapp01Base station-subsystem-introduction-to-gprs-egprs-130915133623-phpapp01
Base station-subsystem-introduction-to-gprs-egprs-130915133623-phpapp01Dawood Aqlan
 
Fast Start Failover DataGuard
Fast Start Failover DataGuardFast Start Failover DataGuard
Fast Start Failover DataGuardBorsaniya Vaibhav
 
Demystifying extended distance ficon, por Stephen R. Guendert
Demystifying extended distance ficon, por Stephen R. GuendertDemystifying extended distance ficon, por Stephen R. Guendert
Demystifying extended distance ficon, por Stephen R. GuendertJoao Galdino Mello de Souza
 
GMPLS (generalized mpls)
GMPLS (generalized mpls)GMPLS (generalized mpls)
GMPLS (generalized mpls)Netwax Lab
 
VoLTE Flows and CS network
VoLTE Flows and CS networkVoLTE Flows and CS network
VoLTE Flows and CS networkKarel Berkovec
 
FD.io VPP tap-inject with sample_plugins
FD.io VPP tap-inject with sample_pluginsFD.io VPP tap-inject with sample_plugins
FD.io VPP tap-inject with sample_pluginsNaoto MATSUMOTO
 
Lte x2-handover-sequence-diagram
Lte x2-handover-sequence-diagramLte x2-handover-sequence-diagram
Lte x2-handover-sequence-diagramPrashant Sengar
 
Monitores-sistemas operativos
Monitores-sistemas operativosMonitores-sistemas operativos
Monitores-sistemas operativosDaniel Vargas
 
nokia_netact_brochure
nokia_netact_brochurenokia_netact_brochure
nokia_netact_brochureAna Sousa
 
Chapter 6-Consistency and Replication.ppt
Chapter 6-Consistency and Replication.pptChapter 6-Consistency and Replication.ppt
Chapter 6-Consistency and Replication.pptsirajmohammed35
 

What's hot (20)

Hacking Telco equipment: The HLR/HSS, by Laurent Ghigonis
Hacking Telco equipment: The HLR/HSS, by Laurent GhigonisHacking Telco equipment: The HLR/HSS, by Laurent Ghigonis
Hacking Telco equipment: The HLR/HSS, by Laurent Ghigonis
 
NAS Concepts
NAS ConceptsNAS Concepts
NAS Concepts
 
LTE1406 Extended VoLTE Talk Time.pptx
LTE1406 Extended VoLTE Talk Time.pptxLTE1406 Extended VoLTE Talk Time.pptx
LTE1406 Extended VoLTE Talk Time.pptx
 
SANS Windows Artifact Analysis 2012
SANS Windows Artifact Analysis 2012SANS Windows Artifact Analysis 2012
SANS Windows Artifact Analysis 2012
 
EMBEDDED OS
EMBEDDED OSEMBEDDED OS
EMBEDDED OS
 
Base station-subsystem-introduction-to-gprs-egprs-130915133623-phpapp01
Base station-subsystem-introduction-to-gprs-egprs-130915133623-phpapp01Base station-subsystem-introduction-to-gprs-egprs-130915133623-phpapp01
Base station-subsystem-introduction-to-gprs-egprs-130915133623-phpapp01
 
Mvs commands
Mvs commandsMvs commands
Mvs commands
 
Fast Start Failover DataGuard
Fast Start Failover DataGuardFast Start Failover DataGuard
Fast Start Failover DataGuard
 
DMRC SDH theory
DMRC SDH theoryDMRC SDH theory
DMRC SDH theory
 
Demystifying extended distance ficon, por Stephen R. Guendert
Demystifying extended distance ficon, por Stephen R. GuendertDemystifying extended distance ficon, por Stephen R. Guendert
Demystifying extended distance ficon, por Stephen R. Guendert
 
Komponen utama cpu
Komponen utama cpuKomponen utama cpu
Komponen utama cpu
 
Ebs clone r12.2.4
Ebs clone r12.2.4Ebs clone r12.2.4
Ebs clone r12.2.4
 
GMPLS (generalized mpls)
GMPLS (generalized mpls)GMPLS (generalized mpls)
GMPLS (generalized mpls)
 
VoLTE Flows and CS network
VoLTE Flows and CS networkVoLTE Flows and CS network
VoLTE Flows and CS network
 
FD.io VPP tap-inject with sample_plugins
FD.io VPP tap-inject with sample_pluginsFD.io VPP tap-inject with sample_plugins
FD.io VPP tap-inject with sample_plugins
 
Lte x2-handover-sequence-diagram
Lte x2-handover-sequence-diagramLte x2-handover-sequence-diagram
Lte x2-handover-sequence-diagram
 
Monitores-sistemas operativos
Monitores-sistemas operativosMonitores-sistemas operativos
Monitores-sistemas operativos
 
Less11 auditing
Less11 auditingLess11 auditing
Less11 auditing
 
nokia_netact_brochure
nokia_netact_brochurenokia_netact_brochure
nokia_netact_brochure
 
Chapter 6-Consistency and Replication.ppt
Chapter 6-Consistency and Replication.pptChapter 6-Consistency and Replication.ppt
Chapter 6-Consistency and Replication.ppt
 

Similar to Softek TDMF z/OS extended distance migration.

Implementing nondisruptivedata migrations for UNIX usingTDMF technology.
Implementing nondisruptivedata migrations for UNIX usingTDMF technology.Implementing nondisruptivedata migrations for UNIX usingTDMF technology.
Implementing nondisruptivedata migrations for UNIX usingTDMF technology.IBM India Smarter Computing
 
Srdf overview latency_v.5
Srdf overview latency_v.5Srdf overview latency_v.5
Srdf overview latency_v.5jas3399
 
Srdf overview latency_v.52
Srdf overview latency_v.52Srdf overview latency_v.52
Srdf overview latency_v.52jas3399
 
Srdf overview latency_v.5
Srdf overview latency_v.5Srdf overview latency_v.5
Srdf overview latency_v.5jas3399
 
What is cloud technology?
What is cloud technology?What is cloud technology?
What is cloud technology?MRPeasy
 
tibco online training
tibco online trainingtibco online training
tibco online trainingsapbest
 
Introducing MFX for z/OS 2.1 & ZPSaver Suite
Introducing MFX for z/OS 2.1 & ZPSaver SuiteIntroducing MFX for z/OS 2.1 & ZPSaver Suite
Introducing MFX for z/OS 2.1 & ZPSaver SuitePrecisely
 
CENTERA_MIGRATOR_PPT_NOTES
CENTERA_MIGRATOR_PPT_NOTESCENTERA_MIGRATOR_PPT_NOTES
CENTERA_MIGRATOR_PPT_NOTESDaniel Moshief
 
123126804 scada
123126804 scada123126804 scada
123126804 scadathangbd
 
Application layer and protocols of application layer
Application layer and protocols of application layerApplication layer and protocols of application layer
Application layer and protocols of application layerTahmina Shopna
 

Similar to Softek TDMF z/OS extended distance migration. (20)

Implementing nondisruptivedata migrations for UNIX usingTDMF technology.
Implementing nondisruptivedata migrations for UNIX usingTDMF technology.Implementing nondisruptivedata migrations for UNIX usingTDMF technology.
Implementing nondisruptivedata migrations for UNIX usingTDMF technology.
 
Implementing Softek zDMF technology.
Implementing Softek zDMF technology.Implementing Softek zDMF technology.
Implementing Softek zDMF technology.
 
Best Practices for Migration
Best Practices for MigrationBest Practices for Migration
Best Practices for Migration
 
Best Practices for Migration
Best Practices for MigrationBest Practices for Migration
Best Practices for Migration
 
Storage Migration Made Simple
Storage Migration Made SimpleStorage Migration Made Simple
Storage Migration Made Simple
 
Srdf overview latency_v.5
Srdf overview latency_v.5Srdf overview latency_v.5
Srdf overview latency_v.5
 
Srdf overview latency_v.52
Srdf overview latency_v.52Srdf overview latency_v.52
Srdf overview latency_v.52
 
Srdf overview latency_v.5
Srdf overview latency_v.5Srdf overview latency_v.5
Srdf overview latency_v.5
 
Upper layer protocol
Upper layer protocolUpper layer protocol
Upper layer protocol
 
What is cloud technology?
What is cloud technology?What is cloud technology?
What is cloud technology?
 
H n q & a
H n q & aH n q & a
H n q & a
 
tibco online training
tibco online trainingtibco online training
tibco online training
 
RUGGEDCOM ELAN software
RUGGEDCOM ELAN softwareRUGGEDCOM ELAN software
RUGGEDCOM ELAN software
 
Introducing MFX for z/OS 2.1 & ZPSaver Suite
Introducing MFX for z/OS 2.1 & ZPSaver SuiteIntroducing MFX for z/OS 2.1 & ZPSaver Suite
Introducing MFX for z/OS 2.1 & ZPSaver Suite
 
CENTERA_MIGRATOR_PPT_NOTES
CENTERA_MIGRATOR_PPT_NOTESCENTERA_MIGRATOR_PPT_NOTES
CENTERA_MIGRATOR_PPT_NOTES
 
What is over-the-air programming
What is over-the-air programmingWhat is over-the-air programming
What is over-the-air programming
 
123126804 scada
123126804 scada123126804 scada
123126804 scada
 
Data and storage management on z/OS
Data and storage management on z/OSData and storage management on z/OS
Data and storage management on z/OS
 
Utilities
UtilitiesUtilities
Utilities
 
Application layer and protocols of application layer
Application layer and protocols of application layerApplication layer and protocols of application layer
Application layer and protocols of application layer
 

More from IBM India Smarter Computing

Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments IBM India Smarter Computing
 
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...IBM India Smarter Computing
 
A Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceA Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceIBM India Smarter Computing
 
IBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM India Smarter Computing
 

More from IBM India Smarter Computing (20)

Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments
 
All-flash Needs End to End Storage Efficiency
All-flash Needs End to End Storage EfficiencyAll-flash Needs End to End Storage Efficiency
All-flash Needs End to End Storage Efficiency
 
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
 
IBM FlashSystem 840 Product Guide
IBM FlashSystem 840 Product GuideIBM FlashSystem 840 Product Guide
IBM FlashSystem 840 Product Guide
 
IBM System x3250 M5
IBM System x3250 M5IBM System x3250 M5
IBM System x3250 M5
 
IBM NeXtScale nx360 M4
IBM NeXtScale nx360 M4IBM NeXtScale nx360 M4
IBM NeXtScale nx360 M4
 
IBM System x3650 M4 HD
IBM System x3650 M4 HDIBM System x3650 M4 HD
IBM System x3650 M4 HD
 
IBM System x3300 M4
IBM System x3300 M4IBM System x3300 M4
IBM System x3300 M4
 
IBM System x iDataPlex dx360 M4
IBM System x iDataPlex dx360 M4IBM System x iDataPlex dx360 M4
IBM System x iDataPlex dx360 M4
 
IBM System x3500 M4
IBM System x3500 M4IBM System x3500 M4
IBM System x3500 M4
 
IBM System x3550 M4
IBM System x3550 M4IBM System x3550 M4
IBM System x3550 M4
 
IBM System x3650 M4
IBM System x3650 M4IBM System x3650 M4
IBM System x3650 M4
 
IBM System x3500 M3
IBM System x3500 M3IBM System x3500 M3
IBM System x3500 M3
 
IBM System x3400 M3
IBM System x3400 M3IBM System x3400 M3
IBM System x3400 M3
 
IBM System x3250 M3
IBM System x3250 M3IBM System x3250 M3
IBM System x3250 M3
 
IBM System x3200 M3
IBM System x3200 M3IBM System x3200 M3
IBM System x3200 M3
 
IBM PowerVC Introduction and Configuration
IBM PowerVC Introduction and ConfigurationIBM PowerVC Introduction and Configuration
IBM PowerVC Introduction and Configuration
 
A Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceA Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization Performance
 
IBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architecture
 
X6: The sixth generation of EXA Technology
X6: The sixth generation of EXA TechnologyX6: The sixth generation of EXA Technology
X6: The sixth generation of EXA Technology
 

Recently uploaded

04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...Martijn de Jong
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?Antenna Manufacturer Coco
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUK Journal
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobeapidays
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAndrey Devyatkin
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century educationjfdjdjcjdnsjd
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdflior mazor
 

Recently uploaded (20)

04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024

Softek TDMF z/OS extended distance migration.

TDMF z/OS capabilities

Design
Softek TDMF z/OS was designed as a fully integrated product that is simple to install, simple to use and simple to maintain.

Transparency
One of the most important requirements of data processing today is continuous availability of data. Traditionally, one of the downsides of backing up or migrating data has been that it requires an interruption in the data's availability to the end user. TDMF software's ability to transfer data transparently can greatly reduce and, in many cases, eliminate periods of data unavailability. This results in increased productivity and reduced operational costs. In addition to its transparent use, TDMF software may be installed dynamically, requiring no IPLs.

Non-vendor specific
Because TDMF software is completely MVS-based, it does not require special microcode or hardware features in the DASD control unit. This means that it may be used in a multivendor environment. TDMF software supports all extended count key data (ECKD) control units. Device support includes standard size 3380 and 3390 images as well as hyper volumes.

Tuning parameters
To adjust the performance impact on applications during a migration, TDMF software provides the ability to control the migration rate using the following parameters:

• FASTCOPY — copies only allocated tracks on a volume
• FULLSPEED — doubles buffer I/O blocks, nearly doubling TDMF performance
• CONCURRENT — establishes the number of concurrent volume migrations
• SYNCGOAL — in a swap migration, establishes how long the swap is allowed to take
• PACING — establishes a dynamic or fixed, increased or decreased size for the I/Os TDMF transfers
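The combined effect of these parameters on a session can be approximated with a simple planning model. The sketch below is purely illustrative: the baseline per-volume copy rate, the volume sizes and the allocation percentage are assumptions, not TDMF-measured figures, and the parameter effects are taken only loosely from the descriptions above (FASTCOPY copies only allocated tracks, FULLSPEED roughly doubles throughput, CONCURRENT sets how many volumes copy at once).

    # Illustrative planning model for the tuning parameters described above.
    # All rates and sizes are assumptions for the sake of the example,
    # not measured TDMF figures.

    def estimate_session_hours(volumes_gb, allocated_pct, base_rate_mbytes_sec=3.0,
                               fastcopy=True, fullspeed=False, concurrent=4):
        """Rough elapsed-time estimate for one migration session.

        volumes_gb          -- list of volume capacities in gigabytes
        allocated_pct       -- fraction of each volume that is allocated (0.0-1.0)
        base_rate_mbytes_sec -- assumed per-volume copy rate in MB per second
        fastcopy            -- copy only allocated tracks (per FASTCOPY)
        fullspeed           -- roughly double the copy rate (per FULLSPEED)
        concurrent          -- number of volumes copied at once (per CONCURRENT)
        """
        rate = base_rate_mbytes_sec * (2.0 if fullspeed else 1.0)
        per_volume_gb = [gb * (allocated_pct if fastcopy else 1.0) for gb in volumes_gb]
        total_gb = sum(per_volume_gb)
        # The aggregate rate is capped by the number of concurrent volume copies.
        aggregate = rate * min(concurrent, len(volumes_gb))
        seconds = (total_gb * 1024) / aggregate
        return seconds / 3600.0

    if __name__ == "__main__":
        # 100 volumes of roughly 8.5 GB each, 70 percent allocated (example figures).
        hours = estimate_session_hours([8.5] * 100, allocated_pct=0.70,
                                       fullspeed=True, concurrent=8)
        print(f"Estimated elapsed time: {hours:.1f} hours")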
Migration groups
TDMF software supports the definition and concurrent migration of online volumes. Because migration projects may involve several hundred volumes supporting a range of application types, TDMF software provides the ability to logically group volumes for efficient operational control. Many group migration parameters can be independently configured and controlled to best suit the business applications supported.

Monitoring features
TDMF software provides a Time Sharing Option (TSO) ISPF monitor for managing and viewing a migration process from start to finish. Statistical information includes details such as elapsed time, copy rate, percentage complete and so forth.

Shared DASD
TDMF software works in a shared DASD environment. The DASD can be shared by individual LPARs running on multiple physical CPUs, shared within a sysplex or shared across multiple sysplexes.

Provision of "guaranteed" data integrity
Softek TDMF software was designed to maintain physical data integrity. Source volumes remain untouched by the Softek TDMF session, and current data is always available up to the point of the workload transfer when a swap migration is initiated. For a point-in-time copy, the source volume is never touched.
Using the DYNAMIC SUSPEND feature of Softek TDMF software, a migration may be temporarily stopped in the event of a hardware or network failure. Softek TDMF software will continue to monitor the source volume for updates until the migration can be continued from where it was suspended. If, for whatever reason, the migration is aborted or terminated, the integrity of the data on the source volume(s) is maintained. Unlike other migration tools, Softek TDMF software allows a migration for transferring workloads to be terminated at any time.

TDMF extended distance data migration
TDMF software may be used with existing channel extension technology to deliver the product's benefits between data centers separated by thousands of miles. Common uses include remote backup, disaster recovery and data center relocations. Essentially, the use of TDMF software remains unchanged, except that some kind of extended distance network connection is set up between the data centers containing the source and target DASD subsystems involved in the migrations. The connection may consist of one or more point-to-point leased lines such as DS-3 (T3) links, or may be achieved by connecting to a tariff-based packet switching network such as an asynchronous transfer mode (ATM) "cloud." In addition to channel extenders or tariff-based packet switching, the latest release of TDMF software can also support the migration of data via TCP/IP; that option is not covered in this section but is discussed later in this paper.

TDMF software can be set up to perform either a push or a pull operation. The terms "push" and "pull" define the direction data is flowing across the network with respect to the TDMF master application.
In a push operation, the TDMF master writes data across the network out to the target subsystem(s) residing in the remote data center. In this environment, the master resides on an LPAR in the local data center along with the source subsystem(s). In a pull operation, the TDMF master resides on an LPAR running at the remote data center and reads data across the network from the source subsystem(s) residing in the local data center.

Diagram 1 illustrates an example of a push operation for a single LPAR environment existing in the local data center. The extended distance link used is a DS-3 (T3) point-to-point leased line. A push operation is useful when target DASD subsystems are running in a remote data center that does not yet have an operational MVS LPAR.

Diagram 1. Single LPAR push environment: the TDMF master runs in LPAR "1" at the local data center; channel extenders carry the migrated data over a DS-3 (T3) point-to-point leased line to the target subsystem at the remote data center. The SYSCOM data set resides with the source subsystem in the local data center.
Diagram 2 illustrates an example of a pull operation for a multiple LPAR environment. The extended distance link used is a DS-3 (T3) point-to-point leased line. In this example, the local and the remote data centers each have an operational MVS LPAR.

Diagram 2. Multiple LPAR pull environment: a TDMF agent runs in LPAR "1" at the local data center and the TDMF master runs in LPAR "2" at the remote data center, connected by channel extenders over a DS-3 (T3) point-to-point leased line.
When relocating from an existing local data center with an established workload to a new remote data center being brought up for the first time, certain things can be done to reduce both the time and the cost of the relocation. Setting up a pull operation means the TDMF master can take full advantage of the CPU resources installed in the remote data center without impacting the application workload still running in the local data center. This reduces the amount of time required to perform the relocation.

If the relocation is coordinated by application, it is possible to reduce the network bandwidth needed to move the data. When point-in-time migrations are performed, genning and connecting the local LPAR(s) to the remote target storage (CHPIDs 20 and 21) is not required. The removal of this requirement can result in less traffic between the data centers, further reducing the network bandwidth needed to migrate the data. Note: Although not recommended, for swap-type migrations each of the target subsystems must be genned and connected to each of the agent LPARs involved in a session.

Diagram 3 shows a pull operation involving multiple source and target subsystems connected to a pair of DS-3 (T3) links. In this example, the migration will be coordinated by application to reduce the number of DS-3 links required. Volumes related by application will be moved as a group migration requiring a prompt to signal TDMF software of a point-in-time.
Diagram 3. Multiple LPAR pull environment (group point-in-time backup with prompt, coordinated by application): applications APPL A, APPL B and APPL C run alongside a TDMF agent in LPAR "1" at the local data center, while the TDMF master runs in LPAR "2" at the remote data center with slots A, B and C prepared to receive the relocated applications. Source subsystems S1 and S2 are connected through channel extenders and two high-speed DS-3 (T3) links with compression to the target subsystems T1 and T2 at the remote site.
With applications APPL A, APPL B and APPL C running on LPAR 1 in the local data center, a group point-in-time migration session is started for the source volumes used by APPL A (depicted as A in source subsystems S1 and S2) out to the target volumes chosen to receive them (shown as A on subsystems T1 and T2). When the master reaches the point where it is ready to end the group A migrations, it issues a prompt to the TDMF monitor. At this point, the user would bring down APPL A on LPAR 1, then respond to the prompt to complete the migrations for all of the group A volumes.

Before bringing up APPL A in slot A of LPAR 2, it will be necessary to vary the source volumes offline to LPAR 2, re-clip the target volumes to the original source VOLSERs, vary the target volumes online to LPAR 2 and catalog data sets on the target volumes as needed. The same procedure would be repeated for migrating APPL B and APPL C. Since Diagram 3 is set up for point-in-time migrations, it is not necessary to gen or connect the target subsystems (T1 and T2) in the remote data center to LPAR 1 running the TDMF agent in the local data center.

If the extended distance network is set up so that source and target volumes are accessible to hosts running in both data centers, as in Diagram 2, the relocation of volumes coordinated by application could be carried out using TDMF swap migrations. This would eliminate the vary offline/online and re-clip steps described for the point-in-time procedure, because the target volumes would already be online with the correct VOLSERs to the remote host following completion of the swap migrations. Catalog issues for LPAR(s) in the remote data center would still need to be addressed.
If the application's load libraries were copied prior to moving its database volumes, and the appropriate VTAM and/or TCP/IP network(s) were in place, it would be possible to temporarily run the application out of the remote data center accessing database volumes in the local data center, and then migrate the database volumes over to the remote data center with no further interruption to the application.

It must be noted that, when doing a swap migration over channel extenders, the user could be exposed to server performance degradation after the swap of a volume. It is for this reason that Softek always recommends point-in-time migrations when channel extenders are involved.

A DS-3 (T3) link has a raw (actual) bandwidth of 44.736 Mbps (megabits per second). The DS-3 rate divided by eight yields approximately 5.5 MBps (megabytes per second), as compared to the effective bandwidth of 17.5 MBps for ESCON channels. The channel extender boxes attached to either end of a link are able to compress the data flowing across the link. The compression ratio achieved depends on the data patterns being moved and determines the effective bandwidth of the link. Effective bandwidths yielded through compression are typically two to three times the raw bandwidth of a link. For DASD applications, it is usually safer to use a 2-to-1 compression ratio for estimating throughput.

The key to efficiently migrating data across high-speed DS-3 links is setting up an environment that allows the capacity of the link(s) to be saturated as nearly as possible. The percent utilization of a link's capacity can be monitored via the channel extender, or derived by dividing the actual data rate across the link by the link's effective bandwidth (taking compression into account).
In a favorable environment, between 90 and 95 percent of a link's capacity can be utilized. If the links depicted in Diagram 3 could be made to sustain a 91 percent average utilization factor, the actual data rate across both links could be estimated as follows:

• Link utilization factor = 0.91
• Raw bandwidth per DS-3 link = 5.5 MBps
• Compression ratio = 2:1
• Number of DS-3 links = 2
• 0.91 x (5.5 MBps/link) x (2/1) x (2 links) = 20 MBps

Many factors can cause the percent utilization of a link's capacity to change over time. By monitoring the link(s), it is possible to tune migration performance to help sustain a high percentage of link utilization over the course of one or more migration sessions.

Leased point-to-point lines such as DS-3 circuits are most cost-effective when they are utilized at near capacity. The user pays the same amount to lease the line whether it is actually in use or not. These circuits are ideal in situations such as data center relocations where large amounts of data are being moved within a fixed period of time.

In situations where smaller amounts of data are being migrated, multiplexed T1 circuits may be used. If data compression is to occur in the channel extenders for data being sent across T1 links, additional hardware in the form of inverse multiplexors may be required. The raw bandwidth of a T1 link is 1.544 Mbps. Approximately 28 T1 links equal the raw bandwidth of a single DS-3 (T3) circuit (44.736 Mbps).
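The arithmetic above generalizes to any mix of links. The short sketch below simply restates the estimate as a function; the 91 percent utilization, 2:1 compression and the DS-3 and T1 raw rates come from the figures quoted in this paper, and nothing here is specific to TDMF itself.

    # Effective migration throughput over channel-extended links, using the
    # figures quoted above (DS-3 raw rate 44.736 Mbps, T1 raw rate 1.544 Mbps,
    # typical 2:1 compression, ~91 percent sustained utilization).

    DS3_MBYTES_SEC = 44.736 / 8    # ~5.5 MB/s raw per DS-3 link
    T1_MBYTES_SEC = 1.544 / 8      # ~0.19 MB/s raw per T1 link

    def effective_mbytes_per_sec(raw_mbytes_per_sec, links, utilization=0.91,
                                 compression=2.0):
        """Estimated sustained data rate across all links, in MB per second."""
        return utilization * raw_mbytes_per_sec * compression * links

    if __name__ == "__main__":
        # The two-link DS-3 example worked above: roughly 20 MB/s.
        print(round(effective_mbytes_per_sec(DS3_MBYTES_SEC, links=2)))  # -> 20
        # Whole T1 circuits needed to match one DS-3 raw rate: about 28.
        print(int(44.736 / 1.544))                                       # -> 28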
Diagram 4 shows a number (n) of T1 circuits being multiplexed into a high-speed serial interface (HSSI) for the purpose of transporting compressed data packets. Compression ratio estimates for T1 and T3 circuits carrying DASD data are comparable (typically 2:1). If compression is not required, the inverse multiplexors are not needed, and the T1 circuits may be connected to the channel extenders via modem using a V.35 interface.

Diagram 4. Multiple LPAR pull environment (multiplexed T1 links using data compression): the TDMF agent in LPAR "1" at the local data center and the TDMF master in LPAR "2" at the remote data center are connected by channel extenders and inverse multiplexors that combine n T1 point-to-point leased lines into a high-speed serial interface (HSSI).
Diagram 5 illustrates a solution for users who need to migrate varying amounts of data over extended distances on an ongoing basis. This may be achieved by connecting the data centers involved to a tariff-based packet switching network referred to as an ATM "cloud." The cost of migrating data depends on the number and size of data packets sent. During periods of little or no migration activity, the user does not incur the high cost of leased line(s). Another advantage of the ATM cloud network is that it can be used to connect more than two data centers together.

Diagram 5. Multiple LPAR pull environment (ATM "cloud" packet switching network): the TDMF agent in LPAR "1" at the local data center and the TDMF master in LPAR "2" at the remote data center are connected by channel extenders through an ATM packet switching network.
ATM clouds may be implemented on SONET (Synchronous Optical NETwork) fiber networks capable of running at 155 Mbps. The throughput yielded from connecting to an ATM network will depend on the backplane capacity of the channel extenders being used. User data throughput across an ATM cloud implemented on a SONET fiber network typically ranges from 80 Mbps to 135 Mbps (10 MBps to 17 MBps). Consult the channel extender vendor for the data rate(s) supported by the channel extender equipment to be used.

Sample estimate timings
Table 1 shows the number of gigabytes that may be moved with no compression (a 1:1 ratio) and an efficiency rate of 85 percent.

Table 1. Gigabytes moved at 85 percent efficiency with 1:1 compression

Bandwidth   Number of links   Gigabytes/1 hr   Gigabytes/12 hrs   Gigabytes/24 hrs
E3          1                 13               156                312
E3          2                 22               265                530
E3          3                 33               397                795
E3          4                 44               530                1061
DS3         1                 16               210                403
DS3         2                 28               343                686
DS3         3                 42               515                1030
DS3         4                 57               686                1373
OC3         1                 59               711                1422
OC3         2                 118              1422               2845
OC3         3                 177              2134               4268
OC3         4                 237              2845               5691

Please note that DS3 is the formatted digital signal used on T3 lines. If it is possible to achieve a 2:1 compression ratio with 85 percent efficiency, then the gigabyte figures in the table can be doubled. For specific estimates of the transfer rate that can be achieved in an environment using channel extenders, please contact the hardware vendor.
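For readers who want to rework Table 1 for a different efficiency factor, compression ratio or time window, the single-link figures can be approximated as the raw circuit rate multiplied by the efficiency factor, converted to gigabytes per period. The sketch below reproduces the single-link rows to within a few percent; the raw rates assumed for E3 and OC3 (34.368 and 155.52 Mbps) are standard circuit rates supplied here as assumptions, since the table itself quotes only the resulting gigabyte figures.

    # Approximate reproduction of the single-link figures in Table 1:
    # gigabytes moved at 1:1 compression with 85 percent link efficiency.
    # The raw circuit rates (E3 = 34.368 Mbps, DS3 = 44.736 Mbps,
    # OC3 = 155.52 Mbps) are standard values assumed for this sketch.

    RAW_MBPS = {"E3": 34.368, "DS3": 44.736, "OC3": 155.52}

    def gigabytes_moved(circuit, hours, efficiency=0.85, compression=1.0):
        mbytes_per_sec = RAW_MBPS[circuit] / 8 * efficiency * compression
        return mbytes_per_sec * 3600 * hours / 1024   # binary gigabytes

    if __name__ == "__main__":
        for circuit in ("E3", "DS3", "OC3"):
            row = [round(gigabytes_moved(circuit, h)) for h in (1, 12, 24)]
            print(circuit, row)   # within a few percent of Table 1's single-link rows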
A typical data center relocation
The following is an extended distance data migration scenario using TDMF software. It allows the user to relocate a data center and limit downtime to the time required to bring down the LPAR(s) operating in the local (originating) data center, switch over the network and IPL the LPAR(s) in the remote (destination) data center.

The actual migration of data would be performed while the user's production workload is running. The key to this relocation technique is providing sufficient extended channel bandwidth to enable the user to run production applications in the remote data center that temporarily access data residing in the local data center.

Essentially, the user's data would be migrated in two stages:

• Stage 1 would be to migrate static data (system packs and application loadlibs and parameters) using TDMF software's point-in-time option.
• Stage 2 would be to migrate dynamic data (application data) using TDMF software's swap option.

The TDMF master would be run in the remote data center. The reason for this is that point-in-time migrations do not require target DASD subsystems to be genned to LPARs running agent copies of TDMF software.
This allows the environment for Stage 1 of the relocation to be set up without disrupting data processing in the local data center, because a new gen would not be required for the production LPAR(s).

Outlined below are the steps required to perform the data center relocation:

1. Set up the initial remote data center environment. This includes:
• Creating a skeleton MVS in the remote data center capable of running the TDMF master.
• Installing the target DASD subsystems in the remote data center intended to receive the data.
• Genning the source DASD in the local data center and the target DASD in the remote data center to the skeleton MVS.
• Establishing extended channel connectivity between the skeleton MVS in the remote data center and the source DASD subsystems in the local data center.

2. Perform a freeze on update activity to the static data in the local data center that will be needed to set up a production environment in the remote data center. This data includes system packs as well as volumes containing application loadlibs and parameters.

3. Perform Stage 1 of the relocation: use TDMF software's point-in-time option to migrate the static data from the local (original) data center to the remote data center.

4. Prepare a production environment in the remote (destination) data center using the static data migrated in step 3. This includes:
• Setting up system definitions for the production LPAR(s) to run in the remote data center, and cataloging source VOLSERs that will be required by applications that are to run in remote data center LPAR(s).
• Testing the functionality of applications brought over from the local data center.
• Adding the target DASD subsystems to the I/O gen(s) of the production LPAR(s) intended to run in the remote (destination) data center.
• Setting up network access to the applications to be brought up in the remote data center.

5. Bring down the production LPAR(s) in the local (original) data center, switch over the network and IPL the production LPAR(s) in the remote (destination) data center. Note: At this point, the user would be running production applications in the remote data center accessing "dynamic" data across extended channel connections to the local (original) data center. It is important that the user has set up sufficient extended channel bandwidth to prevent a bottleneck between data centers.

6. Perform Stage 2 of the relocation: use TDMF software's swap option to migrate the dynamic application data from the source DASD subsystems in the local (original) data center to the target DASD subsystems in the remote (destination) data center. Because these migrations are transparent to the applications using the data, they may be broken up into smaller groups of volumes to reduce the use of extended channel bandwidth being shared with other applications (a simple batching sketch follows this procedure). When all of the data has been migrated, the extended channel connections to the local (original) data center may be removed, thus completing the data center relocation.
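One simple way to plan the smaller groups mentioned in step 6 is to cap how much data is in flight at any one time. The sketch below is only an illustration of that idea: the volume serials, sizes and the per-group budget are placeholders, and the actual migration groups would still be defined to TDMF in its own control statements.

    # Illustrative batching of volumes for Stage 2 swap migrations so that only a
    # limited amount of data is in flight at once. Volume names, sizes and the
    # budget are placeholders; TDMF is driven by its own group definitions.

    def plan_migration_groups(volumes_gb, budget_gb_per_group):
        """Split {volser: allocated GB} into ordered groups under a size budget."""
        groups, current, used = [], {}, 0.0
        for volser, gb in sorted(volumes_gb.items(), key=lambda kv: -kv[1]):
            if current and used + gb > budget_gb_per_group:
                groups.append(current)
                current, used = {}, 0.0
            current[volser] = gb
            used += gb
        if current:
            groups.append(current)
        return groups

    if __name__ == "__main__":
        volumes = {"PRD001": 6.1, "PRD002": 8.5, "PRD003": 2.2,
                   "PRD004": 7.9, "PRD005": 4.4, "PRD006": 8.5}
        for i, group in enumerate(plan_migration_groups(volumes, 15.0), start=1):
            print(f"Group {i}: {sorted(group)} ({sum(group.values()):.1f} GB)")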
Data center relocation considerations
Below are some considerations to keep in mind when planning a data center relocation using TDMF software:

1. Network requirements between the current and new locations.
• There are really only a few players in the channel extension business today. The average lead time for building the extenders is approximately two months, based on requirements. These channel extenders do not support parallel channels. Usually the minimum contract is two months if the user is planning on leasing.
• Communication links should be, at the very minimum, T3/E3 links; T1 links are too slow. Using existing networks within the user's environment is not recommended, as the traffic could impact production applications. There could be a lead time associated with the lease of these links, and the term of the lease could also come into play.

2. CPU memory requirements to complete the Softek TDMF point-in-time copies to the new location.
• The total amount of memory necessary for a migration depends on the number of LPARs participating in the sessions and the number of volumes to be moved. The intent is to move the largest number of volumes using the least amount of memory. To this end, it is important to consider whether any LPARs (such as test LPARs) might be shut down during the migration. It is also suggested that a review of all volumes be done to identify any volumes that need not be moved, such as page packs, work packs and so on.
• It may be that there is not enough memory available in the production environment. Shutting down the test LPAR(s) would make that storage potentially available, either by dynamic reconfiguration if all storage is dynamically reconfigurable, or through a power-on reset (POR) if it is not.
• If the total memory requirement is more than any one LPAR can handle, the load can be balanced across the participating LPARs so that no one partition is saturated. This would result in more than one LPAR running master sessions in a push scenario, as laid out in a planning worksheet.
3. Swap migration versus point-in-time copy.
• It is recommended that the point-in-time copy function be used rather than the swap function. The point-in-time function provides a fall-back position in the event of a problem. Additionally, it provides the ability to ensure that system and application features and functions work at the new site without impacting production at the current location (this assumes net-new or asset-swap equipment).

4. Long-term sessions.
• The duration of the migration depends on the bandwidth available on the links, the channel configuration over the channel extenders and the amount of data being moved (a rough sizing sketch follows these considerations). Using Softek TDMF software will result in only allocated tracks being transferred; that is, if only 70 percent of a volume is allocated, then only 70 percent of the volume needs to be migrated.
• Keep in mind that during the data migration, Softek TDMF software will not survive an IPL of one of the participating systems; all sessions affected would have to be started again from the beginning. There is no function within the product to pick up where it left off.
5. Push versus pull process.
• A push operation is one where all the sessions are at the current location and the master sessions push the data to the new location. No other LPARs participate outside of the current location.
– The benefit of a push operation is that there are no extra systems to be included in a session.
– The target devices would be defined via channel extension to the LPAR(s) executing the master sessions.
– Agent sessions would be run on all LPARs participating in the migration.
– If the communication link(s) fail for whatever reason, the sessions may be suspended and then continued when the links are reestablished.
• A pull operation is one where the master session(s) are executed at the new location, and the LPARs at the current location participate as agent systems.
– This assumes that there are new mainframe server(s) at the new location with nothing executing on them other than the LPARs running the Softek TDMF sessions; therefore, sufficient memory would be available for the master sessions.
– The benefit is that the current production LPARs would act only as agent systems, which significantly reduces the memory requirements on those LPARs.
– The target devices would be defined only to the new LPARs. The source devices would be defined via channel extension to this environment. In a point-in-time situation, the agent systems are not required to have the target volumes defined.
– If the communication link(s) fail, a 15-minute limitation comes into play, because the master system(s) would not be able to communicate with the communications data set (COMMDS). If the link(s) have not been reestablished within the 15-minute limit, the sessions will fail.
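For the long-term session sizing mentioned in consideration 4, a back-of-the-envelope estimate can combine the allocated percentage of the volumes with an effective link rate such as the 20 MBps two-link DS-3 figure worked out earlier. The inputs below are examples only, not measured values.

    # Rough migration-duration estimate for planning long-term sessions:
    # only allocated tracks are transferred, and the elapsed time is bounded
    # by the effective rate of the extended-distance links.

    def migration_hours(total_capacity_gb, allocated_fraction, link_rate_mbytes_sec):
        """Hours to move the allocated portion of the data at the given MB/s rate."""
        allocated_mb = total_capacity_gb * allocated_fraction * 1024
        return allocated_mb / link_rate_mbytes_sec / 3600

    if __name__ == "__main__":
        # Example: 4 TB of volumes, 70 percent allocated, two DS-3 links at ~20 MB/s.
        print(f"{migration_hours(4096, 0.70, 20.0):.1f} hours")   # ~40.8 hours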
Alternate method — a combination of local and remote migrations
Although this may be a more costly solution, organizations have used a combination of TDMF local swap migrations and hardware controller-based remote replication to accomplish a data center relocation. Some have referred to this as a multi-hop migration using loaner equipment (storage) at the current site.

The basic reason for using TDMF software in this scenario is to place the data on a storage subsystem that is compatible with a remote storage subsystem. Users then have the ability to use a vendor's remote copy facility such as IBM's Peer-to-Peer Remote Copy (PPRC).

Diagram 6. Multi-hop migration: a local TDMF swap migration moves data from a Vendor B subsystem to a Vendor A subsystem in Dallas; PPRC then performs the remote copy from the Vendor A subsystem in Dallas to a Vendor A subsystem in Chicago.

There are some advantages with this type of migration. Assuming that the new storage is identical at both the current and new locations, there is an opportunity to exercise the new storage with the production environment prior to cutting over to the new remote site. That is, do a TDMF swap migration to the new local disk and allow PPRC to do the remote copy to the new location. After the local migration is complete, the user can continue to run production while monitoring the new subsystem at the current site for both functionality and performance. When satisfied that everything is working fine, the user can schedule a shutdown at the local site and bring up production at the new remote site.

A variation of the above is to use TDMF software's perpetual point-in-time option. In this scenario, it is possible to create a local replication of the production DASD using perpetual point-in-time and allow PPRC to copy the data to the remote site.
At some point, a controlled point-in-time mirror image of the production environment can be created at the new location. Stop the PPRC session and test the relocation procedure at the remote site. When satisfied that everything will work, start a fresh PPRC session and create a new point-in-time mirror image for the final migration. Using perpetual point-in-time, it is not necessary to recopy all of the DASD from the start; only the delta changes since the last perpetual point-in-time copy was created need to be copied. This testing can be repeated an unlimited number of times.

Alternate method — use of TDMF TCP/IP option
In some environments, the use of the TDMF TCP/IP option may be a cost-effective method of doing a data center relocation. The cost savings come from not having to acquire expensive channel extenders or establish costly hardware replication features such as EMC's SRDF.

It should be noted that, generally speaking, the transmission bandwidth is much lower and parallel data transmission is more difficult to achieve, which extends the length of time a migration will take. Also, if the TCP/IP lines are shared with other functions such as online communications, the overall time the TDMF migration takes will be affected.

Diagram 7. TDMF TCP/IP migration: TDMF running under MVS in Dallas sends data over TCP/IP to TDMF running under MVS in Chicago, moving data from a Vendor B subsystem to a Vendor A subsystem.
With a TCP/IP replication, it is not possible to do a swap migration, because as a volume completes the copy/synchronization phase it is not possible (nor desirable) to physically swap that volume to the new remote location while production continues to run at the current local site.

Note that with this type of replication there is no hardware in place for either the local or the remote operating system to have access to the DASD at the other location. The way TDMF software accomplishes this replication is by having a TDMF master read the data off the local source volume and then pass the data to a TDMF master running at the remote location. The remote TDMF master, in turn, writes the data to the target volume.

When all required volumes have been replicated to the remote site, it is necessary to shut down all applications, create the final TDMF point-in-time copy of the source DASD onto the target DASD and then shut down MVS at both locations. The shutdown is required because it is necessary to clip, or re-label, the volumes at the new remote location to the VOLIDs of the original source volumes. TDMF software can create the ICKDSF control statements to re-label the volumes.

One of the drawbacks to this type of relocation is that in order to test the relocation effort, it is necessary to terminate the TDMF session, clip the volumes at the new location, test, and then restart the TDMF relocation effort from the beginning. Production at the current location continues to run at all times, other than a short outage to synchronize all local and remote storage just prior to terminating the TDMF session.
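The read-at-source, send-over-TCP/IP, write-at-target flow described above can be pictured with a small, purely conceptual sketch. This is not TDMF's protocol or code; it only illustrates the direction of data movement, using ordinary files in place of DASD volumes, an assumed track-sized block and a hypothetical port number.

    # Conceptual illustration only: a "local" side reads blocks from the source
    # and sends them over TCP/IP; a "remote" side receives them and writes the
    # target copy. This is NOT TDMF's protocol; paths, block size and port are
    # hypothetical placeholders.

    import socket
    import struct
    import threading

    PORT = 5055          # hypothetical port
    BLOCK = 56_664       # roughly one 3390 track's worth of data, for illustration

    def remote_writer(listener, target_path):
        conn, _ = listener.accept()
        with conn, open(target_path, "wb") as target:
            while True:
                header = conn.recv(4)
                if not header:
                    break                        # sender closed the connection
                (length,) = struct.unpack("!I", header)
                data = b""
                while len(data) < length:
                    data += conn.recv(length - len(data))
                target.write(data)

    def local_reader(source_path):
        with socket.create_connection(("127.0.0.1", PORT)) as conn, \
             open(source_path, "rb") as source:
            while block := source.read(BLOCK):
                conn.sendall(struct.pack("!I", len(block)) + block)

    if __name__ == "__main__":
        listener = socket.socket()
        listener.bind(("127.0.0.1", PORT))
        listener.listen(1)
        with open("source.img", "wb") as f:      # stand-in for a source volume
            f.write(b"x" * 200_000)
        worker = threading.Thread(target=remote_writer, args=(listener, "target.img"))
        worker.start()
        local_reader("source.img")
        worker.join()
        print("copied", len(open("target.img", "rb").read()), "bytes")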
There is an option that can help facilitate the testing effort without restarting the entire migration from the beginning. It is possible to use a hardware mirror facility such as IBM's PPRC to create a second copy of the DASD at the remote location. This allows the user to continue production in the original local data center, continue replicating current source volumes to the remote target volumes, and test the relocation effort using the second PPRC copy. Note that it would still be necessary to quiesce the applications in the production environment to create a consistent copy of the test volumes.

To further assist in this effort, one method is to use the optional perpetual point-in-time feature. This allows the creation of as many point-in-time copies of the source DASD as required, all without restarting the TDMF replication from the beginning; only the updates that occurred since the last point-in-time need to be sent (a conceptual sketch of this delta-only refresh follows at the end of this section).

Diagram 8. Testing with a second remote copy: LPAR A in Dallas (MVS/TDMF, Vendor B storage) replicates over TCP/IP to LPAR B in Chicago (MVS/TDMF, Vendor A storage); an optional IBM PPRC session maintains a second copy on a Vendor C subsystem used for relocation testing (MVS/TDMF/TEST).

When satisfied that the testing went well, the user can execute the procedures outlined above and complete the final TDMF TCP/IP relocation copy. Again, this can all be accomplished by starting the TDMF replication once and then just sending the new source updates to the target when required, between testing periods and at the final relocation cutover.
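The delta-only refresh described above can be pictured with the following minimal sketch, which simply remembers which tracks were updated since the last copy and resends only those. The track numbers and the update feed are invented placeholders; this is an illustration of the idea, not TDMF's implementation.

    # Conceptual sketch of a delta-only refresh: after the first full copy,
    # only tracks updated since the last point-in-time are resent.
    # Track numbers and the update feed are invented placeholders.

    class PerpetualPointInTime:
        def __init__(self):
            self.changed_tracks = set()      # tracks written since the last copy

        def record_update(self, track):
            self.changed_tracks.add(track)

        def refresh(self):
            """Return the tracks that must be resent, then start a new interval."""
            delta, self.changed_tracks = self.changed_tracks, set()
            return sorted(delta)

    if __name__ == "__main__":
        session = PerpetualPointInTime()
        for track in (12, 507, 12, 8800):    # example production writes
            session.record_update(track)
        print("resend only:", session.refresh())   # -> [12, 507, 8800]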
Alternate method — use LDMF software to isolate an application for relocation
In some cases, relocating an entire data center is not granular enough: all that is really required is for an application or two to be relocated. The issue, most of the time, is that the application is not neatly located on its own volumes but is intermixed with many other applications that are not being relocated. Examples include a business unit that is being relocated or sold, or an application that needs to be moved from a test/development environment to a production environment located in a different data center.

Using a combination of Softek Logical Data Migration Facility (LDMF™) and TDMF software can accomplish the relocation with little effort and no outage impact to the application, which continues to run production until the final TDMF cutover. At cutover, the outage is only for a very short period. Once the files of the application have been identified, LDMF software can be used to migrate those data sets onto dedicated volumes. (See the white paper Implementing LDMF z/OS for simple, effective, and nondisruptive data migrations.)

Now that the application is isolated on its own volumes, there are multiple options for relocating it. One way would be to use a traditional tape dump/restore, but that would require an extended outage. Another choice is to use TDMF software, as discussed previously, to relocate the application volumes to the new remote location with only a minimal application outage.

Other Softek TDMF solutions
Softek TDMF z/OS technology is designed for volume-level migrations involving movement over local or remote distances (also known as "global migrations"). Data can be moved across a TCP/IP network (LAN or WAN) or channel extenders.
Reduction of routine database backup time
One of the benefits offered by TDMF software is its ability to significantly reduce routine out-of-service database backup time. TDMF software's ability to perform a point-in-time backup with a prompt, coupled with its ability to handle a group of related volumes as a single entity, enables it to perform the majority of a backup while the database region is active and available to users.

The region has to be down for only a small percentage of the time, at the end of the backup, because database transactions are buffered in CPU memory before being written out to disk. When TDMF software signals with a prompt that it is ready to complete the migrations for a group of related volumes, the database region may be brought down for a short period, which allows any remaining buffered updates to be written out to disk. This ensures that the logical and physical images of each of the source volumes match.

Once the region is down, TDMF software's prompt may be responded to, allowing TDMF software to finish backing up the relatively few remaining updates contained on the source volumes. When the backup is complete, the region may be brought up and put back into service. The point-in-time copy of the database created on the set of target volumes may then be written to tape using the user's conventional disk-to-tape package.

Maintenance
Users may use TDMF software to migrate off of packs that have not yet failed but are giving indications that maintenance is required.

Performance tuning
As the dynamics of the various DASD subsystems within a shop change, TDMF software can be used to nondisruptively rebalance workloads for better overall performance.
Lease considerations
For users who lease DASD, TDMF software can be used to help manage and significantly reduce lease overlap between old and new equipment.

Application testing
TDMF software may be used to create copies of production data that application programmers may use to develop modifications required for processing data.

Other Softek migration solutions

Softek LDMF software
Softek LDMF z/OS has the ability to nondisruptively switch files from a source volume to a target volume. LDMF software moves the applications dynamically onto new storage. This switchover feature, which occurs under user control, results in redirection of an application's I/O function (for example, from the original source to the target). This occurs without any disruption to the application.

Softek LDMF software is designed and optimized specifically for local data migrations at the file extent level. Softek LDMF and TDMF software can both be used in the same environment to address different migration project requirements. Together, LDMF and TDMF software provide a very fast, easy, optimized solution for data migrations.

Softek TDMF software for open systems
For open systems, Softek also offers TDMF software for the open systems platforms. The platforms include IBM AIX®, HP-UX, Sun Solaris, Linux® and Microsoft® Windows® NT®, 2000 and 2003.

Softek Replicator for open systems
Softek Replicator is versatile multiplatform data replication software that enables local and offsite disaster recovery and eliminates backup windows.
Summary
Today's business-critical applications must be available 24x7, with no downtime window for data migration or relocation. Softek TDMF software, the standard in data migration, gives users the freedom and the power to move data from any storage to any storage, on any platform, over any distance, at any time, with no interruption to active applications.

Softek TDMF z/OS is easy to use, flexible and transparent to applications. Whether the need is to upgrade to new storage and/or new server hardware (from any vendor, to any vendor), consolidate or relocate a data center, or find a more effective way to migrate data based on cost/performance, TDMF software provides the solution, all while maintaining application availability.

TDMF definitions
The following terms are commonly used when discussing a migration strategy for transferring workloads:

TDMF session: A migration group (or groups) and/or migration pairs to be processed in a single Softek TDMF execution.

Master: The Softek TDMF system running as an MVS batch job that is responsible for the data copy function. There can be only one master system in a Softek TDMF session.

Agent: An associated Softek TDMF MVS image running in a shared storage environment with the master. To ensure data integrity, any MVS LPAR that has access to the source or target volumes must be running the TDMF master or one of the agent systems. The master and associated agent systems communicate via a shared system communications data set (COMMDS).
Source: The originating DASD volume(s) containing the data to be migrated.

Target: The destination DASD volume(s) receiving the migrated data.

Migration pair: Source and target volumes for a single migration.

Migration group: A group of volume pairs with the same group name.

Synchronization: The collection of the final set of updates from all source volumes in a group or session, applied to the target volumes. For a point-in-time copy, synchronization involves quiescing any source application systems to ensure that all buffers are flushed.

Swap migration: A migration session in which the source VOLSER is switched to the target volume at the end of the session.

Point-in-time migration: A migration session in which the VOLSER on the source volume is not switched and remains online to the application on the original volume.

Local: Equivalent to source in an extended distance migration.

Remote: Equivalent to target in an extended distance migration.

Push migration: An extended distance migration in which the Softek TDMF master runs on the local system and makes point-in-time copies to remote volumes using channel extenders and communication links. The remote volumes do not have to be attached to any processor.
Pull migration: An extended distance migration in which the Softek TDMF master runs on the remote system and makes point-in-time copies from the local volumes to the remote volumes using channel extenders and communication links. The local system(s) run only a Softek TDMF agent.

Gen/genning: To generate or produce something according to an algorithm, program or set of rules (the opposite of parse).

Mbps: megabits per second

MBps: megabytes per second

Loadlibs: load libraries

For more information
For more information about TDMF z/OS, visit: ibm.com/services/storage

© Copyright IBM Corporation 2007
IBM Global Services, Route 100, Somers, NY 10589, U.S.A.
Produced in the United States of America, 12-07. All Rights Reserved.
IBM, the IBM logo, AIX, LDMF, Softek, TDMF and z/OS are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States, other countries, or both. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Other company, product and service names may be trademarks or service marks of others.
References in this publication to IBM products or services do not imply that IBM intends to make them available in all countries in which IBM operates. Performance/capacity results or other technical statistics appearing in this document are provided by the author solely for the purposes of illustrating specific technical concepts relating to the products discussed herein. The performance/capacity results or other technical statistics published herein do not constitute or represent a warranty as to merchantability, operation, or fitness of any product for any particular purpose.
GTW01276-USEN-01