End-to-End Scheduling with IBM Tivoli Workload Scheduler V8.2 (SG24-6624)

Front cover

End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2

Plan and implement your end-to-end scheduling environment
Experiment with real-life scenarios
Learn best practices and troubleshooting

Vasfi Gucer
Michael A. Lowry
Finn Bastrup Knudsen

ibm.com/redbooks
International Technical Support Organization

End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2

September 2004

SG24-6624-00
Note: Before using this information and the product it supports, read the information in "Notices" on page ix.

First Edition (September 2004)

This edition applies to IBM Tivoli Workload Scheduler Version 8.2 and IBM Tivoli Workload Scheduler for z/OS Version 8.2.

© Copyright International Business Machines Corporation 2004. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Notices
  Trademarks

Preface
  The team that wrote this redbook
  Notice
  Become a published author
  Comments welcome

Chapter 1. Introduction
  1.1 Job scheduling
  1.2 Introduction to end-to-end scheduling
  1.3 Introduction to Tivoli Workload Scheduler for z/OS
    1.3.1 Overview of Tivoli Workload Scheduler for z/OS
    1.3.2 Tivoli Workload Scheduler for z/OS architecture
  1.4 Introduction to Tivoli Workload Scheduler
    1.4.1 Overview of IBM Tivoli Workload Scheduler
    1.4.2 IBM Tivoli Workload Scheduler architecture
  1.5 Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler
  1.6 Summary of enhancements in V8.2 related to end-to-end scheduling
    1.6.1 New functions related to performance and scalability
    1.6.2 General enhancements
    1.6.3 Security enhancements
  1.7 The terminology used in this book

Chapter 2. End-to-end scheduling architecture
  2.1 IBM Tivoli Workload Scheduler for z/OS architecture
    2.1.1 Tivoli Workload Scheduler for z/OS configuration
    2.1.2 Tivoli Workload Scheduler for z/OS database objects
    2.1.3 Tivoli Workload Scheduler for z/OS plans
    2.1.4 Other Tivoli Workload Scheduler for z/OS features
  2.2 Tivoli Workload Scheduler architecture
    2.2.1 The IBM Tivoli Workload Scheduler network
    2.2.2 Tivoli Workload Scheduler workstation types
    2.2.3 Tivoli Workload Scheduler topology
    2.2.4 IBM Tivoli Workload Scheduler components
    2.2.5 IBM Tivoli Workload Scheduler plan
  2.3 End-to-end scheduling architecture
    2.3.1 How end-to-end scheduling works
    2.3.2 Tivoli Workload Scheduler for z/OS end-to-end components
    2.3.3 Tivoli Workload Scheduler for z/OS end-to-end configuration
    2.3.4 Tivoli Workload Scheduler for z/OS end-to-end plans
    2.3.5 Making the end-to-end scheduling system fault tolerant
    2.3.6 Benefits of end-to-end scheduling
  2.4 Job Scheduling Console and related components
    2.4.1 A brief introduction to the Tivoli Management Framework
    2.4.2 Job Scheduling Services (JSS)
    2.4.3 Connectors
  2.5 Job log retrieval in an end-to-end environment
    2.5.1 Job log retrieval via the Tivoli Workload Scheduler connector
    2.5.2 Job log retrieval via the OPC connector
    2.5.3 Job log retrieval when firewalls are involved
  2.6 Tivoli Workload Scheduler, important files, and directory structure
  2.7 conman commands in the end-to-end environment

Chapter 3. Planning end-to-end scheduling with Tivoli Workload Scheduler 8.2
  3.1 Different ways to do end-to-end scheduling
  3.2 The rationale behind end-to-end scheduling
  3.3 Before you start the installation
    3.3.1 How to order the Tivoli Workload Scheduler software
    3.3.2 Where to find more information for planning
  3.4 Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS
    3.4.1 Tivoli Workload Scheduler for z/OS documentation
    3.4.2 Service updates (PSP bucket, APARs, and PTFs)
    3.4.3 Tivoli Workload Scheduler for z/OS started tasks for end-to-end scheduling
    3.4.4 Hierarchical File System (HFS) cluster
    3.4.5 Data sets related to end-to-end scheduling
    3.4.6 TCP/IP considerations for end-to-end server in sysplex
    3.4.7 Upgrading from Tivoli Workload Scheduler for z/OS 8.1 end-to-end scheduling
  3.5 Planning for end-to-end scheduling with Tivoli Workload Scheduler
    3.5.1 Tivoli Workload Scheduler publications and documentation
    3.5.2 Tivoli Workload Scheduler service updates (fix packs)
    3.5.3 System and software requirements
    3.5.4 Network planning and considerations
    3.5.5 Backup domain manager
    3.5.6 Performance considerations
    3.5.7 Fault-tolerant agent (FTA) naming conventions
  3.6 Planning for the Job Scheduling Console
    3.6.1 Job Scheduling Console documentation
    3.6.2 Job Scheduling Console service (fix packs)
    3.6.3 Compatibility and migration considerations for the JSC
    3.6.4 Planning for Job Scheduling Console availability
    3.6.5 Planning for server started task for JSC communication
  3.7 Planning for migration or upgrade from previous versions
  3.8 Planning for maintenance or upgrades

Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling
  4.1 Before the installation is started
  4.2 Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling
    4.2.1 Executing EQQJOBS installation aid
    4.2.2 Defining Tivoli Workload Scheduler for z/OS subsystems
    4.2.3 Allocate end-to-end data sets
    4.2.4 Create and customize the work directory
    4.2.5 Create started task procedures for Tivoli Workload Scheduler for z/OS
    4.2.6 Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling
    4.2.7 Initialization statements used to describe the topology
    4.2.8 Example of DOMREC and CPUREC definitions
    4.2.9 The JTOPTS TWSJOBNAME() parameter
    4.2.10 Verify end-to-end installation in Tivoli Workload Scheduler for z/OS
  4.3 Installing Tivoli Workload Scheduler in an end-to-end environment
    4.3.1 Installing multiple instances of Tivoli Workload Scheduler on one machine
    4.3.2 Verify the Tivoli Workload Scheduler installation
  4.4 Define, activate, verify fault-tolerant workstations
    4.4.1 Define fault-tolerant workstation in Tivoli Workload Scheduler controller workstation database
    4.4.2 Activate the fault-tolerant workstation definition
    4.4.3 Verify that the fault-tolerant workstations are active and linked
  4.5 Creating fault-tolerant workstation job definitions and job streams
    4.5.1 Centralized and non-centralized scripts
    4.5.2 Definition of centralized scripts
    4.5.3 Definition of non-centralized scripts
    4.5.4 Combination of centralized script and VARSUB, JOBREC parameters
    4.5.5 Definition of FTW jobs and job streams in the controller
  4.6 Verification test of end-to-end scheduling
    4.6.1 Verification of job with centralized script definitions
    4.6.2 Verification of job with non-centralized scripts
    4.6.3 Verification of centralized script with JOBREC parameters
  4.7 Activate support for the Tivoli Workload Scheduler Job Scheduling Console
    4.7.1 Install and start Tivoli Workload Scheduler for z/OS JSC server
    4.7.2 Installing and configuring Tivoli Management Framework 4.1
    4.7.3 Alternate method using Tivoli Management Framework 3.7.1
    4.7.4 Creating connector instances
    4.7.5 Creating TMF administrators for Tivoli Workload Scheduler
    4.7.6 Installing the Job Scheduling Console

Chapter 5. End-to-end implementation scenarios and examples
  5.1 Description of our environment and systems
  5.2 Creation of the Symphony file in detail
  5.3 Migrating Tivoli OPC tracker agents to end-to-end scheduling
    5.3.1 Migration benefits
    5.3.2 Migration planning
    5.3.3 Migration checklist
    5.3.4 Migration actions
    5.3.5 Migrating backward
  5.4 Conversion from Tivoli Workload Scheduler network to Tivoli Workload Scheduler for z/OS managed network
    5.4.1 Illustration of the conversion process
    5.4.2 Considerations before doing the conversion
    5.4.3 Conversion process from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS
    5.4.4 Some guidelines to automate the conversion process
  5.5 Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios
    5.5.1 Configure Tivoli Workload Scheduler for z/OS backup engines
    5.5.2 Configure DVIPA for Tivoli Workload Scheduler for z/OS end-to-end server
    5.5.3 Configure backup domain manager for first-level domain manager
    5.5.4 Switch to Tivoli Workload Scheduler backup domain manager
    5.5.5 Implementing Tivoli Workload Scheduler high availability on high availability environments
  5.6 Backup and maintenance guidelines for FTAs
    5.6.1 Backup of the Tivoli Workload Scheduler FTAs
    5.6.2 Stdlist files on Tivoli Workload Scheduler FTAs
    5.6.3 Auditing log files on Tivoli Workload Scheduler FTAs
    5.6.4 Monitoring file systems on Tivoli Workload Scheduler FTAs
    5.6.5 Central repositories for important Tivoli Workload Scheduler files
  5.7 Security on fault-tolerant agents
    5.7.1 The security file
    5.7.2 Sample security file
  5.8 End-to-end scheduling tips and tricks
    5.8.1 File dependencies in the end-to-end environment
    5.8.2 Handling offline or unlinked workstations
    5.8.3 Using dummy jobs
    5.8.4 Placing job scripts in the same directories on FTAs
    5.8.5 Common errors for jobs on fault-tolerant workstations
    5.8.6 Problems with port numbers
    5.8.7 Cannot switch to new Symphony file (EQQPT52E) messages

Appendix A. Connector reference
  Setting the Tivoli environment
  Authorization roles required
  Working with Tivoli Workload Scheduler for z/OS connector instances
    The wopcconn command
  Working with Tivoli Workload Scheduler connector instances
    The wtwsconn.sh command
  Useful Tivoli Framework commands

Related publications
  IBM Redbooks
  Other publications
  Online resources
  How to get IBM Redbooks
  Help from IBM

Abbreviations and acronyms

Index
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.

© Copyright IBM Corp. 2004. All rights reserved.
Trademarks

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX®, AS/400®, HACMP™, IBM®, Language Environment®, Maestro™, MVS™, NetView®, OS/390®, OS/400®, RACF®, Redbooks™, Redbooks (logo)™, S/390®, ServicePac®, Tivoli®, Tivoli Enterprise Console®, TME®, VTAM®, z/OS®, zSeries®

The following terms are trademarks of other companies:

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel is a trademark of Intel Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, and service names may be trademarks or service marks of others.
Preface

The beginning of the new century sees the data center with a mix of work, hardware, and operating systems previously undreamed of. Today's challenge is to manage disparate systems with minimal effort and maximum reliability. People experienced in scheduling traditional host-based batch work must now manage distributed systems, and those working in the distributed environment must take responsibility for work running on the corporate OS/390® system.

This IBM® Redbook considers how best to provide end-to-end scheduling using IBM Tivoli® Workload Scheduler Version 8.2, both the distributed (previously known as Maestro™) and mainframe (previously known as OPC) components.

In this book, we provide the information for installing the necessary Tivoli Workload Scheduler software components and configuring them to communicate with each other. In addition to technical information, we consider various scenarios that may be encountered in the enterprise and suggest practical solutions. We describe how to manage work and dependencies across both environments using a single point of control. We believe that this redbook will be a valuable reference for IT specialists who implement end-to-end scheduling with Tivoli Workload Scheduler 8.2.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, Austin Center.

Vasfi Gucer is a Project Leader at the International Technical Support Organization, Austin Center. He worked for IBM Turkey for 10 years and has been with the ITSO since January 1999. He has more than 10 years of experience in the areas of systems management, and networking hardware and software on mainframe and distributed platforms. He has worked on various Tivoli customer projects as a Systems Architect in Turkey and the United States. Vasfi is also an IBM Certified Senior IT Specialist.

Michael A. Lowry is an IBM Certified Consultant and Instructor currently working for IBM in Stockholm, Sweden. Michael does support, consulting, and training for IBM customers, primarily in Europe. He has 10 years of experience in the IT services business and has worked for IBM since 1996. Michael studied engineering and biology at the University of Texas in Austin, his hometown.
Before moving to Sweden, he worked in Austin for Apple, IBM, and the IBM Tivoli Workload Scheduler Support Team at Tivoli Systems. He has five years of experience with Tivoli Workload Scheduler and has extensive experience with IBM network and storage management products. He is also an IBM Certified AIX® Support Professional.

Finn Bastrup Knudsen is an Advisory IT Specialist in Integrated Technology Services (ITS) in IBM Global Services in Copenhagen, Denmark. He has 12 years of experience working with IBM Tivoli Workload Scheduler for z/OS® (OPC) and four years of experience working with IBM Tivoli Workload Scheduler. Finn primarily does consultation and services at customer sites, as well as IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler training. He is a certified Tivoli Instructor in IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler. He has worked at IBM for 13 years. His areas of expertise include IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler.

Also thanks to the following people for their contributions to this project:

International Technical Support Organization, Austin Center: Budi Darmawan and Betsy Thaggard

IBM Italy: Angelo Dambrosio, Paolo Falsi, Antonio Gallotti, Pietro Iannucci, Valeria Perticara

IBM USA: Robert Haimowitz, Stephen Viola

IBM Germany: Stefan Franke

Notice

This publication is intended to help Tivoli specialists implement an end-to-end scheduling environment with IBM Tivoli Workload Scheduler 8.2. The information in this publication is not intended as the specification of any programming interfaces that are provided by Tivoli Workload Scheduler 8.2. See the PUBLICATIONS section of the IBM Programming Announcement for Tivoli Workload Scheduler 8.2 for more information about what publications are considered to be product documentation.
Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You will team with IBM technical professionals, Business Partners, and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you will develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us. We want our Redbooks™ to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

- Use the online Contact us review redbook form found at: ibm.com/redbooks
- Send your comments in an e-mail to: redbook@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. JN9B Building 905, Internal Zip 2834, 11501 Burnet Road, Austin, Texas 78758-3493
Chapter 1. Introduction

IBM Tivoli Workload Scheduler for z/OS Version 8.2 introduces many new features and further integrates the OPC-based and Maestro-based scheduling engines. In this chapter, we give a brief introduction to the IBM Tivoli Workload Scheduler 8.2 suite and summarize the functions that are introduced in Version 8.2:

- "Job scheduling" on page 2
- "Introduction to end-to-end scheduling" on page 3
- "Introduction to Tivoli Workload Scheduler for z/OS" on page 4
- "Introduction to Tivoli Workload Scheduler" on page 5
- "Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler" on page 7
- "Summary of enhancements in V8.2 related to end-to-end scheduling" on page 8
- "The terminology used in this book" on page 21
1.1 Job scheduling

Scheduling is the nucleus of the data center. Orderly, reliable sequencing and management of process execution is an essential part of IT management. The IT environment consists of multiple strategic applications, such as SAP R/3 and Oracle, payroll, invoicing, e-commerce, and order handling. These applications run on many different operating systems and platforms. Legacy systems must be maintained and integrated with newer systems. Workloads are increasing, accelerated by electronic commerce. Staffing and training requirements increase, and many platform experts are needed. There are too many consoles and no overall point of control. Constant (24x7) availability is essential and must be maintained through migrations, mergers, acquisitions, and consolidations.

Dependencies exist between jobs in different environments. For example, a customer can use a Web browser to fill out an order form that triggers a UNIX® job that acknowledges the order, an AS/400® job that orders parts, a z/OS job that debits the customer's bank account, and a Windows NT® job that prints an invoice and address label. Each job must run only after the job before it has completed.

The IBM Tivoli Workload Scheduler Version 8.2 suite provides an integrated solution for running this kind of complicated workload. Its Job Scheduling Console provides a centralized point of control and unified interface for managing the workload regardless of the platform or operating system on which the jobs run. The Tivoli Workload Scheduler 8.2 suite includes IBM Tivoli Workload Scheduler, IBM Tivoli Workload Scheduler for z/OS, and the Job Scheduling Console. Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS can be used separately or together. End-to-end scheduling means using both products together, with an IBM mainframe acting as the scheduling controller for a network of other workstations.

Because Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS have different histories and work on different platforms, someone who is familiar with one of the programs may not be familiar with the other. For this reason, we give a short introduction to each product separately and then proceed to discuss how the two programs work together.
1.2 Introduction to end-to-end scheduling

End-to-end scheduling means scheduling workload across all computing resources in your enterprise, from the mainframe in your data center, to the servers in your regional headquarters, all the way to the workstations in your local office. The Tivoli Workload Scheduler end-to-end scheduling solution is a system whereby scheduling throughout the network is defined, managed, controlled, and tracked from a single IBM mainframe or sysplex.

End-to-end scheduling requires using two different programs: Tivoli Workload Scheduler for z/OS on the mainframe, and Tivoli Workload Scheduler on other operating systems (UNIX, Windows®, and OS/400®). This is shown in Figure 1-1: the MASTERDM domain at the top, with the Tivoli Workload Scheduler for z/OS master domain manager (OPCMASTER) on z/OS; below it, DomainA and DomainB with AIX and HP-UX domain managers (DMA and DMB); and at the bottom, Tivoli Workload Scheduler fault-tolerant agents FTA1 through FTA4 on Linux, OS/400, Windows XP, and Solaris.

Figure 1-1 Both schedulers are required for end-to-end scheduling

Despite the similar names, Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler are quite different and have distinct histories. IBM Tivoli Workload Scheduler for z/OS was originally called OPC. It was developed by IBM in the early days of the mainframe. IBM Tivoli Workload Scheduler was originally developed by a company called Unison Software. Unison was purchased by Tivoli, and Tivoli was then purchased by IBM. Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler have slightly different ways of working, but the two programs have many features in common. IBM has continued development of both programs toward the goal of providing closer
and closer integration between them. The reason for this integration is simple: to facilitate an integrated scheduling system across all operating systems.

It should be obvious that end-to-end scheduling depends on using the mainframe as the central point of control for the scheduling network. There are other ways to integrate scheduling between z/OS and other operating systems. We will discuss these in the following sections.

1.3 Introduction to Tivoli Workload Scheduler for z/OS

IBM Tivoli Workload Scheduler for z/OS has been scheduling and controlling batch workloads in data centers since 1977. Originally called Operations Planning and Control (OPC), the product has been extensively developed and extended to meet the increasing demands of customers worldwide. An overnight workload consisting of 100,000 production jobs is not unusual, and Tivoli Workload Scheduler for z/OS can easily manage this kind of workload.

1.3.1 Overview of Tivoli Workload Scheduler for z/OS

IBM Tivoli Workload Scheduler for z/OS databases contain all of the information about the work that is to be run, when it should run, and the resources that are needed and available. This information is used to calculate a forecast called the long-term plan. Data center staff can check this to confirm that the desired work is being scheduled when required. The long-term plan usually covers a time range of four to twelve weeks.

The current plan is produced based on the long-term plan and the databases. The current plan usually covers 24 hours and is a detailed production schedule. Tivoli Workload Scheduler for z/OS uses the current plan to submit jobs to the appropriate processor at the appropriate time. All jobs in the current plan have Tivoli Workload Scheduler for z/OS status codes that indicate the progress of work. When a job's predecessors are complete, Tivoli Workload Scheduler for z/OS considers it ready for submission. It verifies that all requested resources are available, and when these conditions are met, it causes the job to be submitted.

1.3.2 Tivoli Workload Scheduler for z/OS architecture

IBM Tivoli Workload Scheduler for z/OS consists of a controller and one or more trackers. The controller, which runs on a z/OS system, manages the Tivoli Workload Scheduler for z/OS databases and the long-term and current plans. The controller schedules work and causes jobs to be submitted to the appropriate system at the appropriate time.
Trackers are installed on every system managed by the controller. The tracker is the link between the controller and the managed system. The tracker submits jobs when the controller instructs it to do so, and it passes job start and job end information back to the controller. The controller can schedule jobs on z/OS systems using trackers, or on other operating systems using fault-tolerant agents (FTAs). FTAs can be run on many operating systems, including AIX, Linux®, Solaris, HP-UX, OS/400, and Windows. FTAs run IBM Tivoli Workload Scheduler, formerly called Maestro.

The most common way of working with the controller is via ISPF panels. However, several other methods are available, including Program Interfaces, TSO commands, and the Job Scheduling Console. The Job Scheduling Console (JSC) is a Java™-based graphical user interface for controlling and monitoring workload on the mainframe and other platforms. The first version of JSC was released at the same time as Tivoli OPC Version 2.3. The current version of JSC (1.3) has been updated with several new functions specific to Tivoli Workload Scheduler for z/OS. JSC provides a common interface to both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler.

For more information about IBM Tivoli Workload Scheduler for z/OS architecture, see Chapter 2, "End-to-end scheduling architecture" on page 25.

1.4 Introduction to Tivoli Workload Scheduler

IBM Tivoli Workload Scheduler is descended from the Unison Maestro program. Unison Maestro was developed by Unison Software on the Hewlett-Packard MPE operating system. It was then ported to UNIX and Windows. In its various manifestations, Tivoli Workload Scheduler has a 17-year track record.

During the processing day, Tivoli Workload Scheduler manages the production environment and automates most operator activities. It prepares jobs for execution, resolves interdependencies, and launches and tracks each job. Because jobs begin as soon as their dependencies are satisfied, idle time is minimized. Jobs never run out of sequence. If a job fails, IBM Tivoli Workload Scheduler can handle the recovery process with little or no operator intervention.

1.4.1 Overview of IBM Tivoli Workload Scheduler

As with IBM Tivoli Workload Scheduler for z/OS, there are two basic aspects to job scheduling in IBM Tivoli Workload Scheduler: the database and the plan.

The database contains all definitions for scheduling objects, such as jobs, job streams, resources, and workstations. It also holds statistics of job and job stream execution, as well as information on the user ID that created an object
and when an object was last modified. The plan contains all job scheduling activity planned for a period of one day. In IBM Tivoli Workload Scheduler, the plan is created every 24 hours and consists of all the jobs, job streams, and dependency objects that are scheduled to execute for that day. Job streams that do not complete successfully can be carried forward into the next day's plan.

1.4.2 IBM Tivoli Workload Scheduler architecture

A typical IBM Tivoli Workload Scheduler network consists of a master domain manager, domain managers, and fault-tolerant agents. The master domain manager, sometimes referred to as just the master, contains the centralized database files that store all defined scheduling objects. The master creates the plan, called Symphony, at the start of each day. Each domain manager is responsible for distribution of the plan to the fault-tolerant agents (FTAs) in its domain. A domain manager also handles resolution of dependencies between FTAs in its domain.

FTAs are the workhorses of a Tivoli Workload Scheduler network. FTAs are where most jobs are run. As their name implies, fault-tolerant agents are fault tolerant: in the event of a loss of communication with the domain manager, FTAs are capable of resolving local dependencies and launching their jobs without interruption. FTAs are capable of this because each FTA has its own copy of the plan, which contains a complete set of scheduling instructions for the production day. Similarly, a domain manager can resolve dependencies between FTAs in its domain even in the event of a loss of communication with the master, because the domain manager's plan receives updates from all subordinate FTAs and contains the authoritative status of all jobs in that domain. The master domain manager is updated with the status of all jobs in the entire IBM Tivoli Workload Scheduler network. Logging and monitoring of the IBM Tivoli Workload Scheduler network is performed on the master.

Starting with Tivoli Workload Scheduler Version 7.0, a new Java-based graphical user interface was made available to provide an easy-to-use interface to Tivoli Workload Scheduler. This new GUI is called the Job Scheduling Console (JSC). The current version of JSC has been updated with several functions specific to Tivoli Workload Scheduler. The JSC provides a common interface to both Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS.

For more about IBM Tivoli Workload Scheduler architecture, see Chapter 2, "End-to-end scheduling architecture" on page 25.
1.5 Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler

Both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler have individual strengths. While an enterprise running mainframe and non-mainframe systems could schedule and control work using only one of these tools or using both tools separately, a complete solution requires that Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler work together. The Tivoli Workload Scheduler for z/OS long-term plan gives peace of mind by showing the workload forecast weeks or months into the future. Tivoli Workload Scheduler fault-tolerant agents go right on running jobs even if they lose communication with the domain manager. Tivoli Workload Scheduler for z/OS manages huge numbers of jobs through a sysplex of connected z/OS systems. Tivoli Workload Scheduler extended agents can control work on applications such as SAP R/3 and Oracle.

Many data centers need to schedule significant amounts of both mainframe and non-mainframe jobs. It is often desirable to have a single point of control for scheduling on all systems in the enterprise, regardless of platform, operating system, or application. These businesses would probably benefit from implementing the end-to-end scheduling configuration. End-to-end scheduling enables the business to make the most of its computing resources.

That said, the end-to-end scheduling configuration is not necessarily the best way to go for every enterprise. Some computing environments would probably benefit from keeping their mainframe and non-mainframe schedulers separate. Others would be better served by integrating the two schedulers in a different way (for example, z/OS [or MVS™] extended agents). Enterprises with a majority of jobs running on UNIX and Windows servers might not want to cede control of these jobs to the mainframe.

Because the end-to-end solution involves software components on both mainframe and non-mainframe systems, there will have to be a high level of cooperation between your mainframe operators and your UNIX and Windows system administrators. Careful consideration of the requirements of end-to-end scheduling is necessary before going down this path. There are also several important decisions that must be made before beginning an implementation of end-to-end scheduling. For example, there is a trade-off between centralized control and fault tolerance. Careful planning now can save you time and trouble later. In Chapter 3, "Planning end-to-end scheduling with Tivoli Workload Scheduler 8.2" on page 109, we explain in detail the decisions that must be made prior to implementation. We strongly recommend that you read this chapter in full before beginning any implementation.
1.6 Summary of enhancements in V8.2 related to end-to-end scheduling

Version 8.2 is the latest version of both IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS. In this section we cover the new functions that affect end-to-end scheduling in three categories.

1.6.1 New functions related to performance and scalability

Several features are now available with IBM Tivoli Workload Scheduler for z/OS 8.2 that directly or indirectly affect performance.

Multiple first-level domain managers

In IBM Tivoli Workload Scheduler for z/OS 8.1, there was a limitation of only one first-level domain manager (called the primary domain manager). In Version 8.2, you can have multiple first-level domain managers (that is, at the level immediately below OPCMASTER); see Figure 1-2 on page 9. This allows greater flexibility and scalability and eliminates a potential performance bottleneck. It also allows greater freedom in defining your Tivoli Workload Scheduler distributed network; a sketch of the topology statements that describe such a network is shown below.
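Such a topology is described to the controller with DOMREC statements, one per distributed domain (DOMREC and CPUREC are covered in 4.2.7 and 4.2.8). The following is a minimal sketch, not taken from the book: the domain and workstation names follow Figure 1-2, and the keyword usage reflects our reading of the DOMREC syntax.

   /* Two first-level domains, both children of the master domain */
   /* (MASTERDM, managed by OPCMASTER on z/OS).                   */
   DOMREC  DOMAIN(DOMAINZ)      /* first first-level domain       */
           DOMMNGR(DMZ)         /* its AIX domain manager         */
           DOMPARENT(MASTERDM)  /* parent is the master domain    */
   DOMREC  DOMAIN(DOMAINY)      /* second first-level domain      */
           DOMMNGR(DMY)
           DOMPARENT(MASTERDM)
   DOMREC  DOMAIN(DOMAINA)      /* lower-level domain under DMZ   */
           DOMMNGR(DMA)
           DOMPARENT(DOMAINZ)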
Figure 1-2 IBM Tivoli Workload Scheduler network with two first-level domains (the z/OS master domain manager OPCMASTER at the top; first-level domains DomainZ and DomainY with AIX domain managers DMZ and DMY; below them, DomainA, DomainB, and DomainC with HP-UX and AIX domain managers DMA, DMB, and DMC; and fault-tolerant agents FTA1 through FTA4 on AIX, Linux, Windows 2000, and Solaris)

Improved SCRIPTLIB parser

The job definitions for non-centralized scripts are kept in members of the SCRPTLIB data set (EQQSCLIB DD statement). The definitions are specified as keywords and parameter definitions, as shown in Example 1-1.

Example 1-1 SCRPTLIB data set member TWS.INST.SCRPTLIB(AIXJOB01)

/* Job to be executed on AIX machines */
VARSUB TABLES(FTWTABLE) PREFIX(&) VARFAIL(YES) TRUNCATE(NO)
JOBREC JOBSCR(&TWSHOME./scripts/return_rc.sh 2) RCCONDSUC((RC=4) OR (RC=6))
RECOVERY OPTION(STOP) MESSAGE(Reply Yes when OK to continue)
The information in the SCRPTLIB member must be parsed every time a job is added to the Symphony file (both at Symphony creation and dynamically). In IBM Tivoli Workload Scheduler 8.1, the TSO parser was used, but this caused a major performance issue: up to 70% of the time that it took to create a Symphony file was spent parsing the SCRIPTLIB library members. In Version 8.2, a new parser has been implemented that significantly reduces the parsing time and consequently the Symphony file creation time.

Check server status before Symphony file creation

In an end-to-end configuration, daily planning batch jobs require that both the controller and server are active to be able to synchronize all the tasks and avoid unprocessed events being left in the event files. If the server is not active, the daily planning batch process now fails at the beginning to avoid pointless extra processing. Two new log messages show the status of the end-to-end server:

EQQ3120E END-TO-END SERVER NOT AVAILABLE
EQQZ193I END-TO-END TRANSLATOR SERVER PROCESS IS NOW AVAILABLE

Improved job log retrieval performance

In IBM Tivoli Workload Scheduler 8.1, the thread structure of the translator process meant that only usual incoming events were immediately notified to the controller; job log events were detected by the controller only when another event arrived or after a 30-second timeout.

In IBM Tivoli Workload Scheduler 8.2, a new input-writer thread has been implemented that manages the writing of events to the input queue and takes input from both the input translator and the job log retriever. This enables the job log retriever to test whether there is room on the input queue; if there is not, it loops until enough space is available. Meanwhile, the input translator can continue to write its smaller events to the queue.

1.6.2 General enhancements

In this section, we cover enhancements in the general category.

Centralized Script Library Management

In order to ease the migration path from OPC tracker agents to IBM Tivoli Workload Scheduler distributed agents, a new function has been introduced in Tivoli Workload Scheduler 8.2 called Centralized Script Library Management (or centralized scripting). It is now possible to use the Tivoli Workload Scheduler for z/OS engine as the centralized repository for scripts of distributed jobs.
A centralized script is stored in the JOBLIB, and it provides features that were available on OPC tracker agents, such as:

- JCL editing
- Variable substitution and job setup
- Automatic recovery
- Support for use of the job-submit exit (EQQUX001)

Note: The centralized script feature is not supported for fault-tolerant jobs running on an AS/400 fault-tolerant agent.

Rules for defining centralized scripts

To define a centralized script in the JOBLIB, the following rules must be considered (a sketch that follows them is shown after this list):

- Lines that start with //* OPC, //*%OPC, and //*>OPC are used for variable substitution and automatic recovery. They are removed before the script is downloaded to the distributed agent.
- Each line runs from column 1 to column 80. A backslash (\) in column 80 is the continuation character.
- Blanks at the end of a line are automatically removed.

These rules guarantee compatibility with the old tracker agent jobs.

Note: The SCRIPTLIB follows the TSO rules, so the rules for defining a centralized script in the JOBLIB differ from those for defining the JOBSCR and JOBCMD of a non-centralized script.

For more details, refer to 4.5.2, "Definition of centralized scripts" on page 219.
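To make the rules concrete, here is a minimal sketch of what a centralized script member might look like. It is not taken from the book: the member name, script path, and echo text are invented; //*%OPC SCAN is the directive that activates variable substitution, and &OYMD1. is one of the scheduler-supplied occurrence-date variables.

   //*%OPC SCAN
   # Hypothetical centralized script stored in JOBLIB member AIXDAILY.
   # The //*%OPC line above enables variable scanning; lines starting
   # with //* OPC, //*%OPC, or //*>OPC are removed before the script
   # is downloaded to the fault-tolerant agent.
   echo "Extract for occurrence date &OYMD1. starting"
   /opt/app/bin/extract.sh &OYMD1.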
A new data set, EQQTWSCS, has been introduced with this release to facilitate centralized scripting. EQQTWSCS is a PDSE data set used to temporarily store a script when it is downloaded from the JOBLIB data set to the agent for its submission.

User interface changes for the centralized script

Centralized scripting required changes to several Tivoli Workload Scheduler for z/OS interfaces, such as ISPF, the Job Scheduling Console, and a number of batch interfaces. In this section, we cover the changes to the ISPF and Job Scheduling Console user interfaces.

In ISPF, a new job option has been added to specify whether an operation that runs on a fault-tolerant workstation has a centralized script. It can take the value Y or N:

- Y if the job has its script stored centrally in the JOBLIB.
- N if the script is stored locally and the job has its job definition in the SCRIPTLIB.

In the database, the value of this new job option can be modified when adding or modifying an application or operation. It can be set for every operation, without workstation checking. When a new operation is created, the default value for this option is N. For non-FTW (fault-tolerant workstation) operations, the value of the option is automatically changed to Y during Daily Plan or when exiting the Modify an occurrence or Create an occurrence dialog.

The new Centralized Script option was added for operations in the Application Description database and is always editable (Figure 1-3).

Figure 1-3 CENTRALIZED SCRIPT option in the AD dialog

The Centralized Script option has also been added for operations in the current plan. It is editable only when adding a new operation; it can be browsed when modifying an operation (Figure 1-4 on page 13).
Figure 1-4 CENTRALIZED SCRIPT option in the CP dialog

Similarly, a Centralized Script option has been added in the Job Scheduling Console dialog for creating an FTW task, as shown in Figure 1-5.

Figure 1-5 Centralized Script option in the JSC dialog
Considerations when using centralized scripts

Using centralized scripts can ease the migration path from OPC tracker agents to FTAs. It is also easier to maintain centralized scripts because they are kept in a central location, but these benefits come with some limitations. When deciding whether to store a script locally or centrally, take into consideration that:

- The script must be downloaded every time a job runs. There is no caching mechanism on the FTA: the script is discarded as soon as the job completes, and a rerun of a centralized job causes the script to be downloaded again.
- There is a reduction in fault tolerance, because the centralized dependency can be released only by the controller.

Recovery for non-centralized jobs

In Tivoli Workload Scheduler 8.2, a new simple syntax has been added to the job definition to specify recovery options and actions. Recovery is performed automatically on the FTA in case of an abend. With this feature, it is now possible to use recovery for jobs running in an end-to-end network in the same way as implemented in IBM Tivoli Workload Scheduler distributed.

Defining recovery for non-centralized jobs

To activate recovery for a non-centralized job, you specify the RECOVERY statement in the job member in the scriptlib. It is possible to specify one or both of the following recovery actions:

- A recovery job (JOBCMD or JOBSCR keyword)
- A recovery prompt (MESSAGE keyword)

The recovery actions must be followed by one of the recovery options (the OPTION keyword): stop, continue, or rerun. The default is stop, with no recovery job and no recovery prompt. The full syntax of the RECOVERY statement is shown in Figure 1-6; a sketch of a job definition that uses it follows.
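For illustration, here is a minimal sketch of a SCRPTLIB job definition that combines a recovery prompt with a recovery job; it is not from the book, and the script paths and user name are invented. Per the behavior described in Table 1-1 below, on an abend the prompt is issued first; after a Yes reply the recovery job runs, and if it succeeds the failed job is rerun.

   /* Hypothetical SCRPTLIB member for a nightly load job */
   JOBREC   JOBSCR(/opt/app/bin/nightly_load.sh)
            JOBUSR(appprod)
   RECOVERY OPTION(RERUN)
            MESSAGE(Reply Yes to run cleanup and rerun the load)
            JOBSCR(/opt/app/bin/cleanup.sh)
            JOBUSR(appprod)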
Figure 1-6 Syntax of the RECOVERY statement

The keywords JOBUSR, JOBWS, INTRACTV, and RCCONDSUC can be used only if you have defined a recovery job using the JOBSCR or JOBCMD keyword. You cannot use the recovery prompt if you specify the recovery STOP option without using a recovery job. Having OPTION(RERUN) and no recovery prompt specified could cause a loop. To prevent this situation, after a failed rerun of the job, a recovery prompt message is shown automatically.

Note: The RECOVERY statement is ignored if it is used with a job that runs a centralized script.

For more details, refer to 4.5.3, "Definition of non-centralized scripts" on page 221.

Recovery actions available

Table 1-1 describes the recovery actions that can be taken against a job that ended in error (and not failed). JobP is the principal job; JobR is the recovery job.

Table 1-1 The recovery actions taken against a job that ended in error

No recovery prompt / no recovery job:
- Stop: JobP remains in error.
- Continue: JobP is completed.
- Rerun: Rerun JobP.

A recovery prompt / no recovery job:
- Stop: Issue the prompt. JobP remains in error.
- Continue: Issue the recovery prompt. On a "yes" reply, JobP is completed. If there is no reply, JobP remains in error.
- Rerun: Issue the prompt. If there is no reply, JobP remains in error. On a "yes" reply, rerun JobP.

No recovery prompt / a recovery job:
- Stop: Launch JobR. If it is successful, JobP is completed; otherwise JobP remains in error.
- Continue: Launch JobR. JobP is completed.
- Rerun: Launch JobR. If it is successful, rerun JobP; otherwise JobP remains in error.

A recovery prompt / a recovery job:
- Stop: Issue the prompt. If there is no reply, JobP remains in error. On a "yes" reply, launch JobR; if it is successful, JobP is completed, otherwise JobP remains in error.
- Continue: Issue the prompt. If there is no reply, JobP remains in error. On a "yes" reply, launch JobR; JobP is completed.
- Rerun: Issue the prompt. If there is no reply, JobP remains in error. On a "yes" reply, launch JobR; if it is successful, rerun JobP, otherwise JobP remains in error.

Job Instance Recovery Information panels

Figure 1-7 shows the Job Scheduling Console Job Instance Recovery Information panel. You can browse the job log of the recovery job, and you can reply to the prompt. Note the mapping between the fields in the Job Scheduling Console panel and the JOBREC parameters.

Figure 1-7 JSC and JOBREC parameters mapping
You can also access the same information from the ISPF panels. From the Operation list in MCP (option 5.3), if the operation has abended and the RECOVERY statement has been used, you can use the row command RI (Recovery Information) to display the new panel EQQRINP, as shown in Figure 1-8.

Figure 1-8 EQQRINP ISPF panel

Variable substitution for non-centralized jobs

In Tivoli Workload Scheduler 8.2, a new simple syntax has been added to the job definition to specify variable substitution directives. This provides the capability to use variable substitution for jobs running in an end-to-end network without using the centralized script solution. Both Tivoli Workload Scheduler for z/OS-supplied variables and user-defined variables (defined using a table) are supported by this new function. Variables are substituted when a job is added to the Symphony file (that is, when Daily Planning creates the Symphony file or the job is added to the plan using the MCP dialog).

To activate variable substitution, use the VARSUB statement. Note that it must be the first statement in the SCRPTLIB member containing the job definition. The VARSUB statement enables you to specify variables when you set a statement keyword in the job definition, as in the sketch below.
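As an illustration (not from the book: the table name, script path, and user are invented, and &OYMD1. is assumed to be a scheduler-supplied occurrence-date variable), a SCRPTLIB member using VARSUB might look like this:

   /* VARSUB must be the first statement in the member.              */
   VARSUB  TABLES(APPL,MYTABLE,GLOBAL)  /* variable table search order */
           PREFIX(&)                    /* & introduces a variable     */
           VARFAIL(YES)                 /* fail the job if unresolved  */
   JOBREC  JOBSCR(/opt/app/bin/load.sh &OYMD1.)
           JOBUSR(appprod)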
Figure 1-9 Syntax of the VARSUB statement

Use the TABLES keyword to identify the variable tables that must be searched, and the search order. In particular:
- APPL indicates the application variable table specified in the VARIABLE TABLE field on the MCP panel, at occurrence level.
- GLOBAL indicates the table defined in the GTABLE keyword of the OPCOPTS controller and BATCHOPT batch options.

Any non-alphanumeric character, except blanks, can be used as a symbol to indicate that the characters that follow represent a variable. You can define two kinds of symbols using the PREFIX and BACKPREF keywords in the VARSUB statement; these allow you to define simple and compound variables.

For more details, refer to 4.5.3, "Definition of non-centralized scripts" on page 221, and "Job Tailoring" in IBM Tivoli Workload Scheduler for z/OS Managing the Workload, SC32-1263.

Return code mapping
In Tivoli Workload Scheduler 8.1, a fault-tolerant job that ends with a return code greater than 0 is considered abended. It is often desirable, however, to define whether a job is successful or abended according to a "success condition" defined at the job level, which supplies the NOERROR functionality previously supported only for host jobs.

In Tivoli Workload Scheduler for z/OS 8.2, a new keyword (RCCONDSUC) has been added to the job definition to specify the success condition, and the Tivoli Workload Scheduler for z/OS interfaces show the operation's return code. Customize the JOBREC and RECOVERY statements in the SCRPTLIB, adding the RCCONDSUC keyword, to specify a success condition for the job. The success condition expression can contain a combination of comparison and Boolean expressions.
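As an illustration of these directives, here is a hedged sketch of a SCRPTLIB member that activates variable substitution for a non-centralized script. The table name PAYVARS, the script path, and the user ID are assumptions for the example; OYMD1 is used as a scheduler-supplied date variable, and the extra period is assumed to terminate the variable name, as in JCL. Check the Managing the Workload manual for the definitive list of supplied variables.

  VARSUB
    TABLES(PAYVARS,APPL,GLOBAL)
    PREFIX('&')
  JOBREC
    JOBSCR('/prod/bin/extract_&OYMD1..sh')
    JOBUSR(produsr)

When Daily Planning adds this job to the Symphony file, user-defined variables are resolved from the listed tables in order (PAYVARS, then the occurrence-level table, then the global table), while supplied variables such as OYMD1 are resolved by the scheduler itself.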
Comparison expression
A comparison expression specifies the job return codes. The syntax is:

  (RC operator operand)

where:
  RC        The RC keyword.
  operand   An integer between -2147483647 and 2147483647.
  operator  A comparison operator. Table 1-2 lists the values it can have.

Table 1-2 Comparison operator values

  Example    Operator   Description
  RC < a     <          Less than
  RC <= a    <=         Less than or equal to
  RC > a     >          Greater than
  RC >= a    >=         Greater than or equal to
  RC = a     =          Equal to
  RC <> a    <>         Not equal to

Note: Unlike IBM Tivoli Workload Scheduler distributed, the != operator is not supported to specify a "not equal to" condition.

The successful RC is specified by a logical combination of comparison expressions. The syntax is:

  comparison_expression operator comparison_expression

where operator is a Boolean operator (for example, AND or OR). For example, you can define a successful job as a job that ends with a return code less than 3 or equal to 5, as follows:

  RCCONDSUC("(RC<3) OR (RC=5)")

Note: If you do not specify RCCONDSUC, only a return code equal to zero corresponds to a successful condition.
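Putting this together, the following is a minimal sketch of a job definition that treats a return code below 3, or exactly 5, as success; the script path and user ID are illustrative assumptions:

  JOBREC
    JOBSCR('/prod/bin/nightly_report.sh')
    JOBUSR(produsr)
    RCCONDSUC('(RC<3) OR (RC=5)')

A job ending with RC=4 would therefore be set to error, while a job ending with RC=5 would be treated as successful and its successors released.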
Late job handling
In IBM Tivoli Workload Scheduler 8.2 distributed, a user can define a DEADLINE time for a job or a job stream. If the job has never started, or if it is still executing after the deadline time has passed, Tivoli Workload Scheduler informs the user about the missed deadline.

IBM Tivoli Workload Scheduler for z/OS 8.2 now supports this function. In Version 8.2, the user can specify and modify a deadline time for a job or a job stream. If the job is running on a fault-tolerant agent, the deadline time is also stored in the Symphony file and is managed locally by the FTA. In an end-to-end network, the deadline is always defined for operations and occurrences. To improve performance, the batchman process on USS does not check the deadline.

1.6.3 Security enhancements
This new version includes a number of security enhancements, which are discussed in this section.

Firewall support in an end-to-end environment
In previous versions of Tivoli Workload Scheduler for z/OS, running the commands to start or stop a workstation or to get the standard list required opening a direct TCP/IP connection between the originator and the destination nodes. In a firewall environment, this forced users to open a direct communication path through the firewall between the Tivoli Workload Scheduler for z/OS master and each fault-tolerant agent in the network.

In this version, it is now possible to enable the firewall support of Tivoli Workload Scheduler in an end-to-end environment. If a firewall exists between a workstation and its domain manager, set the FIREWALL option to YES in the CPUREC statement to force the start, stop, and get-job-output commands to go through the domain hierarchy. Example 1-2 shows a CPUREC definition that enables the firewall support.

Example 1-2 CPUREC definition with firewall support enabled
  CPUREC CPUNAME(TWAD)
    CPUOS(WNT)
    CPUNODE(jsgui)
    CPUDOMAIN(maindom)
    CPUTYPE(FTA)
    FIREWALL(Y)

SSL support
It is now possible to enable the strong authentication and encryption (SSL) support of IBM Tivoli Workload Scheduler in an end-to-end environment. You can enable the Tivoli Workload Scheduler processes that run as UNIX System Services (USS) processes in the Tivoli Workload Scheduler for z/OS address space to establish SSL authentication between a Tivoli Workload Scheduler for z/OS master and the underlying IBM Tivoli Workload Scheduler domain managers. The authentication mechanism of IBM Tivoli Workload Scheduler is based on the OpenSSL toolkit, while IBM Tivoli Workload Scheduler for z/OS uses the System SSL services of z/OS.

To enable SSL authentication for your end-to-end network, you must perform the following actions:
1. Create as many private keys, certificates, and trusted certification authority (CA) chains as you plan to use in your network. Refer to the OS/390 V2R10.0 System SSL Programming Guide and Reference, SC23-3978, for further details about the SSL protocol.
2. Customize the localopts file on the IBM Tivoli Workload Scheduler workstations. To find out how to enable SSL on the IBM Tivoli Workload Scheduler domain managers, refer to IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264.
3. Configure IBM Tivoli Workload Scheduler for z/OS:
   - Customize the localopts file in the USS work directory.
   - Customize the TOPOLOGY statement for OPCMASTER.
   - Customize the CPUREC statements for every workstation in the network.
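As a hedged illustration of step 2, the localopts entries for SSL on a fault-tolerant workstation might look like the following. The port number and file paths are assumptions for this example, and the exact set of option names and values should be verified against the Installation guide:

  nm SSL port        =31113
  SSL key            ="/opt/tws/ssl/fta.key"
  SSL certificate    ="/opt/tws/ssl/fta.crt"
  SSL key pwd        ="/opt/tws/ssl/fta.sth"
  SSL CA certificate ="/opt/tws/ssl/ca.crt"
  SSL random seed    ="/opt/tws/ssl/random.seed"
  SSL auth mode      =caonly

With an authentication mode such as caonly, each side accepts any peer certificate signed by a trusted CA; stricter modes can tie the connection to specific certificate strings or workstations.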
Refer to IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265, for details about SSL support in Tivoli Workload Scheduler for z/OS.

1.7 The terminology used in this book
The IBM Tivoli Workload Scheduler 8.2 suite comprises two somewhat different software programs, each with its own history and terminology. For this reason, there are sometimes two different, interchangeable names for the same thing. At other times, a term used in one context has a different meaning in another context. To help clear up this confusion, we now introduce some of the terms and acronyms that are used throughout the book. To make the terminology used in this book internally consistent, we adopted a system of terminology that may differ slightly from that used in the product documentation. Take a moment to read through this list, even if you are already familiar with the products.

IBM Tivoli Workload Scheduler 8.2 suite
The suite of programs that includes IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS. These programs are used together to make end-to-end scheduling work. Sometimes called just IBM Tivoli Workload Scheduler.

IBM Tivoli Workload Scheduler
The version of IBM Tivoli Workload Scheduler that runs on UNIX, OS/400, and Windows operating systems, as distinguished from IBM Tivoli Workload Scheduler for z/OS, a somewhat different program. Sometimes called IBM Tivoli Workload Scheduler Distributed. IBM Tivoli Workload Scheduler is based on the old Maestro program.

IBM Tivoli Workload Scheduler for z/OS
The version of IBM Tivoli Workload Scheduler that runs on z/OS, as distinguished from IBM Tivoli Workload Scheduler (by itself, without the "for z/OS" qualifier). IBM Tivoli Workload Scheduler for z/OS is based on the old OPC program.

Master
The top level of the IBM Tivoli Workload Scheduler or IBM Tivoli Workload Scheduler for z/OS scheduling network. Also called the master domain manager, because it is the domain manager of the MASTERDM (top-level) domain.

Domain manager
The agent responsible for handling dependency resolution for subordinate agents; essentially an FTA with a few extra responsibilities.

Fault-tolerant agent
An agent that keeps its own local copy of the plan file and can continue operation even if the connection to the parent domain manager is lost. Also called an FTA. In IBM Tivoli Workload Scheduler for z/OS, FTAs are referred to as fault-tolerant workstations.

Scheduling engine
An IBM Tivoli Workload Scheduler engine or an IBM Tivoli Workload Scheduler for z/OS engine.

IBM Tivoli Workload Scheduler engine
The part of IBM Tivoli Workload Scheduler that does the actual scheduling work, as distinguished from the other components that are related primarily to the user interface (for example, the IBM Tivoli Workload Scheduler connector). Essentially the part of IBM Tivoli Workload Scheduler that is descended from the old Maestro program.
IBM Tivoli Workload Scheduler for z/OS engine
The part of IBM Tivoli Workload Scheduler for z/OS that does the actual scheduling work, as distinguished from the other components that are related primarily to the user interface (for example, the IBM Tivoli Workload Scheduler for z/OS connector). Essentially the controller plus the server.

IBM Tivoli Workload Scheduler for z/OS controller
The part of the IBM Tivoli Workload Scheduler for z/OS engine that is based on the old OPC program.

IBM Tivoli Workload Scheduler for z/OS server
The part of IBM Tivoli Workload Scheduler for z/OS that is based on the UNIX IBM Tivoli Workload Scheduler code. It runs in UNIX System Services (USS) on the mainframe.

JSC
Job Scheduling Console, the common graphical user interface (GUI) to both the IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS scheduling engines.

Connector
A small program that provides an interface between the common GUI (Job Scheduling Console) and one or more scheduling engines. The connector translates to and from the different "languages" used by the different scheduling engines.

JSS
Job Scheduling Services, essentially a library that is used by the connectors.

TMF
Tivoli Management Framework, also called just the Framework.
Chapter 2. End-to-end scheduling architecture

End-to-end scheduling involves running programs on multiple platforms. For this reason, it is important to understand how the different components work together. Taking the time to get acquainted with the end-to-end scheduling architecture will make it easier for you to install, use, and troubleshoot your end-to-end scheduling system.

In this chapter, the following topics are discussed:
- "IBM Tivoli Workload Scheduler for z/OS architecture" on page 27
- "Tivoli Workload Scheduler architecture" on page 50
- "End-to-end scheduling architecture" on page 59
- "Job Scheduling Console and related components" on page 89

If you are unfamiliar with IBM Tivoli Workload Scheduler for z/OS, you can start with the section about its architecture to get a better understanding of how it works. If you are already familiar with Tivoli Workload Scheduler for z/OS but would like to learn more about IBM Tivoli Workload Scheduler (for other platforms such as UNIX, Windows, or OS/400), you can skip to that section.
If you are already familiar with both IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS, skip ahead to the third section, in which we describe how the two programs work together when configured as an end-to-end network.

The Job Scheduling Console, its components, and its architecture are described in the last topic, in which we cover the different components that are used to establish a Job Scheduling Console environment.
2.1 IBM Tivoli Workload Scheduler for z/OS architecture

IBM Tivoli Workload Scheduler for z/OS expands the scope for automating your data processing operations. It plans and automatically schedules the production workload. From a single point of control, it drives and controls the workload processing at both local and remote sites. By using IBM Tivoli Workload Scheduler for z/OS to increase automation, you use your data processing resources more efficiently, have more control over your data processing assets, and manage your production workload processing better.

IBM Tivoli Workload Scheduler for z/OS is composed of three major features:

- The IBM Tivoli Workload Scheduler for z/OS agent feature. The agent, also called a tracker, is the base product in IBM Tivoli Workload Scheduler for z/OS. It must run on every system in your z/OS complex on which IBM Tivoli Workload Scheduler for z/OS-controlled work runs. The agent records details of job starts and passes that information to the engine, which updates the plan with statuses.

- The IBM Tivoli Workload Scheduler for z/OS engine feature. One z/OS system in your complex is designated the controlling system, and it runs the engine. The engine is also called the controller. Only one engine feature is required, even when you want to establish standby engines on other z/OS systems in a sysplex. The engine manages the databases and the plans and causes the work to be submitted at the appropriate time on the appropriate system in your z/OS sysplex, or on another connected z/OS sysplex or z/OS system.

- The IBM Tivoli Workload Scheduler for z/OS end-to-end feature. This feature makes it possible for the IBM Tivoli Workload Scheduler for z/OS engine to manage a production workload in a Tivoli Workload Scheduler distributed environment. With this feature, you can schedule, control, and monitor jobs in Tivoli Workload Scheduler from the Tivoli Workload Scheduler for z/OS engine. The end-to-end feature is covered in 2.3, "End-to-end scheduling architecture" on page 59.

The workload on other operating environments can also be controlled with the open interfaces that are provided with Tivoli Workload Scheduler for z/OS. Sample programs using TCP/IP or a Network Job Entry/Remote Spooling Communications Subsystem (NJE/RSCS) combination show how you can control the workload on environments that at present have no scheduling feature.
In addition to these major parts, the IBM Tivoli Workload Scheduler for z/OS product also contains the IBM Tivoli Workload Scheduler for z/OS connector and the Job Scheduling Console (JSC).

IBM Tivoli Workload Scheduler for z/OS connector
Maps Job Scheduling Console commands to the IBM Tivoli Workload Scheduler for z/OS engine. The Tivoli Workload Scheduler for z/OS connector requires that the Tivoli Management Framework be configured as a Tivoli server or Tivoli managed node.

Job Scheduling Console
A Java-based graphical user interface (GUI) for the IBM Tivoli Workload Scheduler suite. The Job Scheduling Console runs on any machine from which you want to manage Tivoli Workload Scheduler for z/OS plan and database objects. Through the IBM Tivoli Workload Scheduler for z/OS connector, it provides functionality similar to the IBM Tivoli Workload Scheduler for z/OS legacy ISPF interface. You can use the Job Scheduling Console from any machine, as long as the machine has a TCP/IP link with the machine running the IBM Tivoli Workload Scheduler for z/OS connector. The same Job Scheduling Console can be used for Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS.

In the next topics, we provide an overview of the IBM Tivoli Workload Scheduler for z/OS configuration, architecture, and terminology.

2.1.1 Tivoli Workload Scheduler for z/OS configuration
IBM Tivoli Workload Scheduler for z/OS supports many configuration options using a variety of communication methods:
- The controlling system (the controller or engine)
- Controlled z/OS systems
- Remote panels and program interface applications
- Job Scheduling Console
- Scheduling jobs in a distributed environment using Tivoli Workload Scheduler (described in 2.3, "End-to-end scheduling architecture" on page 59)
The controlling system
The controlling system requires both the agent and the engine. One controlling system can manage the production workload across all of your operating environments.

The engine is the focal point of control and information. It contains the controlling functions, the dialogs, the databases, the plans, and the scheduler's own batch programs for housekeeping and so forth. Only one engine is required to control the entire installation, including local and remote systems.

Because IBM Tivoli Workload Scheduler for z/OS provides a single point of control for your production workload, it is important to make this system redundant. This minimizes the risk of outages in your production workload if the engine, or the system running the engine, fails. To make the engine redundant, you can start backup engines (hot standby engines) on other systems in the same sysplex as the active engine. If the active engine or the controlling system fails, Tivoli Workload Scheduler for z/OS can automatically transfer the controlling functions to a backup system within a Parallel Sysplex. Through the cross-system coupling facility (XCF), IBM Tivoli Workload Scheduler for z/OS can automatically maintain production workload processing during system failures. The standby engine can be started on several z/OS systems in the sysplex.

Figure 2-1 on page 30 shows an active engine with two standby engines running in one sysplex. When an engine is started on a system in the sysplex, it checks whether there is already an active engine in the sysplex. If there is no active engine, it becomes the active engine; if there is, it becomes a standby engine. The engine in Figure 2-1 has connections to eight agents: three in the sysplex, two remote, and three in another sysplex. The agents on the remote systems and in the other sysplex are connected to the active engine via ACF/VTAM® connections.

Figure 2-1 Two sysplex environments and stand-alone systems
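As a hedged sketch of how such a hot standby configuration can be expressed, the controller initialization statements might include entries like the following. The XCF group and member names are illustrative assumptions, and the exact keywords and values should be checked in the Customization and Tuning reference:

  OPCOPTS  OPCHOST(PLEX)
  XCFOPTS  GROUP(TWSPLEX)
           MEMBER(TWSC&SYSCLONE.)
           TAKEOVER(SYSFAIL,HOSTFAIL)

With OPCHOST(PLEX), the first engine started in the XCF group becomes the active engine and engines started later become standbys, matching the behavior described above; TAKEOVER lists the failure events that trigger an automatic takeover by a standby engine.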
Controlled z/OS systems
An agent is required for every controlled z/OS system in a configuration. This includes, for example, locally controlled systems within shared DASD or sysplex configurations. The agent runs as a z/OS subsystem and interfaces with the operating system through JES2 or JES3 (the Job Entry Subsystem) and SMF (System Management Facility), using the subsystem interface and the operating system exits. The agent monitors and logs the status of work, and passes the status information to the engine via shared DASD, XCF, or ACF/VTAM.

You can exploit z/OS and the cross-system coupling facility (XCF) to connect your local z/OS systems. Rather than being passed to the controlling system via shared DASD, work status information is passed directly via XCF connections. XCF enables you to exploit all of the production-workload-restart facilities and the hot standby function of Tivoli Workload Scheduler for z/OS.

Remote systems
The agent on a remote z/OS system passes status information about the production work in progress to the engine on the controlling system. All communication between Tivoli Workload Scheduler for z/OS subsystems on the controlling and remote systems is done via ACF/VTAM.
Tivoli Workload Scheduler for z/OS enables you to link remote systems using ACF/VTAM networks. Remote systems are frequently used locally (on premises) to reduce the complexity of the data processing installation.

Remote panels and program interface applications
ISPF panels and program interface (PIF) applications can run on a different z/OS system than the one where the active engine is running. Dialogs and PIF applications send requests to, and receive data from, a Tivoli Workload Scheduler for z/OS server that is running on the same z/OS system as the target engine, via advanced program-to-program communication (APPC). The APPC server communicates with the active engine to perform the requested actions.

Using an APPC server for ISPF panels and PIF gives you the freedom to run ISPF panels and PIF applications on any system in a z/OS enterprise, as long as that system has advanced program-to-program communication with the system where the active engine is started. This also means that you do not have to ensure that your PIF jobs always run on the z/OS system where the active engine is started. Furthermore, using the APPC server makes an engine move to its backup seamless for panel users and PIF programs.

The APPC server is a separate address space, started and stopped either automatically by the engine or by the user via the z/OS start command. There can be more than one server for an engine. If the dialogs or the PIF applications run on the same z/OS system as the target engine, the server need not be involved. As shown in Figure 2-2 on page 32, it is possible to run the IBM Tivoli Workload Scheduler for z/OS dialogs and PIF applications from any system, as long as the system has an ACF/VTAM connection to the APPC server.
Figure 2-2 APPC server with remote panels and PIF access to ITWS for z/OS

Note: The Job Scheduling Console is the GUI to both IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler. The JSC is discussed in 2.4, "Job Scheduling Console and related components" on page 89.

2.1.2 Tivoli Workload Scheduler for z/OS database objects
Scheduling with IBM Tivoli Workload Scheduler for z/OS includes the capability to:
- Schedule jobs across multiple systems, locally and remotely.
- Group jobs into job streams according to, for example, function or application, and define advanced run cycles based on customized calendars for the job streams.
- Set workload priorities and specify times for the submission of particular work.
- Base submission of work on the availability of resources.
- Tailor jobs automatically based on dates, date calculations, and so forth.
- Ensure the correct processing order by identifying dependencies such as the successful completion of previous jobs, the availability of resources, and the time of day.
- Define automatic recovery and restart for jobs.
- Forward incomplete jobs to the next production day.

This is accomplished by defining scheduling objects in the Tivoli Workload Scheduler for z/OS databases, which are managed by the active engine and shared by the standby engines. Scheduling objects are combined in these databases so that they represent the workload that you want Tivoli Workload Scheduler for z/OS to handle.

The Tivoli Workload Scheduler for z/OS databases contain information about the work that is to be run, when it should be run, and the resources that are needed and available. This information is used to calculate a forward forecast called the long-term plan.

Scheduling objects are the elements that are used to define your Tivoli Workload Scheduler for z/OS workload. They include job streams (jobs and dependencies as part of job streams), workstations, calendars, periods, operator instructions, resources, and JCL variables.

All of these scheduling objects can be created, modified, or deleted by using the legacy IBM Tivoli Workload Scheduler for z/OS ISPF panels. Job streams, workstations, and resources can be managed from the Job Scheduling Console as well.

Job streams
A job stream (also known as an application in the legacy OPC ISPF interface) is a description of a unit of production work. It includes a list of jobs (related tasks) that are associated with that unit of work. For example, a payroll job stream might include a manual task in which an operator prepares a job; several computer-processing tasks in which programs are run to read a database, update employee records, and write payroll information to an output file; and a print task that prints paychecks.

IBM Tivoli Workload Scheduler for z/OS schedules work based on the information that you provide in your job stream description. A job stream can include the following:
- A list of the jobs (related tasks) that are associated with that unit of work, such as:
  - Data entry
  - Job preparation
  - Job submission or started-task initiation
  - Communication with the NetView® program
  - File transfer to other operating environments
  - Printing of output
  - Post-processing activities, such as quality control or dispatch
  - Other tasks related to the unit of work that you want to schedule, control, and track
- A description of the dependencies between jobs within a job stream and between jobs in different job streams
- Information about resource requirements, such as exclusive use of a data set
- Special operator instructions that are associated with a job
- How, when, and where each job should be processed
- Run policies for that unit of work; that is, when it should be scheduled or, alternatively, the name of a group definition that records the run policy

Workstations
When scheduling and processing work, Tivoli Workload Scheduler for z/OS considers the processing requirements of each job. Some typical processing considerations are:
- What human or machine resources are required for processing the work (for example, operators, processors, or printers)?
- When are these resources available?
- How will these jobs be tracked?
- Can this work be processed somewhere else if the resources become unavailable?

You can plan for maintenance windows in your hardware and software environments. Tivoli Workload Scheduler for z/OS enables you to perform a controlled and incident-free shutdown of the environment, preventing last-minute cancellation of active tasks. You can choose to reroute the workload automatically during any outage, planned or unplanned.

Tivoli Workload Scheduler for z/OS tracks jobs as they are processed at workstations and dynamically updates the plan with real-time information about the status of jobs. You can view or modify this status information online using the workstation ready lists in the dialog.

Dependencies
In general, every data-processing-related activity must occur in a specific order. Activities performed out of order will, at the very least, create invalid output; in the worst case, your
corporate data will be corrupted. In any case, the result is costly reruns, missed deadlines, and unsatisfied customers.

You can define dependencies for jobs when a specific processing order is required. When IBM Tivoli Workload Scheduler for z/OS manages the dependent relationships, the jobs are started in the correct order every time they are scheduled. A dependency is called internal when it is between two jobs in the same job stream, and external when it is between two jobs in different job streams. You can work with job dependencies graphically from the Job Scheduling Console (Figure 2-3).

Figure 2-3 Job Scheduling Console display of dependencies between jobs

Calendars
Tivoli Workload Scheduler for z/OS uses information about when your departments work and when they are free, so that job streams are not scheduled to run on days when processing resources are not available (such as Sundays and holidays). This information is stored in a calendar. Tivoli Workload Scheduler for z/OS supports multiple calendars for enterprises where different departments have different work days and free
days (that is, where different groups within the business operate according to different calendars). The multiple-calendar function is critical if your enterprise has installations in more than one geographical location (for example, with different local or national holidays).

Resources
Tivoli Workload Scheduler for z/OS enables you to serialize work based on the status of any data processing resource. A typical example is a job that uses a data set as input but must not start until the data set has been successfully created and loaded with valid data. You can use the resource serialization support to send availability information about a data processing resource to the workload in Tivoli Workload Scheduler for z/OS.

To accomplish this, Tivoli Workload Scheduler for z/OS uses resources (also called special resources). Resources are typically defined to represent physical or logical objects used by jobs. A resource can be used to serialize access to a data set or to limit the number of file transfers on a particular network link. The resource does not have to represent a physical object in your configuration, although it often does.

Tivoli Workload Scheduler for z/OS keeps a record of the state of each resource and its current allocation status. You can choose to hold resources in case a job allocating the resources ends abnormally. You can also use the Tivoli Workload Scheduler for z/OS interface with the Resource Object Data Manager (RODM) to schedule jobs according to real resource availability; you can subscribe to RODM updates in both local and remote domains.

Tivoli Workload Scheduler for z/OS also enables you to subscribe to data set activity on z/OS systems. Its dataset-triggering function automatically updates special resource availability when a data set is closed. You can use this notification to coordinate planned activities or to add unplanned work to the schedule, as the sketch that follows illustrates.
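For example, an external job or automation routine can report a special resource as available by issuing the SRSTAT TSO command, here through the EQQEVPGM batch program. This is a hedged sketch: the library and resource names are illustrative assumptions, and the exact command syntax should be verified in the product reference.

  //SRAVAIL  EXEC PGM=EQQEVPGM
  //STEPLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQLMD0
  //EQQMLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQMSG0
  //SYSPRINT DD SYSOUT=*
  //SYSIN    DD *
    SRSTAT 'SALES.DAILY.INPUT' SUBSYS(MSTR) AVAIL(YES)
  /*

Jobs in the plan that wait on the special resource SALES.DAILY.INPUT then become eligible to start.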
Periods
Tivoli Workload Scheduler for z/OS uses business processing cycles, or periods, to calculate when your job streams should be run; for example, weekly or every 10th working day. Periods are based on the business cycles of your customers. Tivoli Workload Scheduler for z/OS supports a range of periods for processing the different job streams in your production workload. It has several predefined periods that can be used when defining run cycles for your job streams, such as week, month, year, and all of the Julian months (January through December). When you define a job stream, you specify when it should be planned using a run cycle, which can be:
- A rule with a format such as:
  ONLY the SECOND TUESDAY of every MONTH
  EVERY FRIDAY in the user-defined period SEMESTER1
  In these examples, the words in capitals are selected from lists of ordinal numbers, names of days, and common calendar intervals or period names, respectively.
- A combination of period and offset. For example, an offset of 10 in a monthly period specifies the tenth day of each month.

Operator instructions
You can specify an operator instruction to be associated with a job in a job stream. This could be, for example, special running instructions for a job, or detailed restart information in case a job abends and needs to be restarted.

JCL variables
JCL variables are used for automatic job tailoring in Tivoli Workload Scheduler for z/OS. There are several predefined JCL variables, such as the current date, current time, planning date, and day number of the week. Besides these predefined variables, you can define your own specific variables, so locally defined variables can be used for automatic job tailoring as well.

2.1.3 Tivoli Workload Scheduler for z/OS plans
IBM Tivoli Workload Scheduler for z/OS plans your production workload schedule. It produces both a high-level (long-term) plan and a detailed (current) plan. These plans drive the production workload and can show the status of the production workload on your system at any specified time. You can produce trial plans to forecast future workloads (for example, to simulate the effects of changes to your production workload, calendar, and installation).

Tivoli Workload Scheduler for z/OS builds the plans from your description of the production workload (that is, the objects you have defined in the Tivoli Workload Scheduler for z/OS databases).

The plan process
First, the long-term plan is created, which shows the job streams that should be run each day in a period, usually for one or two months. Then a more detailed
current plan is created. The current plan is used by Tivoli Workload Scheduler for z/OS to submit and control jobs and job streams.

Long-term planning
The long-term plan is a high-level schedule of your anticipated production workload. It lists, by day, the instances of job streams to be run during the period of the plan. Each instance of a job stream is called an occurrence. The long-term plan shows when occurrences are to run, as well as the dependencies that exist between the job streams. You can view these dependencies graphically on your terminal as a network to check that work has been defined correctly.

The plan can assist you in forecasting and planning for heavy processing days. The long-term-planning function can also produce histograms showing planned resource use for individual workstations during the plan period.

You can use the long-term plan as the basis for documenting your service level agreements. It lets you relate service level agreements directly to your production workload schedules, so that your customers can see when and how their work is to be processed.

The long-term plan provides a window to the future; how far into the future is up to you, from one day to four years. Normally, the long-term plan goes two to three months into the future. You can also produce long-term plan simulation reports for any future date. IBM Tivoli Workload Scheduler for z/OS can automatically extend the long-term plan at regular intervals. You can print the long-term plan as a report, or you can view, alter, and extend it online using the legacy ISPF dialogs.

The long-term plan extension is performed by a Tivoli Workload Scheduler for z/OS program. This program is normally run as part of the daily Tivoli Workload Scheduler for z/OS housekeeping job stream. By running this program on workdays and letting it extend the long-term plan by one working day, you ensure that the long-term plan is always up to date (Figure 2-4 on page 39).
Figure 2-4 The long-term plan extension process

This way, the long-term plan always reflects changes that are made to job streams, run cycles, and calendars, because these definitions are reread by the program that extends the long-term plan. The long-term plan extension program reads job streams (run cycles), calendars, and periods, and creates the high-level long-term plan based on these objects.

Current plan
The current plan, or simply the plan, is the heart of Tivoli Workload Scheduler for z/OS processing: It drives the production workload automatically and provides a way to check its status. The current plan is produced by running batch jobs that extract from the long-term plan the occurrences that fall within the specified period of time, taking the job details into account. The current plan selects a window from the long-term plan and makes the jobs ready to be run. The jobs are actually started subject to the defined restrictions (dependencies, resource availability, or time-dependent jobs).

Job streams and related objects are copied from the Tivoli Workload Scheduler for z/OS databases to the current plan occurrences. Because the objects are copied to the current plan data set, any changes that are made to them in the plan are not reflected in the Tivoli Workload Scheduler for z/OS databases.

The current plan is a rolling plan that can cover several days. The extension of the current plan is performed by a Tivoli Workload Scheduler for z/OS program that is normally run on workdays as part of the daily workday-scheduled housekeeping job stream (Figure 2-5 on page 40).
Figure 2-5 The current plan extension process

Extending the current plan by one workday means that it can cover more than one calendar day. If, for example, Saturday and Sunday are treated as Fridays (in the calendar used by the run cycle for the housekeeping job stream), then when the current plan extension program is run on Friday afternoon, the plan will extend to Monday afternoon. A common method is to cover one to two days, with regular extensions each shift.

Production workload processing activities are listed by minute in the plan. You can either print the current plan as a report, or view, alter, and extend it online by using the legacy ISPF dialogs.

Note: Changes that are made to a job stream run cycle, such as changing the job stream from running on Mondays to running on Tuesdays, are not reflected immediately in the long-term or current plan. To have such changes reflected in the long-term plan and current plan, you must first run a Modify all or Extend long-term plan, and then extend or replan the current plan. Therefore, it is good practice to run the Extend long-term plan with one working day (shown in Figure 2-4 on page 39) before the Extend of the current plan as part of normal Tivoli Workload Scheduler for z/OS housekeeping.

Running job streams and jobs in the plan
Tivoli Workload Scheduler for z/OS automatically:
- Starts and stops started tasks
- Edits z/OS job JCL statements before submission
- Submits jobs in the specified sequence to the target operating environment, every time
- Tracks each scheduled job in the plan
- Determines the success or failure of the jobs
- Displays status information and instructions to guide workstation operators
- Provides automatic recovery of z/OS jobs when they end in error
- Generates processing dates for your job stream run cycles using rules such as:
  - Every second Tuesday of the month
  - Only the last Saturday in June, July, and August
  - Every third workday in the user-defined PAYROLL period
- Starts jobs with regard to real resource availability
- Performs data set cleanup in error and rerun situations for the z/OS workload
- Tailors the JCL for step restarts of z/OS jobs and started tasks
- Dynamically schedules additional processing in response to activities that cannot be planned
- Provides automatic notification when an updated data set is closed, which can be used to trigger subsequent processing
- Generates alerts when abnormal situations are detected in the workload

Automatic workload submission
Tivoli Workload Scheduler for z/OS automatically drives work through the system, taking into account work that requires manual or program-recorded completion. (Program-recorded completion refers to situations where the status of a scheduler-controlled job is set to Complete by a user-written program.) It also promotes the optimum use of resources, improves system availability, and automates complex and repetitive operator tasks. Tivoli Workload Scheduler for z/OS automatically controls the submission of work according to:
- Dependencies between jobs
- Workload priorities
- Specified times for the submission of particular work
- Availability of resources

By saving a copy of the JCL for each separate run, or occurrence, of a particular job in its plans, Tivoli Workload Scheduler for z/OS prevents the unintentional reuse of temporary JCL changes, such as overrides.
Job tailoring
Tivoli Workload Scheduler for z/OS provides automatic job-tailoring functions, which enable jobs to be edited automatically. This can reduce your dependency on time-consuming and error-prone manual editing of jobs. Tivoli Workload Scheduler for z/OS job tailoring provides:
- Automatic variable substitution
- Dynamic inclusion and exclusion of inline job statements
- Dynamic inclusion of job statements from other libraries or from an exit

For jobs to be submitted on a z/OS system, these job statements are z/OS JCL. Variables can be substituted in specific columns, and you can define verification criteria to ensure that invalid strings are not substituted. Special directives supporting the variety of date formats used by job stream programs enable you to define the required format dynamically and change it multiple times for the same job. Arithmetic expressions can be defined to let you calculate values such as the current date plus four work days, as the sketch that follows illustrates.
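For instance, a tailored job might combine the scheduler's directives and variables as in this hedged sketch. The directive names (SCAN, SETFORM, SETVAR) and the supplied occurrence-date variable OCDATE come from the job-tailoring feature, while the program name, data set name, and the user variable TDATE are illustrative assumptions:

  //*%OPC SCAN
  //*%OPC SETFORM OCDATE=(YYMMDD)
  //*%OPC SETVAR TDATE=(OCDATE+4WD)
  //EXTRACT  EXEC PGM=SALESEXT
  //OUTFILE  DD DSN=PROD.SALES.D&TDATE,
  //            DISP=(NEW,CATLG),UNIT=SYSDA,SPACE=(CYL,(5,5))

At submission time, the scheduler would replace &TDATE with the occurrence date plus four work days, formatted as YYMMDD, so each run creates a correctly dated output data set.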
Manual control and intervention
Tivoli Workload Scheduler for z/OS enables you to check the status of work and intervene manually when priorities change or when you need to run unplanned work. You can query the status of the production workload and then modify the schedule if needed.

Status inquiries
With the legacy ISPF dialogs or with the Job Scheduling Console, you can make queries online and receive timely information about the status of the production workload. Time information that is displayed by the dialogs can be in the local time of the dialog user. Using the dialogs, you can request detailed or summary information about individual job streams, jobs, and workstations, as well as summary information about workload production as a whole. You can also display dependencies graphically as a network at both the job stream and job level. Status inquiries:
- Provide you with overall status information that you can use when considering a change in workstation capacity or when arranging an extra shift or overtime work.
- Help you supervise the work flow through the installation, for instance by displaying the status of work at each workstation.
- Help you decide whether intervention is required to speed the processing of specific job streams. You can find out which job streams are the most critical, and you can check the status of any job stream, as well as the planned and actual times for each job.
- Enable you to check information before making modifications to the plan. For example, you can check the status of a job stream and its dependencies before deleting it or changing its input arrival time or deadline. See "Modifying the current plan" on page 43 for more information.
- Provide you with information about the status of processing at a particular workstation. Perhaps work that should have arrived at the workstation has not arrived; status inquiries can help you locate the work and find out what has happened to it.

Modifying the current plan
Tivoli Workload Scheduler for z/OS makes status updates to the plan automatically using its tracking functions. However, you can change the plan manually to reflect unplanned changes to the workload or to the operations environment, which often occur during a shift. For example, you may need to change the priority of a job stream, add unplanned work, reroute work from one workstation to another, or correct operational errors manually. Modifying the current plan may be the best way to handle these situations.

You can modify the current plan online. For example, you can:
- Include unexpected jobs or last-minute changes to the plan. Tivoli Workload Scheduler for z/OS then automatically creates the dependencies for this work.
- Manually modify the status of jobs.
- Delete occurrences of job streams.
- Graphically display job dependencies before you modify them.
- Modify the data in job streams, including the JCL.
- Respond to error situations by:
  - Rerouting jobs
  - Rerunning jobs or occurrences
  - Completing jobs or occurrences
  - Changing jobs or occurrences
- Change the status of workstations by:
  - Rerouting work from one workstation to another
  - Modifying workstation reporting attributes
  - Updating the availability of resources
  - Changing the way resources are handled
- Replan or extend the current plan.

In addition to using the dialogs, you can modify the current plan from your own job streams using the program interface or the application programming interface. You can also trigger Tivoli Workload Scheduler for z/OS to dynamically modify the plan using TSO commands or a batch program. This enables unexpected work to be added to the plan automatically.

It is important to remember that the current plan contains copies of the objects that are read from the Tivoli Workload Scheduler for z/OS databases. This means that changes made to current plan instances are not reflected in the corresponding database objects.

2.1.4 Other Tivoli Workload Scheduler for z/OS features
In the following sections, we investigate other features of IBM Tivoli Workload Scheduler for z/OS.

Automatically controlling the production workload
Tivoli Workload Scheduler for z/OS automatically drives the production workload by monitoring the flow of work and by directing the processing of jobs so that it follows the business priorities established in the plan. Through its interface to the NetView program or its management-by-exception ISPF dialog, Tivoli Workload Scheduler for z/OS can alert the production control specialist to problems in the production workload processing. Furthermore, the NetView program can automatically trigger Tivoli Workload Scheduler for z/OS to perform corrective actions in response to these problems.

Recovery and restart
Tivoli Workload Scheduler for z/OS provides automatic restart facilities for your production work. You can specify the restart actions to be taken if work that it initiates ends in error (Figure 2-6 on page 45). You can use these functions to predefine automatic error-recovery and restart actions for jobs and started tasks. The scheduler's integration with the NetView for OS/390 program enables it to automatically pass alerts to NetView for OS/390 in error situations. Use of the z/OS cross-system coupling facility (XCF) enables Tivoli Workload Scheduler for z/OS to continue processing when system failures occur.
Figure 2-6 IBM Tivoli Workload Scheduler for z/OS automatic recovery and restart

Recovery of jobs and started tasks
Automatic recovery actions for failed jobs are specified in user-defined control statements. Parameters in these statements determine the recovery actions to be taken when a job or started task ends in error.

Restart and cleanup
Restart and cleanup comprise two basic tasks:
- Restarting an operation at the job level or step level
- Cleaning up the associated data sets

Note: The IBM Tivoli Workload Scheduler for z/OS 8.2 restart and cleanup function has been updated and redesigned. Apply the fixes for APARs PQ79506 and PQ79507 to get the redesigned and updated function.

You can use restart and cleanup to catalog, uncatalog, or delete data sets when a job ends in error or when you need to rerun a job. Dataset cleanup takes care of JCL in the form of in-stream JCL, in-stream procedures, and cataloged procedures on both local and remote systems. This function can be initiated automatically by Tivoli Workload Scheduler for z/OS, or manually by a user through the panels. Tivoli Workload Scheduler for z/OS resets the catalog to the status it had before the job ran, both for generation data set groups (GDGs)
and for DD-allocated data sets contained in the JCL. In addition, restart and cleanup support the use of Removable Media Manager in your environment.

Restart at both the step level and the job level is also provided in the IBM Tivoli Workload Scheduler for z/OS legacy ISPF panels and in the JSC. It manages the resolution of generation data group (GDG) names, and of JCL containing nested INCLUDEs or PROCs and IF-THEN-ELSE statements. Tivoli Workload Scheduler for z/OS also automatically identifies problems that can prevent a successful restart, providing logic to determine the "best restart step." You can browse the job log or request a step-level restart for any z/OS job or started task, even when there are no catalog modifications. The job-log browse functions are also available for the workload on other operating platforms, which is especially useful for those environments that do not support a System Display and Search Facility (SDSF) or something similar. These facilities are available to you without the need to make changes to your current JCL. Tivoli Workload Scheduler for z/OS gives you an enterprise-wide data set cleanup capability on remote agent systems.

Production workload restart
Tivoli Workload Scheduler for z/OS provides a production workload restart, which can automatically maintain the processing of your work if a system or connection fails. Scheduler-controlled production work for the failed system is rerouted to another system. Because Tivoli Workload Scheduler for z/OS can restart and manage the production workload, the integrity of your processing schedule is maintained, and service continues for your customers.

Tivoli Workload Scheduler for z/OS exploits the VTAM Model Application Program Definition feature and z/OS-defined symbols to ease configuration in a sysplex environment, giving the user a single-system view of the sysplex. Starting, stopping, and managing your engines and agents does not require you to know which system in the sysplex a z/OS image is actually running on.

z/OS Automatic Restart Manager support
In case of program failure, all of the scheduler components can be restarted by the Automatic Restart Manager (ARM) of the z/OS operating system.

Automatic status checking
To track the work flow, Tivoli Workload Scheduler for z/OS interfaces directly with the operating system, collecting and analyzing status information about the production work that is currently active in the system. Tivoli Workload Scheduler for z/OS can record status information from both local and remote processors.
When status information is reported from remote sites in different time zones, Tivoli Workload Scheduler for z/OS makes allowances for the time differences.

Status reporting from heterogeneous environments
Processing in other operating environments can also be tracked by Tivoli Workload Scheduler for z/OS. You can use the supplied programs to communicate with the engine from any environment that can establish communications with a z/OS system.

Status reporting from user programs
You can pass status information about production workload processing to Tivoli Workload Scheduler for z/OS from your own user programs through a standard supplied routine.

Additional job-completion checking
If required, Tivoli Workload Scheduler for z/OS provides further status checking by scanning SYSOUT and other print data sets from your processing when the success or failure of the processing cannot be determined from completion codes alone. For example, Tivoli Workload Scheduler for z/OS can check the text of system messages or of messages originating from your user programs. Using information contained in job completion checker (JCC) tables, Tivoli Workload Scheduler for z/OS determines what actions to take when it finds certain text strings. These actions can include:
- Reporting errors
- Re-queuing SYSOUT
- Writing incident records to an incident data set

Managing unplanned work
Tivoli Workload Scheduler for z/OS can be automatically triggered to update the current plan with information about work that cannot be planned in advance. This enables Tivoli Workload Scheduler for z/OS to control unexpected work. Because it checks the processing status of this work, the automatic recovery facilities are also available.

Interfacing with other programs
Tivoli Workload Scheduler for z/OS provides a program interface (PIF) with which you can automate most actions that you can perform online through the dialogs. This interface can be called from CLISTs, from user programs, and via TSO commands.

The application programming interface (API) lets your programs communicate with Tivoli Workload Scheduler for z/OS from any compliant platform. You can use the Common Programming Interface for Communications (CPI-C), advanced program-to-program communication (APPC), or your own logical unit (LU) 6.2
verbs to converse with Tivoli Workload Scheduler for z/OS through the API. You can use this interface to query and update the current plan. The programs can be running on any platform that is connected, locally or remotely through a network, to the z/OS system where the engine runs.

Management of critical jobs
IBM Tivoli Workload Scheduler for z/OS exploits the capabilities of the Workload Manager (WLM) component of z/OS to ensure that critical jobs are completed on time. If a critical job is late, Tivoli Workload Scheduler for z/OS favors it using the existing Workload Manager interface.

Security
Today, data processing operations increasingly require a high level of data security, particularly as the scope of data processing operations expands and more people within the enterprise become involved. Tivoli Workload Scheduler for z/OS provides complete security and data integrity within the range of its functions. It provides a shared central service to different user departments, even when the users are in different companies and countries, and a high level of security to protect scheduler data and resources from unauthorized access.

With Tivoli Workload Scheduler for z/OS, you can easily organize, isolate, and protect user data to safeguard the integrity of your end-user applications (Figure 2-7). Tivoli Workload Scheduler for z/OS can plan and control the work of many user groups and maintain complete control of access to data and services.

Figure 2-7 IBM Tivoli Workload Scheduler for z/OS security
Audit trail
With the audit trail, you can define how you want IBM Tivoli Workload Scheduler for z/OS to log accesses (both reads and updates) to scheduler resources. Because it provides a history of changes to the databases, the audit trail can be extremely useful for staff who work on debugging and problem determination. A sample program is provided for reading audit-trail records; it reads the logs for a period that you specify and produces a report detailing the changes that have been made to scheduler resources.

System Authorization Facility (SAF)
IBM Tivoli Workload Scheduler for z/OS uses the System Authorization Facility, a function of z/OS, to pass authorization verification requests to your security system (for example, RACF®). This means that you can protect your scheduler data objects with any security system that uses the SAF interface.

Protection of data and resources
Each user request to access a function or to access data is validated by SAF. The following is some of the information that can be protected:
- Calendars and periods
- Job stream names or job stream owners, by name
- Workstations, by name
- Job stream-specific data in the plan
- Operator instructions
- JCL

To support distributed, multi-user handling, Tivoli Workload Scheduler for z/OS enables you to control the level of security that you want to implement, right down to the level of individual records. You can define generic or specific RACF resource names to extend the level of security checking.

If you have RACF Version 2 Release 1 installed, you can use the IBM Tivoli Workload Scheduler for z/OS reserved resource class (IBMOPC) to manage your Tivoli Workload Scheduler for z/OS security environment. This means that you do not have to define your own resource class by modifying RACF and restarting your system.
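As a hedged illustration, RACF definitions using the reserved class might look like the following. The fixed-resource names AD (application descriptions) and JS (JCL) and the group IDs are assumptions for this example; consult the Customization and Tuning reference for the actual resource names that the scheduler checks:

  SETROPTS CLASSACT(IBMOPC)
  RDEFINE  IBMOPC AD UACC(NONE)
  RDEFINE  IBMOPC JS UACC(NONE)
  PERMIT   AD CLASS(IBMOPC) ID(SCHEDADM) ACCESS(UPDATE)
  PERMIT   JS CLASS(IBMOPC) ID(PRODOPER) ACCESS(READ)

Under these definitions, scheduling administrators could update application descriptions, while production operators could only browse JCL.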
2.2 Tivoli Workload Scheduler architecture
Tivoli Workload Scheduler helps you plan every phase of production. During the processing day, its production control programs manage the production environment and automate most operator activities. Tivoli Workload Scheduler prepares jobs for execution, resolves interdependencies, and launches and tracks each job. Because jobs start running as soon as their dependencies are satisfied, idle time is minimized and throughput is improved. Jobs never run out of sequence. If a job ends in error, Tivoli Workload Scheduler handles the recovery process with little or no operator intervention.

IBM Tivoli Workload Scheduler is composed of three major parts:

IBM Tivoli Workload Scheduler engine
The IBM Tivoli Workload Scheduler engine is installed on every non-mainframe workstation in the scheduling network (UNIX, Windows, and OS/400 computers). When the engine is installed on a workstation, it can be configured to play a specific role in the scheduling network. For example, the engine can be configured to be a master domain manager, a domain manager, or a fault-tolerant agent. In an ordinary Tivoli Workload Scheduler network, there is a single master domain manager at the top of the network. In an end-to-end scheduling network, however, there is no master domain manager; its functions are performed by the IBM Tivoli Workload Scheduler for z/OS engine, installed on a mainframe. This is discussed in more detail later in this chapter.

IBM Tivoli Workload Scheduler connector
The connector "connects" the Job Scheduling Console to Tivoli Workload Scheduler, routing commands from the JSC to the Tivoli Workload Scheduler engine. In an ordinary IBM Tivoli Workload Scheduler network, the Tivoli Workload Scheduler connector is usually installed on the master domain manager. In an end-to-end scheduling network, there is no master domain manager, so the connector is usually installed on the first-level domain managers. The Tivoli Workload Scheduler connector can also be installed on other domain managers or fault-tolerant agents in the network. The connector software is installed on top of the Tivoli Management Framework, which must be configured as a Tivoli Management Region server or managed node. The connector software cannot be installed on a TMR endpoint.

Job Scheduling Console (JSC)
The JSC is the Java-based graphical user interface for the IBM Tivoli Workload Scheduler suite. The Job Scheduling Console runs on any machine from which you want to manage Tivoli Workload Scheduler plan and database objects. It provides, through the Tivoli Workload Scheduler connector, the
functions of the command-line programs conman and composer. The Job Scheduling Console can be installed on a desktop workstation or laptop, as long as the JSC has a TCP/IP link with the machine running the Tivoli Workload Scheduler connector. Using the JSC, operators can schedule and administer Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS over the network.

In the next sections, we provide an overview of the IBM Tivoli Workload Scheduler network and workstations, the topology that is used to describe the architecture in Tivoli Workload Scheduler, the Tivoli Workload Scheduler components, and the plan.

2.2.1 The IBM Tivoli Workload Scheduler network
A Tivoli Workload Scheduler network is made up of the workstations, or CPUs, on which jobs and job streams are run. A Tivoli Workload Scheduler network contains at least one IBM Tivoli Workload Scheduler domain, the master domain, in which the master domain manager is the management hub. The master domain manager manages the databases, and it is from the master domain manager that you define new objects in the databases. Additional domains can be used to divide a widely distributed network into smaller, locally managed groups.

In the simplest configuration, the master domain manager maintains direct communication with all of the workstations (fault-tolerant agents) in the Tivoli Workload Scheduler network. All workstations are in the same domain, MASTERDM (Figure 2-8).

Figure 2-8 A sample IBM Tivoli Workload Scheduler network with only one domain
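To make this concrete, a fault-tolerant agent in such a single-domain network could be described to the master with a workstation definition similar to the following sketch, entered with the composer command-line program. The host name fta1.example.com and the port number are assumptions:

cpuname FTA1
  description "Linux fault-tolerant agent"
  os UNIX
  node fta1.example.com
  tcpaddr 31111
  domain MASTERDM
  for maestro
    type FTA
    autolink on
  end

With autolink on, the master links to the workstation automatically each time a new Symphony file is distributed.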
Using multiple domains reduces the amount of network traffic by reducing the communications between the master domain manager and the other computers in the network. Figure 2-9 depicts an example of a Tivoli Workload Scheduler network with three domains. In this example, the master domain manager is shown as an AIX system. The master domain manager does not have to be on an AIX system; it can be installed on any of several different platforms, including AIX, Linux, Solaris, HPUX, and Windows. Figure 2-9 is only an example that is meant to give an idea of a typical Tivoli Workload Scheduler network.

Figure 2-9 IBM Tivoli Workload Scheduler network with three domains

In this configuration, the master domain manager communicates directly only with the subordinate domain managers. The subordinate domain managers communicate with the workstations in their domains. In this way, the number of connections from the master domain manager is reduced. Multiple domains also provide fault tolerance: If the link from the master is lost, a domain manager can still manage the workstations in its domain and resolve dependencies between them. This limits the impact of a network outage. Each domain may also have one or more backup domain managers that can become the domain manager for the domain if the domain manager fails.

Before the start of each day, the master domain manager creates a plan for the next 24 hours. This plan is placed in a production control file, named Symphony. Tivoli Workload Scheduler is then restarted throughout the network, and the master domain manager sends a copy of the Symphony file to each of the subordinate domain managers.
Each domain manager then sends a copy of the Symphony file to the fault-tolerant agents in its domain.

After the network has been started, scheduling events such as job starts and completions are passed up from each workstation to its domain manager. The domain manager updates its Symphony file with the events and then passes the events up the network hierarchy to the master domain manager. The events are then applied to the Symphony file on the master domain manager. Events from all workstations in the network are passed up to the master domain manager. In this way, the master's Symphony file contains the authoritative record of what has happened during the production day. The master also broadcasts the changes down throughout the network, updating the Symphony files of domain managers and fault-tolerant agents that are running in full status mode.

It is important to remember that Tivoli Workload Scheduler does not limit the number of domains or levels (the hierarchy) in the network. There can be as many levels of domains as are appropriate for a given computing environment. The number of domains or levels in the network should be based on the topology of the physical network where Tivoli Workload Scheduler is installed. Most often, geographical boundaries are used to determine divisions between domains. See 3.5.4, "Network planning and considerations" on page 141 for more information about how to design an IBM Tivoli Workload Scheduler network.

Figure 2-10 on page 54 shows an example of a four-tier Tivoli Workload Scheduler network:
1. Master domain manager, MASTERDM
2. DomainA and DomainB
3. DomainC, DomainD, DomainE, FTA1, FTA2, and FTA3
4. FTA4, FTA5, FTA6, FTA7, FTA8, and FTA9
Figure 2-10 A multi-tiered IBM Tivoli Workload Scheduler network

2.2.2 Tivoli Workload Scheduler workstation types
In most cases, workstation definitions refer to physical workstations. However, in the case of extended and network agents, the workstations are logical definitions that must be hosted by a physical IBM Tivoli Workload Scheduler workstation. There are several different types of Tivoli Workload Scheduler workstations:

Master domain manager (MDM)
The domain manager of the topmost domain of a Tivoli Workload Scheduler network. It contains the centralized database of all defined scheduling objects, including all jobs and their dependencies. It creates the plan at the start of each day, and performs all logging and reporting for the network. The master distributes the plan to all subordinate domain managers and fault-tolerant agents. In an end-to-end scheduling network, the IBM Tivoli Workload Scheduler for z/OS engine (controller) acts as the master domain manager.
Domain manager (DM)
The management hub in a domain. All communications to and from the agents in a domain are routed through the domain manager. The domain manager can resolve dependencies between jobs on its subordinate agents. The copy of the plan on the domain manager is updated with reporting and logging from the subordinate agents.

Backup domain manager
A fault-tolerant agent that is capable of assuming the responsibilities of its domain manager. The copy of the plan on the backup domain manager is updated with the same reporting and logging information as the domain manager plan.

Fault-tolerant agent (FTA)
A workstation that is capable of resolving local dependencies and launching its jobs in the absence of a domain manager. It has a local copy of the plan generated on the master domain manager. It is also called a fault-tolerant workstation.

Standard agent (SA)
A workstation that launches jobs only under the direction of its domain manager.

Extended agent (XA)
A logical workstation definition that enables you to launch and control jobs on other systems and applications. IBM Tivoli Workload Scheduler for Applications includes extended agent methods for the following systems: SAP R/3, Oracle Applications, PeopleSoft, CA7, JES2, and JES3.

Figure 2-11 on page 56 shows a Tivoli Workload Scheduler network with some of the different workstation types.

It is important to remember that domain manager FTAs, including the master domain manager FTA and backup domain manager FTAs, are FTAs with some extra responsibilities. The servers hosting these FTAs can, and most often will, be servers where you run normal batch jobs that are scheduled and tracked by Tivoli Workload Scheduler. This means that these servers do not have to be dedicated only to Tivoli Workload Scheduler work; they can still do other work and run other applications. However, you should not choose one of your busiest servers as one of your first-level Tivoli Workload Scheduler domain managers.
Figure 2-11 IBM Tivoli Workload Scheduler network with different workstation types

2.2.3 Tivoli Workload Scheduler topology
The purpose of having multiple domains is to delegate some of the responsibilities of the master domain manager and to provide extra fault tolerance. Fault tolerance is enhanced because a domain manager can continue to resolve dependencies within the domain even if the master domain manager is temporarily unavailable.

Workstations are generally grouped into a domain because they share a common set of characteristics. Most often, workstations will be grouped into a domain because they are in close physical proximity to one another, such as in the same office. Domains may also be based on organizational unit (for example, department), business function, or application. Grouping related workstations in a domain reduces the amount of information that must be communicated between domains, and thereby reduces the amount of network traffic generated.

In 3.5.4, "Network planning and considerations" on page 141, you can find more information about how to configure an IBM Tivoli Workload Scheduler network based on your particular distributed network and environment.
2.2.4 IBM Tivoli Workload Scheduler components
Tivoli Workload Scheduler comprises several separate programs, each with a distinct function. This division of labor segregates networking, dependency resolution, and job launching into their own individual processes. These processes communicate among themselves through the use of message files (also called event files). Every event that occurs during the production day is handled by passing events between processes through the message files.

A computer running Tivoli Workload Scheduler has several active IBM Tivoli Workload Scheduler processes. They are started as a system service, by the StartUp command, or manually from the Job Scheduling Console. The main processes are:

netman  The network listener program, which initially receives all TCP connections. The netman program accepts an incoming request from a remote program, spawns a new process to handle the request, and if necessary hands the socket over to the new process.

writer  The network writer process, which passes incoming messages from a remote workstation to the local mailman process (via the Mailbox.msg event file).

mailman  The primary message management process. The mailman program reads events from the Mailbox.msg file and then either passes them to batchman (via the Intercom.msg event file) or sends them to a remote workstation.

batchman  The production control process. Working with the plan (Symphony), batchman starts job streams, resolves dependencies, and directs jobman to launch jobs. After the Symphony file has been created (at the beginning of the production day), batchman is the only program that makes changes to the Symphony file.

jobman  The job control process. The jobman program launches and monitors jobs.

Figure 2-12 on page 58 shows the IBM Tivoli Workload Scheduler processes and their intercommunication via message files.
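On UNIX, these processes are typically started and stopped with the StartUp script and with conman commands, roughly as follows. This is a sketch of common usage only, run from the Tivoli Workload Scheduler home directory:

./StartUp          Start netman, the network listener
conman start       Start the production processes (mailman, batchman, jobman)
conman stop        Stop the production processes; netman keeps listening
conman shutdown    Stop netman as well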
Figure 2-12 IBM Tivoli Workload Scheduler interprocess communication

2.2.5 IBM Tivoli Workload Scheduler plan
The IBM Tivoli Workload Scheduler plan is the to-do list that tells Tivoli Workload Scheduler what jobs to run and what dependencies must be satisfied before each job is launched. The plan usually covers 24 hours; this period is sometimes referred to as the production day and can start at any point in the day. The best time of day to create a new plan is a time when few or no jobs are expected to be running.

A new plan is created at the start of the production day. After the plan has been created, a copy is sent to all subordinate workstations. The domain managers distribute the plan to the fault-tolerant agents in their domains and to all domain managers that are subordinate to them, and so on down the line. This enables fault-tolerant agents throughout the network to continue processing even if the network connection to their domain manager is down.

From the Job Scheduling Console or the command-line interface, the operator can view and make changes in the day's production by making changes in the Symphony file. Figure 2-13 on page 59 shows the distribution of the Symphony file from the master domain manager to the domain managers and their subordinate agents.
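For example, once the plan has been distributed, an operator can inspect it from the command line with conman, along these lines (the selection arguments shown simply mean "everything"):

conman "sc @!@"      Show all workstations, with their link and run status
conman "ss @"        Show all job streams (schedules) in the plan
conman "sj @#@.@"    Show all jobs in the plan, on all workstations

The same information is available graphically through the Job Scheduling Console.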
Figure 2-13 Distribution of plan (Symphony file) in a Tivoli Workload Scheduler network

IBM Tivoli Workload Scheduler processes monitor the Symphony file and make calls to the operating system to launch jobs as required. The operating system runs the job, and in return informs IBM Tivoli Workload Scheduler whether the job has completed successfully or not. This information is entered into the Symphony file to indicate the status of the job. In this way, the Symphony file is continuously updated with the status of all jobs: the work that needs to be done, the work in progress, and the work that has been completed.

2.3 End-to-end scheduling architecture
In the two previous sections, 2.2, "Tivoli Workload Scheduler architecture" on page 50, and 2.1, "IBM Tivoli Workload Scheduler for z/OS architecture" on page 27, we described the architecture of Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS. In this section, we bring the two together; here we describe how the programs work together to function as a unified end-to-end scheduling solution.

End-to-end scheduling makes it possible to schedule and control jobs on mainframe, Windows, and UNIX environments, providing truly distributed scheduling. In the end-to-end configuration, Tivoli Workload Scheduler for z/OS
is used as the planner for the job scheduling environment. Tivoli Workload Scheduler domain managers and fault-tolerant agents are used to schedule on the non-mainframe platforms, such as UNIX and Windows.

2.3.1 How end-to-end scheduling works
End-to-end scheduling means controlling scheduling from one end of an enterprise to the other — from the mainframe all the way down to the client workstation. Tivoli Workload Scheduler provides an end-to-end scheduling solution whereby one or more IBM Tivoli Workload Scheduler domain managers, together with their underlying agents and domains, are put under the direct control of an IBM Tivoli Workload Scheduler for z/OS engine. To the domain managers and FTAs in the network, the IBM Tivoli Workload Scheduler for z/OS engine appears to be the master domain manager.

Tivoli Workload Scheduler for z/OS creates the plan (the Symphony file) for the Tivoli Workload Scheduler network and sends the plan down to the first-level domain managers. Each of these domain managers sends the plan to all of the subordinate workstations in its domain. The domain managers act as brokers for the distributed network by resolving all dependencies for the subordinate managers and agents. They send their updates (in the form of events) to Tivoli Workload Scheduler for z/OS, which updates the plan accordingly. Tivoli Workload Scheduler for z/OS handles its own jobs and notifies the domain managers of all the status changes of its jobs that involve the IBM Tivoli Workload Scheduler plan. In this configuration, the domain managers and all the Tivoli Workload Scheduler workstations recognize Tivoli Workload Scheduler for z/OS as the master domain manager and notify it of all of the changes occurring in their own plans. At the same time, the agents are not permitted to interfere with the Tivoli Workload Scheduler for z/OS jobs, because these jobs are viewed as running on the master, which is the only node in charge of them.

In Figure 2-14 on page 61, you can see a Tivoli Workload Scheduler network managed by a Tivoli Workload Scheduler for z/OS engine. This is accomplished by connecting a Tivoli Workload Scheduler domain manager directly to the Tivoli Workload Scheduler for z/OS engine. In fact, if you compare Figure 2-9 on page 52 with Figure 2-14 on page 61, you will see that the Tivoli Workload Scheduler network connected to Tivoli Workload Scheduler for z/OS is the same network that was managed by a Tivoli Workload Scheduler master domain manager. When connecting this network to the engine, the AIX server that was acting as the Tivoli Workload Scheduler master domain manager is replaced by a mainframe. The new master domain manager is the Tivoli Workload Scheduler for z/OS engine.
Figure 2-14 IBM Tivoli Workload Scheduler for z/OS end-to-end scheduling

In Tivoli Workload Scheduler for z/OS, you can access job streams (also known as schedules in Tivoli Workload Scheduler and applications in Tivoli Workload Scheduler for z/OS) and add them to the current plan in Tivoli Workload Scheduler for z/OS. In addition, you can build dependencies among Tivoli Workload Scheduler for z/OS job streams and Tivoli Workload Scheduler jobs. From Tivoli Workload Scheduler for z/OS, you can monitor and control the FTAs.

In the Tivoli Workload Scheduler for z/OS current plan, you can specify jobs to run on workstations in the Tivoli Workload Scheduler network. The Tivoli Workload Scheduler for z/OS engine passes the job information to the Symphony file in the Tivoli Workload Scheduler for z/OS server, which in turn passes the Symphony file to the first-level Tivoli Workload Scheduler domain managers to distribute and process. In turn, Tivoli Workload Scheduler reports the status of running and completed jobs back to the current plan for monitoring in the Tivoli Workload Scheduler for z/OS engine.

The IBM Tivoli Workload Scheduler for z/OS engine consists of two components (started tasks on the mainframe): the controller and the server (also called the end-to-end server).
2.3.2 Tivoli Workload Scheduler for z/OS end-to-end components
To run Tivoli Workload Scheduler for z/OS end-to-end scheduling, you must have a Tivoli Workload Scheduler for z/OS server started task dedicated to end-to-end scheduling. It is also possible to use the same server to communicate with the Job Scheduling Console. Tivoli Workload Scheduler for z/OS uses TCP/IP for communication.

The Tivoli Workload Scheduler for z/OS controller uses the end-to-end server to communicate events to the FTAs. The end-to-end server starts multiple tasks and processes using z/OS UNIX System Services (USS). The Tivoli Workload Scheduler for z/OS end-to-end server must run on the same z/OS system where the served Tivoli Workload Scheduler for z/OS controller is started and active.

Tivoli Workload Scheduler for z/OS end-to-end scheduling comprises three major components:

The IBM Tivoli Workload Scheduler for z/OS controller: Manages database objects, creates plans with the workload, and executes and monitors the workload in the plan.

The IBM Tivoli Workload Scheduler for z/OS server: Acts as the Tivoli Workload Scheduler master domain manager. It receives a part of the current plan (the Symphony file) from the Tivoli Workload Scheduler for z/OS controller, which contains jobs and job streams to be executed in the Tivoli Workload Scheduler network. The server is the focal point for all communication to and from the Tivoli Workload Scheduler network.

IBM Tivoli Workload Scheduler domain managers at the first level: Serve as the communication hub between the Tivoli Workload Scheduler for z/OS server and the distributed Tivoli Workload Scheduler network. The first-level domain managers are connected directly to the Tivoli Workload Scheduler master domain manager running in USS in the Tivoli Workload Scheduler for z/OS end-to-end server.

In Tivoli Workload Scheduler for z/OS 8.2, you can have one or several Tivoli Workload Scheduler domain managers at the first level. These domain managers are connected directly to the Tivoli Workload Scheduler for z/OS end-to-end server, so they are called first-level domain managers. It is possible to designate backup domain managers for the first-level Tivoli Workload Scheduler domain managers (as it is for "normal" Tivoli Workload Scheduler fault-tolerant agents and domain managers).
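As a rough sketch, the started task procedure for such an end-to-end server might look like the following JCL. All data set names here are examples only; EQQSERVR is the server program, and the EQQTWSIN, EQQTWSOU, and EQQTWSCS data sets are described later in this section.

//TWSE2E   PROC
//TWSE2E   EXEC PGM=EQQSERVR,REGION=0M,TIME=1440
//STEPLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQLMD0
//EQQMLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQMSG0
//EQQMLOG  DD SYSOUT=*
//EQQPARM  DD DISP=SHR,DSN=TWS.INST.PARM(TWSS)
//EQQTWSIN DD DISP=SHR,DSN=TWS.INST.TWSIN
//EQQTWSOU DD DISP=SHR,DSN=TWS.INST.TWSOU
//EQQTWSCS DD DISP=SHR,DSN=TWS.INST.TWSCS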
Detailed description of the communication
Figure 2-15 shows the communication between the Tivoli Workload Scheduler for z/OS controller and the Tivoli Workload Scheduler for z/OS server.

Figure 2-15 IBM Tivoli Workload Scheduler for z/OS 8.2 interprocess communication

Tivoli Workload Scheduler for z/OS server processes and tasks
The end-to-end server address space hosts the tasks and the data sets that function as the intermediaries between the controller and the first-level domain managers. In many cases, these tasks and data sets are replicas of the distributed Tivoli Workload Scheduler processes and files. The Tivoli Workload Scheduler for z/OS server uses the following processes, threads, and tasks for end-to-end scheduling (see Figure 2-15):

netman  The Tivoli Workload Scheduler network listener daemon. It is started automatically when the end-to-end server task starts. The netman process monitors the NetReq.msg queue and listens on the TCP port defined by the PORTNUMBER parameter of the server TOPOLOGY statement (the default is port 31111). When netman receives a request, it starts another program to handle the request, usually writer or mailman. Requests to start or stop mailman are written by the output translator to the NetReq.msg queue.
Requests to start or stop writer are sent via TCP by the mailman process on a remote workstation (a first-level domain manager).

writer  One writer process is started by netman for each connected remote workstation (first-level domain manager). Each writer process receives events from the mailman process on a remote workstation and writes these events to the Mailbox.msg file.

mailman  The main message handler process. Its main tasks are:
- Routing events. It reads the events stored in the Mailbox.msg queue and sends them either to the controller (writing them in the Intercom.msg file) or to the writer process on a remote workstation (via TCP).
- Linking to remote workstations (first-level domain managers). The mailman process requests that the netman program on each remote workstation start a writer process to accept the connection.
- Sending the Symphony file to subordinate workstations (first-level domain managers). When a new Symphony file is created, the mailman process sends a copy of the file to each subordinate domain manager and fault-tolerant agent.

batchman  Updates the Symphony file and resolves dependencies at the master level. After the Symphony file has been written the first time, batchman is the only program that makes changes to the file. The batchman program in USS does not perform job submission; this is why there is no jobman process running in UNIX System Services.

translator  Through its input and output threads (discussed in more detail below), the translator process translates events from Tivoli Workload Scheduler format to Tivoli Workload Scheduler for z/OS format and vice versa. The translator program was developed specifically to handle the job of event translation from OPC events to Maestro events, and vice versa. The translator process runs in UNIX System Services on the mainframe; it does not run on domain managers or FTAs. The translator program provides the glue that binds Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler together; translator enables these two products to function as a unified scheduling system.
job log retriever  A thread of the translator process that is spawned to fetch a job log from a fault-tolerant agent. One job log retriever thread is spawned for each requested FTA job log. The job log retriever receives the log, sizes it according to the LOGLINES parameter, translates it from UTF-8 to EBCDIC, and queues it in the inbound queue of the controller. The retrieval of a job log is a lengthy operation and can take a few moments to complete. The user may request several logs at the same time. The job log retriever thread terminates after the log has been written to the inbound queue. If the IBM Tivoli Workload Scheduler for z/OS ISPF panel interface is used, the user is notified by a message when the job log has been received.

script downloader  A thread of the translator process that is spawned to download the script for an operation (job) defined in Tivoli Workload Scheduler for z/OS with the Centralized Script option set to Yes. One script downloader thread is spawned for each script that must be downloaded. Several script downloader threads can be active at the same time. The script that is to be downloaded is received from the output translator.

starter  The main process in the end-to-end server UNIX System Services environment. The starter process is the first process that is started in UNIX System Services when the end-to-end server started task is started. The starter process starts the translator and netman processes (not shown in Figure 2-15 on page 63).

Events passed from the server to the controller
input translator  A thread of the translator process. The input translator thread reads events from the tomaster.msg file and translates them from Tivoli Workload Scheduler format to Tivoli Workload Scheduler for z/OS format. It also performs UTF-8 to EBCDIC translation and sends the translated events to the input writer.

input writer  Receives the input from the job log retriever, input translator, and script downloader and writes it to the inbound queue (the EQQTWSIN data set).
receiver subtask  A subtask of the end-to-end task that runs in the Tivoli Workload Scheduler for z/OS controller. It receives events from the inbound queue and queues them to the Event Manager task. The events have already been filtered and elaborated by the input translator.

Events passed from the controller to the server
sender subtask  A subtask of the end-to-end task in the Tivoli Workload Scheduler for z/OS controller. It receives events for changes to the current plan that are related to Tivoli Workload Scheduler fault-tolerant agents. The Tivoli Workload Scheduler for z/OS tasks that can change the current plan are: General Service (GS), Normal Mode Manager (NMM), Event Manager (EM), and Workstation Analyzer (WA). The events are received via SSI, the usual method that the Tivoli Workload Scheduler for z/OS tasks use to exchange events. The NMM sends events to the sender task when the plan is extended or replanned, for synchronization purposes.

output translator  A thread of the translator process. The output translator thread reads events from the outbound queue. It translates the events from Tivoli Workload Scheduler for z/OS format to Tivoli Workload Scheduler format and evaluates them, performing the appropriate function. Most events, including those related to changes to the Symphony file, are written to Mailbox.msg. Requests to start or stop netman or mailman are written to NetReq.msg. The output translator also translates events from EBCDIC to UTF-8.

The output translator interacts with several components, depending on the type of the event:
- Starts a job log retriever thread if the event is to retrieve the log of a job from a Tivoli Workload Scheduler agent.
- Starts a script downloader thread if the event is to download a script.
- Queues an event in NetReq.msg if the event is to start or stop mailman.
- Queues events in Mailbox.msg for the other events that are sent to update the Symphony file on the Tivoli Workload Scheduler agents (for example, events for a job that has changed status, events for manual changes on jobs or workstations, or events to link or unlink workstations).
- Switches the Symphony files.

IBM Tivoli Workload Scheduler for z/OS data sets and files used for end-to-end scheduling
The Tivoli Workload Scheduler for z/OS server and controller use the following data sets and files for end-to-end scheduling:

EQQTWSIN  Sequential data set used to queue events sent by the server to the controller (the inbound queue). Must be defined in the Tivoli Workload Scheduler for z/OS controller and the end-to-end server started task procedures (shown as TWSIN in Figure 2-15 on page 63).

EQQTWSOU  Sequential data set used to queue events sent by the controller to the server (the outbound queue). Must be defined in the Tivoli Workload Scheduler for z/OS controller and the end-to-end server started task procedures (shown as TWSOU in Figure 2-15 on page 63).

EQQTWSCS  Partitioned data set used to temporarily store a script when it is downloaded from the Tivoli Workload Scheduler for z/OS JOBLIB data set to the fault-tolerant agent for submission (shown as TWSCS in Figure 2-15 on page 63).

Symphony  HFS file containing the active copy of the plan used by the distributed Tivoli Workload Scheduler agents.

Sinfonia  HFS file containing the distribution copy of the plan used by the distributed Tivoli Workload Scheduler agents. This file is not shown in Figure 2-15 on page 63.

NetReq.msg  HFS file used to queue requests for the netman process.

Mailbox.msg  HFS file used to queue events sent to the mailman process.

Intercom.msg  HFS file used to queue events sent to the batchman process.

tomaster.msg  HFS file used to queue events sent to the input translator process.
Translator.chk  HFS file used as the checkpoint file for the translator process. It is equivalent to the checkpoint data set used by the Tivoli Workload Scheduler for z/OS controller. For example, it contains information about the status of the Tivoli Workload Scheduler for z/OS current plan, the Symphony run number, and Symphony availability. This file is not shown in Figure 2-15 on page 63.

Translator.wjl  HFS file used to store information about job log retrievals and script downloads that are in progress. At initialization, the translator checks the translator.wjl file for job log retrievals and script downloads that did not complete (either correctly or in error) and sends the error back to the controller. This file is not shown in Figure 2-15 on page 63.

EQQSCLIB  Partitioned data set used as a repository for jobs with non-centralized script definitions running on FTAs. The EQQSCLIB data set is described in "Tivoli Workload Scheduler for z/OS end-to-end database objects" on page 69. It is not shown in Figure 2-15 on page 63.

EQQSCPDS  VSAM data set containing a copy of the current plan used by the daily plan batch programs to create the Symphony file. The end-to-end plan creation process is described in 2.3.4, "Tivoli Workload Scheduler for z/OS end-to-end plans" on page 75. It is not shown in Figure 2-15 on page 63.

2.3.3 Tivoli Workload Scheduler for z/OS end-to-end configuration
The topology of the distributed IBM Tivoli Workload Scheduler network that is connected to the IBM Tivoli Workload Scheduler for z/OS engine is described in parameter statements for the Tivoli Workload Scheduler for z/OS server and for the Tivoli Workload Scheduler for z/OS programs that handle the long-term plan and the current plan. Parameter statements are also used to activate the end-to-end subtasks in the Tivoli Workload Scheduler for z/OS controller.

The parameter statements that are used to describe the topology are covered in 4.2.6, "Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling" on page 174. That section also includes an example of how to reflect a specific Tivoli Workload Scheduler network topology in the Tivoli Workload Scheduler for z/OS servers and plan programs using the Tivoli Workload Scheduler for z/OS topology parameter statements.
Tivoli Workload Scheduler for z/OS end-to-end database objects
In order to run jobs on fault-tolerant agents or extended agents, you must first define database objects related to the Tivoli Workload Scheduler workload in the Tivoli Workload Scheduler for z/OS databases. The Tivoli Workload Scheduler for z/OS end-to-end related database objects are:

IBM Tivoli Workload Scheduler for z/OS fault-tolerant workstations
A fault-tolerant workstation is a computer workstation configured to schedule jobs on FTAs. The workstation must also be defined in the server CPUREC initialization statement (see Figure 2-16 on page 70).

IBM Tivoli Workload Scheduler for z/OS job streams, jobs, and dependencies
Job streams and jobs to run on Tivoli Workload Scheduler FTAs are defined like other job streams and jobs in Tivoli Workload Scheduler for z/OS. To run a job on a Tivoli Workload Scheduler FTA, the job is simply defined on a fault-tolerant workstation. Dependencies between Tivoli Workload Scheduler distributed jobs are created exactly the same way as other job dependencies in the Tivoli Workload Scheduler for z/OS controller. This is also the case when creating dependencies between Tivoli Workload Scheduler distributed jobs and Tivoli Workload Scheduler for z/OS mainframe jobs. Some of the Tivoli Workload Scheduler for z/OS mainframe-specific options are not available for Tivoli Workload Scheduler distributed jobs.
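For illustration, the topology side of such a workstation definition (the part that does not live in the Tivoli Workload Scheduler for z/OS databases) might look like the following sketch for a first-level domain manager workstation named F100, as in Figure 2-16. The domain name, node name, port, and user shown here are assumptions:

DOMREC DOMAIN(DOMAINA)
       DOMMNGR(F100)
       DOMPARENT(MASTERDM)

CPUREC CPUNAME(F100)
       CPUOS(AIX)
       CPUNODE(f100.example.com)
       CPUTCPIP(31111)
       CPUDOMAIN(DOMAINA)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(ON)
       CPURESDEP(ON)
       CPUUSER(tws)

The workstation name in the CPUREC statement must match the fault-tolerant workstation name defined in the Tivoli Workload Scheduler for z/OS workstation database.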
Figure 2-16 A workstation definition and its corresponding CPUREC (the figure shows the F100 workstation definition in ISPF, the topology definition for the F100 workstation, and the F100 workstation definition in the JSC)

IBM Tivoli Workload Scheduler for z/OS resources
Only global resources are supported and can be used for Tivoli Workload Scheduler distributed jobs. This means that the resource dependency is resolved by the Tivoli Workload Scheduler for z/OS controller and not locally on the FTA. For a job running on an FTA, the use of resources causes the loss of fault tolerance. Only the controller determines the availability of a resource and consequently lets the FTA start the job. Thus, if a job running on an FTA uses a resource, the following occurs:
- When the resource is available, the controller sets the state of the job to started and the extended status to waiting for submission.
- The controller sends a release-dependency event to the FTA.
- The FTA starts the job.

If the connection between the engine and the FTA is broken, the operation does not start on the FTA even if the resource becomes available.
Note: Special resource dependencies are represented differently depending on whether you are looking at the job through Tivoli Workload Scheduler for z/OS interfaces or Tivoli Workload Scheduler interfaces. If you observe the job using Tivoli Workload Scheduler for z/OS interfaces, you can see the resource dependencies as expected. However, when you monitor a job on a fault-tolerant agent by means of the Tivoli Workload Scheduler interfaces, you will not be able to see the resource that is used by the job. Instead you will see a dependency on a job called OPCMASTER#GLOBAL.SPECIAL_RESOURCES. This dependency is set by the engine. Every job that has special resource dependencies has a dependency on this job. When the engine allocates the resource for the job, the dependency is released. (The engine sends a release event for the specific job through the network.)

The task or script associated with the FTA job, defined in Tivoli Workload Scheduler for z/OS
In IBM Tivoli Workload Scheduler for z/OS 8.2, the task or script associated with an FTA job can be defined in two different ways:

a. Non-centralized script
The job or task definition is stored in a special partitioned data set, EQQSCLIB, allocated in the Tivoli Workload Scheduler for z/OS controller started task procedure. The script (the JCL) itself resides on the fault-tolerant agent. This is the default behavior in Tivoli Workload Scheduler for z/OS for fault-tolerant agent jobs.

b. Centralized script
The job is defined in Tivoli Workload Scheduler for z/OS with the Centralized Script option set to Y (Yes).

Note: The default for all operations and jobs in Tivoli Workload Scheduler for z/OS is N (No).

A centralized script resides in the IBM Tivoli Workload Scheduler for z/OS JOBLIB and is downloaded to the fault-tolerant agent every time the job is submitted. The concept of centralized script has been added for compatibility with the way that Tivoli Workload Scheduler for z/OS manages jobs in the z/OS environment.
Non-centralized script
For every FTA job definition in Tivoli Workload Scheduler for z/OS where the centralized script option is set to N (non-centralized script), there must be a corresponding member in the EQQSCLIB data set. The members of EQQSCLIB contain a JOBREC statement that describes the path to the job or the command to be executed and, optionally, the user under which the job or command is executed.

Example for a UNIX script:
JOBREC JOBSCR(/Tivoli/tws/scripts/script001_accounting) JOBUSR(userid01)

Example for a UNIX command:
JOBREC JOBCMD(ls) JOBUSR(userid01)

If the JOBUSR (user for the job) keyword is not specified, the user defined in the CPUUSER keyword of the CPUREC statement for the fault-tolerant workstation is used.

If necessary, Tivoli Workload Scheduler for z/OS JCL variables can be used in the JOBREC definition. Tivoli Workload Scheduler for z/OS JCL variables and variable substitution in an EQQSCLIB member are managed and controlled by VARSUB statements placed directly in the EQQSCLIB member with the JOBREC definition for the particular job. Furthermore, it is possible to define Tivoli Workload Scheduler recovery options for the job defined in the JOBREC statement. Tivoli Workload Scheduler recovery options are defined with RECOVERY statements placed directly in the EQQSCLIB member with the JOBREC definition for the particular job.

The JOBREC (and optionally VARSUB and RECOVERY) definitions are read by the Tivoli Workload Scheduler for z/OS plan programs when producing the new current plan and are placed as part of the job definition in the Symphony file. If a Tivoli Workload Scheduler distributed job stream is added to the plan in Tivoli Workload Scheduler for z/OS, the JOBREC definition is read by Tivoli Workload Scheduler for z/OS, copied to the Symphony file on the Tivoli Workload Scheduler for z/OS server, and sent (as events) by the server to the Tivoli Workload Scheduler agent Symphony files via the directly connected Tivoli Workload Scheduler domain managers.

It is important to remember that the EQQSCLIB member has only a pointer (the path) to the job that is going to be executed. The actual job (the JCL) is placed locally on the FTA or workstation in the directory defined by the JOBREC JOBSCR definition.
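Putting these statements together, an EQQSCLIB member that combines variable substitution and recovery options might look like the following sketch. The table name ACCTTAB, the variable &RUNDATE (assumed to be defined in that table), the script path, and the user are all assumptions:

VARSUB TABLES(ACCTTAB)
JOBREC JOBSCR('/Tivoli/tws/scripts/acct_close_&RUNDATE.') JOBUSR(userid01)
RECOVERY OPTION(RERUN) MESSAGE('Check the accounting file system, then rerun')

Here the RECOVERY statement asks Tivoli Workload Scheduler to rerun the job after the operator replies to the recovery prompt raised by the MESSAGE keyword.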
This also means that it is not possible to use the JCL edit function in Tivoli Workload Scheduler for z/OS to edit the script (the JCL) for jobs where the script (the pointer) is defined by a JOBREC statement in the EQQSCLIB data set.

Centralized script
The script for a job defined with the centralized script option set to Y must be defined in the Tivoli Workload Scheduler for z/OS JOBLIB. The script is defined in the same way as normal JCL. It is possible (but not necessary) to define some parameters of the centralized script, such as the user, in a job definition member of the SCRPTLIB data set.

With centralized scripts, you can perform variable substitution, automatic recovery, JCL editing, and job setup (as for "normal" z/OS jobs defined in the Tivoli Workload Scheduler for z/OS JOBLIB). It is also possible to use the job-submit exit (EQQUX001).

Note that jobs with a centralized script are defined in the Symphony file with a dependency named script. This dependency is released when the job is ready to run and the script has been downloaded from the Tivoli Workload Scheduler for z/OS controller to the fault-tolerant agent.

To download a centralized script, the DD statement EQQTWSCS must be present in the controller and server started tasks. During the download, the <twshome>/centralized directory is created on the fault-tolerant workstation. The script is downloaded to this directory. If an error occurs during this operation, the controller retries the download every 30 seconds, for a maximum of 10 times. If the script download still fails after 10 retries, the job (operation) is marked as Ended-in-error with error code OSUF.

Here are the detailed steps for downloading and executing centralized scripts on FTAs (Figure 2-17 on page 75):
1. The Tivoli Workload Scheduler for z/OS controller instructs the sender subtask to begin the script download.
2. The sender subtask writes the centralized script to the centralized scripts data set (EQQTWSCS).
3. The sender subtask writes a script download event (type JCL, action D) to the output queue (EQQTWSOU).
4. The output translator thread reads the JCL-D event from the output queue.
5. The output translator thread reads the script from the centralized scripts data set (EQQTWSCS).
6. The output translator thread spawns a script downloader thread.
7. The script downloader thread connects directly to netman on the FTA where the script will run.
8. netman spawns dwnldr and connects the socket from the script downloader thread to the new dwnldr process.
9. dwnldr downloads the script from the script downloader thread and writes it to the TWSHome/centralized directory on the FTA.
10. dwnldr notifies the script downloader thread of the result of the download.
11. The script downloader thread passes the result to the input writer thread.
12. If the script download was successful, the input writer thread writes a script download successful event (type JCL, action C) on the input queue (EQQTWSIN). If the script download was unsuccessful, the input writer thread writes a script download in error event (type JCL, action E) on the input queue.
13. The receiver subtask reads the script download result event from the input queue.
14. The receiver subtask notifies the Tivoli Workload Scheduler for z/OS controller of the result of the script download. If the script download was successful, the OPC controller then sends a release dependency event (type JCL, action R) to the FTA, via the normal IPC channel (sender subtask → output queue → output translator → Mailbox.msg → mailman → writer on FTA, and so on). This event causes the job to run.
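For comparison with the JOBREC examples shown earlier, a centralized script is stored as a member of the Tivoli Workload Scheduler for z/OS JOBLIB rather than in EQQSCLIB. A member for a UNIX fault-tolerant agent job might contain something like the following sketch (the script body and path are assumptions):

#!/bin/sh
# This script body is stored centrally in the JOBLIB member and is
# downloaded to <twshome>/centralized on the FTA at each submission
/opt/payroll/bin/extract_payroll.sh

Because the script is kept centrally, it can be edited with the normal JCL edit function and can use variable substitution, unlike a non-centralized script.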
Figure 2-17 Steps and processes for downloading centralized script

Creating a centralized script in the Tivoli Workload Scheduler for z/OS JOBLIB data set is described in 4.5.2, "Definition of centralized scripts" on page 219.

2.3.4 Tivoli Workload Scheduler for z/OS end-to-end plans
When scheduling jobs in the Tivoli Workload Scheduler environment, current plan processing also includes the automatic generation of the Symphony file that goes to the IBM Tivoli Workload Scheduler for z/OS server and on to the IBM Tivoli Workload Scheduler subordinate domain managers and fault-tolerant agents. The Tivoli Workload Scheduler for z/OS current plan program is normally run on workdays in the engine, as described in 2.1.3, "Tivoli Workload Scheduler for z/OS plans" on page 37.
Figure 2-18 shows a combined view of long-term planning and current planning. Changes to the databases require an update of the long-term plan, so most sites run the LTP Modify batch job immediately before extending the current plan.

Figure 2-18 Combined view of the long-term planning and current planning

If the end-to-end feature is activated in Tivoli Workload Scheduler for z/OS, the current plan program reads the topology definitions described in the TOPOLOGY, DOMREC, CPUREC, and USRREC initialization statements (see 2.3.3, "Tivoli Workload Scheduler for z/OS end-to-end configuration" on page 68) and the script library (EQQSCLIB) as part of the planning process. Information from the initialization statements and the script library is used to create a Symphony file for the Tivoli Workload Scheduler FTAs (see Figure 2-19 on page 77). The whole process is handled by the Tivoli Workload Scheduler for z/OS planning programs.
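For reference, the TOPOLOGY and USRREC statements mentioned above might look like the following sketch. The directories, member names, workstation name, and the Windows user and password are assumptions; TPLGYMEM and USRMEM name the parameter library members that hold the DOMREC/CPUREC and USRREC definitions, respectively:

TOPOLOGY TPLGYMEM(TPLGINFO)
         USRMEM(USRINFO)
         BINDIR('/usr/lpp/TWS/V8R2M0')
         WRKDIR('/var/TWS/inst')
         PORTNUMBER(31111)
         TRCDAYS(14)
         LOGLINES(100)

USRREC USRCPU(F200)
       USRNAM(tws)
       USRPSW('secret')

A USRREC statement is needed for each Windows user that runs jobs on a Windows fault-tolerant workstation.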
Figure 2-19 Creation of Symphony file in Tivoli Workload Scheduler for z/OS plan programs

This process is described in more detail in the next section.

Detailed description of the Symphony creation
Figure 2-20 shows the tasks and processes involved in the Symphony creation.
Figure 2-20 IBM Tivoli Workload Scheduler for z/OS 8.2 interprocess communication

1. The process is handled by the Tivoli Workload Scheduler for z/OS planning batch programs. The batch job produces the new current plan (NCP) and initializes the SymUSER file.
2. The Normal Mode Manager (NMM) sends the SYNC START (S) event to the server, and the end-to-end receiver starts leaving all incoming events in the inbound queue (TWSIN).
3. When the SYNC START (S) event is processed by the output translator, it stops the OPCMASTER, sends the SYNC END (E) event to the controller, and stops the entire network.
4. The NMM applies the job-tracking events received while the new plan was being produced. It then copies the new current plan data set (NCP) to the Tivoli Workload Scheduler for z/OS current plan data set (CP1 or CP2), makes a current plan backup (copies the active CP1/CP2 to the inactive CP1/CP2), and creates the Symphony Current Plan (SCP) data set as a copy of the active current plan (CP1 or CP2) data set.
5. Tivoli Workload Scheduler for z/OS mainframe scheduling is resumed.
6. The end-to-end receiver begins to process the events in the queue.
7. The SYNC CPREADY (Y) event is sent to the output translator, which starts leaving all events in the outbound queue (TWSOU).
8. The plan program produces the SymUSER file, starting from the SCP, and then renames it Symnew.
9. When the Symnew file has been created, the plan program ends, and the NMM notifies the output translator that the Symnew file is ready by sending the SYNC SYMREADY (R) event to the output translator.
10. The output translator renames the old Symphony and Sinfonia files to Symold and Sinfold, and a Symphony OK (X) or NOT OK (B) Sync event is sent to the Tivoli Workload Scheduler for z/OS engine, which logs a message in the engine message log indicating whether the Symphony has been switched.
11. The Tivoli Workload Scheduler for z/OS server master is started in USS, and the input translator starts to process new events. As in distributed Tivoli Workload Scheduler, the mailman and batchman processes pick up the events left in the local event files and start distributing the new Symphony file to the whole IBM Tivoli Workload Scheduler network.

When the Symphony file is created by the Tivoli Workload Scheduler for z/OS plan programs, it (or, more precisely, the Sinfonia file) is distributed to the Tivoli Workload Scheduler for z/OS subordinate domain managers, which in turn distribute the Symphony (Sinfonia) file to their subordinate domain managers and fault-tolerant agents. (See Figure 2-21 on page 80.)
Figure 2-21 Symphony file distribution from ITWS for z/OS server to ITWS agents

The Symphony file is generated:
- Every time the Tivoli Workload Scheduler for z/OS plan is extended or replanned
- When a Symphony renew batch job is submitted (from the Tivoli Workload Scheduler for z/OS legacy ISPF panels, option 3.5)

The Symphony file contains:
- Jobs to be executed on Tivoli Workload Scheduler FTAs
- z/OS (mainframe) jobs that are predecessors to Tivoli Workload Scheduler distributed jobs
- Job streams that have at least one job in the Symphony file
- Topology information for the distributed network, with all the workstation and domain definitions, including the master domain manager of the distributed network; that is, the Tivoli Workload Scheduler for z/OS host
After the Symphony file is created and distributed to the Tivoli Workload Scheduler FTAs, the Symphony file is updated by events:
- When job status changes
- When jobs or job streams are modified
- When jobs or job streams for the Tivoli Workload Scheduler FTAs are added to the plan in the Tivoli Workload Scheduler for z/OS controller

If you look at the Symphony file locally on a Tivoli Workload Scheduler FTA, from the Job Scheduling Console, or using the Tivoli Workload Scheduler command line interface to the plan (conman), you will see that:
- The Tivoli Workload Scheduler workstation has the same name as the related workstation defined in Tivoli Workload Scheduler for z/OS for the agent.
- OPCMASTER is the hard-coded name for the master domain manager workstation, that is, the Tivoli Workload Scheduler for z/OS controller.
- The name of the job stream (or schedule) is the hexadecimal representation of the occurrence (job stream instance) token, an internal, unique, and invariant identifier for occurrences. The job streams are always defined on the OPCMASTER workstation. (Having no dependencies, this does not reduce fault tolerance.) See Figure 2-22 on page 82. Using this hexadecimal representation for the job stream instances makes it possible to have several instances of the same job stream, because they have unique job stream names. Therefore, it is possible to have a plan in the Tivoli Workload Scheduler for z/OS controller and a distributed Symphony file that span more than 24 hours.

Note: In Tivoli Workload Scheduler for z/OS, the key in the plan for an occurrence is the job stream name and input arrival time. In the Symphony file, the key is the job stream instance name. Because Tivoli Workload Scheduler for z/OS can have several job stream instances with the same name in the plan, a unique and invariant identifier (the occurrence token) is needed for the occurrence (job stream instance) name in the Symphony file.

- The job name is made up according to one of the following formats (see Figure 2-22 on page 82 for an example):
  - <T>_<opnum>_<applname> when the job is created in the Symphony file
  - <T>_<opnum>_<ext>_<applname> when the job is first deleted from the current plan and then re-created in the current plan
In these formats:
– <T> is J for normal jobs (operations), P for jobs that represent pending predecessors, or R for recovery jobs (jobs added by Tivoli Workload Scheduler recovery).
– <opnum> is the operation number for the job in the job stream (in the current plan).
– <ext> is a sequential number that is incremented every time the same operation is deleted and then recreated in the current plan; if it is 0, it is omitted.
– <applname> is the name of the occurrence (job stream) that the operation belongs to.

Figure 2-22 Job name and job stream name as generated in the Symphony file (screen capture highlighting the job name and workstation for a distributed job, and the job stream name and workstation for a job stream, as they appear in the Symphony file)

Tivoli Workload Scheduler for z/OS uses the job name and an operation number as the "key" for a job in a job stream. In the Symphony file, only the job name is used as the key. Because Tivoli Workload Scheduler for z/OS can have the same job name several times in one job stream, and distinguishes between identical job names by the operation number, the job names generated in the Symphony file contain the Tivoli Workload Scheduler for z/OS operation number as part of the job name.

The name of a job stream (application) can contain national characters such as dollar ($), section sign (§), and pound (£). These characters are converted into dashes (-) in the names of the included jobs when the job stream is added to the Symphony file or when the Symphony file is created. For example, consider the job stream name:

  APPL$$234§§ABC£

In the Symphony file, the names of the jobs in this job stream will be:

  <T>_<opnum>_APPL--234--ABC-

This naming remains unambiguous because the job stream instance (occurrence) is identified by the occurrence token, and the operations are each identified by the
operation numbers (<opnum>) that are part of the job names in the Symphony file.

  Note: The criteria used to generate job names in the Symphony file can be managed with the Tivoli Workload Scheduler for z/OS JTOPTS TWSJOBNAME() parameter, which was introduced with APAR PQ77970. It is possible, for example, to use the job name (from the operation) instead of the job stream name in the Symphony file, so that the job name will be <T>_<opnum>_<jobname>. A sketch of the statement appears at the end of this section.

In normal situations, the Symphony file is automatically generated as part of the Tivoli Workload Scheduler for z/OS plan process. The topology definitions are read and built into the Symphony file by the Tivoli Workload Scheduler for z/OS plan programs, so even in regular operation, situations can occur where you need to renew (or rebuild) the Symphony file from the Tivoli Workload Scheduler for z/OS plan:
- When you make changes to the script library or to the definitions of the TOPOLOGY statement
- When you add or change information in the plan, such as workstation definitions

To have the Symphony file rebuilt or renewed, you can use the Symphony Renew option of the Daily Planning menu (option 3.5 in the legacy IBM Tivoli Workload Scheduler for z/OS ISPF panels).

This renew function can also be used to recover from error situations such as:
- An invalid job definition in the script library
- Incorrect workstation definitions
- An incorrect Windows user name or password
- Changes to the script library or to the definitions of the TOPOLOGY statement

In 5.8.5, "Common errors for jobs on fault-tolerant workstations" on page 334, we describe how to correct several of these error situations without redistributing the Symphony file. It is worth getting familiar with these alternatives before you start redistributing a Symphony file in a heavily loaded production environment.
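Returning to the note about job naming: the following fragment shows how the naming criterion might be set. JTOPTS TWSJOBNAME() is the real parameter (introduced with APAR PQ77970), but treat the value below as a sketch; member EQQPDFXJ in the SEQQMISC library documents the supported values and their exact effects.

  JTOPTS TWSJOBNAME(JOBNAME)   /* build Symphony job names from the   */
                               /* operation's job name, giving        */
                               /* <T>_<opnum>_<jobname>               */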
2.3.5 Making the end-to-end scheduling system fault tolerant

In the following, we cover some possible cases of failure in end-to-end scheduling and ways to mitigate these failures:
1. The Tivoli Workload Scheduler for z/OS engine (controller) can fail due to a system or task outage.
2. The Tivoli Workload Scheduler for z/OS server can fail due to a system or task outage.
3. The domain managers at the first level, that is, the domain managers directly connected to the Tivoli Workload Scheduler for z/OS server, can fail due to a system or task outage.

To avoid an outage of the end-to-end workload managed in the Tivoli Workload Scheduler for z/OS engine and server and in the Tivoli Workload Scheduler domain managers, you should consider:
- Using a standby engine (controller) for the Tivoli Workload Scheduler for z/OS engine (controller).
- Making sure that your Tivoli Workload Scheduler for z/OS server can be reached if the Tivoli Workload Scheduler for z/OS engine (controller) is moved to one of its standby engines (TCP/IP configuration in your enterprise). Remember that the end-to-end server started task must always be active on the same z/OS system as the active engine (controller).
- Defining backup domain managers for your Tivoli Workload Scheduler domain managers at the first level.

  Note: It is good practice to define backup domain managers for all domain managers in the distributed Tivoli Workload Scheduler network.

Figure 2-23 shows an example of a fault-tolerant end-to-end network with a Tivoli Workload Scheduler for z/OS standby controller engine and one Tivoli Workload Scheduler backup domain manager for one Tivoli Workload Scheduler domain manager at the first level.
Figure 2-23 Redundant configuration with standby engine and IBM Tivoli Workload Scheduler backup DM (diagram: a z/OS sysplex with the active engine and server plus two standby engines; DomainZ has an AIX domain manager DMZ and an AIX backup domain manager (FTA), above DomainA and DomainB with their domain managers and FTAs)

If the domain manager for DomainZ fails, it is possible to switch to the backup domain manager. The backup domain manager has an updated Symphony file and knows the subordinate domain managers and fault-tolerant agents, so it can take over the responsibilities of the domain manager. This switch can be performed without any outage in workload management.

If the switch to the backup domain manager is going to be active across the Tivoli Workload Scheduler for z/OS plan extension, you must change the topology definitions in the Tivoli Workload Scheduler for z/OS DOMREC initialization statements, so that the backup domain manager's fault-tolerant workstation remains the domain manager at the first level for the Tivoli Workload Scheduler distributed network even after the plan extension.

Example 2-1 shows how to change the name of the fault-tolerant workstation in the DOMREC initialization statement if the switch to the backup domain manager is effective across the Tivoli Workload Scheduler for z/OS plan extension. (See 5.5.4, "Switch to Tivoli Workload Scheduler backup domain manager" on page 308 for more information.)
Example 2-1 DOMREC initialization statement

  DOMREC DOMAIN(DOMAINZ) DOMMGR(FDMZ) DOMPARENT(MASTERDM)

should be changed to:

  DOMREC DOMAIN(DOMAINZ) DOMMGR(FDMB) DOMPARENT(MASTERDM)

where FDMB is the name of the fault-tolerant workstation where the backup domain manager is running.

If the Tivoli Workload Scheduler for z/OS engine or server fails, it is possible to let one of the standby engines in the same sysplex take over. This takeover can be accomplished without any outage in workload management. The Tivoli Workload Scheduler for z/OS server must follow the Tivoli Workload Scheduler for z/OS engine: if the engine is moved to another system in the sysplex, the server must be moved to the same system in the sysplex.

  Note: The synchronization between the Symphony file on the Tivoli Workload Scheduler domain manager and the Symphony file on its backup domain manager improved considerably with Fix Pack 04 for IBM Tivoli Workload Scheduler, which introduced an enhanced fault-tolerant switch manager function.

2.3.6 Benefits of end-to-end scheduling

The benefits that can be gained from using Tivoli Workload Scheduler for z/OS end-to-end scheduling include:
- The ability to connect Tivoli Workload Scheduler fault-tolerant agents to a Tivoli Workload Scheduler for z/OS controller.
- Scheduling on additional operating systems.
- The ability to define resource dependencies between jobs that run on different FTAs or in different domains.
- Synchronizing work in mainframe and distributed environments.
- The ability to organize the scheduling network into multiple tiers, delegating some responsibilities to Tivoli Workload Scheduler domain managers.
- Extended planning capabilities, such as the use of long-term plans, trial plans, and extended plans, also for the Tivoli Workload Scheduler network. Extended plans mean that the current plan can span more than 24 hours. One possible benefit is being able to extend a current plan over a time
period when no one will be available to verify that the current plan was successfully created each day, such as over a holiday weekend. The end-to-end environment also allows the current plan to be extended for a specified length of time, or to be replanned to remove completed jobs.
- Powerful run-cycle and calendar functions. Tivoli Workload Scheduler end-to-end scheduling enables more complex run cycles and rules to be defined to determine when a job stream should be scheduled.
- The ability to create a trial plan that can span more than 24 hours.
- Improved use of resources (keep the resource if a job ends in error).
- Enhanced use of host names instead of dotted IP addresses.
- Multiple job or job stream instances in the same plan. In the end-to-end environment, job streams are renamed using a unique identifier so that multiple job stream instances can be included in the current plan.
- The ability to use batch tools (for example, Batchloader, Massupdate, OCL, BCIT) that enable batched changes to be made to the Tivoli Workload Scheduler end-to-end database and plan.
- The ability to specify at the job level whether the job's script should be centralized (placed in the Tivoli Workload Scheduler for z/OS JOBLIB) or non-centralized (placed locally on the Tivoli Workload Scheduler agent). A sample non-centralized definition is sketched after this list.
- Use of Tivoli Workload Scheduler for z/OS JCL variables in both centralized and non-centralized scripts.
- The ability to use Tivoli Workload Scheduler for z/OS recovery in centralized scripts or Tivoli Workload Scheduler recovery in non-centralized scripts.
- The ability to define and browse operator instructions associated with jobs in the database and plan. In a Tivoli Workload Scheduler distributed environment, it is possible to insert comments or a description in a job definition, but these comments and descriptions are not visible from the plan functions.
- The ability to define a job stream that will be submitted automatically to Tivoli Workload Scheduler when one of the following events occurs in the z/OS system: a particular job is executed or terminated in the z/OS system, a specified resource becomes available, or a z/OS data set is created or opened.
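To make the non-centralized script bullet concrete, the following is a minimal sketch of a SCRPTLIB member for a job whose script lives on the FTA. VARSUB, JOBREC, and RECOVERY are the real statements used in SCRPTLIB members, but the script path, user ID, variable table, return-code expression, and message text are invented for this example, so adapt them to your environment.

  VARSUB   TABLES(ACCTTAB)                    /* resolve variables from table  */
  JOBREC   JOBSCR('/prod/scripts/load_orders.sh &OYMD1.')
           JOBUSR(prodbatch)                  /* user that runs the script     */
           RCCONDSUC('RC < 4')                /* treat RC 0-3 as success       */
  RECOVERY OPTION(RERUN)                      /* rerun the job after an error  */
           MESSAGE('Check the input feed before replying')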
Considerations

Implementing Tivoli Workload Scheduler for z/OS end-to-end scheduling also imposes some limitations:
- Windows users' passwords are defined directly (without any encryption) in the Tivoli Workload Scheduler for z/OS server initialization parameters. It is possible to place these definitions in a separate library with access restricted (by RACF, for example) to authorized persons.
- In an end-to-end configuration, some of the conman command options are disabled. On an end-to-end FTA, the conman command only allows display operations and the subset of commands (such as kill, altpass, link/unlink, start/stop, switchmgr) that do not affect the status or sequence of jobs. Command options that could affect the information contained in the Symphony file are not allowed. For a complete list of the allowed conman commands, refer to 2.7, "conman commands in the end-to-end environment" on page 106.
- Workstation classes are not supported in an end-to-end configuration.
- The LIMIT attribute is supported at the workstation level, not at the job stream level, in an end-to-end environment.
- Some Tivoli Workload Scheduler functions are not available directly on Tivoli Workload Scheduler FTAs, but can be handled by other functions in Tivoli Workload Scheduler for z/OS. For example:
  – IBM Tivoli Workload Scheduler prompts
    • Recovery prompts are supported.
    • The Tivoli Workload Scheduler predefined and ad hoc prompts can be replaced with the manual workstation function in Tivoli Workload Scheduler for z/OS.
  – IBM Tivoli Workload Scheduler file dependencies
    • It is not possible to define file dependencies directly at the job level in Tivoli Workload Scheduler for z/OS for distributed Tivoli Workload Scheduler jobs.
    • The filewatch program that is delivered with Tivoli Workload Scheduler can be used to create file dependencies for distributed jobs in Tivoli Workload Scheduler for z/OS. Using the filewatch program, the file dependency is "replaced" by a job dependency in which a predecessor job checks for the file using the filewatch program.
  – Dependencies at the job stream level
    The traditional way to handle these types of dependencies in Tivoli Workload Scheduler for z/OS is to define a "dummy start" and a "dummy end" job at the beginning and end of the job streams, respectively.
  – Repeat range (that is, "rerun this job every 10 minutes")
    Although there is no built-in function for this in Tivoli Workload Scheduler for z/OS, it can be accomplished in different ways, such as by defining the job repeatedly in the job stream with specific start times, or by using a PIF (Tivoli Workload Scheduler for z/OS Programming Interface) program to rerun the job every 10 minutes.
  – Job priority change
    Job priority cannot be changed directly for an individual fault-tolerant job. In an end-to-end configuration, it is possible to change the priority of a job stream; when the priority of a job stream is changed, all jobs within the job stream get the same priority.
  – Internetwork dependencies
    An end-to-end configuration supports dependencies only on jobs that run in the same Tivoli Workload Scheduler end-to-end or distributed topology (network).

2.4 Job Scheduling Console and related components

The Job Scheduling Console (JSC) provides another way of working with the Tivoli Workload Scheduler for z/OS databases and current plan. The JSC is a graphical user interface that connects to the Tivoli Workload Scheduler for z/OS engine via a Tivoli Workload Scheduler for z/OS TCP/IP server task. Usually this task is dedicated exclusively to handling JSC communications. Later in this book, the server task that is dedicated to JSC communications is referred to as the JSC server (Figure 2-24 on page 90).

The TCP/IP server is a separate address space, started and stopped either automatically by the engine or by the user via the z/OS start and stop commands. More than one TCP/IP server can be associated with an engine. A sketch of the server options for such a dedicated JSC server task follows.
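As an illustration, the following SERVOPTS fragment sketches how a dedicated JSC server task might be configured. SERVOPTS and the keywords shown are real initialization statements for the server started task, but the subsystem name, port, user-map member, and code page are invented values; consult the Customization and Tuning manual for the choices that apply to your installation.

  SERVOPTS SUBSYS(TWSC)          /* controller subsystem this server serves */
           PROTOCOL(JSC)         /* this server handles JSC sessions        */
           PORTNUMBER(42525)     /* TCP port the z/OS connector calls       */
           USERMAP(USERS)        /* member mapping JSC users to RACF IDs    */
           CODEPAGE(IBM-037)     /* host code page for data conversion      */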
Figure 2-24 Communication between JSC and ITWS for z/OS via the JSC Server (diagram: the Job Scheduling Consoles connect through the Tivoli Management Framework and the OPC connector on a TMR server, which communicates with the JSC server address space beside the TWS for z/OS engine, databases, and current plan)

The Job Scheduling Console can be run on almost any platform. Using the JSC, an operator can access both Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS scheduling engines. In order to communicate with the scheduling engines, the JSC requires several additional components to be installed:
- Tivoli Management Framework
- Job Scheduling Services (JSS)
- Tivoli Workload Scheduler connector, Tivoli Workload Scheduler for z/OS connector, or both

The Job Scheduling Services and the connectors must be installed on top of the Tivoli Management Framework. Together, the Tivoli Management Framework, the Job Scheduling Services, and the connector provide the interface between the JSC and the scheduling engine. The Job Scheduling Console itself is installed locally on your desktop computer, laptop computer, or workstation.

2.4.1 A brief introduction to the Tivoli Management Framework

Tivoli Management Framework provides the foundation on which the Job Scheduling Services and connectors are installed. It also performs access verification when a Job Scheduling Console user logs in. The Tivoli Management Environment (TME®) uses the concept of Tivoli Management Regions (TMRs). There is a single server for each TMR, called the TMR server; this is analogous
to the IBM Tivoli Workload Scheduler master server. The TMR server contains the Tivoli object repository (a database used by the TMR). Managed nodes are semi-independent agents that are installed on other nodes in the network; these are roughly analogous to Tivoli Workload Scheduler fault-tolerant agents. For more information about the Tivoli Management Framework, see the IBM Tivoli Management Framework 4.1 User's Guide, GC32-0805.

2.4.2 Job Scheduling Services (JSS)

The Job Scheduling Services component provides a unified interface in the Tivoli Management Framework for different job scheduling engines. Job Scheduling Services does not do anything on its own; it requires additional components, called connectors, to connect to job scheduling engines. It must be installed on either the TMR server or a managed node.

2.4.3 Connectors

Connectors are the components that enable the Job Scheduling Services to talk to different types of scheduling engines. When working with a particular type of scheduling engine, the Job Scheduling Console communicates with the scheduling engine via the Job Scheduling Services and the connector. A different connector is required for each type of scheduling engine. A connector can only be installed on a computer where the Tivoli Management Framework and Job Scheduling Services have already been installed.

There are two types of connectors for connecting to the two types of scheduling engines in the IBM Tivoli Workload Scheduler 8.2 suite:
- IBM Tivoli Workload Scheduler for z/OS connector (or OPC connector)
- IBM Tivoli Workload Scheduler connector

Job Scheduling Services communicates with the engine via the connector of the appropriate type. When working with a Tivoli Workload Scheduler for z/OS engine, the JSC communicates via the Tivoli Workload Scheduler for z/OS connector. When working with a Tivoli Workload Scheduler engine, the JSC communicates via the Tivoli Workload Scheduler connector. The two types of connectors function somewhat differently:
- The Tivoli Workload Scheduler for z/OS connector communicates over TCP/IP with the Tivoli Workload Scheduler for z/OS engine running on a mainframe (MVS or z/OS) computer.
- The Tivoli Workload Scheduler connector performs direct reads and writes of the Tivoli Workload Scheduler plan and database files on the same computer where the Tivoli Workload Scheduler connector runs.
A connector instance must be created before the connector can be used. Each type of connector can have multiple instances. A separate instance is required for each engine that will be controlled by the JSC. We now discuss each type of connector in more detail.

Tivoli Workload Scheduler for z/OS connector

Also sometimes called the OPC connector, the Tivoli Workload Scheduler for z/OS connector can be instantiated on any TMR server or managed node. The Tivoli Workload Scheduler for z/OS connector instance communicates via TCP with the Tivoli Workload Scheduler for z/OS TCP/IP server. You might, for example, have two different Tivoli Workload Scheduler for z/OS engines that both must be accessible from the Job Scheduling Console. In this case, you would install one connector instance for working with one Tivoli Workload Scheduler for z/OS engine, and another connector instance for communicating with the other engine.

When a Tivoli Workload Scheduler for z/OS connector instance is created, the IP address (or host name) and TCP port number of the Tivoli Workload Scheduler for z/OS engine's TCP/IP server are specified. The Tivoli Workload Scheduler for z/OS connector uses these two pieces of information to connect to the Tivoli Workload Scheduler for z/OS engine. See Figure 2-25 on page 93.

Tivoli Workload Scheduler connector

The Tivoli Workload Scheduler connector must be instantiated on the host where the Tivoli Workload Scheduler engine is installed so that it can access the plan and database files locally. This means that the Tivoli Management Framework must be installed (either as a TMR server or managed node) on the server where the Tivoli Workload Scheduler engine resides. Usually, this server is the Tivoli Workload Scheduler master domain manager, but it may also be desirable to connect with the JSC to another domain manager or to a fault-tolerant agent. If multiple instances of Tivoli Workload Scheduler are installed on a server, it is possible to have one Tivoli Workload Scheduler connector instance for each Tivoli Workload Scheduler instance on the server.

When a Tivoli Workload Scheduler connector instance is created, the full path to the Tivoli Workload Scheduler home directory associated with that Tivoli Workload Scheduler instance is specified. This is how the Tivoli Workload Scheduler connector knows where to find the Tivoli Workload Scheduler databases and plan. See Figure 2-25 on page 93.

Connector instances

We now give some examples of how connector instances might be installed in the real world.
One connector instance of each type

In Figure 2-25, there are two connector instances: one Tivoli Workload Scheduler for z/OS connector instance and one Tivoli Workload Scheduler connector instance.
- The Tivoli Workload Scheduler for z/OS connector instance is associated with a Tivoli Workload Scheduler for z/OS engine running in a remote sysplex. Communication between the connector instance and the remote scheduling engine is conducted over a TCP connection.
- The Tivoli Workload Scheduler connector instance is associated with a Tivoli Workload Scheduler engine installed on the same AIX server. The Tivoli Workload Scheduler connector instance reads from and writes to the plan (the Symphony file) of the Tivoli Workload Scheduler engine.

Figure 2-25 One ITWS for z/OS connector and one ITWS connector instance (diagram: the domain manager DMB on AIX hosts both connector types under the Framework; the OPC connector reaches the JSC server on the z/OS master over TCP, while the TWS connector reads the local Symphony file)
  Tip: Tivoli Workload Scheduler connector instances must be created on the server where the Tivoli Workload Scheduler engine is installed, because the connector must have local access to the Tivoli Workload Scheduler engine (specifically, to the plan and database files). This limitation obviously does not apply to Tivoli Workload Scheduler for z/OS connector instances, because the Tivoli Workload Scheduler for z/OS connector communicates with the remote Tivoli Workload Scheduler for z/OS engine over TCP/IP.

In this example, the connectors are installed on the domain manager DMB. This domain manager has one connector instance of each type:
- A Tivoli Workload Scheduler connector to monitor the plan file (Symphony) locally on DMB
- A Tivoli Workload Scheduler for z/OS (OPC) connector to work with the databases and current plan on the mainframe

Having the Tivoli Workload Scheduler connector installed on a DM gives the operator the ability to use the JSC to look directly at the Symphony file on that workstation. This is particularly useful in the event that problems arise during the production day. If any discrepancy appears between the state of a job in the Tivoli Workload Scheduler for z/OS current plan and the Symphony file on an FTA, it is useful to be able to look at the Symphony file directly. Another benefit is that retrieval of job logs from an FTA is much faster when the job log is retrieved through the Tivoli Workload Scheduler connector; if the job log is fetched through the Tivoli Workload Scheduler for z/OS engine, it can take much longer.

Connectors on multiple domain managers

With the previous version of IBM Tivoli Workload Scheduler, Version 8.1, it was necessary to have a single primary domain manager that was the parent of all other domain managers. Figure 2-25 on page 93 shows an example of such an arrangement. Tivoli Workload Scheduler 8.2 removes this limitation: with Version 8.2, it is possible to have more than one domain manager directly under the master domain manager. Most end-to-end scheduling networks will have more than one domain manager under the master. For this reason, it is a good idea to install the Tivoli Workload Scheduler connector and the OPC connector on more than one domain manager. The commands used to create connector instances are sketched below.
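For reference, connector instances are created with utilities that ship with the connectors. The following is only a sketch: wtwsconn.sh (for the Tivoli Workload Scheduler connector) and wopcconn (for the z/OS, or OPC, connector) are the real utilities, but the instance name and path shown are invented and the flag spellings should be verified against your installed level; both utilities can also be run without arguments for interactive prompting.

  # Create a TWS connector instance pointing at the local TWS home directory
  wtwsconn.sh -create -n TWS82_DMB -t /opt/tws/twsuser

  # Create an OPC connector instance: run interactively and supply the
  # instance name plus the host name and port of the JSC server on z/OS
  wopcconn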
Figure 2-26 An example with two connector instances of each type (diagram: two first-level domain managers, DMA and DMB, both on AIX, each hosting a TWS connector and an OPC connector under the Framework; both OPC connectors connect to the JSC server on the z/OS master)

  Note: It is a good idea to set up more than one Tivoli Workload Scheduler for z/OS connector instance associated with the engine (as in Figure 2-26). This way, if there is a problem with one of the workstations running the connector, JSC users will still be able to access the Tivoli Workload Scheduler for z/OS engine via the other connector. If JSC access is important to your enterprise, it is vital to set up redundant connector instances like this.

Next, we discuss the connectors in more detail.

The connector programs

These are the programs that run behind the scenes to make the connectors work. Each program and its function is described below.

Programs of the IBM Tivoli Workload Scheduler for z/OS connector

The programs that comprise the Tivoli Workload Scheduler for z/OS connector are located in $BINDIR/OPC (Figure 2-27 on page 96).
Figure 2-27 Programs of the IBM Tivoli Workload Scheduler for z/OS (OPC) connector (diagram: opc_connector and opc_connector2 run with oserv on a TMR server or managed node with JSS on AIX, communicating with the JSC server beside the TWS for z/OS engine, databases, and current plan)

opc_connector
  The main connector program, which contains the implementation of the main connector methods (basically all of the methods that are required to connect to and retrieve data from the Tivoli Workload Scheduler for z/OS engine). It is implemented as a threaded daemon: it is automatically started by the Tivoli Framework at the first request that it should handle, and it stays active until there has been no request for a long time. After it is started, it starts new threads for all JSC requests that require data from a specific Tivoli Workload Scheduler for z/OS engine.

opc_connector2
  A small connector program that contains the implementation of small methods that do not require data from Tivoli Workload Scheduler for z/OS. This program is implemented per method: the Tivoli Framework starts the program when a method implemented by it is called, and the process performs the action for that method and then terminates. This is useful for methods (like the ones called by the JSC when it starts and asks for information from all of the connectors) that can be isolated and for which it is not worthwhile to keep a long-running process active.
Programs of the IBM Tivoli Workload Scheduler connector

The programs that comprise the Tivoli Workload Scheduler connector are located in $BINDIR/Maestro (Figure 2-28).

Figure 2-28 Programs of the IBM Tivoli Workload Scheduler connector (diagram: maestro_engine, maestro_plan, maestro_database, maestro_x_server, and joblog_retriever run beside oserv; they start and stop netman, read the TWS databases and Symphony file, call r3batch on a remote SAP host for pick lists such as jobs, tasks, and variants, and retrieve job logs from a remote scribner)

maestro_engine
  The maestro_engine program performs authentication when a user logs in via the Job Scheduling Console. It also starts and stops the Tivoli Workload Scheduler engine. It is started by the Tivoli Management Framework (specifically, the oserv program) when a user logs in from the JSC. It terminates after 30 minutes of inactivity.

  Note: oserv is the Tivoli service that is used as the object request broker (ORB). This service runs on the Tivoli management region server and on each managed node.

maestro_plan
  The maestro_plan program reads from and writes to the Tivoli Workload Scheduler plan. It also handles switching to a different plan. The program is started when a user accesses the plan. It terminates after 30 minutes of inactivity.
maestro_database
  The maestro_database program reads from and writes to the Tivoli Workload Scheduler database files. It is started when a JSC user accesses a database object or creates a new object definition. It terminates after 30 minutes of inactivity.

job_instance_output
  The job_instance_output program retrieves job standard list files. It is started when a JSC user runs the Browse Job Log operation. It starts up, retrieves the requested stdlist file, and then terminates.

maestro_x_server
  The maestro_x_server program provides an interface to certain types of extended agents, such as the SAP R/3 extended agent (r3batch). It starts up when a command is run in the JSC that requires execution of an agent method. It runs the X-agent method, returns the output, and then terminates. It only runs on workstations that host an r3batch extended agent.

2.5 Job log retrieval in an end-to-end environment

In this section, we cover the detailed steps of job log retrieval in an end-to-end environment using the JSC. The steps differ depending on which connector is used to retrieve the job log and on whether firewalls are involved. We cover all of these scenarios: using the Tivoli Workload Scheduler (distributed) connector (via the domain manager or first-level domain manager), using the Tivoli Workload Scheduler for z/OS (or OPC) connector, and with firewalls in the picture.

2.5.1 Job log retrieval via the Tivoli Workload Scheduler connector

As shown in Figure 2-29 on page 99, the steps behind the scenes in an end-to-end scheduling network when retrieving the job log via the domain manager (using the Tivoli Workload Scheduler distributed connector) are:
1. The operator requests the job log in the Job Scheduling Console.
2. The JSC connects to oserv running on the domain manager.
3. oserv spawns job_instance_output to fetch the job log.
4. job_instance_output communicates over TCP directly with the workstation where the job log exists, bypassing the domain manager.
5. netman on that workstation spawns scribner and hands over the TCP connection with job_instance_output to the new scribner process.
6. scribner retrieves the job log.
7. scribner sends the job log to job_instance_output on the domain manager.
8. job_instance_output relays the job log to oserv.
9. oserv sends the job log to the JSC.

Figure 2-29 Job log retrieval in an end-to-end scheduling network via the domain manager (diagram: the JSC talks to oserv on the first-level domain manager DMZ, whose job_instance_output process fetches the log, stdlist 013780.0559, directly from scribner on FTA4)

2.5.2 Job log retrieval via the OPC connector

As shown in Figure 2-30 on page 101, the following steps take place behind the scenes in an end-to-end scheduling network when retrieving the job log using the OPC connector. The initial request for the job log proceeds as follows:
1. The operator requests the job log in the Job Scheduling Console.
2. The JSC connects to oserv running on the domain manager.
3. oserv tells the OPC connector program to request the job log from the OPC system.
4. opc_connector relays the request to the JSC server task on the mainframe.
5. The JSC server requests the job log from the controller.

The next step depends on whether the job log has already been retrieved. If the job log has already been retrieved, skip to step 17. If the job log has not been retrieved yet, continue with step 6.

Assuming that the log has not been retrieved already:
6. The controller sends the request for the job log to the sender subtask.
7. The controller sends a message to the operator indicating that the job log has been requested. This message is displayed in a dialog box in the JSC. (The message is sent via this path: controller → JSC server → opc_connector → oserv → JSC.)
8. The sender subtask sends the request to the output translator, via the output queue.
9. The output translator thread reads the request and spawns a job log retriever thread to handle it.
10. The job log retriever thread opens a TCP connection directly to the workstation where the job log exists, bypassing the domain manager.
11. netman on that workstation spawns scribner and hands over the TCP connection with the job log retriever to the new scribner process.
12. scribner retrieves the job log.
13. scribner sends the job log to the job log retriever thread.
14. The job log retriever thread passes the job log to the input writer thread.
15. The input writer thread sends the job log to the receiver subtask, via the input queue.
16. The receiver subtask sends the job log to the controller.

When the operator requests the job log a second time, the first five steps are the same as in the initial request (above). This time, because the job log has already been received by the controller:
17. The controller sends the job log to the JSC server.
18. The JSC server sends the information to the OPC connector program running on the domain manager.
19. The IBM Tivoli Workload Scheduler for z/OS connector relays the job log to oserv.
20. oserv relays the job log to the JSC, and the JSC displays the job log in a new window.
Figure 2-30 Job log retrieval in an end-to-end network via ITWS for z/OS, with no FIREWALL=Y configured (diagram: the numbered flow runs from the JSC through oserv and opc_connector on DMZ to the JSC server, controller, sender and receiver subtasks, output translator, input writer, and job log retriever on z/OS, and from there directly to scribner on FTA4; on the first request the JSC displays "Cannot load the Job output. Reason: EQQMA41I The engine has requested to the remote agent the joblog info needed to process the command. Please, retry later." together with EQQM637I A JOBLOG IS NEEDED TO PROCESS THE COMMAND. IT HAS BEEN REQUESTED.)

2.5.3 Job log retrieval when firewalls are involved

When firewalls are involved (that is, FIREWALL=Y is configured in the CPUREC definition of the workstation from which the job log is retrieved), the steps for retrieving the job log in an end-to-end scheduling network are different. These steps are shown in Figure 2-31 on page 102. Note that the firewall is configured to allow only the following traffic: DMY → DMA and DMZ → DMB.
1. The job log is requested from the JSC or from the mainframe ISPF panels.
2. A TCP connection is opened to the first-level domain manager (DMZ in Figure 2-31) for the branch of the network that contains the workstation where the job log exists.
3. netman on that domain manager spawns router and hands over the TCP socket to the new router process.
4. router opens a TCP connection to netman on the parent domain manager of the workstation where the job log exists, because this DM is also behind the firewall.
5. netman on that DM spawns router and hands over the TCP socket to the new router process.
6. router opens a TCP connection to netman on the workstation where the job log exists.
7. netman on that workstation spawns scribner and hands over the TCP socket to the new scribner process.
8. scribner retrieves the job log.
9. scribner on FTA4 sends the job log to router on DMB.
10. router sends the job log to the router program running on DMZ.

Figure 2-31 Job log retrieval in an end-to-end network via ITWS for z/OS, with FIREWALL=Y configured (diagram: the request passes from the z/OS master through router on DMZ, across the firewall to router on DMB, defined with FIREWALL(Y), and on to scribner on FTA4, also FIREWALL(Y); the job log flows back along the same path)

It is important to note that in the previous scenario, you should not configure the domain manager DMB as FIREWALL=N in its CPUREC definition; the correct settings for this scenario are sketched below.
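For reference, the following CPUREC fragments sketch the firewall-related settings that match Figure 2-31: both the domain manager behind the firewall and the agents below it declare FIREWALL(Y), so that connections to them are routed through the domain manager hierarchy. FIREWALL is the real CPUREC keyword; the workstation and domain names are those used in the figure, and all other keywords are omitted here for brevity.

  CPUREC CPUNAME(DMB)         /* domain manager behind the firewall        */
         CPUDOMAIN(DOMAINB)
         CPUTYPE(FTA)
         FIREWALL(Y)          /* reach this workstation via its parent DM  */
  CPUREC CPUNAME(FTA4)        /* agent behind the firewall                 */
         CPUDOMAIN(DOMAINB)
         CPUTYPE(FTA)
         FIREWALL(Y)          /* job log requests are routed through DMB   */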
If DMB is instead configured as FIREWALL=N, you will not be able to retrieve the job log from FTA4, even though FTA4 is configured as FIREWALL=Y. This is shown in Figure 2-32. In this case, the TCP connection request is blocked by the firewall and is never received by netman. The firewall does not allow direct connections from DMZ to FTA4; the only connections from DMZ that are permitted are those that go to DMB. Because DMB has FIREWALL=N, the connection was not routed through DMB: it tried to go straight to FTA4.

Figure 2-32 Wrong configuration: connection blocked (diagram: with DMB defined as FIREWALL=N and FTA4 as FIREWALL=Y, the connection attempt from DMZ tries to cross the firewall directly to FTA4 and is blocked)

2.6 Tivoli Workload Scheduler important files and directory structure

Figure 2-33 on page 104 shows the most important files in the Tivoli Workload Scheduler 8.2 working directory in USS (WRKDIR).
Figure 2-33 The most important files in the Tivoli Workload Scheduler 8.2 working directory in USS (diagram: the WRKDIR tree with localopts and TWSCCLog.properties; the plan files SymX, Symbad, Symold, Symnew, Sinfonia, and Symphony; the event queues Mailbox.msg, Intercom.msg, NetReq.msg, and tomaster.msg, plus the ServerN.msg and FTA.msg files in pobox; the mozart (globalopts, mastsked, jobs) and network (NetConf) databases; Translator.wjl and Translator.chk; and the stdlist logs YYYYMMDD_NETMAN.log, YYYYMMDD_TWSMERGE.log, and YYYYMMDD_E2EMERGE.log; the legend marks files found only on the end-to-end server in the HFS on the mainframe, not on UNIX or Windows workstations)

The descriptions of the files are:

SymX (where X is the name of the user that ran the CP extend or Symphony renew job)
  A temporary file created during a CP extend or Symphony renew. This file is copied to Symnew, which is then copied to Sinfonia and Symphony.

Symbad (bad Symphony)
  Created only if a CP extend or Symphony renew results in an invalid Symphony file.

Symold (old Symphony)
  The Symphony file from before the most recent CP extend or Symphony renew.

Translator.wjl
  Translator event log for requested job logs.

Translator.chk
  Translator checkpoint file.
YYYYMMDD_E2EMERGE.log
  Translator log.

  Note: The Symnew, SymX, and Symbad files are temporary files and normally cannot be seen in the USS working directory.

Figure 2-34 shows the most important files in the Tivoli Workload Scheduler 8.2 binary directory in USS (BINDIR). The options files in the config subdirectory are only reference copies; they are not active configuration files.

Figure 2-34 A list of the most important files in the Tivoli Workload Scheduler 8.2 binary directory in USS (diagram: the BINDIR tree with the bin directory holding the end-to-end programs batchman, mailman, netman, translator, starter, and writer and their load modules EQQBTCHM, EQQMLMN0, EQQNTMN0, EQQTRNSL, EQQSTRTR, and EQQWRTR0; the configure script; a config directory with reference copies of NetConf, globalopts, and localopts; and the catalog, codeset, and zoneinfo directories)

Figure 2-35 on page 106 shows the Tivoli Workload Scheduler 8.2 directory structure on the fault-tolerant agents. Note that the database files (such as jobs and calendars) are not used in the Tivoli Workload Scheduler 8.2 end-to-end scheduling environment.
Figure 2-35 Tivoli Workload Scheduler 8.2 directory structure on the fault-tolerant agents (diagram: under the tws home directory are the Security file, localopts and parameters, the bin, mozart, network, schedlog, stdlist, audit, pobox, and version directories, and the database files cpudata, userdata, mastsked, jobs, calendars, prompts, resources, and globalopts)

2.7 conman commands in the end-to-end environment

In Tivoli Workload Scheduler, you can use the conman command line interface to manage the distributed production. A subset of these commands can also be used in end-to-end scheduling. In general, command options that could affect the information contained in the Symphony file are not allowed. Disallowed conman command options include adding and removing dependencies, submitting and cancelling jobs, and so forth.

Figure 2-36 and Figure 2-37 on page 107 list the conman commands that are available on end-to-end fault-tolerant workstations in a Tivoli Workload Scheduler 8.2 end-to-end scheduling network. Note that in the Type field, M stands for domain managers, F for fault-tolerant agents, and A for standard agents.

  Note: The composer command line interface, which is used to manage database objects in a distributed Tivoli Workload Scheduler environment, is not used in end-to-end scheduling, because in end-to-end scheduling the databases are located on the Tivoli Workload Scheduler for z/OS master.
Figure 2-36 conman commands available in the end-to-end environment (first part of the command table)

Figure 2-37 conman commands available in the end-to-end environment (second part of the command table)
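To give a feel for the display subset that remains available on an end-to-end fault-tolerant workstation, the following session is a sketch. showcpus, showschedules, and showjobs (abbreviated sc, ss, and sj) are standard conman commands; the workstation name is invented, and the exact wildcard selection syntax should be checked against the conman reference for your level.

  conman "sc @!@"        # show all workstations in all domains, with link status
  conman "ss @"          # show the job streams in the local Symphony file
  conman "sj FTA1#@.@"   # show all jobs on workstation FTA1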
Chapter 3. Planning end-to-end scheduling with Tivoli Workload Scheduler 8.2

In this chapter, we provide details on how to plan for end-to-end scheduling with Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler, and the Job Scheduling Console. The chapter covers two areas:
1. Before the installation is performed
   Here we describe what to consider before performing the installation and how to order the product. This includes the following sections:
   – "Different ways to do end-to-end scheduling" on page 111
   – "The rationale behind end-to-end scheduling" on page 112
   – "Before you start the installation" on page 113
2. Planning for end-to-end scheduling
   Here we describe relevant planning issues that should be considered and handled before the actual installation and customization of Tivoli Workload
Scheduler for z/OS, Tivoli Workload Scheduler, and the Job Scheduling Console is performed. This includes the following sections:
   – "Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS" on page 116
   – "Planning for end-to-end scheduling with Tivoli Workload Scheduler" on page 139
   – "Planning for the Job Scheduling Console" on page 149
   – "Planning for migration or upgrade from previous versions" on page 155
   – "Planning for maintenance or upgrades" on page 156
3.1 Different ways to do end-to-end scheduling

The ability to connect mainframe and distributed platforms into an integrated scheduling network is not new. Several years ago, IBM offered two methods:
- By use of Tivoli OPC tracker agents
  With tracker agents, Tivoli Workload Scheduler for z/OS can submit and monitor jobs on remote tracker agents. The tracker agent software had limited support for diverse operating systems. Also, tracker agents were not fault-tolerant, so if the network went down, tracker agents would not continue to run. Furthermore, the scalability of tracker agents was poor: it simply was not possible to get a stable environment for large distributed environments with several hundred tracker agents.
- By use of Tivoli Workload Scheduler MVS extended agents
  Using extended agents, Tivoli Workload Scheduler can submit and monitor mainframe jobs in (for example) OPC or JES. The extended agents had limited functionality and were not fault tolerant. This approach required a Tivoli Workload Scheduler master and was not ideal for large, established MVS workloads. Extended agents, though, can be a perfectly viable solution for a large Tivoli Workload Scheduler network that needs to run a few jobs in a z/OS mainframe environment.

From Tivoli Workload Scheduler 8.1, it was possible to integrate Tivoli Workload Scheduler agents with Tivoli Workload Scheduler for z/OS, so that Tivoli Workload Scheduler for z/OS was the master doing scheduling and tracking for jobs in the mainframe environment as well as in the distributed environment. The end-to-end scheduling feature of Tivoli Workload Scheduler 8.1 was the first step toward a complete unified system. The end-to-end solution has been optimized in Tivoli Workload Scheduler 8.2, where the integration between the two products, Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS, is even tighter. Furthermore, some of the functions that were missing in the first Tivoli Workload Scheduler 8.1 solution have been added in the Version 8.2 end-to-end solution.
3.2 The rationale behind end-to-end scheduling

As described in Section 2.3.6, "Benefits of end-to-end scheduling" on page 86, you can gain several benefits by using Tivoli Workload Scheduler for z/OS end-to-end scheduling. To review:
- You can use fault-tolerant agents, so that distributed job scheduling is less dependent on network connection problems and poor network performance.
- You can schedule workload on additional operating systems, such as Linux and Windows 2000.
- You have seamless synchronization of work in mainframe and distributed environments.
- Making dependencies between mainframe jobs and jobs in distributed environments is straightforward, using the same terminology and known interfaces.
- Tivoli Workload Scheduler for z/OS can use a multi-tier architecture with Tivoli Workload Scheduler domain managers.
- You get extended planning capabilities, such as the use of long-term plans, trial plans, and extended plans, now applied also to the distributed Tivoli Workload Scheduler network. Extended plans mean that the current plan can span more than 24 hours.
- The powerful run-cycle and calendar functions in Tivoli Workload Scheduler for z/OS can be used for distributed Tivoli Workload Scheduler jobs.

Besides these benefits, using Tivoli Workload Scheduler for z/OS end-to-end scheduling also makes it possible to:
- Reuse or reinforce the procedures and processes that are established for the Tivoli Workload Scheduler for z/OS mainframe environment. Operators, planners, and administrators who are trained and experienced in managing Tivoli Workload Scheduler for z/OS workload can reuse their skills and knowledge for the distributed jobs managed by Tivoli Workload Scheduler for z/OS end-to-end scheduling.
- Extend the disciplines established to manage and operate workload scheduling in mainframe environments to the distributed environment.
- Extend contingency procedures established for the mainframe environment to the distributed environment.

Basically, when we look at end-to-end scheduling in this book, we consider scheduling in the enterprise (mainframe and distributed) where the Tivoli Workload Scheduler for z/OS engine is the master.
3.3 Before you start the installation

The short version of this story is: "Get the right people on board." End-to-end scheduling with Tivoli Workload Scheduler is not complicated to implement, but it is important to understand that end-to-end scheduling can involve many different platforms and operating systems, will use IP communication, can work across firewalls, and uses SSL communication.

As described earlier in this book, end-to-end scheduling involves two products: Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS. These products must be installed and configured to work together for successful end-to-end scheduling. Tivoli Workload Scheduler for z/OS is installed in the z/OS mainframe environment, and Tivoli Workload Scheduler is installed on the distributed platforms where job scheduling is going to be performed.

We suggest that you establish an end-to-end scheduling team or project group that includes people who are skilled in the different platforms and operating systems. Ensure that you have skilled people who know how IP communication, firewalls, and SSL work in the different environments and can configure these components to work in them. The team will be responsible for the planning, installation, and operation of the end-to-end scheduling environment, must be able to cooperate across department boundaries, and must understand the entire scheduling environment, both mainframe and distributed.

Tivoli Workload Scheduler for z/OS administrators should be familiar with the domain architecture and the meaning of fault tolerance in order to understand that, for example, the script is not necessarily located in the job repository database. This is essential when it comes to reflecting the end-to-end network topology in Tivoli Workload Scheduler for z/OS. On the other hand, people who are in charge of Tivoli Workload Scheduler need to know the Tivoli Workload Scheduler for z/OS architecture to understand the new planning mechanism and Symphony file creation.

Another important thing to plan for is education or skills transfer for the planners and operators who will have the daily responsibility for end-to-end scheduling. If your planners and operators are knowledgeable, they will be able to work more independently with the products, and you will realize better quality. We recommend that all involved people (mainframe and distributed scheduling) become familiar with both scheduling environments, as described throughout this book.
Because end-to-end scheduling can involve different platforms and operating systems with different interfaces (TSO/ISPF on the mainframe, a command prompt on UNIX, and so forth), we also suggest planning the deployment of the Job Scheduling Console. The JSC provides a unified and platform-independent interface to job scheduling, so users do not need detailed skills in the interfaces particular to each operating system.

3.3.1 How to order the Tivoli Workload Scheduler software

The Tivoli Workload Scheduler solution consists of three products:
- IBM Tivoli Workload Scheduler for z/OS (formerly called Tivoli Operations Planning and Control, or OPC)
  Focused on mainframe-based scheduling.
- Tivoli Workload Scheduler (formerly called Maestro)
  Focused on open systems-based scheduling; can be used with the mainframe-based products for a comprehensive solution across both distributed and mainframe environments.
- Tivoli Workload Scheduler for Applications
  Enables direct, easy integration between Tivoli Workload Scheduler and enterprise applications such as Oracle E-Business Suite, PeopleSoft, and SAP R/3.

Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS can be ordered independently or together in one program suite. The JSC graphical user interface is delivered together with Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler. This is also the case for the connector software that makes it possible for the JSC to communicate with either Tivoli Workload Scheduler for z/OS or Tivoli Workload Scheduler. Table 3-1 shows each product and its included components.

Table 3-1 Product and components

  Component                                  ITWS for z/OS 8.2   TWS 8.2   TWS 8.2 for Applications
  z/OS engine (OPC Controller and Tracker)   X
  Tracker agent enabler                      X
  End-to-end enabler                         X
  Tivoli Workload Scheduler distributed
  (Maestro)                                                      X
  Tivoli Workload Scheduler Connector                            X
  IBM Tivoli Workload Scheduler for z/OS
  Connector                                  X
  Job Scheduling Console                     X                   X
  IBM Tivoli Workload Scheduler for
  Applications for z/OS (Tivoli Workload
  Scheduler extended agent for z/OS)                                       X

Note that the end-to-end enabler component (FMID JWSZ203) is used to populate the base binary directory in an HFS during System Modification Program/Extended (SMP/E) installation.

The tracker agent enabler component (FMID JWSZ2C0) makes it possible for the Tivoli Workload Scheduler for z/OS controller to communicate with old Tivoli OPC distributed tracker agents.

  Attention: The Tivoli OPC distributed tracker agents went out of support on October 31, 2003.

To be able to use the end-to-end scheduling solution, you should order both products: IBM Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler. In the following section, we list the ordering details.

Contact your IBM representative if you have any problems ordering the products or are missing parts of the delivery or components.

Software ordering details

Table 3-2 on page 116 shows ordering details for Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler.
Table 3-2 Ordering details

  Component            ITWS for z/OS 8.2         ITWS for z/OS Host Edition    TWS 8.2
  z/OS Engine          Yes, optional             Yes
  z/OS Agent           Yes, optional             Yes
  End-to-end Enabler   Yes, optional             Yes
  Distributed FTA                                                              Yes
  JSC                  Yes                       Yes                           Yes
  Delivery             Native tape, ServicePac®  ServicePac or CBPDO           CD-ROM for all
                       or CBPDO                                                distributed platforms
  Comments             The 3 z/OS components     All 3 z/OS components are
                       can be licensed and       included when the customer
                       delivered individually    buys this edition
  Program number       5697-WSZ                  5698-WSH                      5698-A17

3.3.2 Where to find more information for planning

Besides this redbook, you can find more information in IBM Tivoli Workload Scheduling Suite General Information Version 8.2, SC32-1256. This manual is a good place to start to learn more about Tivoli Workload Scheduler, Tivoli Workload Scheduler for z/OS, the JSC, and end-to-end scheduling.

3.4 Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS

Before installing Tivoli Workload Scheduler for z/OS and activating the end-to-end scheduling feature, there are several areas to consider and plan for. These areas are described in the following sections.
3.4.1 Tivoli Workload Scheduler for z/OS documentation

Tivoli Workload Scheduler for z/OS documentation is not shipped in hardcopy form with IBM Tivoli Workload Scheduler for z/OS 8.2. The books are available in PDF and IBM softcopy format and are delivered on a CD-ROM with the Tivoli Workload Scheduler for z/OS product. The CD-ROM has part number SK2T-6951 and can also be ordered separately.

Several of the Tivoli Workload Scheduler for z/OS books have been updated or revised starting in April 2004. This means that the books delivered with the base product are outdated, and we strongly suggest that you confirm that you have the newest versions of the books before starting the installation. This is true even for Tivoli Workload Scheduler for z/OS 8.2.

  Note: The publications are available for download in PDF format at:
  http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html
  Look for books marked with "Revised April 2004," as they have been updated with changes introduced by service (APARs and PTFs) for Tivoli Workload Scheduler for z/OS produced after the base version of the product was released in June 2003.

We recommend that you have access to, and possibly print, the newest versions of the Tivoli Workload Scheduler for z/OS publications before starting the installation.

Tivoli OPC tracker agents

Although the distributed Tivoli OPC tracker agents are not supported and cannot be ordered anymore, Tivoli Workload Scheduler for z/OS 8.2 can still communicate with these tracker agents, because the agent enabler software (FMID JWSZ2C0) is delivered with Version 8.2. However, the Version 8.2 manuals do not describe the related TCP or APPC ROUTOPTS initialization statement parameters. If you are going to use Tivoli OPC tracker agents with Version 8.2, save the related Tivoli OPC publications so that you can use them for reference when necessary.

3.4.2 Service updates (PSP bucket, APARs, and PTFs)

Before starting the installation, be sure to check the service level of the Tivoli Workload Scheduler for z/OS product that you have received from IBM, and make sure that you get all available service so that it can be installed with Tivoli Workload Scheduler for z/OS.
Because the period from the time that installation of Tivoli Workload Scheduler for z/OS is started until it is activated in your production environment can be several months, we suggest that the installed Tivoli Workload Scheduler for z/OS be updated with all service that is available at installation time.

Preventive service planning (PSP)
The Program Directory that is provided with your Tivoli Workload Scheduler for z/OS distribution tape is an important document that may include technical information that is more recent than the information provided in this section. It also describes the program temporary fix (PTF) level of the Tivoli Workload Scheduler for z/OS licensed program when it was shipped from IBM, and contains instructions for unloading the software as well as information about additional maintenance for the level of the distribution tape that you received for the z/OS installation.

Before you start installing Tivoli Workload Scheduler for z/OS, check the preventive service planning bucket for recommendations that may have been added by the service organizations after your Program Directory was produced. The PSP includes a recommended service section that includes high-impact or pervasive (HIPER) APARs. Ensure that the corresponding PTFs are installed before you start to customize a Tivoli Workload Scheduler for z/OS subsystem.

Table 3-3 gives the PSP information for Tivoli Workload Scheduler for z/OS to be used when ordering the PSP bucket.

Table 3-3   PSP upgrade and subset ID information

 Upgrade      Subset     Description
 TWSZOS820    HWSZ200    Agent for z/OS
              JWSZ202    Engine (Controller)
              JWSZ2A4    Engine English NLS
              JWSZ201    TCP/IP communication
              JWSZ203    End-to-end enabler
              JWSZ12C0   Agent enabler

 Important: If you are running a previous version of IBM Tivoli Workload Scheduler for z/OS or OPC on a system where the JES2 EXIT2 was assembled using the Tivoli Workload Scheduler for z/OS 8.2 macros, apply the following PTFs to avoid job tracking problems due to missing A1 and A3P records:
 - Tivoli OPC 2.3.0: Apply UQ66036 and UQ68474.
 - IBM Tivoli Workload Scheduler for z/OS 8.1: Apply UQ67877.
Important service for Tivoli Workload Scheduler for z/OS
Besides the APARs and PTFs that are listed in the PSP bucket, we suggest that you plan to apply all available service for Tivoli Workload Scheduler for z/OS in the installation phase.

At the time of writing this book, we found several important APARs for Tivoli Workload Scheduler for z/OS end-to-end scheduling and have listed some of them in Table 3-4. The table also shows whether the corresponding PTFs were available when this book was written (the numbers in the PTF number column).

 Note: The APAR list in Table 3-4 is not complete; it gives some examples of important service to apply during the installation. As mentioned before, we strongly suggest that you apply all available service during your installation of Tivoli Workload Scheduler for z/OS.

Table 3-4   Important service

 APAR number   PTF number     Description
 PQ76474       UQ81495        Checks the number of dependencies for an FTW job, and adds
               UQ81498        two new messages, EQQX508E and EQQ3127E, to indicate that
                              an FTW job cannot be added to the AD or CP (Symphony file)
                              because the job has more than 40 dependencies.
 PQ77014       UQ81476        During daily planning or a Symphony renew, the batch job
               UQ81477        ends with RC=0 even though warning messages have been
                              issued for the Symphony file.
 PQ77535       Not available  Important documentation with additional information for
               (Doc. APAR)    creating and maintaining the HFS files needed for Tivoli
                              Workload Scheduler end-to-end processing.
 PQ77970       UQ82583        Makes it possible to customize the job name in the Symphony
               UQ82584        file. Before the fix, the job name was always generated
               UQ82585        using the operation number and occurrence name; now it can
               UQ82587        be customized. The EQQPDFXJ member in the SEQQMISC library
               UQ82579        holds a detailed description (see Chapter 4, "Installing
               UQ82601        IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling" on
               UQ82602        page 157 for more information).
 APAR number   PTF number     Description
 PQ78043       UQ81567        64M is recommended as the minimum region size for an
                              end-to-end server; however, the sample server JCL (member
                              EQQSER in SEQQSAMP) still has REGION=6M. This should be
                              changed to REGION=64M.
 PQ78097       Not available  Better documentation of the WSSTAT MANAGES keyword.
               (Doc. APAR)
 PQ78356       UQ82697        When a job stream is added to the Symphony file through
                              MCP, it is always added with the GMT time; the local time
                              zone set for the FTA is completely ignored in this case.
 PQ78891       UQ82790        Introduces new messages in the server message log when USS
               UQ82791        processes end abnormally or unexpectedly. Important for
               UQ82784        monitoring of the server and USS processes. Also updates
               UQ82793        server-related messages in the controller message log to
               UQ82794        be more precise.
 PQ79126       Not available  The documentation does not mention zFS files. The Tivoli
               (Doc. APAR)    Workload Scheduler end-to-end server fully supports and can
                              access UNIX System Services (USS) in a Hierarchical File
                              System (HFS) or in a zSeries® File System (zFS) cluster.
 PQ79875       Not available  If you have any fault-tolerant workstations on supported
               (Doc. APAR)    Windows platforms and you want to run jobs on these
                              workstations, you must create a member containing all users
                              and passwords for Windows users who need to schedule jobs
                              to run on Windows workstations. The Windows users are
                              described using USRREC initialization statements.
 PQ80229       Not available  In the IBM Tivoli Workload Scheduler for z/OS Installation
               (Doc. APAR)    Guide, the description of the end-to-end input and output
                              event data sets (EQQTWSIN and EQQTWSOU) is misleading
                              because it states that the LRECL for these files can be
                              anywhere from 120 to 32000 bytes. In reality, the LRECL
                              must be 120. Defining a larger LRECL wastes disk space,
                              which can lead to problems if the EQQTWSIN and EQQTWSOU
                              files fill up completely. Also see the text in APAR
                              PQ77970.
 APAR number   PTF number     Description
 PQ80341       UQ88867        End-to-end: Missing synchronization process between the
               UQ88868        event manager and receiver tasks at controller startup.
               UQ88869        Several new messages are introduced by this APAR
                              (documented in the EQQPDFEM member in the SEQQMISC data
                              set).
 PQ81405       UQ82765        Checks the number of dependencies for an FTW job, and adds
               UQ82766        a new message, EQQG016E, to indicate that an FTW job cannot
                              be added to the CP because the job has more than 40
                              dependencies.
 PQ84233       UQ87341        Implements support for the Tivoli Workload Scheduler for
               UQ87342        z/OS commands NP (NOP), UN (UN-NOP), and EX (Execute), and
               UQ87343        for the "submit" automatic option, for operations defined
               UQ87344        on fault-tolerant workstations. Also introduces a new
               UQ87345        TOPOLOGY NOPTIMEDEPENDENCY(YES/NO) parameter.
               UQ87377
 PQ87120       UQ89138        Porting of Tivoli Workload Scheduler 8.2 FixPack 04 to the
                              end-to-end feature on z/OS. With this APAR, the Tivoli
                              Workload Scheduler for z/OS 8.2 end-to-end code has been
                              aligned with the Tivoli Workload Scheduler distributed code
                              at FixPack 04 level. This APAR also introduces the backup
                              domain manager fault-tolerant feature in the end-to-end
                              environment.
 PQ87110       UQ90485        The Tivoli Workload Scheduler end-to-end server is not able
               UQ90488        to get a mutex lock if the mount point of a shared HFS is
                              moved without stopping the server. It also contains a very
                              important documentation update that describes how to
                              configure the end-to-end server work directory correctly in
                              a sysplex environment with hot standby controllers.

 Note: To learn about updates to the Tivoli Workload Scheduler for z/OS books and the APARs and PTFs that pre-date April 2004, consult the "April 2004 Revised" versions of the books, as mentioned in 3.4.1, "Tivoli Workload Scheduler for z/OS documentation" on page 117.
Special documentation updates introduced by service
Some APARs were fixed on Tivoli Workload Scheduler for z/OS 8.1 while the general availability code for Tivoli Workload Scheduler for z/OS 8.2 was frozen for shipment. All of these fixes or PTFs are sysrouted through level-set APAR PQ74854 (also described as a HIPER cumulative APAR). This cumulative APAR is meant to align the Version 8.2 code with the maintenance level that was reached during the time the GA code was frozen.

With APAR PQ74854, the documentation has been updated and is available in a PDF file. To access the changes described in this PDF file:
1. Apply the PTF for APAR PQ74854.
2. Transfer the EQQPDF82 member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.
3. Read the document using Adobe (Acrobat) Reader.

APAR PQ77970 (see Table 3-4 on page 119) makes it possible to customize how the job name in the Symphony file is generated. The PTF for APAR PQ77970 installs a member, EQQPDFXJ, in the SEQQMISC library. This member holds a detailed description of how the job name in the Symphony file can be customized and how to specify the related parameters. To read the documentation in the EQQPDFXJ member:
1. Apply the PTF for APAR PQ77970.
2. Transfer the EQQPDFXJ member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.
3. Read the document using Adobe Reader.

APAR PQ84233 (see Table 3-4 on page 119) implements support for Tivoli Workload Scheduler for z/OS commands for fault-tolerant agents and introduces a new TOPOLOGY NOPTIMEDEPENDENCY(YES/NO) parameter. The PTF for APAR PQ84233 installs a member, EQQPDFNP, in the SEQQMISC library. This member holds a detailed description of the supported commands and the NOPTIMEDEPENDENCY parameter. To read the documentation in the EQQPDFNP member:
1. Apply the PTF for APAR PQ84233.
2. Transfer the EQQPDFNP member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.
3. Read the document using Adobe Reader.
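The transfer step is the same for all of these members. As an illustration, the following sketch shows what the transfer might look like with a command-line FTP client; the host name and the high-level qualifiers of the SEQQMISC data set are assumptions that you must replace with your own values:

   ftp mvshost.example.com                         (hypothetical host name)
   User: myuser
   Password: ********
   ftp> binary                                     (the transfer type must be binary)
   ftp> get 'TWS.V8R2M0.SEQQMISC(EQQPDF82)' eqqpdf82.pdf
   ftp> quit

The same commands, with only the member name changed, retrieve EQQPDFXJ, EQQPDFNP, and the members described below.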
 Note: The documentation updates that are described in the EQQPDF82, EQQPDFXJ, and EQQPDFNP members in SEQQMISC are included in the "April 2004 Revised" versions of the Tivoli Workload Scheduler for z/OS books, mentioned in 3.4.1, "Tivoli Workload Scheduler for z/OS documentation" on page 117.

APAR PQ80341 (see Table 3-4 on page 119) improves the synchronization process between the controller event manager and receiver tasks. The APAR also introduces several new or updated messages. The PTF for APAR PQ80341 installs a member, EQQPDFEM, in the SEQQMISC library. This member holds a detailed description of the new or updated messages related to the improved synchronization process. To read the documentation in the EQQPDFEM member:
1. Apply the PTF for APAR PQ80341.
2. Transfer the EQQPDFEM member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.
3. Read the document using Adobe Reader.

APAR PQ87110 (see Table 3-4 on page 119) contains important documentation updates with suggestions on how to define the end-to-end server work directory in a sysplex shared HFS environment, and a procedure to be followed before starting a scheduled shutdown for a system in the sysplex. The PTF for APAR PQ87110 installs a member, EQQPDFSY, in the SEQQMISC library. This member holds the documentation updates. To read the documentation in the EQQPDFSY member:
1. Apply the PTF for APAR PQ87110.
2. Transfer the EQQPDFSY member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.
3. Read the document using Adobe Reader.

3.4.3 Tivoli Workload Scheduler for z/OS started tasks for end-to-end scheduling
As described in the architecture chapter, end-to-end scheduling involves at least two started tasks: the Tivoli Workload Scheduler for z/OS controller and the Tivoli Workload Scheduler for z/OS server.
The server started task handles all communication with the distributed fault-tolerant agents and handles updates (for example, to the Symphony file). The server task must always run on the same z/OS system as the active controller task.

In Tivoli Workload Scheduler for z/OS 8.2, it is possible to configure one server started task that handles end-to-end scheduling, communication with JSC users, and APPC communication. Even though this is possible, we strongly suggest using a dedicated server started task for end-to-end scheduling. Using dedicated started tasks with dedicated responsibilities makes it possible, for example, to restart the JSC server started task without any impact on the scheduling in the end-to-end server started task.

Although it is possible to run end-to-end scheduling with the Tivoli Workload Scheduler for z/OS ISPF interface, we suggest that you plan for use of the Job Scheduling Console (JSC) graphical user interface. Users with a background in the distributed world will find the JSC much easier to use than learning a mainframe interface such as TSO/ISPF to manage their daily work. Therefore, we suggest planning for implementation of a server started task that can handle the communication with the JSC Connector (JSC users).
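As an illustration of what the parameter members for two dedicated server started tasks might contain, consider the following sketch. The SERVOPTS statement and its PROTOCOL, TPLGYPRM, JSCHOSTNAME, PORTNUMBER, and USERMAP parameters exist in Tivoli Workload Scheduler for z/OS 8.2, but the subsystem name, member names, host name, and port number shown here are assumptions for illustration only:

   /* End-to-end server started task: hypothetical parameter member */
   SERVOPTS SUBSYS(TWSC)            /* controller subsystem name    */
            PROTOCOL(E2E)           /* this server does end-to-end  */
            TPLGYPRM(TPLGPARM)      /* member with TOPOLOGY stmt    */

   /* Dedicated JSC server started task                             */
   SERVOPTS SUBSYS(TWSC)
            PROTOCOL(JSC)           /* this server serves JSC users */
            JSCHOSTNAME(TWSJSC)     /* hypothetical host name       */
            PORTNUMBER(425)         /* hypothetical port number     */
            USERMAP(USERS)          /* member mapping JSC users     */

With such a split, the JSC server can be stopped and restarted without touching the end-to-end server.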
3.4.4 Hierarchical File System (HFS) cluster

 Terminology note: An HFS data set is a z/OS data set that contains a POSIX-compliant hierarchical file system, which is a collection of files and directories organized in a hierarchical structure that can be accessed using z/OS UNIX System Services (USS).

Tivoli Workload Scheduler code has been ported to UNIX System Services (USS) on z/OS. When planning for end-to-end scheduling with Tivoli Workload Scheduler for z/OS, keep in mind that the server starts multiple tasks and processes using USS in z/OS. The end-to-end server accesses the code delivered from IBM and creates several work files in Hierarchical File System clusters. Because of this, the z/OS USS function must be active in the z/OS environment before you can install and use the end-to-end scheduling feature in Tivoli Workload Scheduler for z/OS.

The Tivoli Workload Scheduler code is installed with SMP/E in an HFS cluster in USS. It can be installed in an existing HFS cluster or in a dedicated HFS cluster, depending on how z/OS USS is configured.

Besides the installation binaries delivered from IBM, the Tivoli Workload Scheduler for z/OS server also needs several work files in a USS HFS cluster. We suggest that you use a dedicated HFS cluster for the server work files. If you are planning to install several Tivoli Workload Scheduler for z/OS end-to-end scheduling environments, you should allocate one USS HFS cluster for work files per end-to-end scheduling environment.

Furthermore, if the z/OS environment is configured as a sysplex, where the Tivoli Workload Scheduler for z/OS server can be active on different z/OS systems within the sysplex, you should make sure that the USS HFS clusters with the Tivoli Workload Scheduler for z/OS binaries and work files can be accessed from all of the sysplex's systems. Starting with OS/390 Version 2 Release 9, it is possible to mount USS HFS clusters either in read-only mode or in read/write mode on all systems in a sysplex.

The USS HFS cluster with the Tivoli Workload Scheduler for z/OS binaries should then be mounted in read-only mode on all systems, and the USS HFS cluster with the Tivoli Workload Scheduler for z/OS work files should be mounted in read/write mode on all systems in the sysplex.

Figure 3-1 on page 126 illustrates the use of dedicated HFS clusters for two Tivoli Workload Scheduler for z/OS environments: test and production.
The figure shows the production environment with server work files in HFS data set OMVS.TWSCPROD.HFS, mounted read/write on all systems at mount point /TWS/TWSCPROD (referenced by WRKDIR(/TWS/TWSCPROD)), and installation binaries in HFS data set OMVS.PROD.TWS820.HFS, mounted read-only on all systems at mount point /TWS/PROD/bin820 (referenced by BINDIR('/TWS/PROD/bin820')). The test environment is set up the same way: work files in OMVS.TWSCTEST.HFS, mounted read/write at /TWS/TWSCTEST (WRKDIR(/TWS/TWSCTEST)), and binaries in OMVS.TEST.TWS820.HFS, mounted read-only at /TWS/TEST/bin820 (BINDIR('/TWS/TEST/bin820')).

Figure 3-1   Dedicated HFS clusters for Tivoli Workload Scheduler for z/OS server test and production environments

 Note: IBM Tivoli Workload Scheduler for z/OS 8.2 supports zFS (z/OS File System) clusters as well as HFS clusters (APAR PQ79126). Because zFS clusters offer significant performance improvements over HFS, we suggest considering the use of zFS clusters instead of HFS clusters. For this redbook, we used HFS clusters in our implementation.

We recommend that you create a separate HFS cluster for the working directory, mounted in read/write mode. This is because the working directory is application specific and contains application-related data. It also makes your backup easier. The size of the cluster depends on the size of the Symphony file and how long you want to keep the stdlist files. We recommend starting with at least 2 GB of space.

We also recommend that you plan to have separate HFS clusters for the binaries if you have more than one Tivoli Workload Scheduler end-to-end scheduling environment, as shown in Figure 3-1. This makes it possible to apply maintenance in the test environment and verify it there before it is propagated to the production environment.
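To make the mount setup concrete, the following is a minimal sketch of the corresponding BPXPRMxx-style MOUNT statements for the production environment. The statement syntax is standard z/OS USS; the data set names and mount points are simply taken from Figure 3-1 and must be adapted to your installation:

   /* Installation binaries: read-only on all systems              */
   MOUNT FILESYSTEM('OMVS.PROD.TWS820.HFS')
         MOUNTPOINT('/TWS/PROD/bin820')
         TYPE(HFS) MODE(READ)

   /* Server work files: read/write on all systems                 */
   MOUNT FILESYSTEM('OMVS.TWSCPROD.HFS')
         MOUNTPOINT('/TWS/TWSCPROD')
         TYPE(HFS) MODE(RDWR)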
As mentioned earlier, OS/390 2.9 and higher support the use of shared HFS clusters. Some directories (usually /var, /dev, /etc, and /tmp) are system specific, meaning that those paths are logical links pointing to different directories. When you specify the work directory, make sure that it is not on a system-specific file system. If it is, make sure that the same directories on the file systems of the other systems point to the same directory. For example, you can use /u/TWS, which is not system-specific. Or you can use /var/TWS on system SYS1 and create a symbolic link /SYS2/var/TWS to /SYS1/var/TWS so that /var/TWS points to the same directory on both SYS1 and SYS2.

If you are using OS/390 versions earlier than Version 2.9 in a sysplex, the HFS clusters with the work files and binaries should be mounted manually on the system where the server is active. If the server is going to be moved to another system in the sysplex, the HFS clusters should be unmounted from the first system and mounted on the system where the server is going to be active. On the new system, the HFS cluster with work files should be mounted in read/write mode, and the HFS cluster with the binaries should be mounted in read-only mode. The file system can be mounted in read/write mode on only one system at a time.

 Note: Also check the documentation updates in APAR PQ87110 (see Table 3-4 on page 119) if you are planning to use a shared HFS with the work directory for the end-to-end server. The PTFs for this APAR contain important documentation updates with suggestions on how to define the end-to-end server work directory in a sysplex shared HFS environment and a procedure to be followed before starting a scheduled shutdown for a system in the sysplex.

Migrating from IBM Tivoli Workload Scheduler for z/OS 8.1
If you are migrating from Tivoli Workload Scheduler for z/OS 8.1 to Tivoli Workload Scheduler for z/OS 8.2 and you are using end-to-end scheduling in the 8.1 environment, we suggest that you allocate new dedicated USS HFS clusters for the Tivoli Workload Scheduler for z/OS 8.2 work files and installation binaries.

3.4.5 Data sets related to end-to-end scheduling
Tivoli Workload Scheduler for z/OS has several data sets that are dedicated to end-to-end scheduling:
- End-to-end input and output data sets (EQQTWSIN and EQQTWSOU). These data sets are used to send events from the controller to the server and from the server to the controller. They must be defined in the controller and end-to-end server started task procedures.
- Current plan backup copy data set used to create the Symphony file (EQQSCPDS). This is a VSAM data set used as a CP backup copy for the production of the
Symphony file in USS. It must be defined in the controller started task procedure and in the current plan extend job, the current plan replan job, and the Symphony renew job.
- End-to-end script library (EQQSCLIB). This partitioned data set holds commands or job definitions for fault-tolerant agent jobs. It must be defined in the controller started task procedure and in the current plan extend job, the current plan replan job, and the Symphony renew job.
- End-to-end centralized script data set (EQQTWSCS). This partitioned data set holds scripts for fault-tolerant agent jobs while they are sent to the agent. It must be defined in the controller and end-to-end server started task procedures.

Plan for the allocation of these data sets, and remember to specify the data sets in the controller and end-to-end server started task procedures as required, as well as in the current plan extend job, the replan job, and the Symphony renew job as required.

In the planning phase you should also consider whether your installation will use centralized scripts, non-centralized (local) scripts, or a combination of centralized and non-centralized scripts:

- Non-centralized (local) scripts
  - In Tivoli Workload Scheduler for z/OS 8.2, it is possible to have job definitions in the end-to-end script library and have the script (the job) executed on the fault-tolerant agent. This is referred to as a non-centralized script. (A minimal sketch of such a definition follows this list.)
  - Using non-centralized scripts makes it possible for the fault-tolerant agent to run local jobs without any connection to the controller on the mainframe.
  - On the other hand, if a non-centralized script must be updated, this must be done locally on the agent.
  - Locally placed scripts can be consolidated in a central repository placed on the mainframe or on a fault-tolerant agent; then, on a daily basis, changed or updated scripts can be distributed to the FTAs where they will be executed. By doing this, you can keep all scripts in a common repository. This facilitates easy modification of scripts, because you only have to change the scripts in one place. We recommend this option because it gives most of the benefits of using centralized scripts without sacrificing fault tolerance.
- Centralized scripts
  - Another possibility in Tivoli Workload Scheduler for z/OS 8.2 is to keep the scripts on the mainframe. The scripts are then defined in the controller job library and, via the end-to-end server, the controller sends the script to the fault-tolerant agent when jobs are ready to run.
  - This makes it possible to centrally manage all scripts.
  - However, it compromises the fault tolerance of the end-to-end scheduling network, because the controller must have a connection to the fault-tolerant agent to be able to send the script.
  - The centralized script function makes migration from Tivoli OPC tracker agents with centralized scripts to end-to-end scheduling much simpler.
- Combination of non-centralized and centralized scripts
  - The third possibility is to use a combination of non-centralized and centralized scripts.
  - Here the decision can be made based on such factors as:
    - Where a particular FTA is placed in the network
    - How stable the network connection to the FTA is
    - How fast the connection to the FTA is
    - Special requirements for different departments to have dedicated access to their scripts on their local FTA
  - For non-centralized scripts, it is still possible to have a centralized repository with the scripts and then, on a daily basis, to distribute changed or updated scripts to the FTAs with non-centralized scripts.
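As a sketch of the non-centralized option mentioned above, the following shows what a member of the end-to-end script library (EQQSCLIB) might look like. The JOBREC statement and its JOBSCR, JOBUSR, and RCCONDSUC parameters exist in Tivoli Workload Scheduler for z/OS 8.2, but the member content here (the script path, user name, and return code rule) is a hypothetical example only:

   /* Hypothetical EQQSCLIB member for a job on an FTA             */
   JOBREC JOBSCR('/opt/scripts/daily_load.sh')
          JOBUSR(twsuser)
          RCCONDSUC('RC<=4')

JOBSCR points to the script on the agent, JOBUSR names the user that the job runs under, and RCCONDSUC defines the success condition. Because the script itself resides on the agent, the job can run even when the connection to the controller is down; only this definition travels in the Symphony file.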
3.4.6 TCP/IP considerations for end-to-end server in sysplex
In Tivoli Workload Scheduler end-to-end scheduling, the TCP/IP protocol is used for communication between the end-to-end server task and the domain managers at the first level.

The fault-tolerant architecture of the distributed network has the advantage that the individual FTAs in the distributed network can continue their own processing during a network failure. If there is no connection to the controller on the mainframe, the domain managers at the first level buffer their events in a local file called tomaster.msg. This buffering continues until the link to the end-to-end server is re-established.

If there is no connection between the domain managers at the first level and the controller on the mainframe side, dependencies between jobs on the mainframe and jobs in the distributed environment cannot be resolved. These jobs cannot be scheduled until the connection is re-established. If the connection is down when, for example, a new plan is created, this new plan (the new Symphony file) will not be distributed to the domain managers at the first level and farther down in the distributed network.

In the planning phase, consider what can happen:
- When the z/OS system with the controller and server tasks fails
- When the controller or the server task fails
- When the z/OS system with the controller has to be stopped for a longer time (for example, due to maintenance)

The goal is to make the end-to-end server task and the controller task as fail-safe as possible, and to make it possible to move these tasks from one system to another within a sysplex without any major disruption in mainframe and distributed job scheduling.

As explained earlier, the end-to-end server is a started task that must run on the same z/OS system as the controller. The end-to-end server handles all communication with the controller task and with the domain managers at the first level in the distributed Tivoli Workload Scheduler network.

One of the main reasons to configure the controller and server tasks in a sysplex environment is to make these tasks as fail-safe as possible. This means that the tasks can be moved from one system to another within the same sysplex without any stop in batch scheduling. The controller and server tasks can be moved as part of planned maintenance or in case a system fails. Handling of this process can be automated and made seamless for the user by using the Tivoli Workload Scheduler for z/OS Hot Standby function.

The problem with running end-to-end scheduling in a z/OS sysplex and trying to move the end-to-end server from one system to another is that the end-to-end server by default gets the IP address from the TCP/IP stack of the z/OS system where it is started. If the end-to-end server is moved to another z/OS system within the sysplex, it normally gets another IP address (Figure 3-2).
The figure illustrates the two situations: (1) the active controller and server run on one z/OS system in the sysplex, and the server has a system-dependent IP address; (2) the active engine is moved to another system in the z/OS sysplex, and the server gets a new system-dependent IP address. This can cause problems for FTA connections, because the IP address is in the Symphony file.

Figure 3-2   Moving from one system to another within a z/OS sysplex

When the end-to-end server starts, it looks in the topology member to find its host name or IP address and port number. In particular, the host name or IP address is:
- Used to identify the socket from which the server receives and sends data from and to the distributed agents (the domain managers at the first level)
- Stored in the Symphony file, where it is recognized by the distributed agents as the IP address (or host name) of the master domain manager (OPCMASTER)

If the host name is not defined or the default is used, the end-to-end server by default uses the host name that is returned by the operating system (that is, the host name returned by the active TCP/IP stack on the system).

The port number and host name are inserted in the Symphony file when a current plan extend or replan batch job is submitted or a Symphony renew is initiated in the controller task. The Symphony file is then distributed to the domain managers at the first level, which in turn use this information to link back to the server.
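For reference, a sketch of the TOPOLOGY statement that supplies these values follows. The statement and its parameters (BINDIR, WRKDIR, HOSTNAME, PORTNUMBER, TPLGYMEM, USRMEM) exist in Tivoli Workload Scheduler for z/OS 8.2; the values shown are examples only and must match your installation:

   TOPOLOGY BINDIR('/TWS/PROD/bin820')   /* installation binaries  */
            WRKDIR('/TWS/TWSCPROD')      /* server work directory  */
            HOSTNAME(TWSCE2E)            /* name put in Symphony   */
            PORTNUMBER(31182)            /* end-to-end server port */
            TPLGYMEM(TPLGINFO)           /* DOMREC/CPUREC member   */
            USRMEM(USRINFO)              /* USRREC member (Windows)*/

The HOSTNAME value is the name that the first-level domain managers resolve and connect back to, which is why the reconnection strategies described next all revolve around controlling how this name resolves.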
The figure shows the master domain (MASTERDM) in a z/OS sysplex with three systems: wtsc63 (9.12.6.8), wtsc64 (9.12.6.9), and wtsc65 (9.12.6.10). The active controller and end-to-end server run on wtsc64, with standby controllers on wtsc63 and wtsc65. Three first-level domains connect to the server: UK, with AIX domain manager london (workstation U000, 9.3.4.63) and FTAs belfast (U001, AIX, 9.3.4.64) and edinburgh (U002, Windows 2000, 9.3.4.188); Europe, with Windows 2000 domain manager geneva (E000, 9.3.4.185) and FTAs rome (E001, AIX, 9.3.4.122) and amsterdam (E002, Windows 2000, 9.3.4.187); and Nordic, with AIX domain manager stockholm (N000, 9.3.4.47) and FTAs oslo (N001, Windows 2000, 10.2.3.184), helsinki (N002, Linux, 10.2.3.190), and copenhagen (N003, Windows 2000, 10.2.3.189) behind a firewall and router, with SSL used on the connection.

Figure 3-3   First-level domain managers connected to Tivoli Workload Scheduler for z/OS server in z/OS sysplex

If the z/OS controller fails on the wtsc64 system (see Figure 3-3), the standby controller on either wtsc63 or wtsc65 can take over all of the engine functions (run the controller and the end-to-end server tasks). Which controller takes over depends on how the standby controllers are configured.

The domain managers at the first level (london, geneva, and stockholm in Figure 3-3 on page 132) know wtsc64 as their master domain manager (from the Symphony file), so the link from the domain managers to the end-to-end server will fail, no matter which system (wtsc63 or wtsc65) the controller takes over on.

One solution could be to send a new Symphony file (renew the Symphony file) from the controller and server that have taken over to the domain managers at the first level. Renewing the Symphony file on the new controller and server recreates the Symphony file and adds the new z/OS host name or IP address (read from the topology definition or returned by the z/OS operating system) to the Symphony file. The domain managers then use this information to reconnect to the server on the new z/OS system.

Since renewing the Symphony file can be disruptive, especially in a heavily loaded scheduling environment, we explain three alternative strategies that can
be used to solve the reconnection problem after the server and controller have been moved to another system in a sysplex.

For all three alternatives, the topology member is used to specify the host name and port number for the Tivoli Workload Scheduler for z/OS server task. The host name is copied to the Symphony file when the Symphony file is renewed or the Tivoli Workload Scheduler for z/OS current plan is extended or replanned. The distributed domain managers at the first level use the host name read from the Symphony file to connect to the end-to-end server.

Because the first-level domain managers will try to link to the end-to-end server using the host name that is defined in the server hostname parameter, you must take the required action to successfully establish a reconnection: make sure that the host name always resolves correctly to the IP address of the z/OS system with the active end-to-end server. This can be achieved in different ways. In the following three sections, we describe three different ways to handle the reconnection problem when the end-to-end server is moved from one system to another in the same sysplex.

Use of the host file on the domain managers at the first level
To be able to use the same host name after a failover situation (where the engine is moved to one of its backup engines) and gain additional flexibility, we use a host name that can always be resolved to the IP address of the z/OS system with the active end-to-end server. The first-level domain managers resolve this host name using their local hosts files to get the IP address of the z/OS system with the end-to-end server.

In the end-to-end server topology, we can define a host name with a given name (such as TWSCE2E). On the system where the end-to-end server is active, this host name is associated with an IP address by the TCP/IP stack, for example in the USS /etc/hosts file.

The different IP addresses of the systems where the engine can be active are defined in the host name file (/etc/hosts on UNIX) on the domain managers at the first level, as in Example 3-1.

Example 3-1   hosts file
9.12.6.8    wtsc63.itso.ibm.com
9.12.6.9    wtsc64.itso.ibm.com   TWSCE2E
9.12.6.10   wtsc65.itso.ibm.com

If the server is moved to the wtsc63 system, you only have to edit the hosts file on the domain managers at the first level, so that TWSCE2E now points to the new system, as in Example 3-2.
Example 3-2   hosts file
9.12.6.8    wtsc63.itso.ibm.com   TWSCE2E
9.12.6.9    wtsc64.itso.ibm.com
9.12.6.10   wtsc65.itso.ibm.com

This change takes effect dynamically (the next time the domain manager tries to reconnect to the server).

One major disadvantage of this solution is that the change must be carried out by editing a local file on the domain managers at the first level. A simple move of the tasks on the mainframe then involves changes on distributed systems as well. In our example in Figure 3-3 on page 132, the local hosts file would have to be edited on three domain managers at the first level (the london, geneva, and stockholm servers).

Furthermore, the localopts option nm ipvalidate must be set to none on the agent, because the node name and IP address for the end-to-end server, which are stored for the OPCMASTER workstation (the workstation representing the end-to-end server) in the Symphony file on the agent, have changed. See the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273, for further information.

Use of stack affinity on the z/OS system
Another possibility is to use stack affinity to ensure that the end-to-end server host name resolves to the same IP address, even if the end-to-end server is moved to another z/OS system in the sysplex. With stack affinity, the end-to-end server host name will always be resolved using the same TCP/IP stack (the same TCP/IP started task) and hence always get the same IP address, regardless of which z/OS system the end-to-end server is started on.

Stack affinity provides the ability to define which specific TCP/IP instance the application should bind to. If you are running in a multiple-stack environment in which each system has its own TCP/IP stack, the end-to-end server can be forced to use a specific stack, even if it runs on another system.

A specific stack, or stack affinity, is defined in the Language Environment® variable _BPXK_SETIBMOPT_TRANSPORT. To define environment variables in the end-to-end server, the DD-name STDENV should be added to the end-to-end server started task procedure. The STDENV DD-name can point to a sequential data set or a member in a partitioned data set (for example, a member of the
end-to-end server PARMLIB) in which it is possible to define environment variables to initialize Language Environment. In this data set or member, environment variables are specified in the form VARNAME=value. See IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264, for further information.

For example:
   //STDENV DD DISP=SHR,DSN=MY.FILE.PARM(STDENV)

This member can be used to set the stack affinity using the following environment variable:
   _BPXK_SETIBMOPT_TRANSPORT=xxxxx

(xxxxx indicates the TCP/IP stack that the end-to-end server should bind to.)

One disadvantage of stack affinity is that a particular stack on a specific z/OS system is used. If this stack (the TCP/IP started task) or the z/OS system with this stack has to be stopped or requires an IPL, the end-to-end server, even though it can run on another system, will not be able to establish connections to the domain managers at the first level. If this happens, manual interaction is required.

For more information, see the z/OS V1R2 Communications Server: IP Configuration Guide, SC31-8775.

Use of Dynamic Virtual IP Addressing (DVIPA)
DVIPA, which was introduced with OS/390 V2R8, makes it possible to assign a specific virtual IP address to a specific application. The configuration can be set up so that this virtual IP address is independent of any specific TCP/IP stack within the sysplex and tied only to the started application; that is, the IP address will be the same for the application no matter which system in the sysplex the application is started on.

Even if your application has to be moved to another system because of failure or maintenance, the application can be reached under the same virtual IP address. Use of DVIPA is the most flexible way to be prepared for application or system failure.

We recommend that you plan for the use of DVIPA for the following Tivoli Workload Scheduler for z/OS components:
- Server started task used for end-to-end scheduling
- Server started task used for the JSC communication
The Tivoli Workload Scheduler for z/OS end-to-end (and JSC) server has been improved in Version 8.2. This improvement makes better use of DVIPA for the end-to-end (and JSC) server than in Tivoli Workload Scheduler 8.1.

In IBM Tivoli Workload Scheduler for z/OS 8.1, a range of IP addresses to be used by DVIPA (VIPARANGE) had to be defined, as did specific PORT and IP addresses for the end-to-end server (Example 3-3).

Example 3-3   Some required DVIPA definitions for Tivoli Workload Scheduler for z/OS 8.1
VIPADYNAMIC
  viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC
PORT
  5000  TCP TWSJSC  BIND 9.12.6.106
  31182 TCP TWSCE2E BIND 9.12.6.107

In this example, DVIPA automatically assigns started task TWSCE2E, which represents our end-to-end server task, the port 31182 and IP address 9.12.6.107.

DVIPA is described in great detail in the z/OS V1R2 Communications Server: IP Configuration Guide, SC31-8775. In addition, the redbook TCP/IP in a Sysplex, SG24-5235, provides useful information about DVIPA.

One major problem with using DVIPA in the Tivoli Workload Scheduler for z/OS 8.1 end-to-end server was that the end-to-end server mailman process still used the IP address of the z/OS system (the local IP address for outbound connections was determined by the routing table on the z/OS system). If the localopts option nm ipvalidate was set to full on the first-level domain manager or backup domain manager, the outbound connection from the end-to-end server mailman process to the domain manager netman was rejected by the domain manager netman process. The result was that the outbound connection could not be established when the end-to-end server was moved from one system in the sysplex to another.

This has changed in Tivoli Workload Scheduler for z/OS 8.2: the end-to-end server now uses the host name or IP address that is specified in the TOPOLOGY HOSTNAME parameter for both inbound and outbound connections. This has the following advantages compared to Version 8.1:
1. It is not necessary to define the end-to-end server started task in the static DVIPA PORT definition. It is sufficient to define the DVIPA VIPARANGE parameter.
   When the end-to-end server starts and reads the TOPOLOGY HOSTNAME() parameter, it performs a gethostbyname() on the host name. The host name can be related to an IP address (in the VIPARANGE), for example in the USS
/etc/hosts file. The server then gets the same IP address across z/OS systems in the sysplex.
   Another major advantage is that if the host name or IP address has to be changed, it is sufficient to make the change in the /etc/hosts file. It is not necessary to change the TCP/IP definitions and restart the TCP/IP stack (as long as the new IP address is within the defined range of IP addresses in the VIPARANGE parameter).
2. The host name in the TOPOLOGY HOSTNAME() parameter is used for outbound connections (from the end-to-end server to the domain managers at the first level).
3. You can use network address IP validation on the domain managers at the first level.

The advantages of 1 and 2 also apply to the JSC server.

Example 3-4 shows the required DVIPA definitions for Tivoli Workload Scheduler 8.2 in our environment.

Example 3-4   Example of required DVIPA definitions for ITWS for z/OS 8.2
VIPADYNAMIC
  viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC

And the /etc/hosts file in USS looks like:
9.12.6.107 twsce2e.itso.ibm.com twsce2e

 Note: In the previous example, we show the use of the /etc/hosts file in USS. For DVIPA, it is advisable to use DNS instead of the /etc/hosts file, because /etc/hosts definitions are generally defined locally on each machine (each z/OS image) in the sysplex.

3.4.7 Upgrading from Tivoli Workload Scheduler for z/OS 8.1 end-to-end scheduling
If you are running Tivoli Workload Scheduler for z/OS 8.1 end-to-end scheduling and are going to upgrade this environment to the 8.2 level, you should plan for the use of the new functions and possibilities in Tivoli Workload Scheduler for z/OS 8.2 end-to-end scheduling.
Be especially aware of the new possibilities introduced by:
- Centralized script
  Are you using non-centralized scripts in the Tivoli Workload Scheduler for z/OS 8.1 scheduling environment? Would it be better or more efficient to use centralized scripts? If centralized scripts are going to be used, you should plan the activities necessary to consolidate the non-centralized scripts in the Tivoli Workload Scheduler for z/OS controller JOBLIB.
- JCL variables in centralized or non-centralized scripts or both
  In Tivoli Workload Scheduler for z/OS 8.2, you can use Tivoli Workload Scheduler for z/OS JCL variables in centralized and non-centralized scripts. If you have implemented a locally developed workaround in Tivoli Workload Scheduler for z/OS 8.1 to use JCL variables in non-centralized scripts, you should consider using the new possibilities in Tivoli Workload Scheduler for z/OS 8.2.
- Recovery for jobs with non-centralized and centralized scripts
  Will the use of recovery in jobs with non-centralized or centralized scripts improve your end-to-end scheduling? Is it something you should use in your Tivoli Workload Scheduler for z/OS 8.2 environment? Should the Tivoli Workload Scheduler for z/OS 8.1 job definitions be updated or changed to use these new recovery possibilities? Here again, some planning and consideration will be of great value.
- New options and possibilities when defining fault-tolerant workstation jobs in Tivoli Workload Scheduler for z/OS and working with fault-tolerant workstations
  Tivoli Workload Scheduler for z/OS 8.2 introduces some new options in the legacy ISPF dialog as well as in the JSC for defining fault-tolerant jobs in Tivoli Workload Scheduler for z/OS. Furthermore, the legacy ISPF dialogs have been changed and improved, and new options have been added to make working with fault-tolerant workstations easier. Be prepared to educate your planners and operators so that they know how to use these new options and functions!

End-to-end scheduling is greatly improved in Version 8.2 of Tivoli Workload Scheduler for z/OS. Together with this improvement, several initialization statements have been changed. Furthermore, the network configuration for the end-to-end environment can be designed in another way in Tivoli Workload
Scheduler for z/OS 8.2 because, for example, Tivoli Workload Scheduler for z/OS 8.2 supports more than one first-level domain manager.

To summarize:
- Expect to take some time to plan your upgrade from Tivoli Workload Scheduler for z/OS Version 8.1 end-to-end scheduling to Version 8.2 end-to-end scheduling, because Tivoli Workload Scheduler for z/OS Version 8.2 has been improved with many new functions and initialization parameters.
- Plan to have some time to investigate and read the new Tivoli Workload Scheduler for z/OS 8.2 documentation (remember to use the "April 2004 Revised" versions) to get a good understanding of the new end-to-end scheduling possibilities in Tivoli Workload Scheduler for z/OS Version 8.2 compared to V8.1.
- Furthermore, plan time to test and verify the use of the new functions and possibilities in Tivoli Workload Scheduler for z/OS 8.2 end-to-end scheduling.

3.5 Planning for end-to-end scheduling with Tivoli Workload Scheduler
In this section, we discuss how to plan end-to-end scheduling for Tivoli Workload Scheduler. We show how to configure your environment to fit your requirements, and we point out special considerations that apply to the end-to-end solution with Tivoli Workload Scheduler for z/OS.

3.5.1 Tivoli Workload Scheduler publications and documentation
Hardcopy Tivoli Workload Scheduler documentation is not shipped with the product. The books are available in PDF format on the Tivoli Workload Scheduler 8.2 product CD-ROM.

 Note: The publications are also available for download in PDF format at:
 http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

 Look for books marked with "Revised April 2004," as they have been updated with documentation changes introduced by service (fix packs) for Tivoli Workload Scheduler produced since the base version of the product was released in June 2003.
3.5.2 Tivoli Workload Scheduler service updates (fix packs)
Before installing Tivoli Workload Scheduler, it is important to check for the latest service (fix pack) for Tivoli Workload Scheduler. Service for Tivoli Workload Scheduler is released in packages that normally contain a full replacement of the Tivoli Workload Scheduler code. These packages are called fix packs and are numbered FixPack 01, FixPack 02, and so forth. New fix packs are usually released every three months. The base version of Tivoli Workload Scheduler must be installed before a fix pack can be installed.

Check for the latest fix pack level and download it so that you can update your Tivoli Workload Scheduler installation and test the end-to-end scheduling environment at the latest fix pack level.

 Tip: Fix packs for Tivoli Workload Scheduler can be downloaded from:
 ftp://ftp.software.ibm.com

 Log on with user ID anonymous and your e-mail address as the password. Fix packs for Tivoli Workload Scheduler are in this directory: /software/tivoli_support/patches/patches_8.2.0.

At the time of writing this book, the latest fix pack for Tivoli Workload Scheduler was FixPack 04. After the fix pack is downloaded, installation guidelines can be found in the 8.2.0-TWS-FP04.README file.

 Note: FixPack 04 introduces a new Fault-Tolerant Switch feature, which is described in a PDF file named FaultTolerantSwitch.README. The new Fault-Tolerant Switch feature replaces and enhances the existing, or traditional, Fault-Tolerant Switch Manager for backup domain managers.

The Tivoli Workload Scheduler documentation has been updated to the FixPack 03 level in the "April 2004 Revised" versions of the Tivoli Workload Scheduler manuals. As mentioned in 3.5.1, "Tivoli Workload Scheduler publications and documentation" on page 139, the latest versions of the Tivoli Workload Scheduler manuals can be downloaded from the IBM Web site.
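As a sketch of the download session described in the tip above, an anonymous FTP session might look like the following. The directory and the README file name come from this section; the names of the actual fix pack archives vary by platform, so list the directory and pick the files for your systems:

   ftp ftp.software.ibm.com
   Name: anonymous
   Password: your-email@example.com
   ftp> cd /software/tivoli_support/patches/patches_8.2.0
   ftp> binary
   ftp> ls
   ftp> get 8.2.0-TWS-FP04.README
   ftp> quit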
3.5.3 System and software requirements
System and software requirements for installing and running Tivoli Workload Scheduler are described in great detail in the IBM Tivoli Workload Scheduler Release Notes Version 8.2 (Maintenance Release April 2004), SC32-1277.

It is very important to consult and read this release notes document before installing Tivoli Workload Scheduler, because the release notes contain system and software requirements as well as the latest installation and upgrade notes.

3.5.4 Network planning and considerations
Before you install Tivoli Workload Scheduler, be sure that you know about the various configuration examples. Each example has specific benefits and disadvantages. Here are some guidelines to help you make the right choice:

- How large is your IBM Tivoli Workload Scheduler network? How many computers does it hold? How many applications and jobs does it run?
  The size of your network will help you decide whether to use a single-domain or a multiple-domain architecture. If you have a small number of computers or a small number of applications to control with Tivoli Workload Scheduler, there may not be a need for multiple domains.
- How many geographic locations will be covered in your Tivoli Workload Scheduler network? How reliable and efficient is the communication between locations?
  This is one of the primary reasons for choosing a multiple-domain architecture. One domain for each geographical location is a common configuration. If you choose a single-domain architecture, you will be more reliant on the network to maintain continuous processing.
- Do you need centralized or decentralized management of Tivoli Workload Scheduler?
  A Tivoli Workload Scheduler network, with either a single domain or multiple domains, gives you the ability to manage Tivoli Workload Scheduler from a single node, the master domain manager. If you want to manage multiple locations separately, you can consider installing a separate Tivoli Workload Scheduler network at each location. Note that some degree of decentralized management is possible in a stand-alone Tivoli Workload Scheduler network by mounting or sharing file systems.
- Do you have multiple physical or logical entities at a single site? Are there different buildings with several floors in each building? Are there different departments or business functions? Are there different applications?
  These may be reasons for choosing a multiple-domain configuration, such as a domain for each building, department, business function, or application (manufacturing, financial, engineering).
- Do you run applications, such as SAP R/3, that operate with Tivoli Workload Scheduler?
  If they are discrete and separate from other applications, you may choose to put them in a separate Tivoli Workload Scheduler domain.
- Would you like your Tivoli Workload Scheduler domains to mirror your Windows NT domains?
  This is not required, but may be useful.
- Do you want to isolate or differentiate a set of systems based on performance or other criteria?
  This may provide another reason to define multiple Tivoli Workload Scheduler domains, to localize systems based on performance or platform type.
- How much network traffic do you have now?
  If your network traffic is manageable, the need for multiple domains is less important.
- Do your job dependencies cross system boundaries, geographical boundaries, or application boundaries? For example, does the start of Job1 on workstation3 depend on the completion of Job2 running on workstation4?
  The degree of interdependence between jobs is an important consideration when laying out your Tivoli Workload Scheduler network. If you use multiple domains, you should try to keep interdependent objects in the same domain. This will decrease network traffic and take better advantage of the domain architecture.
- What level of fault tolerance do you require?
  An obvious disadvantage of the single-domain configuration is the reliance on a single domain manager. In a multiple-domain network, the loss of a single domain manager affects only the agents in its domain.

3.5.5 Backup domain manager
Each domain has a domain manager and, optionally, one or more backup domain managers. A backup domain manager (Figure 3-4 on page 143) must be in the same domain as the domain manager it is backing up. The backup domain managers must be fault-tolerant agents running the same product version as the domain manager they are supposed to replace, and must have the Resolve Dependencies and Full Status options enabled in their workstation definitions.

If a domain manager fails during the production day, you can use either the Job Scheduling Console or the switchmgr command in the console manager command line (conman) to switch to a backup domain manager. A switch manager action can be executed by anyone with start and stop access to the domain manager and backup domain manager workstations.
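With conman, the switch is a single command. The following sketch uses the domain and workstation names from Figure 3-4; the syntax is the standard conman switchmgr form "switchmgr domain;newmgr":

   conman "switchmgr DOMAINA;FTA1"

This makes the backup domain manager FTA1 the new manager of DomainA, as described next.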
A switch manager operation stops the backup manager, then restarts it as the new domain manager, and converts the old domain manager to a fault-tolerant agent.

The identities of the current domain managers are documented in the Symphony files on each FTA and remain in effect until a new Symphony file is received from the master domain manager (OPCMASTER).

The figure shows the master domain (MASTERDM) with the z/OS master domain manager OPCMASTER and two domains: DomainA, with AIX domain manager FDMA and FTAs FTA1 (AIX, backup domain manager for DomainA) and FTA2 (OS/400); and DomainB, with AIX domain manager FDMB and FTAs FTA3 (AIX, backup domain manager for DomainB) and FTA4 (Solaris).

Figure 3-4   Backup domain managers (BDM) within an end-to-end scheduling network

As mentioned in 2.3.5, "Making the end-to-end scheduling system fault tolerant" on page 84, a switch to a backup domain manager remains in effect until a new Symphony file is received from the master domain manager (OPCMASTER in Figure 3-4). If the switch to the backup domain manager is to remain active across the Tivoli Workload Scheduler for z/OS plan extension or replan, you must change the topology definitions in the Tivoli Workload Scheduler for z/OS DOMREC initialization statements: the backup domain manager fault-tolerant workstation should be made the domain manager for the domain.

Example 3-5 shows how the DOMREC for DomainA is changed so that the backup domain manager FTA1 in Figure 3-4 is the new domain manager for DomainA.
Because the change is also made in the DOMREC topology definition (in connection with the switch of the domain manager from FDMA to FTA1), FTA1 remains domain manager even if the Symphony file is recreated by the Tivoli Workload Scheduler for z/OS plan extend or replan jobs.

Example 3-5   Change in DOMREC for long-term switch to backup domain manager FTA1
DOMREC DOMAIN(DOMAINA) DOMMGR(FDMA) DOMPARENT(OPCMASTER)

should be changed to:

DOMREC DOMAIN(DOMAINA) DOMMGR(FTA1) DOMPARENT(OPCMASTER)

where FDMA is the name of the fault-tolerant workstation that was domain manager before the switch.

3.5.6 Performance considerations
Tivoli Workload Scheduler 8.1 introduced some important performance-related initialization parameters. These can be used to optimize or tune Tivoli Workload Scheduler networks. If you suffer from poor performance and have already isolated the bottleneck on the Tivoli Workload Scheduler side, you may want to take a closer look at the localopts parameters listed in Table 3-5 (default values are shown in the table).

Table 3-5   Performance-related localopts parameters

 Syntax                              Default value
 mm cache mailbox = yes/no           no
 mm cache size = bytes               32
 sync level = low/medium/high        high
 wr enable compression = yes/no      no

These localopts parameters are described in detail in the following sections. For more information, check the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273, and the redbook IBM Tivoli Workload Scheduler Version 8.2: New Features and Best Practices, SG24-6628.

Mailman cache (mm cache mailbox and mm cache size)
Tivoli Workload Scheduler can read groups of messages from a mailbox and put them into a memory cache. Access to disk through the cache is much faster than direct disk access. The advantage is even more relevant when you consider that traditional mailman needs at least two disk accesses for every mailbox message.
Important: The mm cache mailbox parameter can be used on both UNIX and Windows workstations. This option is not applicable (has no effect) on USS.

A special mechanism ensures that messages that are considered essential are not put into the cache but are handled immediately. This avoids the loss of vital information in case of a mailman failure. The following settings in the localopts file regulate the behavior of the mailman cache:

mm cache mailbox
   The default is no. Specify yes to enable mailman to use a reading cache for incoming messages.

mm cache size
   Specify this option only if you use the mm cache mailbox option. The default is 32 bytes, which should be a reasonable value for most small and medium-sized Tivoli Workload Scheduler installations. The maximum value is 512; higher values are ignored.

Tip: If necessary, you can experiment with increasing this setting gradually for better performance. You can use values larger than 32 bytes for large networks, but in small networks do not set this value unnecessarily large, because that would reduce the memory available to other applications or other Tivoli Workload Scheduler processes.

File system synchronization level (sync level)

The sync level attribute specifies the frequency at which Tivoli Workload Scheduler synchronizes messages held on disk with those in memory. There are three possible settings:

low      Lets the operating system handle the speed of write access. This option speeds up all processes that use the mailbox files. Disk usage is notably reduced; if the file system is reliable, data integrity should be assured anyway.

medium   Makes an update to the disk after a transaction has completed. This setting can be a good trade-off between acceptable performance and high security against loss of data. Writes are transaction-based, so written data is always consistent.

high     (Default setting) Makes an update every time data is entered.
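As a localopts sketch, a UNIX fault-tolerant agent with a reliable disk subsystem might relax the synchronization level as shown below. The value is illustrative, and the considerations that follow explain when this is appropriate.

   # localopts excerpt: reduce disk synchronization overhead
   sync level = low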
Important considerations for sync level usage:

- For most UNIX systems (especially newer UNIX systems with reliable disk subsystems), a setting of low or medium is recommended. In end-to-end scheduling, we recommend that you set this to low, because host disk subsystems are considered highly reliable.
- This option is not applicable on Windows systems.
- Regardless of the sync level value that you set in the localopts file, Tivoli Workload Scheduler makes an update every time data is entered for messages that are considered essential (that is, it uses sync level=high for the essential messages). Essential messages are those that Tivoli Workload Scheduler considers the most important.

Sinfonia file compression (wr enable compression)

Starting with Tivoli Workload Scheduler 8.1, domain managers can distribute Sinfonia files to their FTAs in compressed form. Each Sinfonia record is compressed by the domain manager's mailman process and then decompressed by the FTA's writer process. A compressed Sinfonia record is about seven times smaller. Compression can be particularly useful when the Symphony file is huge and the network connection between two nodes is slow or unreliable (WAN). If any FTAs in the network have pre-8.1 versions of Tivoli Workload Scheduler, the Tivoli Workload Scheduler domain managers can send Sinfonia files to these workstations in uncompressed form.

The following localopts setting is used to set compression in Tivoli Workload Scheduler:

wr enable compression=yes
   This means that Sinfonia will be compressed. The default is no.

Tip: Due to the overhead of compression and decompression, we recommend that you use compression if Sinfonia is 4 MB or larger.

3.5.7 Fault-tolerant agent (FTA) naming conventions

Each FTA represents a physical machine within a Tivoli Workload Scheduler network. Depending on the size of your distributed environment or network and how much it can grow in the future, it makes sense to think about naming conventions for your FTAs and possibly your Tivoli Workload Scheduler domains. A good naming convention for FTAs and domains can help you identify an FTA easily in terms of where it is located or the business unit it belongs to. This becomes more important in end-to-end scheduling environments because the length of the workstation name for an FTA is limited in Tivoli Workload Scheduler for z/OS.
Note: In Tivoli Workload Scheduler for z/OS, the name of a workstation for a fault-tolerant agent included in end-to-end scheduling is limited to four characters. The name must be alphanumeric, and the first character must be alphabetical or national.

Figure 3-5 on page 147 shows a typical end-to-end network. It consists of two domain managers at the first level, two backup domain managers, and some FTAs.

Figure 3-5 Example of naming convention for FTA workstations in an end-to-end network

(The figure shows the MASTERDM domain with the z/OS master domain manager OPCMASTER, and two subordinate domains: Domain1 with domain manager F100 on AIX, backup domain manager F101 on AIX, and F102 on OS/400; and Domain2 with domain manager F200 on AIX, backup domain manager F201 on AIX, and F202 on Solaris.)

In Figure 3-5, we have illustrated one naming convention for the fault-tolerant workstations in Tivoli Workload Scheduler for z/OS. The idea behind this naming convention is the following:

First digit
   The character F is used to identify the workstation as an FTA. This makes it possible, for example, to create lists in the legacy ISPF interface and in the JSC that show all FTAs.
Second digit
   A character or number used to identify the domain of the workstation.

Third and fourth digits
   Used to allow a high number of uniquely named servers or machines. The last two digits are reserved for the numbering of each workstation.

With this naming convention there is room to define 1296 (that is, 36*36) fault-tolerant workstations in each of the domains named F1** to FZ**. If the domain manager fault-tolerant workstation for the first domain is named F100 (F000 is not used), it is possible to define 35 domains with 1296 FTAs in each domain, that is, 45360 FTAs in total.

This example is meant to give you an idea of the number of fault-tolerant workstations that can be defined, even using only four characters in the name. In the example, we did not change the first character in the workstation name: It was fixed at F. It is, of course, possible to use different characters here as well; for example, one could use D for domain managers and F for fault-tolerant agents. Changing the first character in the workstation name increases the total number of fault-tolerant workstations that can be defined in Tivoli Workload Scheduler for z/OS. The example cannot cover all specific requirements; it demonstrates only that naming needs careful consideration.

Because a four-character name for the FTA workstation does not tell much about the server name or IP address of the server where the FTA is installed, another good naming convention is to put the server name (the DNS name or perhaps the IP address) in the description field for the workstation in Tivoli Workload Scheduler for z/OS. The description field for workstations in Tivoli Workload Scheduler for z/OS allows up to 32 characters. This way, it is much easier to relate the four-character workstation name to a specific server in your distributed network.

Example 3-6 shows how the description field can relate the four-character workstation name to the server name for the fault-tolerant workstations used in Figure 3-5 on page 147.

Tip: The host name in the workstation description field, in conjunction with the four-character workstation name, provides an easy way to illustrate your configured environment.

Example 3-6 Workstation description field (copy of workstation list in the ISPF panel)

Work station                              T R  Last update
name  description                              user    date      time
F100  COPENHAGEN - AIX DM for Domain1     C A  CCFBK   04/07/16  14.59
F101  STOCKHOLM - AIX BDM for Domain1     C A  CCFBK   04/07/16  15.00
F102  OSLO - OS/400 LFTA in DM1           C A  CCFBK   04/07/16  15.00
F200  ROM - AIX DM for Domain2            C A  CCFBK   04/07/16  15.02
F201  MILANO - AIX BDM for Domain2        C A  CCFBK   04/07/16  15.08
F202  VENICE - SOLARIS FTA in DM2         C A  CCFBK   04/07/16  15.17

3.6 Planning for the Job Scheduling Console

In this section, we discuss planning considerations for the Tivoli Workload Scheduler Job Scheduling Console (JSC). The JSC is not a required component when running end-to-end scheduling with Tivoli Workload Scheduler.

The JSC provides a unified GUI to the different job-scheduling engines: the Tivoli Workload Scheduler for z/OS controller, and Tivoli Workload Scheduler master domain managers, domain managers, and fault-tolerant agents.

Job Scheduling Console 1.3 is the version that is delivered and used with Tivoli Workload Scheduler 8.2 and Tivoli Workload Scheduler for z/OS 8.2. The JSC code is shipped together with the Tivoli Workload Scheduler for z/OS or the Tivoli Workload Scheduler code.

With the JSC, it is possible to work with different Tivoli Workload Scheduler for z/OS controllers (such as test and production) from one GUI. From this same GUI, the user can at the same time work with Tivoli Workload Scheduler master domain managers or fault-tolerant agents. In end-to-end scheduling environments, the JSC can be a helpful tool when analyzing problems with the end-to-end scheduling network or for giving some dedicated users access to their own servers (fault-tolerant agents).

The JSC is installed locally on your personal desktop, laptop, or workstation. Before you can run and use the JSC, the following additional components must be installed and configured:

- Tivoli Management Framework, V3.7.1 or V4.1
- Installed and configured in the Tivoli Management Framework:
  – Job Scheduling Services (JSS)
  – Tivoli Workload Scheduler connector
  – Tivoli Workload Scheduler for z/OS connector
  – JSC instances for the Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS environments
- Server started task on the mainframe used for JSC communication. This server started task is necessary to communicate and work with Tivoli Workload Scheduler for z/OS from the JSC.

3.6.1 Job Scheduling Console documentation

The documentation for the Job Scheduling Console includes:

- IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes (Maintenance Release April 2004), SC32-1277
- IBM Tivoli Workload Scheduler Job Scheduling Console User's Guide (Maintenance Release April 2004), SC32-1257. This manual contains information about how to:
  – Install and update the JSC.
  – Install and update JSS, the Tivoli Workload Scheduler connector, and the Tivoli Workload Scheduler for z/OS connector.
  – Create Tivoli Workload Scheduler connector instances and Tivoli Workload Scheduler for z/OS connector instances.
  – Use the JSC to work with Tivoli Workload Scheduler.
  – Use the JSC to work with Tivoli Workload Scheduler for z/OS.

The documentation is not shipped in hardcopy form with the JSC code, but is available in PDF format on the JSC Version 1.3 CD-ROM.

Note: The publications are also available for download in PDF format at:

http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

Here you can find the newest versions of the books. Look for books marked with “Maintenance Release April 2004” because they have been updated with documentation changes introduced after the base version of the product was released in June 2003.

3.6.2 Job Scheduling Console service (fix packs)

Before installing the JSC, it is important to check for and, if necessary, download the latest service (fix pack) level. Service for the JSC is released in packages that normally contain a full replacement of the product. These packages are called fix packs and are numbered FixPack 01, FixPack 02, and so forth. Usually, a new fix pack is released once every three months. The base version of the JSC must be installed before a fix pack can be installed.
Tip: Fix packs for the JSC can be downloaded from the IBM FTP site:

ftp://ftp.software.ibm.com

Log in with user ID anonymous and use your e-mail address for the password. Look for JSC fix packs in the /software/tivoli_support/patches/patches_1.3.0 directory. Installation guidelines are in the 1.3.0-JSC-FP05.README text file.

At the time of writing, the latest fix pack for the JSC was FixPack 05. It is important to note that the JSC fix pack level should correspond to the connector fix pack level; that is, apply the same fix pack level to the JSC and to the connector at the same time.

Note: FixPack 05 improves performance for the JSC in two areas:
- Response time improvements
- Memory consumption improvements

3.6.3 Compatibility and migration considerations for the JSC

The Job Scheduling Console feature level 1.3 can work with different versions of Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS. Before installing the Job Scheduling Console, consider Table 3-6 and Table 3-7 on page 152, which summarize the supported interoperability combinations between the Job Scheduling Console, the connectors, and the Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS engines.

Table 3-6 shows the supported combinations of the JSC, Tivoli Workload Scheduler connector, and Tivoli Workload Scheduler engine (master domain manager, domain manager, or fault-tolerant agent).

Table 3-6 Tivoli Workload Scheduler connector and engine combinations

 Job Scheduling Console   Connector   Tivoli Workload Scheduler engine
 1.3                      8.2         8.2
 1.3                      8.1         8.1
 1.2                      8.2         8.2

Note: The engine can be a fault-tolerant agent, a domain manager, or a master domain manager.
Table 3-7 shows the supported combinations of the JSC, Tivoli Workload Scheduler for z/OS connector, and Tivoli Workload Scheduler for z/OS engine (controller).

Table 3-7 Tivoli Workload Scheduler for z/OS connector and engine combinations

 Job Scheduling Console   Connector   IBM Tivoli Workload Scheduler for z/OS engine (controller)
 1.3                      1.3         8.2
 1.3                      1.3         8.1
 1.3                      1.3         2.3 (Tivoli OPC)
 1.3                      1.2         8.1
 1.3                      1.2         2.3 (Tivoli OPC)
 1.2                      1.3         8.2
 1.2                      1.3         8.1
 1.2                      1.3         2.3 (Tivoli OPC)

Note: If your environment comprises installations of updated and back-level versions of the products, some functions might not work correctly. For example, new functions such as Secure Socket Layer (SSL) protocol support, return code mapping, late job handling, and extended task name and recovery information for z/OS jobs are not supported by Job Scheduling Console feature level 1.2. A warning message is displayed if you try to open an object created with the new functions, and the object is not opened.

Satisfy the following requirements before installing

The following software and hardware prerequisites and other considerations should be taken care of before installing the JSC.

Software
The following software is required:
- Tivoli Management Framework Version 3.7.1 with FixPack 4, or higher
- Tivoli Job Scheduling Services 1.2

Hardware
The following hardware is required:
- CD-ROM drive
- Approximately 200 MB of free disk space for installation of the JSC
- At least 256 MB RAM (preferably 512 MB RAM)
Other
The Job Scheduling Console can be installed on any workstation that has a TCP/IP connection. It can connect only to a server or workstation that has properly configured installations of the following products:

- Job Scheduling Services and the IBM Tivoli Workload Scheduler for z/OS connector (mainframe-only scheduling solution)
- Job Scheduling Services and the Tivoli Workload Scheduler connector (distributed-only scheduling solution)
- Job Scheduling Services, the IBM Tivoli Workload Scheduler for z/OS connector, and the Tivoli Workload Scheduler connector (end-to-end scheduling solution)

The latest and most up-to-date system and software requirements for installing and running the Job Scheduling Console are described in great detail in the IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes, Feature level 1.3, SC32-1258 (remember to get the April 2004 revision). It is important to read this release notes document before installing the JSC, because it contains system and software requirements as well as the latest installation and upgrade notes.

3.6.4 Planning for Job Scheduling Console availability

The legacy GUI programs gconman and gcomposer are no longer included with Tivoli Workload Scheduler, so the Job Scheduling Console fills the role of those programs as the primary interface to Tivoli Workload Scheduler. Staff who work only with the JSC and are not familiar with the command line interface (CLI) depend on continuous JSC availability. This requirement must be taken into consideration when planning for a Tivoli Workload Scheduler backup domain manager. We therefore recommend that there be a Tivoli Workload Scheduler connector instance on the Tivoli Workload Scheduler backup domain manager. This guarantees JSC access without interruption.

Because the JSC communicates with Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler domain managers, and Tivoli Workload Scheduler backup domain managers through one IBM Tivoli Management Framework (Figure 3-6 on page 154), this framework can be a single point of failure. Consider establishing a backup Tivoli Management Framework, or minimize the risk of an outage in the framework by using (for example) clustering techniques. You can read more about how to make a Tivoli Management Framework fail-safe in the redbook High Availability Scenarios with IBM Tivoli Workload Scheduler and IBM Tivoli Framework, SG24-6632.

Figure 3-6 on page 154 shows two domain managers at the first level directly connected to Tivoli Workload Scheduler for z/OS (OPC).
As mentioned earlier, in end-to-end scheduling environments it is advisable to plan and install connectors and prerequisite components (Tivoli Management Framework and Job Scheduling Services) on all first-level domain managers.

Figure 3-6 JSC connections in an end-to-end environment

(The figure shows the Job Scheduling Console connecting, through the Tivoli Management Framework, to the Tivoli Workload Scheduler and OPC connector instances on the first-level AIX domain managers in DomainA and DomainB, and through the JSC server to the Tivoli Workload Scheduler for z/OS master domain manager in the MASTERDM domain.)

3.6.5 Planning for server started task for JSC communication

To use the JSC to communicate with Tivoli Workload Scheduler for z/OS, the z/OS system must have a started task that handles IP communication with the JSC (more precisely, with the Tivoli Workload Scheduler for z/OS (OPC) connector in the Tivoli Management Framework), as shown in Figure 3-6.

The same server started task can be used for JSC communication and for end-to-end scheduling. However, we recommend having two server started tasks: one dedicated to end-to-end scheduling and one dedicated to JSC communication. With two server started tasks, the JSC server started task can be stopped and started without any impact on the end-to-end scheduling network.
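As an illustration of this recommendation, the two server started tasks might point to initialization members along the lines of the following sketch. The subsystem name is an example only; the SERVOPTS PROTOCOL() parameter itself is described in Chapter 4, and a JSC server needs additional keywords that are omitted here.

   /* Parameters for the server dedicated to end-to-end scheduling */
   SERVOPTS SUBSYS(TWSC) PROTOCOL(E2E)

   /* Parameters for the server dedicated to JSC communication */
   SERVOPTS SUBSYS(TWSC) PROTOCOL(JSC)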
The JSC server started task acts as the communication layer between the Tivoli Workload Scheduler for z/OS connector in the Tivoli Management Framework and the Tivoli Workload Scheduler for z/OS controller.

3.7 Planning for migration or upgrade from previous versions

If you are running end-to-end scheduling with Tivoli Workload Scheduler for z/OS Version 8.1 and Tivoli Workload Scheduler Version 8.1, you should plan how to do the upgrade or migration from Version 8.1 to Version 8.2. This is also the case if you are running an even older version, such as Tivoli OPC Version 2.3.0, Tivoli Workload Scheduler 7.0, or Maestro 6.1.

Tivoli Workload Scheduler 8.2 supports backward compatibility, so you can upgrade your network gradually, at different times, and in no particular order. You can upgrade top-down: that is, upgrade the Tivoli Workload Scheduler for z/OS controller (master) first, then the domain managers at the first level, then the subordinate domain managers and fault-tolerant agents. Or you can upgrade bottom-up, starting with the fault-tolerant agents, then upgrading in sequence and leaving the Tivoli Workload Scheduler for z/OS controller (master) for last. However, if you upgrade the Tivoli Workload Scheduler for z/OS controller first, some new Version 8.2 functions (firewall support, centralized script) will not work until the whole network is upgraded.

During the upgrade procedure, the installation backs up all of the configuration information, installs the new product code, and automatically migrates old scheduling data and configuration information. However, it does not migrate user files or directories placed in the Tivoli Workload Scheduler for z/OS server work directory or in the Tivoli Workload Scheduler TWShome directory.

Before doing the actual installation, you should decide on the migration or upgrade strategy that will be best in your end-to-end scheduling environment. This is also the case if you are upgrading from old Tivoli OPC tracker agents or if you decide to merge a stand-alone Tivoli Workload Scheduler environment with your Tivoli Workload Scheduler for z/OS environment to form a new end-to-end scheduling environment.

Our experience is that installation and upgrading of an existing end-to-end scheduling environment takes some time, and the time required depends on the size of the environment. It is good to be prepared from the first day and to make realistic implementation plans and schedules.
Another important thing to remember is that Tivoli Workload Scheduler end-to-end scheduling has been improved and has changed considerably from Version 8.1 to Version 8.2. If you are running Tivoli Workload Scheduler 8.1 end-to-end scheduling and are planning to upgrade to Version 8.2 end-to-end scheduling, we recommend that you:

1. First do a “one-to-one” upgrade from Tivoli Workload Scheduler 8.1 end-to-end scheduling to Tivoli Workload Scheduler 8.2 end-to-end scheduling.
2. When the upgrade is completed and you are running Tivoli Workload Scheduler 8.2 end-to-end scheduling in the whole network, start to implement the new functions and facilities that were introduced in Tivoli Workload Scheduler for z/OS 8.2 and Tivoli Workload Scheduler 8.2.

3.8 Planning for maintenance or upgrades

The Tivoli maintenance strategy for Tivoli Workload Scheduler introduces a new way to maintain the product more effectively and easily. On a quarterly basis, Tivoli provides updates with recent patches and offers a fix pack that is similar to a maintenance release. This fix pack can be obtained either from the common support site, ftp://ftp.software.ibm.com/software/tivoli_support/patches, or shipped on a CD. Ask your local Tivoli support for more details.

In this book, we have recommended upgrading your end-to-end scheduling environment to the FixPack 04 level. This level will change with time, of course, so when you start the installation you should plan to download and install the latest fix pack level.
Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling

When the planning described in the previous chapter is completed, it is time to install the software (Tivoli Workload Scheduler for z/OS V8.2 and Tivoli Workload Scheduler V8.2 and, optionally, Tivoli Workload Scheduler Job Scheduling Console V1.3) and configure the installed software for end-to-end scheduling.

In this chapter, we provide details on how to install and configure Tivoli Workload Scheduler end-to-end scheduling and the Job Scheduling Console (JSC), including the necessary steps involved. We describe installation of:

- IBM Tivoli Workload Scheduler for z/OS V8.2
- IBM Tivoli Workload Scheduler V8.2
- IBM Tivoli Workload Scheduler Job Scheduling Console V1.3

We also describe installation of the components that are required to run the JSC.
4.1 Before the installation is started

Before you start the installation, it is important to understand that Tivoli Workload Scheduler end-to-end scheduling involves two components:

- IBM Tivoli Workload Scheduler for z/OS
- IBM Tivoli Workload Scheduler

The Tivoli Workload Scheduler Job Scheduling Console is not a required product, but our experience from working with Tivoli Workload Scheduler end-to-end scheduling environments is that the JSC is a very helpful tool for troubleshooting and for new users who do not know much about job scheduling, Tivoli Workload Scheduler, or Tivoli Workload Scheduler for z/OS.

The overall installation and customization process is not complicated and can be narrowed down to the following steps:

1. Design the topology (for example, domain hierarchy or number of domains) for the distributed Tivoli Workload Scheduler network in which Tivoli Workload Scheduler for z/OS will do the workload scheduling. Use the guidelines in 3.5.4, “Network planning and considerations” on page 141 when designing the topology.

2. Install and verify the Tivoli Workload Scheduler for z/OS controller and end-to-end server tasks in the host environment. Installation and verification of Tivoli Workload Scheduler for z/OS end-to-end scheduling is described in 4.2, “Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling” on page 159.

   Note: If you run a previous release of IBM Tivoli Workload Scheduler for z/OS (OPC), you should also migrate from this release to Tivoli Workload Scheduler for z/OS 8.2 as part of the installation. Migration steps are described in the Tivoli Workload Scheduler for z/OS Installation Guide, SH19-4543. Migration is performed with a standard program supplied with Tivoli Workload Scheduler for z/OS.

3. Install and verify the Tivoli Workload Scheduler distributed workstations (fault-tolerant agents). Installation and verification of the Tivoli Workload Scheduler distributed workstations is described in 4.3, “Installing Tivoli Workload Scheduler in an end-to-end environment” on page 207.
   Important: These workstations can be installed and configured before the Tivoli Workload Scheduler for z/OS components, but it will not be possible to test the connections before the mainframe components are installed and ready.

4. Define and activate fault-tolerant workstations (FTWs) in the Tivoli Workload Scheduler for z/OS controller:
   – Define FTWs in the Tivoli Workload Scheduler for z/OS database.
   – Activate the FTW definitions by running the plan extend or replan batch job.
   – Verify that the workstations are active and linked.
   This is described in 4.4, “Define, activate, verify fault-tolerant workstations” on page 211.

5. Create fault-tolerant workstation jobs and job streams for the jobs to be executed on the FTWs, using either centralized script, non-centralized script, or a combination. This is described in 4.5, “Creating fault-tolerant workstation job definitions and job streams” on page 217.

6. Do a verification test of Tivoli Workload Scheduler for z/OS end-to-end scheduling. The verification test is used to verify that the Tivoli Workload Scheduler for z/OS controller can schedule and track jobs on the FTWs. The verification test should also confirm that it is possible to browse the job log for completed jobs run on the FTWs. This is described in 4.6, “Verification test of end-to-end scheduling” on page 235.

If you would like to use the Job Scheduling Console to work with Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler, or both, you should also activate support for the JSC. The necessary installation steps for activating support for the JSC are described in 4.7, “Activate support for the Tivoli Workload Scheduler Job Scheduling Console” on page 245.

4.2 Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling

In this section, we guide you through the installation process of Tivoli Workload Scheduler for z/OS, especially the end-to-end feature. We do not duplicate the
entire installation of the base product, which is described in the IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264.

To activate support for end-to-end scheduling in Tivoli Workload Scheduler for z/OS, so that you can schedule jobs on the Tivoli Workload Scheduler FTAs, follow these steps:

1. Run EQQJOBS and specify Y for the end-to-end feature. See 4.2.1, “Executing EQQJOBS installation aid” on page 162.

2. Define controller (engine) and tracker (agent) subsystems in SYS1.PARMLIB. See 4.2.2, “Defining Tivoli Workload Scheduler for z/OS subsystems” on page 167.

3. Allocate the end-to-end data sets by running the EQQPCS06 sample generated by EQQJOBS. See 4.2.3, “Allocate end-to-end data sets” on page 168.

4. Create and customize the work directory by running the EQQPCS05 sample generated by EQQJOBS. See 4.2.4, “Create and customize the work directory” on page 170.

5. Create started task procedures for Tivoli Workload Scheduler for z/OS. See 4.2.5, “Create started task procedures for Tivoli Workload Scheduler for z/OS” on page 173.

6. Define the workstation (CPU) configuration and domain organization by using the CPUREC and DOMREC statements in a new PARMLIB member. (The default member name is TPLGINFO.) See 4.2.6, “Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling” on page 174, “DOMREC statement” on page 185, “CPUREC statement” on page 187, and Figure 4-6 on page 176.

7. Define Windows user IDs and passwords by using the USRREC statement in a new PARMLIB member. (The default member name is USRINFO.) Remember that you have to define Windows user IDs and passwords only if you have fault-tolerant agents on Windows-supported platforms and want to schedule jobs to be run on these Windows platforms. See “USRREC statement” on page 195.
8. Define the end-to-end configuration by using the TOPOLOGY statement in a new PARMLIB member. (The default member name is TPLGPARM.) The TOPOLOGY statement is described in “TOPOLOGY statement” on page 178. In the TOPOLOGY statement, specify the following:
   – For the TPLGYMEM keyword, write the name of the member used in step 6. (See Figure 4-6 on page 176.)
   – For the USRMEM keyword, write the name of the member used in step 7 on page 160. (See Figure 4-6 on page 176.)

9. Add the TPLGYSRV keyword to the OPCOPTS statement in the Tivoli Workload Scheduler for z/OS controller to specify the server name that will be used for end-to-end communication. See “OPCOPTS TPLGYSRV(server_name)” on page 176.

10. Add the TPLGYPRM keyword to the SERVOPTS statement in the Tivoli Workload Scheduler for z/OS end-to-end server to specify the member name used in step 8 on page 161. This step activates end-to-end communication in the end-to-end server started task. See “SERVOPTS TPLGYPRM(member name/TPLGPARM)” on page 177.

11. Add the TPLGYPRM keyword to the BATCHOPT statement to specify the member name used in step 8 on page 161. This step activates the end-to-end feature in the plan extend, plan replan, and Symphony renew batch jobs. See “TPLGYPRM(member name/TPLGPARM) in BATCHOPT” on page 177.

12. Optionally, customize the way the job name is generated in the Symphony file by the Tivoli Workload Scheduler for z/OS plan extend, replan, and Symphony renew batch jobs. The job name in the Symphony file can be tailored with the JTOPTS TWSJOBNAME() parameter. See 4.2.9, “The JTOPTS TWSJOBNAME() parameter” on page 200 for more information. If you decide to customize the job name layout in the Symphony file, be aware that this can require reallocating the EQQTWSOU data set with a larger record length. See “End-to-end input and output data sets” on page 168 for more information.

   Note: The JTOPTS TWSJOBNAME() parameter was introduced by APAR PQ77970.
13. Verify that the Tivoli Workload Scheduler for z/OS controller and server started tasks can be started (or restarted if already running) and that everything comes up correctly. Verification is described in 4.2.10, “Verify end-to-end installation in Tivoli Workload Scheduler for z/OS” on page 203.

4.2.1 Executing EQQJOBS installation aid

EQQJOBS is a CLIST-driven ISPF dialog that can help you install Tivoli Workload Scheduler for z/OS. EQQJOBS assists in the installation of the engine and agent by building batch-job JCL that is tailored to your requirements. To make EQQJOBS executable, allocate these libraries to the DD statements in your TSO session:

- SEQQCLIB to SYSPROC
- SEQQPNL0 to ISPPLIB
- SEQQSKL0 and SEQQSAMP to ISPSLIB

Use the EQQJOBS installation aid as follows:

1. To invoke EQQJOBS, enter the TSO command EQQJOBS from an ISPF environment. The primary panel shown in Figure 4-1 appears.

EQQJOBS0 ------------ EQQJOBS application menu --------------
Select option ===>

1 - Create sample job JCL
2 - Generate OPC batch-job skeletons
3 - Generate OPC Data Store samples
X - Exit from the EQQJOBS dialog

Figure 4-1 EQQJOBS primary panel

You need to select only options 1 and 2 for end-to-end specifications. We do not step through the whole EQQJOBS dialog; instead, we show only the panels related to end-to-end scheduling. (The referenced panel names are indicated in the top-left corner of the panels, as shown in Figure 4-1.)

2. Select option 1 in panel EQQJOBS0 (and press Enter twice), and make the necessary input in panel EQQJOBS8. (See Figure 4-2 on page 163.)
EQQJOBS8---------------------------- Create sample job JCL --------------------
Command ===>

END TO END FEATURE: Y (Y= Yes ,N= No)
Installation Directory ===> /usr/lpp/TWS/V8R2M0_____________________
                       ===> ________________________________________
                       ===> ________________________________________
Work Directory         ===> /var/inst/TWS___________________________
                       ===> ________________________________________
                       ===> ________________________________________
User for OPC address space ===> UID ___
Refresh CP group           ===> GID __

RESTART AND CLEANUP (DATA STORE) N (Y= Yes ,N= No)
Reserved destination   ===> OPC_____
Connection type        ===> SNA (SNA/XCF)
SNA Data Store luname  ===> ________ (only for SNA connection )
SNA FN task luname     ===> ________ (only for SNA connection )
Xcf Group              ===> ________ (only for XCF connection )
Xcf Data store member  ===> ________ (only for XCF connection )
Xcf FL task member     ===> ________ (only for XCF connection )

Press ENTER to create sample job JCL

Figure 4-2 Server-related input panel

The following definitions are important:

- END-TO-END FEATURE
  Specify Y if you want to install end-to-end scheduling and run jobs on Tivoli Workload Scheduler fault-tolerant agents.

- Installation Directory
  Specify the (HFS) path where SMP/E has installed the Tivoli Workload Scheduler for z/OS files for UNIX System Services that apply to the end-to-end enabler feature. This is the directory containing the bin directory. The default path is /usr/lpp/TWS/V8R2M0. The installation directory is created by the SMP/E job EQQISMKD and populated by applying the end-to-end feature (JWSZ103). It should be mounted read-only on every system in your sysplex.

- Work Directory
  Specify where the subsystem-specific files are. Use a name that uniquely identifies your subsystem. Each subsystem that will use the fault-tolerant workstations must have its own work directory. Only the server and the daily planning batch jobs update the work directory.
  This directory is where the end-to-end processes have their working files (Symphony, event files, traces). It should be mounted read/write on every system in your sysplex.

  Important: To configure end-to-end scheduling in a sysplex environment successfully, make sure that the work directory is available to all systems in the sysplex. This way, in a takeover situation, the new server can be started on another system in the sysplex and still access the work directory to continue processing.

  As described in Section 3.4.4, “Hierarchical File System (HFS) cluster” on page 124, we recommend having dedicated HFS clusters for each end-to-end scheduling environment (end-to-end server started task), that is:
  – One HFS cluster for the installation binaries per environment (test, production, and so forth)
  – One HFS cluster for the work files per environment (test, production, and so forth)

  The work HFS clusters should be mounted in read/write mode, and the HFS cluster with the binaries should be mounted read-only. This is because the work directory is application-specific and contains application-related data. Besides, it makes your backup easier. The size of the cluster depends on the size of the Symphony file and on how long you want to keep the stdlist files. We recommend that you allocate 2 GB of space.

- User for OPC address space
  This information is used to create the EQQPCS05 sample job, which builds the directory with the right ownership. In order to run the end-to-end feature correctly, the ownership of the work directory and the files contained in it must be assigned to the same user ID that RACF associates with the server started task. In the User for OPC address space field, specify the RACF user ID used for the server address space. This is the name specified in the started-procedure table.

- Refresh CP group
  This information is also used to create the EQQPCS05 sample job. In order to create the new Symphony file, the user ID that is used to run the daily planning batch jobs must belong to the group that you specify in this field. Make sure that the user ID that is associated with the server and controller address spaces (the one specified in the User for OPC address space field) belongs to this group or has this group as a supplementary group.
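As an illustrative sketch only, RACF definitions along the following lines would satisfy these requirements. All names and numeric IDs here are examples (they are not product defaults), and your security administrator may use different conventions.

   /* Group for users that run the daily planning batch jobs */
   ADDGROUP TWSGRP OMVS(GID(9999))
   /* User for the end-to-end server started task; owns the work directory */
   ADDUSER TWSCE2E DFLTGRP(TWSGRP) OMVS(UID(3112) HOME('/var/inst/TWS') PROGRAM('/bin/sh'))
   /* Connect a planning batch user to the Refresh CP group */
   CONNECT BATCHUSR GROUP(TWSGRP)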
As you can see in Figure 4-3 on page 165, we defined RACF user ID TWSCE2E for the end-to-end server started task. User TWSCE2E belongs to RACF group TWSGRP. Therefore, all users of the RACF group TWSGRP and its supplementary group get access to create the Symphony file and to modify and read other files in the work directory.

Tip: The Refresh CP group field can be used to give access to the HFS files as well as to protect the HFS directory from unauthorized access.

Figure 4-3 Description of the input fields in the EQQJOBS8 panel

Figure 4-3 annotates the EQQJOBS8 panel fields and the EQQPCS05 sample JCL that EQQJOBS generates from them:

- HFS Installation Directory: Where the Tivoli Workload Scheduler binaries that run in USS (for example, translator, mailman, and batchman) were installed. This should be the same as the value of the TOPOLOGY BINDIR parameter.
- HFS Work Directory: Where the Tivoli Workload Scheduler files that change throughout the day (for example, Symphony, mailbox files, and logs for the processes that run in USS) will reside. This should be the same as the value of the TOPOLOGY WRKDIR parameter.
- User for OPC Address Space: The user associated with the end-to-end server started task (E2ESERV in the figure).
- Refresh CP Group: The group containing all users who run batch planning jobs (CP extend, replan, refresh, and Symphony renew); TWSGRP in the figure.

The generated EQQPCS05 sample JCL looks like this:

//TWS      JOB ,TWS INSTALL,CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1)
/*JOBPARM SYSAFF=SC64
//JOBLIB   DD DSN=TWS.V8R2M0.SEQQLMD0,DISP=SHR
//ALLOHFS  EXEC PGM=BPXBATCH,REGION=4M
//STDOUT   DD PATH=/tmp/eqqpcs05out,
//            PATHOPTS=(OCREAT,OTRUNC,OWRONLY),PATHMODE=SIRWXU
//STDIN    DD PATH=/usr/lpp/TWS/V8R2M0/bin/config,
//            PATHOPTS=(ORDONLY)
//STDENV   DD *
eqqBINDIR=/usr/lpp/TWS/V8R2M0
eqqWRKDIR=/var/inst/TWS
eqqUID=E2ESERV
eqqGID=TWSGRP
/*
//OUTPUT1  EXEC PGM=IKJEFT01
//STDOUT   DD SYSOUT=*,DCB=(RECFM=V,LRECL=256)
//OUTPUT   DD PATH=/tmp/eqqpcs05out,
//            PATHOPTS=ORDONLY
//SYSTSPRT DD DUMMY
//SYSTSIN  DD *
OCOPY INDD(OUTPUT) OUTDD(STDOUT)
BPXBATCH SH rm /tmp/eqqpcs05out
/*

3. Press Enter to generate the installation job control language (JCL) jobs. Table 4-1 lists the subset of the sample JCL members created by EQQJOBS that relate to end-to-end scheduling.
Table 4-1 Sample JCL members related to end-to-end scheduling (created by EQQJOBS)

 Member     Description
 EQQCON     Sample started task procedure for a Tivoli Workload Scheduler for z/OS controller and tracker in the same address space.
 EQQCONO    Sample started task procedure for the Tivoli Workload Scheduler for z/OS controller only.
 EQQCONP    Sample initial parameters for a Tivoli Workload Scheduler for z/OS controller and tracker in the same address space.
 EQQCONOP   Sample initial parameters for a Tivoli Workload Scheduler for z/OS controller only.
 EQQPCS05   Creates the work directory in HFS used by the end-to-end server task.
 EQQPCS06   Allocates the data sets necessary to run end-to-end scheduling.
 EQQSER     Sample started task procedure for a server task.
 EQQSERV    Sample initialization parameters for a server task.

4. EQQJOBS is also used to create batch-job skeletons: that is, skeletons for the batch jobs (such as plan extend, replan, and Symphony renew) that you can submit from the Tivoli Workload Scheduler for z/OS legacy ISPF panels. To create batch-job skeletons, select option 2 in the EQQJOBS primary panel (see Figure 4-1 on page 162). Make the necessary entries until panel EQQJOBSA appears (Figure 4-4).
EQQJOBSA -------------- Generate OPC batch-job skeletons ----------------------
Command ===>

Specify if you want to use the following optional features:

END TO END FEATURE: Y (Y= Yes ,N= No)
(To interoperate with TWS fault tolerant workstations)

RESTART AND CLEAN UP (DATA STORE): N (Y= Yes ,N= No)
(To be able to retrieve job log, execute dataset clean up actions and step restart)

FORMATTED REPORT OF TRACKLOG EVENTS: Y (Y= Yes ,N= No)
EQQTROUT dsname     ===> TWS.V8R20.*.TRACKLOG____________________________
EQQAUDIT output dsn ===> TWS.V8R20.*.EQQAUDIT.REPORT_____________________

Press ENTER to generate OPC batch-job skeletons

Figure 4-4 Generate end-to-end skeletons

5. Specify Y for the END TO END FEATURE if you want to use end-to-end scheduling to run jobs on Tivoli Workload Scheduler fault-tolerant workstations.

6. Press Enter, and the skeleton members for daily plan extend, replan, and trial plan, and for long-term plan extend, replan, and trial plan are created with the data sets related to end-to-end scheduling. Also, a new member is created. (See Table 4-2 on page 167.)

Table 4-2 End-to-end skeletons

 Member     Description
 EQQSYRES   Tivoli Workload Scheduler Symphony renew

4.2.2 Defining Tivoli Workload Scheduler for z/OS subsystems

The subsystems for the Tivoli Workload Scheduler for z/OS controllers (engines) and the trackers (agents) on the z/OS images must be defined in the active subsystem-name-table member of SYS1.PARMLIB. It is advisable to install at least two Tivoli Workload Scheduler for z/OS controlling systems, one for testing and one for your production environment.

Note: We recommend that you install the trackers (agents) and the Tivoli Workload Scheduler for z/OS controller (engine) in separate address spaces.
To define the subsystems, update the active IEFSSNnn member in SYS1.PARMLIB. The name of the subsystem initialization module for Tivoli Workload Scheduler for z/OS is EQQINITF. Include records as in the following example.

Example 4-1 Subsystem definition record (IEFSSNnn member of SYS1.PARMLIB)

SUBSYS SUBNAME(subsystem name)         /* TWS for z/OS subsystem */
       INITRTN(EQQINITF)
       INITPARM(maxecsa,F)

Note that the subsystem name must be two to four characters: for example, TWSC for the controller subsystem and TWST for the tracker subsystems. Check the IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264, for more information.

4.2.3 Allocate end-to-end data sets

Member EQQPCS06, created by EQQJOBS in your sample job JCL library, allocates the following VSAM and sequential data sets needed for end-to-end scheduling:

- End-to-end script library (EQQSCLIB) for non-centralized script
- End-to-end input and output events data sets (EQQTWSIN and EQQTWSOU)
- Current plan backup copy data set used to create the Symphony file (EQQSCPDS)
- End-to-end centralized script data library (EQQTWSCS)

We explain the use and allocation of these data sets in more detail below.

End-to-end script library (EQQSCLIB)

This script library data set includes members containing the commands or the job definitions for fault-tolerant workstations. It is required in the controller if you want to use the end-to-end scheduling feature. See Section 4.5.3, “Definition of non-centralized scripts” on page 221 for details about the JOBREC, RECOVERY, and VARSUB statements.

Tip: Do not compress members in this PDS. For example, do not use the ISPF PACK ON command, because Tivoli Workload Scheduler for z/OS does not use ISPF services to read it.

End-to-end input and output data sets (EQQTWSIN and EQQTWSOU)

These data sets are required by every Tivoli Workload Scheduler for z/OS address space that uses the end-to-end feature. They record the descriptions of
events related to operations running on FTWs and are used by both the end-to-end enabler task and the translator process in the scheduler's server.

The data sets are device-dependent and can have only primary space allocation. Do not allocate any secondary space. They are automatically formatted by Tivoli Workload Scheduler for z/OS the first time they are used.

Note: An SD37 abend code is produced when Tivoli Workload Scheduler for z/OS formats a newly allocated data set. Ignore this error.

EQQTWSIN and EQQTWSOU are wrap-around data sets. In each data set, the header record is used to track the number of records read and written. To avoid the loss of event records, a writer task does not write any new records until more space is available, that is, until the existing records have been read.

The quantity of space that you need to define for each data set requires some attention. Because the two data sets are also used for job log retrieval, the limit for the job log length is half the maximum number of records that can be stored in the input events data set. Two cylinders are sufficient for most installations.

The maximum length of the events logged in these two data sets, including the job logs, is 120 bytes. It is, however, possible to allocate the data sets with a longer logical record length. Using record lengths greater than 120 bytes produces neither advantages nor problems. The maximum allowed value is 32000 bytes; greater values cause the end-to-end server started task to terminate.

In both data sets there must be enough space for at least 1000 events. (The maximum number of job log events is 500.) Use this as a reference if you plan to define a record length greater than 120 bytes. When the record length of 120 bytes is used, the space allocation must be at least 1 cylinder. The data sets must be unblocked, and the block size must be the same as the logical record length.

A minimum record length of 160 bytes is necessary for the EQQTWSOU data set in order to be able to choose how to build the job name in the Symphony file. (Refer to the TWSJOBNAME parameter in the JTOPTS statement in Section 4.2.9, “The JTOPTS TWSJOBNAME() parameter” on page 200.)

For good performance, define the data sets on a device with plenty of availability. If you run programs that use the RESERVE macro, try to allocate the data sets on a device that is not, or is only slightly, reserved.

Initially, you may need to test your system to get an idea of the number and types of events that are created at your installation. After you have gathered enough information, you can reallocate the data sets. Before you reallocate a data set, ensure that the current plan is entirely up to date. You must also stop the
end-to-end sender and receiver tasks on the controller and the translator thread on the server that use this data set.

Tip: Do not move these data sets after they have been allocated. They contain device-dependent information and cannot be copied from one type of device to another, or moved around on the same volume. An end-to-end event data set that is moved will be re-initialized, which causes all events in the data set to be lost. If you have DFHSM or a similar product installed, you should specify that end-to-end event data sets are not to be migrated or moved.

Current plan backup copy data set (EQQSCPDS)

EQQSCPDS is the current plan backup copy data set that is used to create the Symphony file. During the creation of the current plan, the SCP data set is used as a CP backup copy for the production of the Symphony file. This VSAM data set is used when the end-to-end feature is active. It should be allocated with the same size as the CP1/CP2 and NCP VSAM data sets.

End-to-end centralized script data set (EQQTWSCS)

Tivoli Workload Scheduler for z/OS uses the end-to-end centralized script data set to temporarily store a script when it is downloaded from the JOBLIB data set to the agent for submission. Set the following attributes for EQQTWSCS:

DSNTYPE=LIBRARY,
SPACE=(CYL,(1,1,10)),
DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)

If you want to use centralized script support when scheduling end-to-end, use the EQQTWSCS DD statement in the controller and server started tasks. The data set must be a partitioned data set extended (PDSE).

4.2.4 Create and customize the work directory

To install the end-to-end feature, you must allocate the files that the feature uses. Then, on every Tivoli Workload Scheduler for z/OS controller that will use this feature, run the EQQPCS05 sample to create the directories and files. The EQQPCS05 sample must be run by a user with one of the following permissions:

- UNIX System Services (USS) user ID (UID) equal to 0
- BPX.SUPERUSER FACILITY class profile in RACF
- UID specified in the JCL in eqqUID and belonging to the group (GID) specified in the JCL in eqqGID

If the GID or the UID was not specified in EQQJOBS, you can specify them in the STDENV DD before running EQQPCS05.

The EQQPCS05 job runs a configuration script (named config) residing in the installation directory. This configuration script creates a work directory with the right permissions. It also creates several files and directories in this work directory. (See Figure 4-5 on page 171.)

Figure 4-5 EQQPCS05 sample JCL and the configure script

(The figure shows the EQQPCS05 job running the config script from the installation directory (BINDIR) against the work directory (WRKDIR), and the resulting USS file listing:)

Permissions  Owner    Group   Size  Date    Time   File name
-rw-rw----   E2ESERV  TWSGRP   755  Feb  3  13:01  NetConf
-rw-rw----   E2ESERV  TWSGRP  1122  Feb  3  13:01  TWSCCLog.properties
-rw-rw----   E2ESERV  TWSGRP  2746  Feb  3  13:01  localopts
drwxrwx---   E2ESERV  TWSGRP  8192  Feb  3  13:01  mozart
drwxrwx---   E2ESERV  TWSGRP  8192  Feb  3  13:01  pobox
drwxrwxr-x   E2ESERV  TWSGRP  8192  Feb 11  09:48  stdlist

The config script creates subdirectories; copies configuration files; and sets the owner, group, and permissions of these directories and files. This last step is the reason that EQQPCS05 must be run by a user with sufficient privileges.

After running EQQPCS05, you can find the following files in the work directory:

localopts
   Defines the attributes of the local workstation (OPCMASTER) for the batchman, mailman, netman, and writer processes and for SSL. Only a subset of these attributes is used by the end-to-end server on z/OS. Refer to IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265, for information about customizing this file.
mozart/globalopts
   Defines the attributes of the IBM Tivoli Workload Scheduler network (OPCMASTER ignores them).

NetConf
   Netman configuration file.

TWSCCLog.properties
   Defines attributes for the trace function in the end-to-end server USS processes.

You will also find the following directories in the work directory:

- mozart
- pobox
- stdlist
- stdlist/logs (contains the log files for the USS processes)

Do not touch or delete any of these files or directories, which are created in the work directory by the EQQPCS05 job, unless you are directed to do so, for example in error situations.

Tip: If you execute this job in a sysplex that cannot share the HFS (prior to OS/390 V2R9) and get messages like “cannot create directory”, take a closer look at which machine the job really ran on. Without system affinity, any member that has an initiator started in the right class can execute the job, so you must add a /*JOBPARM SYSAFF statement to make sure that the job runs on the system where the work HFS is mounted.

Note that the EQQPCS05 job does not define the physical HFS (or z/OS) data set. EQQPCS05 initializes an existing HFS data set with the necessary files and directories for the end-to-end server started task. The physical HFS data set can be created with a job that contains an IEFBR14 step, as shown in Example 4-2.

Example 4-2 HFS data set creation

//USERHFS EXEC PGM=IEFBR14
//D1      DD DISP=(,CATLG),DSNTYPE=HFS,
//           SPACE=(CYL,(prispace,secspace,1)),
//           DSN=OMVS.TWS820.TWSCE2E.HFS

Allocate the HFS work data set with enough space for your end-to-end server started task. In most installations, 2 GB of disk space is enough.
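After the HFS data set has been allocated, it must be mounted at the work directory mount point before EQQPCS05 is run. As a sketch, using the data set name from Example 4-2 and the work directory used earlier in this chapter, the TSO MOUNT command would look like the following; to make the mount persistent across IPLs, a corresponding MOUNT statement would go in the BPXPRMxx parmlib member.

   MOUNT FILESYSTEM('OMVS.TWS820.TWSCE2E.HFS') +
         MOUNTPOINT('/var/inst/TWS') +
         TYPE(HFS) MODE(RDWR)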
4.2.5 Create started task procedures for Tivoli Workload Scheduler for z/OS

Perform this task for the Tivoli Workload Scheduler for z/OS tracker (agent), controller (engine), and server started tasks. You must define a started task procedure or batch job for each Tivoli Workload Scheduler for z/OS address space.

The EQQJOBS dialog generates several members in the output sample library that you specified when running the EQQJOBS installation aid. These members contain started task JCL that is tailored with the values you entered in the EQQJOBS dialog. Tailor these members further, according to the data sets you require. (See Table 4-1 on page 166.)

Because the end-to-end server started task uses TCP/IP communication, you have to modify the JCL of EQQSER in the following way:

- Make sure that the end-to-end server started task has access to the C runtime libraries, either as STEPLIB (include CEE.SCEERUN in the STEPLIB concatenation) or by LINKLIST (CEE.SCEERUN is in the LINKLIST concatenation).

- If you have multiple TCP/IP stacks, or if the name you used for the procedure that starts the TCPIP address space is not the default (TCPIP), change the end-to-end server started task procedure to include a SYSTCPD DD card pointing to a data set containing the TCPIPJOBNAME parameter. The standard method to determine the connecting TCP/IP image is:
  – Connect to the TCP/IP specified by TCPIPJOBNAME in the active TCPIP.DATA.
  – Locate TCPIP.DATA using the SYSTCPD DD card.

You can also use the end-to-end server TOPOLOGY TCPIPJOBNAME() parameter to specify the TCP/IP started task name that is used by the end-to-end server. This parameter can be used if you have multiple TCP/IP stacks or if the TCP/IP started task name is different from TCPIP.

You must have a server started task to handle end-to-end scheduling. You can also use the same server to communicate with the Job Scheduling Console; in fact, the server can handle APPC communication as well, if configured to do so. In Tivoli Workload Scheduler for z/OS 8.2, the type of communication that a server started task should handle is defined in the new SERVOPTS PROTOCOL() parameter.
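The following fragment sketches what these EQQSER modifications might look like. It is an illustration only: the load library and TCP/IP parameter data set names are examples, not product defaults, and the rest of the EQQSER sample is unchanged.

   //STEPLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQLMD0
   //         DD DISP=SHR,DSN=CEE.SCEERUN             C runtime libraries
   //SYSTCPD  DD DISP=SHR,DSN=SYS1.TCPPARMS(TCPDATA)  contains TCPIPJOBNAME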
In the PROTOCOL() parameter, you can specify any combination of:
APPC: The server should handle APPC communication.
JSC: The server should handle JSC communication.
E2E: The server should handle end-to-end communication.

Recommendations: The Tivoli Workload Scheduler for z/OS controller and end-to-end server use TCP/IP services. It is therefore necessary to define a USS segment for the controller and end-to-end server started task user IDs. No special authorization is necessary; the user IDs only have to be defined in USS (any user ID will do).

Even though it is possible to have one server started task handle end-to-end scheduling, JSC communication, and even APPC communication as well, we recommend having a server started task dedicated to end-to-end scheduling (SERVOPTS PROTOCOL(E2E)). This has the advantage that you do not have to stop the end-to-end scheduling processes if the JSC server must be restarted.

The server started task is important for handling JSC and end-to-end communication. We recommend making the end-to-end and JSC server started tasks non-swappable and giving them at least the same dispatching priority as the Tivoli Workload Scheduler for z/OS controller (engine).

The Tivoli Workload Scheduler for z/OS controller uses the end-to-end server to communicate events to the FTAs. The end-to-end server starts multiple tasks and processes using UNIX System Services.

4.2.6 Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling

Initialization statements for end-to-end scheduling fit into two categories:
1. Statements used to configure the Tivoli Workload Scheduler for z/OS controller (engine) and end-to-end server:
   a. OPCOPTS and TPLGYPRM statements for the controller
   b. SERVOPTS statement for the end-to-end server
2. Statements used to define the end-to-end topology (the network topology for the distributed Tivoli Workload Scheduler network). The end-to-end topology statements fall into two categories:
   a. Topology statements used to initialize the end-to-end server environment in USS on the mainframe:
      • The TOPOLOGY statement
   b. Statements used to describe the distributed Tivoli Workload Scheduler network and the responsibilities of the different Tivoli Workload Scheduler agents in this network:
      • The DOMREC, CPUREC, and USRREC statements

These statements are used by the end-to-end server and the plan extend, plan replan, and Symphony renew batch jobs. The batch jobs use the information when the Symphony file is created. See “Initialization statements used to describe the topology” on page 184.

We go through each initialization statement in detail and give you an example of how a distributed Tivoli Workload Scheduler network can be reflected in Tivoli Workload Scheduler for z/OS using the topology statements.

Table 4-3 Initialization members related to end-to-end scheduling

Initialization member   Description
TPLGYSRV                Activates end-to-end in the Tivoli Workload Scheduler for z/OS controller.
TPLGYPRM                Activates end-to-end in the Tivoli Workload Scheduler for z/OS server and batch jobs (plan jobs).
TOPOLOGY                Specifies all the statements for end-to-end.
DOMREC                  Defines domains in a distributed Tivoli Workload Scheduler network.
CPUREC                  Defines agents in a Tivoli Workload Scheduler distributed network.
USRREC                  Specifies user ID and password for Windows NT users.

You can find more information in Tivoli Workload Scheduler for z/OS Customization and Tuning, SH19-4544.

Figure 4-6 on page 176 illustrates the relationship between the initialization statements and members related to end-to-end scheduling.
[Figure 4-6 here: a diagram of how the initialization statements and members reference one another. The controller (TWSC) OPCOPTS statement names the end-to-end server with TPLGYSRV(TWSCE2E) and lists the servers to start with SERVERS(TWSCJSC,TWSCE2E). The daily planning batch jobs (CPE, LTPE, and so on) reference the topology parameters through BATCHOPT TPLGYPRM(TPLGPARM). The JSC server TWSCJSC uses SERVOPTS with PROTOCOL(JSC), CODEPAGE(500), JSCHOSTNAME(TWSCJSC), PORTNUMBER(42581), and USERMAP(USERMAP); the user map member EQQPARM(USERMAP) maps, for example, USER 'ROOT@M-REGION' to RACFUSER(TMF) and RACFGROUP(TIVOLI). The end-to-end server TWSCE2E uses SERVOPTS with SUBSYS(TWSC), PROTOCOL(E2E), and TPLGYPRM(TPLGPARM). The TOPOLOGY statement in EQQPARM(TPLGPARM) specifies BINDIR(/tws), WRKDIR(/tws/wrkdir), HOSTNAME(TWSC.IBM.COM), PORTNUMBER(31182), TPLGYMEM(TPLGINFO), USRMEM(USERINFO), TRCDAYS(30), and LOGLINES(100); EQQPARM(TPLGINFO) holds the DOMREC and CPUREC records, and EQQPARM(USRINFO) holds the USRREC records.]

Note: It is possible to run many servers, but only one server can be the end-to-end server (also called the topology server). Specify this server using the TPLGYSRV controller option. The SERVERS option specifies the servers that will be started when the controller starts.

Note: If you plan to use the Job Scheduling Console to work with OPC, it is a good idea to run two separate servers: one for JSC connections and another for the connection with the Tivoli Workload Scheduler network.

Figure 4-6 Relationship between end-to-end initialization statements and members

In the following sections, we cover the different initialization statements and members and describe their meaning and usage one by one. Refer to Figure 4-6 when reading these sections.

OPCOPTS TPLGYSRV(server_name)

Specify this keyword to activate the end-to-end feature in the Tivoli Workload Scheduler for z/OS (OPC) controller (engine). If you specify this keyword, the IBM Tivoli Workload Scheduler Enabler task is started. The specified server_name is that of the end-to-end server that handles the events to and from the FTAs. Only one server can handle events to and from the FTAs. This keyword is defined in OPCOPTS.
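For example, a minimal controller parmlib fragment activating end-to-end scheduling (the server names are the ones used in Figure 4-6; adapt them to your installation) could look like this:

OPCOPTS TPLGYSRV(TWSCE2E)        /* end-to-end (topology) server      */
        SERVERS(TWSCJSC,TWSCE2E) /* servers started by the controller */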
Tip: If you want to let the Tivoli Workload Scheduler for z/OS controller start and stop the end-to-end server, use the SERVERS keyword in the OPCOPTS parmlib member (see Figure 4-6 on page 176).

SERVOPTS TPLGYPRM(member name/TPLGPARM)

The SERVOPTS statement is the first statement read by the end-to-end server started task. In the SERVOPTS, you specify different initialization options for the server started task, such as:

The name of the Tivoli Workload Scheduler for z/OS controller that the server should communicate with (serve). The name is specified with the SUBSYS() keyword.

The type of protocol. The PROTOCOL() keyword is used to specify the type of communication used by the server. In Tivoli Workload Scheduler for z/OS 8.2, you can specify any combination of the following values, separated by commas: E2E, JSC, APPC.

Note: With Tivoli Workload Scheduler for z/OS 8.2, the TCPIP value has been replaced by the combination of the E2E and JSC values, but the TCPIP value is still allowed for backward compatibility.

The TPLGYPRM() parameter is used to define the name of the member in parmlib that contains the TOPOLOGY definitions for the distributed Tivoli Workload Scheduler network. The TPLGYPRM() parameter must be specified if PROTOCOL(E2E) is specified.

See Figure 4-6 on page 176 for an example of the required SERVOPTS parameters for an end-to-end server (TWSCE2E in Figure 4-6 on page 176).

TPLGYPRM(member name/TPLGPARM) in BATCHOPT

It is important to remember to add the TPLGYPRM() parameter to the BATCHOPT initialization statement that is used by the Tivoli Workload Scheduler for z/OS planning jobs (trial plan extend, plan extend, plan replan) and Symphony renew.

If the TPLGYPRM() parameter is not specified in the BATCHOPT initialization statement that is used by the plan jobs, no Symphony file will be created and no jobs will run in the distributed Tivoli Workload Scheduler network.

See Figure 4-6 on page 176 for an example of how to specify the TPLGYPRM() parameter in the BATCHOPT initialization statement.
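Putting this together, here is a sketch of the two statements; the subsystem and member names are the examples used in this chapter, not requirements:

SERVOPTS SUBSYS(TWSC)        /* controller that this server serves   */
         PROTOCOL(E2E)       /* dedicated end-to-end server          */
         TPLGYPRM(TPLGPARM)  /* member containing the TOPOLOGY stmt  */

BATCHOPT ...
         TPLGYPRM(TPLGPARM)  /* same member, used by the plan jobs   */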
Note: The topology definitions in the member referenced by TPLGYPRM() in the BATCHOPT initialization statement are read and verified by the trial plan extend job in Tivoli Workload Scheduler for z/OS. This means that the trial plan extend job can be used to verify the TOPOLOGY definitions, such as DOMREC, CPUREC, and USRREC, for syntax errors or logical errors before the plan extend or plan replan job is executed.

Also note that the trial plan extend job does not create a new Symphony file because it does not update the current plan in Tivoli Workload Scheduler for z/OS.

TOPOLOGY statement

This statement includes all of the parameters that are related to the end-to-end feature. TOPOLOGY is defined in the member of the EQQPARM library that is specified by the TPLGYPRM parameter in the BATCHOPT and SERVOPTS statements. Figure 4-7 on page 179 shows the statements that can be specified in the topology member.
Figure 4-7 The statements that can be specified in the topology member

Description of the topology statements

The topology parameters are described in the following sections.

BINDIR(directory name)

Specifies the name of the base file system (HFS or zFS) directory where binaries, catalogs, and other files are installed and shared among subsystems.

The specified directory must be the same as the directory where the binaries are, without the final bin. For example, if the binaries are installed in /usr/lpp/TWS/V8R2M0/bin and the catalogs are in
/usr/lpp/TWS/V8R2M0/catalog/C, the directory must be specified in the BINDIR keyword as follows: /usr/lpp/TWS/V8R2M0.

CODEPAGE(host system codepage/IBM-037)

Specifies the name of the host code page; applies to the end-to-end feature. The value is used by the input translator to convert data received from first-level Tivoli Workload Scheduler domain managers from UTF-8 format to EBCDIC format. You can provide the IBM-xxx value, where xxx is the EBCDIC code page. The default value, IBM-037, defines the EBCDIC code page for US English, Portuguese, and Canadian French. For a complete list of available code pages, refer to Tivoli Workload Scheduler for z/OS Customization and Tuning, SH19-4544.

ENABLELISTSECCHK(YES/NO)

This security option controls the ability to list objects in the plan on an FTA using conman and the Job Scheduling Console. Put simply, this option determines whether conman and the Tivoli Workload Scheduler connector programs will check the Tivoli Workload Scheduler Security file before allowing the user to list objects in the plan. If set to YES, objects in the plan are shown to the user only if the user has been granted the list permission in the Security file. If set to NO, all users will be able to list objects in the plan on FTAs, regardless of whether list access is granted in the Security file. The default value is NO. Change the value to YES if you want to check for the list permission in the Security file.

GRANTLOGONASBATCH(YES/NO)

This applies only to jobs running on Windows platforms. If set to YES, the logon users for Windows jobs are automatically granted the right to log on as a batch job. If set to NO or omitted, the right must be granted manually to each user or group. The right cannot be granted automatically for users running jobs on a backup domain controller, so you must grant those rights manually.

HOSTNAME(host name/IP address/local host name)

Specifies the host name or the IP address used by the server in the end-to-end environment. The default is the host name returned by the operating system. If you change the value, you also must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

As described in Section 3.4.6, “TCP/IP considerations for end-to-end server in sysplex” on page 129, you can define a virtual IP address for each server of the active controller and the standby controllers. If you use a dynamic virtual IP address in a sysplex environment, when the active controller fails and the
standby controller takes over the communication, the FTAs automatically switch the communication to the server of the standby controller.

To change the HOSTNAME of a server, perform the following actions:
1. Set the nm ipvalidate keyword to off in the localopts file on the first-level domain managers.
2. Change the HOSTNAME value of the server using the TOPOLOGY statement.
3. Restart the server with the new HOSTNAME value.
4. Renew the Symphony file.
5. If the renewal ends successfully, you can set the ipvalidate to full on the first-level domain managers.

See 3.4.6, “TCP/IP considerations for end-to-end server in sysplex” on page 129 for a description of how to define DVIPA IP addresses.

LOGLINES(number of lines/100)

Specifies the maximum number of lines that the job log retriever returns for a single job log. The default value is 100. In all cases, the job log retriever does not return more than half of the number of records that exist in the input queue.

If the job log retriever does not return all of the job log lines because there are more lines than the LOGLINES() number of lines, a notice similar to this appears in the retrieved job log output:
*** nnn lines have been discarded. Final part of Joblog ... ******
The line specifies the number (nnn) of job log lines not displayed, between the first lines and the last lines of the job log.

NOPTIMEDEPENDENCY(YES/NO)

With this option, you can change the behavior of NOPed operations that are defined on fault-tolerant workstations and have the centralized script option set to N. By default, Tivoli Workload Scheduler for z/OS completes NOPed operations without waiting for the time dependency to be resolved; with this option set to YES, the operation can be completed in the current plan only after the time dependency has been resolved. The default value is NO.

Note: This statement is introduced by APAR PQ84233.

PLANAUDITLEVEL(0/1)

Enables or disables plan auditing for FTAs. Each Tivoli Workload Scheduler workstation maintains its own log. Valid values are 0 to disable plan auditing and
1 to activate plan auditing. Auditing information is logged to a flat file in the TWShome/audit/plan directory. Only actions, not the success or failure of any action, are logged in the auditing file. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

PORTNUMBER(port/31111)

Defines the TCP/IP port number that is used by the server to communicate with the FTAs. This value must be different from the one specified in the SERVOPTS member. The default value is 31111, and accepted values are from 0 to 65535. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

Important: The port number must be unique within a Tivoli Workload Scheduler network.

SSLLEVEL(ON/OFF/ENABLED/FORCE)

Defines the type of SSL authentication for the end-to-end server (OPCMASTER workstation). It must have one of the following values:

ON The server uses SSL authentication only if another workstation requires it.
OFF (default value) The server does not support SSL authentication for its connections.
ENABLED The server uses SSL authentication only if another workstation requires it.
FORCE The server uses SSL authentication for all of its connections. It refuses any incoming connection that is not SSL.

If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

SSLPORT(SSL port number/31113)

Defines the port used to listen for incoming SSL connections on the server. It substitutes the value of nm SSL port in the localopts file, activating SSL support on the server. If SSLLEVEL is specified and SSLPORT is missing, 31113 is used as the default value. If SSLLEVEL is not specified, the default value of this parameter is 0 on the server, which indicates that no SSL authentication is required. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.
TCPIPJOBNAME(TCP/IP started-task name/TCPIP)

Specifies the TCP/IP started-task name used by the server. Set this keyword when you have multiple TCP/IP stacks or a TCP/IP started task with a name different from TCPIP. You can specify a name from one to eight alphanumeric or national characters, where the first character is alphabetic or national.

TPLGYMEM(member name/TPLGINFO)

Specifies the PARMLIB member where the domain (DOMREC) and workstation (CPUREC) definitions specific to end-to-end scheduling are located. The default value is TPLGINFO. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

TRCDAYS(days/14)

Specifies the number of days that the trace files and files in the stdlist directory are kept before being deleted. Every day the USS code creates a new stdlist directory to contain the logs for the day. All log directories that are older than the number of days specified in TRCDAYS() are deleted automatically. The default value is 14. Specify 0 if you do not want the trace files to be deleted.

Recommendation: Monitor the size of your working directory (that is, the size of the HFS cluster with work files) to prevent the HFS cluster from becoming full. The trace files and files in the stdlist directory contain internal logging information and Tivoli Workload Scheduler messages that may be useful for troubleshooting. Consider deleting them at a regular interval using the TRCDAYS() parameter.

USRMEM(member name/USRINFO)

Specifies the PARMLIB member where the user definitions are located. This keyword is optional, except if you are going to schedule jobs on Windows operating systems, in which case it is required. The default value is USRINFO. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

WRKDIR(directory name)

Specifies the location of the working files for an end-to-end server started task. Each Tivoli Workload Scheduler for z/OS end-to-end server must have its own WRKDIR.
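Bringing the keywords described so far together, here is a sketch of a complete TOPOLOGY member; the directory names, host name, and port number are illustrative values for a hypothetical installation, not defaults:

TOPOLOGY BINDIR(/usr/lpp/TWS/V8R2M0)  /* installation binaries          */
         WRKDIR(/tws/wrkdir)          /* work files for this server     */
         HOSTNAME(TWSC.IBM.COM)       /* host name used by the server   */
         PORTNUMBER(31182)            /* port for FTA communication     */
         CODEPAGE(IBM-037)            /* host EBCDIC code page          */
         TPLGYMEM(TPLGINFO)           /* member with DOMREC and CPUREC  */
         USRMEM(USERINFO)             /* member with USRREC             */
         TRCDAYS(30)                  /* keep trace directories 30 days */
         LOGLINES(200)                /* job log lines returned         */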
ENABLESWITCHFT(Y/N)

This new parameter (not shown in Figure 4-7 on page 179) was introduced in FixPack 04 for Tivoli Workload Scheduler and APAR PQ81120 for Tivoli Workload Scheduler for z/OS. It is used to activate the enhanced fault-tolerant mechanism on domain managers. The default is N, meaning that the enhanced fault-tolerant mechanism is not activated. For more information, check the documentation in the FaultTolerantSwitch.README.pdf file delivered with FixPack 04 for Tivoli Workload Scheduler.

4.2.7 Initialization statements used to describe the topology

With the last three parameters listed in Table 4-3 on page 175 (DOMREC, CPUREC, and USRREC), you define the topology of the distributed Tivoli Workload Scheduler network in Tivoli Workload Scheduler for z/OS. The defined topology is used by the plan extend, replan, and Symphony renew batch jobs when creating the Symphony file for the distributed Tivoli Workload Scheduler network.

Figure 4-8 on page 185 shows how the distributed Tivoli Workload Scheduler topology is described using CPUREC and DOMREC initialization statements for the Tivoli Workload Scheduler for z/OS server and plan programs. The Tivoli Workload Scheduler for z/OS fault-tolerant workstations are mapped to physical Tivoli Workload Scheduler agents or workstations using the CPUREC statement. The DOMREC statement is used to describe the domain topology in the distributed Tivoli Workload Scheduler network.

Note that the MASTERDM domain is predefined in Tivoli Workload Scheduler for z/OS; it is not necessary to specify a DOMREC parameter for the MASTERDM domain. Also note that the USRREC parameters are not depicted in Figure 4-8 on page 185.
Figure 4-8 The topology definitions for server and plan programs

In the following sections, we walk through the DOMREC, CPUREC, and USRREC statements.

DOMREC statement

This statement begins a domain definition. You must specify one DOMREC for each domain in the Tivoli Workload Scheduler network, with the exception of the master domain. The domain name used for the master domain is MASTERDM. The master domain consists of the controller, which acts as the master domain manager. The CPU name used for the master domain manager is OPCMASTER.

You must specify at least one domain, a child of MASTERDM, in which the domain manager is a fault-tolerant agent. If you do not define this domain, Tivoli Workload Scheduler for z/OS tries to find a domain definition that can function as a child of the master domain.
[Figure 4-9 shows an EQQPARM(TPLGINFO) member with two DOMREC statements for a network in which the master domain MASTERDM (with OPCMASTER) has two child domains, DomainA (domain manager A000) and DomainB (domain manager B000):]

DOMREC DOMAIN(DOMAINA)
       DOMMNGR(A000)
       DOMPARENT(MASTERDM)
DOMREC DOMAIN(DOMAINB)
       DOMMNGR(B000)
       DOMPARENT(MASTERDM)

OPC does not have a built-in place to store information about TWS domains. Domains and their relationships are defined in DOMRECs; there is no DOMREC for the master domain, MASTERDM. DOMRECs are used to add information about TWS domains to the Symphony file.

Figure 4-9 Example of two DOMREC statements for a network with two domains

DOMREC is defined in the member of the EQQPARM library that is specified by the TPLGYMEM keyword in the TOPOLOGY statement (see Figure 4-6 on page 176 and Figure 4-9). Figure 4-10 illustrates the DOMREC syntax.

Figure 4-10 Syntax for the DOMREC statement

DOMAIN(domain name)

The name of the domain, consisting of up to 16 characters starting with a letter. It can contain dashes and underscores.
  • DOMMNGR(domain manager name)The Tivoli Workload Scheduler workstation name of the domain manager. It mustbe a fault-tolerant agent running in full status mode.DOMPARENT(parent domain)The name of the parent domain.CPUREC statementThis statement begins a Tivoli Workload Scheduler workstation (CPU) definition.You must specify one CPUREC for each workstation in the Tivoli WorkloadScheduler network, with the exception of the controller that acts as masterdomain manager. You must provide a definition for each workstation of TivoliWorkload Scheduler for z/OS that is defined in the database as a Tivoli WorkloadScheduler fault-tolerant workstation.CPUREC is defined in the member of the EQQPARM library that is specified bythe TPLGYMEM keyword in the TOPOLOGY statement (see Figure 4-6 onpage 176 and Figure 4-11 on page 188). Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 187
[Figure 4-11 shows how the fault-tolerant workstations defined in OPC map to CPUREC statements in the EQQPARM(TPLGINFO) member. OPC does not have fields to contain the extra information in a TWS workstation definition, so OPC workstations marked fault tolerant must also have a CPUREC; the workstation name in OPC acts as a pointer to the CPUREC. There is no CPUREC for the master domain manager, OPCMASTER. CPURECs are used to add information about domain managers and FTAs to the Symphony file. Two of the CPURECs from the figure:]

CPUREC CPUNAME(A000)
       CPUOS(AIX)
       CPUNODE(stockholm)
       CPUTCPIP(31281)
       CPUDOMAIN(DOMAINA)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(ON)
       CPURESDEP(ON)
       CPULIMIT(20)
       CPUTZ(ECT)
       CPUUSER(root)
CPUREC CPUNAME(A001)
       CPUOS(WNT)
       CPUNODE(copenhagen)
       CPUDOMAIN(DOMAINA)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPULIMIT(10)
       CPUTZ(ECT)
       CPUUSER(Administrator)
       FIREWALL(Y)
       SSLLEVEL(FORCE)
       SSLPORT(31281)

Valid CPUOS values are AIX, HPUX, POSIX, UNIX, WNT, and OTHER.

Figure 4-11 Example of two CPUREC statements for two workstations

Figure 4-12 on page 189 illustrates the CPUREC syntax.
  • Figure 4-12 Syntax for the CPUREC statementCPUNAME(cpu name)The name of the Tivoli Workload Scheduler workstation, consisting of up to fouralphanumerical characters, starting with a letter. Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 189
CPUOS(operating system)

The host CPU operating system related to the Tivoli Workload Scheduler workstation. The valid entries are AIX, HPUX, POSIX, UNIX, WNT, and OTHER.

CPUNODE(node name)

The node name or the IP address of the CPU. Fully qualified domain names of up to 52 characters are accepted.

CPUTCPIP(port number/31111)

The TCP port number of netman on this CPU. It consists of up to five digits; if omitted, the default value 31111 is used.

CPUDOMAIN(domain name)

The name of the Tivoli Workload Scheduler domain of the CPU.

CPUHOST(cpu name)

The name of the host CPU of the agent. It is required for standard and extended agents. The host is the Tivoli Workload Scheduler CPU with which the standard or extended agent communicates and where its access method resides.

Note: The host cannot be another standard or extended agent.

CPUACCESS(access method)

The name of the access method. It is valid for extended agents and must be the name of a file that resides in the Tivoli Workload Scheduler <home>/methods directory on the host CPU of the agent.

CPUTYPE(SAGENT/XAGENT/FTA)

The CPU type, specified as one of the following:
FTA (default) Fault-tolerant agent, including domain managers and backup domain managers.
SAGENT Standard agent
XAGENT Extended agent

Note: If the extended-agent workstation is manually set to Link, Unlink, Active, or Offline, the command is sent to its host CPU.

CPUAUTOLNK(OFF/ON)

Autolink is most effective during the initial start-up sequence of each plan, when a new Symphony file is created and all workstations are stopped and restarted.
For a fault-tolerant agent or standard agent, specify ON so that, when the domain manager starts, it sends the new production control file (Symphony) to start the agent and open communication with it.

For the domain manager, specify ON so that when the agents start, they open communication with the domain manager.

Specify OFF to initialize an agent only when you submit a link command manually from the Tivoli Workload Scheduler for z/OS Modify Current Plan ISPF dialogs or from the Job Scheduling Console.

Note: If the X-agent workstation is manually set to Link, Unlink, Active, or Offline, the command is sent to its host CPU.

CPUFULLSTAT(ON/OFF)

This applies only to fault-tolerant agents. If you specify OFF for a domain manager, the value is forced to ON.

Specify ON for the link from the domain manager to operate in Full Status mode. In this mode, the agent is kept updated about the status of jobs and job streams that are running on other workstations in the network.

Specify OFF for the agent to receive status information only about the jobs and schedules on other workstations that affect its own jobs and schedules. This can improve performance by reducing network traffic.

To keep the production control file for an agent at the same level of detail as its domain manager, set CPUFULLSTAT and CPURESDEP (see below) to ON. Always set these modes to ON for backup domain managers.

You should also be aware of the new TOPOLOGY ENABLESWITCHFT() parameter described in “ENABLESWITCHFT(Y/N)” on page 184.

CPURESDEP(ON/OFF)

This applies only to fault-tolerant agents. If you specify OFF for a domain manager, the value is forced to ON.

Specify ON to have the agent's production control process operate in Resolve All Dependencies mode. In this mode, the agent tracks dependencies for all of its jobs and schedules, including those running on other CPUs.

Note: CPUFULLSTAT must also be ON so that the agent is informed about the activity on other workstations.
Specify OFF if you want the agent to track dependencies only for its own jobs and schedules. This reduces CPU usage by limiting processing overhead.

To keep the production control file for an agent at the same level of detail as its domain manager, set CPUFULLSTAT and CPURESDEP to ON. Always set these modes to ON for backup domain managers. You should also be aware of the new TOPOLOGY ENABLESWITCHFT() parameter that is described in “ENABLESWITCHFT(Y/N)” on page 184.

CPUSERVER(server ID)

This applies only to fault-tolerant and standard agents. Omit this option for domain managers.

This keyword can be a letter or a number (A-Z or 0-9) and identifies a server (mailman) process on the domain manager that sends messages to the agent. The IDs are unique to each domain manager, so you can use the same IDs for agents in different domains without conflict. If more than 36 server IDs are required in a domain, consider dividing it into two or more domains.

If a server ID is not specified, messages to a fault-tolerant or standard agent are handled by a single mailman process on the domain manager. Entering a server ID causes the domain manager to create an additional mailman process. The same server ID can be used for multiple agents. The use of servers reduces the time that is required to initialize agents and generally improves the timeliness of messages.

Notes on multiple mailman processes: When setting up multiple mailman processes, do not forget that each mailman server process uses extra CPU resources on the workstation on which it is created, so be careful not to create excessive mailman processes on low-end domain managers. In most cases, using extra domain managers is a better choice than configuring extra mailman processes.

Cases in which extra mailman processes might be beneficial include:
– Important FTAs that run mission-critical jobs.
– Slow-initializing FTAs that are at the other end of a slow link. (If you have more than a couple of workstations over a slow link connection to the OPCMASTER, a better idea is to place a remote domain manager to serve these workstations.)

If you have unstable workstations in the network, do not put them under the same mailman server ID as your critical servers.
See Figure 4-13 for an example of CPUSERVER() use. The figure shows that one mailman process on domain manager FDMA has to handle all outbound communication with the five FTAs (FTA1 to FTA5) if these workstations (CPUs) are defined without the CPUSERVER() parameter. If FTA1 and FTA2 are defined with CPUSERVER(A), and FTA3 and FTA4 are defined with CPUSERVER(1), the domain manager FDMA will start two new mailman processes for these two server IDs (A and 1).

[Figure 4-13 shows DomainA with AIX domain manager FDMA and five FTAs: FTA1 (Linux), FTA2 (Solaris), FTA3 (Windows 2000), FTA4 (HP-UX), and FTA5 (OS/400). In the first panel, no server IDs are defined, so the main mailman process on FDMA handles all outbound communications with the FTAs in the domain. In the second panel, FTA1 and FTA2 have server ID A, and FTA3 and FTA4 have server ID 1, so an extra mailman process is spawned for each server ID in the domain; FTA5, with no server ID, is still served by the main mailman process.]

Figure 4-13 Usage of CPUSERVER() IDs to start extra mailman processes

CPULIMIT(value/1024)

Specifies the number of jobs that can run at the same time on a CPU. The default value is 1024. The accepted values are integers from 0 to 1024. If you specify 0, no jobs are launched on the workstation.

CPUTZ(timezone/UTC)

Specifies the local time zone of the FTA. It must match the time zone of the operating system in which the FTA runs. For a complete list of valid time zones, refer to the appendix of the IBM Tivoli Workload Scheduler Reference Guide, SC32-1274.
  • If the time zone does not match that of the agent, the message AWSBHT128I is displayed in the log file of the FTA. The default is UTC (universal coordinated time). To avoid inconsistency between the local date and time of the jobs and of the Symphony creation, use the CPUTZ keyword to set the local time zone of the fault-tolerant workstation. If the Symphony creation date is later than the current local date of the FTW, Symphony is not processed. In the end-to-end environment, time zones are disabled by default when installing or upgrading Tivoli Workload Scheduler for z/OS. If the CPUTZ keyword is not specified, time zones are disabled. For additional information about how to set the time zone in an end-to-end network, see the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273. CPUUSER(default user/tws) Specifies the default user for the workstation. The maximum length is 47 characters. The default value is tws. The value of this option is used only if you have not defined the user in the JOBUSR option of the SCRPTLIB JOBREC statement or supply it with the Tivoli Workload Scheduler for z/OS job submit exit EQQUX001 for centralized script. SSLLEVEL(ON/OFF/ENABLED/FORCE) Must have one of the following values: ON The workstation uses SSL authentication when it connects with its domain manager. The domain manager uses the SSL authentication when it connects with a domain manager of a parent domain. However, it refuses any incoming connection from its domain manager if the connection does not use the SSL authentication. OFF (default) The workstation does not support SSL authentication for its connections. ENABLED The workstation uses SSL authentication only if another workstation requires it. FORCE The workstation uses SSL authentication for all of its connections. It refuses any incoming connection if it is not SSL. If this attribute is set to OFF or omitted, the workstation is not intended to be configured for SSL. In this case, any value for SSLPORT (see below) will be ignored. You should also set the nm ssl port local option to 0 (in the localopts file) to be sure that this port is not opened by netman.194 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
SSLPORT(SSL port number/31113)

Defines the port used to listen for incoming SSL connections. This value must match the one defined in the nm SSL port local option (in the localopts file) of the workstation (the server with Tivoli Workload Scheduler installed). It must be different from the nm port local option (in the localopts file) that defines the port used for normal communications. If SSLLEVEL is specified but SSLPORT is missing, 31113 is used as the default value. If SSLLEVEL is not specified either, the default value of this parameter is 0 on FTWs, which indicates that no SSL authentication is required.

FIREWALL(YES/NO)

Specifies whether the communication between a workstation and its domain manager must cross a firewall. If you set the FIREWALL keyword for a workstation to YES, it means that a firewall exists between that particular workstation and its domain manager, and that the link between the domain manager and the workstation (which can be another domain manager itself) is the only link that is allowed between the respective domains. Also, for all workstations having this option set to YES, the commands to start (start workstation) or stop (stop workstation) the workstation or to get the standard list (showjobs) are transmitted through the domain hierarchy instead of opening a direct connection between the master (or domain manager) and the workstation. The default value for FIREWALL is NO, meaning that there is no firewall boundary between the workstation and its domain manager.

To specify that an extended agent is behind a firewall, set the FIREWALL keyword for the host workstation. The host workstation is the Tivoli Workload Scheduler workstation with which the extended agent communicates and where its access method resides.

USRREC statement

This statement defines the passwords for the users who need to schedule jobs to run on Windows workstations.

USRREC is defined in the member of the EQQPARM library that is specified by the USRMEM keyword in the TOPOLOGY statement. (See Figure 4-6 on page 176 and Figure 4-15 on page 197.)

Figure 4-14 illustrates the USRREC syntax.

Figure 4-14 Syntax for the USRREC statement
USRCPU(cpuname)

The Tivoli Workload Scheduler workstation on which the user can launch jobs. It consists of four alphanumerical characters, starting with a letter. It is valid only on Windows workstations.

USRNAM(logon ID)

The user name of a Windows workstation. It can include a domain name and can consist of up to 47 characters. Windows user names are case-sensitive. The user must be able to log on to the computer on which Tivoli Workload Scheduler launches jobs, and must also be authorized to log on as batch. If the user name is not unique in Windows, it is considered to be either a local user, a domain user, or a trusted domain user, in that order.

USRPSW(password)

The user password for the user of a Windows workstation (Figure 4-15 on page 197). It can consist of up to 31 characters and must be enclosed in single quotation marks. Do not specify this keyword if the user does not need a password. You can change the password every time you create a Symphony file (when creating a CP extension).

Attention: The password is not encrypted. You must take the necessary action to protect the password from unauthorized access. One way to do this is to place the USRREC definitions in a separate member in a separate library. This library should then be protected with RACF so that it can be accessed only by authorized persons. The library should be added to the EQQPARM data set concatenation in the end-to-end server started task and in the plan extend, replan, and Symphony renew batch jobs.

Example JCL for plan replan, extend, and Symphony renew batch jobs:
//EQQPARM DD DISP=SHR,DSN=TWS.V8R20.PARMLIB(BATCHOPT)
//        DD DISP=SHR,DSN=TWS.V8R20.PARMUSR

In this example, the USRREC member is placed in the TWS.V8R20.PARMUSR library. This library can then be protected with RACF according to your standards. All other BATCHOPT initialization statements are placed in the usual parameter library. In the example, this library is named TWS.V8R20.PARMLIB and the member is BATCHOPT.
[Figure 4-15 shows the EQQPARM(USERINFO) member with USRREC definitions for Windows users on workstations F202 and F302:]

USRREC USRCPU(F202)
       USRNAM(tws)
       USRPSW(tivoli00)
USRREC USRCPU(F202)
       USRNAM(Jim Smith)
       USRPSW(ibm9876)
USRREC USRCPU(F302)
       USRNAM(SouthMUser1)
       USRPSW(d9fj4k)

OPC does not have a built-in way to store Windows users and passwords; for this reason, the users are defined by adding USRRECs to the user member of EQQPARM. USRRECs are used to add Windows NT user definitions to the Symphony file.

Figure 4-15 Example of three USRREC definitions: for a local and domain Windows user

4.2.8 Example of DOMREC and CPUREC definitions

We have explained how to use DOMREC and CPUREC statements to define the network topology for a Tivoli Workload Scheduler network in a Tivoli Workload Scheduler for z/OS end-to-end environment. We now use these statements to define a simple Tivoli Workload Scheduler network in Tivoli Workload Scheduler for z/OS.

As an example, Figure 4-16 on page 198 illustrates a simple Tivoli Workload Scheduler network. In this network there is one domain, DOMAIN1, under the master domain (MASTERDM).
[Figure 4-16 shows a simple end-to-end scheduling environment: in MASTERDM, the z/OS master domain manager OPCMASTER; below it, DOMAIN1 with domain manager F100 (AIX, copenhagen.dk.ibm.com), backup domain manager F101 (AIX, london.uk.ibm.com), and fault-tolerant agent F102 (Windows, stockholm.se.ibm.com).]

Figure 4-16 Simple end-to-end scheduling environment

Example 4-3 describes the DOMAIN1 domain with the DOMREC topology statement.

Example 4-3 Domain definition
DOMREC DOMAIN(DOMAIN1)     /* Name of the domain is DOMAIN1 */
       DOMMNGR(F100)       /* F100 workst. is domain mng.   */
       DOMPARENT(MASTERDM) /* Domain parent is MASTERDM     */

In end-to-end scheduling, the master domain (MASTERDM) is always the Tivoli Workload Scheduler for z/OS controller. (It is predefined and cannot be changed.) Because the DOMAIN1 domain is under the MASTERDM domain, MASTERDM must be specified in the DOMPARENT parameter. The DOMMNGR keyword gives the workstation name of the domain manager.

There are three workstations (CPUs) in the DOMAIN1 domain. To define these workstations in the Tivoli Workload Scheduler for z/OS end-to-end network, we must define three CPURECs, one for each workstation (server) in the network.

Example 4-4 Workstation (CPUREC) definitions for the three FTWs
CPUREC CPUNAME(F100)       /* Domain manager for DM100      */
       CPUOS(AIX)          /* AIX operating system          */
  • CPUNODE(copenhagen.dk.ibm.com) /* IP address of CPU (DNS) */ CPUTCPIP(31281) /* TCP port number of NETMAN */ CPUDOMAIN(DM100) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* This is a FTA CPU type */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(ON) /* Full status on for DM */ CPURESDEP(ON) /* Resolve dependencies on for DM*/ CPULIMIT(20) /* Number of jobs in parallel */ CPUTZ(Europe/Copenhagen) /* Time zone for this CPU */ CPUUSER(twstest) /* default user for CPU */ SSLLEVEL(OFF) /* SSL is not active */ SSLPORT(31113) /* Default SSL port */ FIREWALL(NO) /* WS not behind firewall */CPUREC CPUNAME(F101) /* fault tolerant agent in DM100 */ CPUOS(AIX) /* AIX operating system */ CPUNODE(london.uk.ibm.com) /* IP address of CPU (DNS) */ CPUTCPIP(31281) /* TCP port number of NETMAN */ CPUDOMAIN(DM100) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* This is a FTA CPU type */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(ON) /* Full status on for BDM */ CPURESDEP(ON) /* Resolve dependencies on BDM */ CPULIMIT(20) /* Number of jobs in parallel */ CPUSERVER(A) /* Start extra mailman process */ CPUTZ(Europe/London) /* Time zone for this CPU */ CPUUSER(maestro) /* default user for ws */ SSLLEVEL(OFF) /* SSL is not active */ SSLPORT(31113) /* Default SSL port */ FIREWALL(NO) /* WS not behind firewall */CPUREC CPUNAME(F102) /* fault tolerant agent in DM100 */ CPUOS(WNT) /* Windows operating system */ CPUNODE(stockholm.se.ibm.com) /* IP address for CPU (DNS) */ CPUTCPIP(31281) /* TCP port number of NETMAN */ CPUDOMAIN(DM100) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* This is a FTA CPU type */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(OFF) /* Full status off for FTA */ CPURESDEP(OFF) /* Resolve dependencies off FTA */ CPULIMIT(10) /* Number of jobs in parallel */ CPUSERVER(A) /* Start extra mailman process */ CPUTZ(Europe/Stockholm) /* Time zone for this CPU */ CPUUSER(twstest) /* default user for ws */ SSLLEVEL(OFF) /* SSL is not active */ SSLPORT(31113) /* Default SSL port */ FIREWALL(NO) /* WS not behind firewall */ Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 199
Because F101 is going to be the backup domain manager for F100, F101 is defined with CPUFULLSTAT(ON) and CPURESDEP(ON). F102 is a fault-tolerant agent without extra responsibilities, so it is defined with CPUFULLSTAT(OFF) and CPURESDEP(OFF), because dependency resolution within the domain is the task of the domain manager. This improves performance by reducing network traffic.

Note: CPUOS(WNT) applies to all Windows platforms.

Finally, because F102 runs on a Windows server, we must create at least one USRREC definition for this server. In our example, we would like to be able to run jobs on the Windows server under either the Tivoli Workload Scheduler installation user (twstest) or the database user, databusr.

Example 4-5 USRREC definition for the F102 Windows users, twstest and databusr
USRREC USRCPU(F102)      /* Definition for F102 Windows CPU */
       USRNAM(twstest)   /* The user name (local user)      */
       USRPSW(twspw01)   /* The password for twstest        */
USRREC USRCPU(F102)      /* Definition for F102 Windows CPU */
       USRNAM(databusr)  /* The user name (local user)      */
       USRPSW(data01ad)  /* Password for databusr           */

4.2.9 The JTOPTS TWSJOBNAME() parameter

With the JTOPTS TWSJOBNAME() parameter, you can specify the criterion that Tivoli Workload Scheduler for z/OS uses when creating the job names in the Symphony file in USS.

The syntax of the JTOPTS TWSJOBNAME() parameter is:
TWSJOBNAME(EXTNAME/EXTNOCC/JOBNAME/OCCNAME)

If you do not specify the TWSJOBNAME() parameter, the value OCCNAME is used by default. When choosing OCCNAME, the job names in the Symphony file are generated in one of the following formats:

<X>_<Num>_<Application Name> when the job is created in the Symphony file

<X>_<Num>_<Ext>_<Application Name> when the job is first deleted and then recreated in the current plan

In these examples, <X> can be J for normal jobs (operations), P for jobs representing pending predecessors, and R for recovery jobs.
<Num> is the operation number. <Ext> is a sequential decimal number that is increased every time an operation is deleted and then recreated. <Application Name> is the name of the occurrence that the operation belongs to.

See Figure 4-17 for an example of how the job names (and job stream names) are generated by default in the Symphony file when JTOPTS TWSJOBNAME(OCCNAME) is specified or defaulted. Note that an occurrence in Tivoli Workload Scheduler for z/OS is the same as a JSC job stream instance (that is, a job stream or an application that is in the plan in Tivoli Workload Scheduler for z/OS).

[Figure 4-17 shows two occurrences of the application DAILY in the OPC current plan, with input arrival times 0800 and 0900 and occurrence tokens B8FF08015E683C44 and B8FFF05B29182108. In the Symphony file, each occurrence becomes a job stream named by its occurrence token, and the operations 010, 015, and 020 (jobs DLYJOB1, DLYJOB2, and DLYJOB3) become jobs J_010_DAILY, J_015_DAILY, and J_020_DAILY within each job stream. Each instance of a job stream in OPC is assigned a unique occurrence token; if the job stream is added to the TWS Symphony file, the occurrence token is used as the job stream name in the Symphony file.]

Figure 4-17 Generation of job and job stream names in the Symphony file
  • If any of the other values (EXTNAME, EXTNOCC, or JOBNAME) is specified in the JTOPTS TWSJOBNAME() parameter, the job name in the Symphony file is created according to one of the following formats: <X><Num>_<JobInfo> when the job is created in the Symphony file <X><Num>_<Ext>_<JobInfo> when the job is first deleted and then recreated in the current plan In these examples: <X> can be J for normal jobs (operations), P for jobs representing pending predecessors, and R for recovery jobs. For jobs representing pending predecessors, the job name is in all cases generated by using the OCCNAME criterion. This is because, in the case of pending predecessors, the current plan does not contain the required information (excepting the name of the occurrence) to build the Symphony name according to the other criteria. <Num> is the operation number. <Ext> is the hexadecimal value of a sequential number that is increased every time an operation is deleted and then recreated. <JobInfo> depends on the chosen criterion: – For EXTNAME: <JobInfo> is filled with the first 32 characters of the extended job name associated with that job (if it exists) or with the eight-character job name (if the extended name does not exist). Note that the extended job name, in addition to being defined in the database, must also exist in the current plan. – For EXTNOCC: <JobInfo> is filled with the first 32 characters of the extended job name associated with that job (if it exists) or with the application name (if the extended name does not exist). Note that the extended job name, in addition to being defined in the database, must also exist in the current plan. – For JOBNAME: <JobInfo> is filled with the 8-character job name. The criterion that is used to generate a Tivoli Workload Scheduler job name will be maintained throughout the entire life of the job. Note: In order to choose the EXTNAME, EXTNOCC, or JOBNAME criterion, the EQQTWSOU data set must have a record length of 160 bytes. Before using any of the above keywords, you must migrate the EQQTWSOU data set if you have allocated the data set with a record length less than 160 bytes. Sample EQQMTWSO is available to migrate this data set from record length 120 to 160 bytes.202 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
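As an illustrative sketch (the job and extended names here are hypothetical), with the setting below, operation number 010 whose extended job name is daily.backup.job would appear in the Symphony file as J010_DAILY-BACKUP-JOB: the name is converted to uppercase, and the dots, which are not valid in Symphony job names, become dashes:

JTOPTS TWSJOBNAME(EXTNAME)  /* use extended job name in Symphony */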
Limitations when using the EXTNAME and EXTNOCC criteria:

The job name in the Symphony file can contain only alphanumeric characters, dashes, and underscores. All other characters that are accepted for the extended job name are converted into dashes. Note that a similar limitation applies with JOBNAME: When defining members of partitioned data sets (such as the script or the job libraries), national characters can be used, but they are converted into dashes in the Symphony file.

The job name in the Symphony file must be in uppercase. All lowercase characters in the extended name are automatically converted to uppercase by Tivoli Workload Scheduler for z/OS.

Note: Using the job name (or the extended name as part of the job name) in the Symphony file implies that it becomes a key for identifying the job. This also means that the extended name - job name is used as a key for addressing all events that are directed to the agents. For this reason, be aware of the following facts for the operations that are included in the Symphony file:

Editing the extended name is inhibited for operations that were created when the TWSJOBNAME keyword was set to EXTNAME or EXTNOCC.

Editing the job name is inhibited for operations that were created when the TWSJOBNAME keyword was set to EXTNAME or JOBNAME.

4.2.10 Verify end-to-end installation in Tivoli Workload Scheduler for z/OS

When all installation tasks described in the previous sections have been completed, and all initialization statements and data sets related to end-to-end scheduling have been defined in the Tivoli Workload Scheduler for z/OS controller, end-to-end server, and plan extend, replan, and Symphony renew batch jobs, it is time to do the first verification of the mainframe part.

Note: This verification can be postponed until workstations for the fault-tolerant agents have been defined in Tivoli Workload Scheduler for z/OS and, optionally, Tivoli Workload Scheduler has been installed on the fault-tolerant agents (the Tivoli Workload Scheduler servers or agents).

Verify the Tivoli Workload Scheduler for z/OS controller

After the customization steps have been completed, simply start the Tivoli Workload Scheduler controller. Check the controller message log (EQQMLOG) for any unexpected error or warning messages. All Tivoli Workload Scheduler for z/OS messages are prefixed with EQQ. See the IBM Tivoli Workload Scheduler
  • for z/OS Messages and Codes Version 8.2 (Maintenance Release April 2004), SC32-1267. Because we have activated the end-to-end feature in the controller initialization statements by specifying the OPCOPTS TPLGYSRV() parameter and we have asked the controller to start our end-to-end server by the SERVERS(TWSCE2E) parameter, we will see messages as shown in Example 4-6 in the Tivoli Workload Scheduler for z/OS controller message log (EQQMLOG). Example 4-6 IBM Tivoli Workload Scheduler for z/OS controller messages for end-to-end EQQZ005I OPC SUBTASK E2E ENABLER IS BEING STARTED EQQZ085I OPC SUBTASK E2E SENDER IS BEING STARTED EQQZ085I OPC SUBTASK E2E RECEIVER IS BEING STARTED EQQG001I SUBTASK E2E ENABLER HAS STARTED EQQG001I SUBTASK E2E SENDER HAS STARTED EQQG001I SUBTASK E2E RECEIVER HAS STARTED EQQW097I END-TO-END RECEIVER STARTED SYNCHRONIZATION WITH THE EVENT MANAGER EQQW097I 0 EVENTS IN EQQTWSIN WILL BE REPROCESSED EQQW098I END-TO-END RECEIVER FINISHED SYNCHRONIZATION WITH THE EVENT MANAGER EQQ3120E END-TO-END TRANSLATOR SERVER PROCESS IS NOT AVAILABLE EQQZ193I END-TO-END TRANSLATOR SERVER PROCESSS NOW IS AVAILABLE Note: If you do not see all of these messages in your controller message log, you probably have not applied all available service updates. See 3.4.2, “Service updates (PSP bucket, APARs, and PTFs)” on page 117. The messages in Example 4-6 are extracted from the Tivoli Workload Scheduler for z/OS controller message log. There will be several other messages between the messages shown in Example 4-6 if you look in your controller message log. If the Tivoli Workload Scheduler for z/OS controller is started with empty EQQTWSIN and EQQTWSOU data sets, messages shown in Example 4-7 will be issued in the controller message log (EQQMLOG). Example 4-7 Formatting messages when EQQTWSOU and EQQTWSIN are empty EQQW030I A DISK DATA SET WILL BE FORMATTED, DDNAME = EQQTWSOU EQQW030I A DISK DATA SET WILL BE FORMATTED, DDNAME = EQQTWSIN EQQW038I A DISK DATA SET HAS BEEN FORMATTED, DDNAME = EQQTWSOU EQQW038I A DISK DATA SET HAS BEEN FORMATTED, DDNAME = EQQTWSIN204 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Note: In the Tivoli Workload Scheduler for z/OS system messages, there will also be two IEC031I messages related to the formatting messages in Example 4-7. These messages can be ignored because they are related to the formatting of the EQQTWSIN and EQQTWSOU data sets. The IEC031I messages look like:
IEC031I D37-04,IFG0554P,TWSC,TWSC,EQQTWSOU,........................
IEC031I D37-04,IFG0554P,TWSC,TWSC,EQQTWSIN,.............................

The messages in Example 4-6 and Example 4-7 show that the controller has started with the end-to-end feature active and that it is ready to run jobs in the end-to-end environment.

When the Tivoli Workload Scheduler for z/OS controller is stopped, the end-to-end related messages shown in Example 4-8 will be issued.

Example 4-8 Controller messages for end-to-end when controller is stopped
EQQG003I SUBTASK E2E RECEIVER HAS ENDED
EQQG003I SUBTASK E2E SENDER HAS ENDED
EQQZ034I OPC SUBTASK E2E SENDER HAS ENDED.
EQQZ034I OPC SUBTASK E2E RECEIVER HAS ENDED.
EQQZ034I OPC SUBTASK E2E ENABLER HAS ENDED.

Verify the Tivoli Workload Scheduler for z/OS server

After the customization steps have been completed for the Tivoli Workload Scheduler end-to-end server started task, simply start the end-to-end server started task. Check the server message log (EQQMLOG) for any unexpected error or warning messages. All Tivoli Workload Scheduler for z/OS messages are prefixed with EQQ. See the IBM Tivoli Workload Scheduler for z/OS Messages and Codes, Version 8.2 (Maintenance Release April 2004), SC32-1267.

When the end-to-end server is started for the first time, check that the messages shown in Example 4-9 appear in the Tivoli Workload Scheduler for z/OS end-to-end server EQQMLOG.

Example 4-9 End-to-end server messages first time the end-to-end server is started
EQQPH00I SERVER TASK HAS STARTED
EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
EQQZ024I Initializing wait parameters
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/translator" has been started, pid is 67371783
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/netman" has been started, pid is 67371919
EQQPT56W The /DD:EQQTWSIN queue has not been formatted yet
EQQPT22I Input Translator thread stopped until new Symphony will be available

The messages shown in Example 4-9 on page 205 are normal when the Tivoli Workload Scheduler for z/OS end-to-end server is started for the first time and no Symphony file has been created. Furthermore, the end-to-end server message EQQPT56W is normally issued only for the EQQTWSIN data set if the EQQTWSIN and EQQTWSOU data sets are both empty and no Symphony file has been created.

If the Tivoli Workload Scheduler for z/OS controller and end-to-end server are started with an empty EQQTWSOU data set (for example, reallocated with a new record length), message EQQPT56W will be issued for the EQQTWSOU data set:
EQQPT56W The /DD:EQQTWSOU queue has not been formatted yet

If a Symphony file has been created, the end-to-end server message log contains the messages in the following example.

Example 4-10 End-to-end server messages when server is started with Symphony file
EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
EQQZ024I Initializing wait parameters
EQQPT01I Program "/usr/lpp/TWS/TWS820/bin/translator" has been started, pid is 33817341
EQQPT01I Program "/usr/lpp/TWS/TWS820/bin/netman" has been started, pid is 262958
EQQPT20I Input Translator waiting for Batchman and Mailman are started
EQQPT21I Input Translator finished waiting for Batchman and Mailman

The messages shown in Example 4-10 are the normal start-up messages for a Tivoli Workload Scheduler for z/OS end-to-end server with a Symphony file.

When the end-to-end server is stopped, the messages shown in Example 4-11 should be issued in the EQQMLOG.

Example 4-11 End-to-end server messages when server is stopped
EQQZ000I A STOP OPC COMMAND HAS BEEN RECEIVED
EQQPT04I Starter has detected a stop command
EQQPT40I Input Translator thread is shutting down
EQQPT12I The Netman process (pid=262958) ended successfully
EQQPT40I Output Translator thread is shutting down
EQQPT53I Output Translator thread has terminated
EQQPT53I Input Translator thread has terminated
EQQPT40I Input Writer thread is shutting down
EQQPT53I Input Writer thread has terminated
EQQPT12I The Translator process (pid=33817341) ended successfully
EQQPT10I All Starters sons ended
EQQPH34I THE END-TO-END PROCESSES HAVE ENDED
EQQPH01I SERVER TASK ENDED

After successful completion of the verification, move on to the next step in the end-to-end installation.

4.3 Installing Tivoli Workload Scheduler in an end-to-end environment

In this section, we describe how to install Tivoli Workload Scheduler in an end-to-end environment.

Important: Maintenance releases of Tivoli Workload Scheduler are made available about every three months. We recommend that, before installing, you check for the latest available update at:
ftp://ftp.software.ibm.com
The latest release (as we write this book) for IBM Tivoli Workload Scheduler is 8.2-TWS-FP04 and is available at:
ftp://ftp.software.ibm.com/software/tivoli_support/patches/patches_8.2.0/8.2.0-TWS-FP04/

Installing a Tivoli Workload Scheduler agent in an end-to-end environment is not very different from installing Tivoli Workload Scheduler when Tivoli Workload Scheduler for z/OS is not involved. Follow the installation instructions in the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273. The main differences to keep in mind are that in an end-to-end environment, the master domain manager is always the Tivoli Workload Scheduler for z/OS engine (known by the Tivoli Workload Scheduler workstation name OPCMASTER), and the local workstation name of the fault-tolerant workstation is limited to four characters.

4.3.1 Installing multiple instances of Tivoli Workload Scheduler on one machine

As mentioned in Chapter 2, "End-to-end scheduling architecture" on page 25, there are often good reasons to install multiple instances of the Tivoli Workload Scheduler engine on the same machine. If you plan to do this, there are some
important considerations to keep in mind. Careful planning before installation can save you a considerable amount of work later.

The following items must be unique for each instance of the Tivoli Workload Scheduler engine that is installed on a computer:

– The Tivoli Workload Scheduler user name and ID associated with the instance
– The home directory of the Tivoli Workload Scheduler user
– The component group (only on tier-2 platforms: LinuxPPC, IRIX, Tru64 UNIX, Dynix, HP-UX 11i Itanium)
– The netman port number (set by the nm port option in the localopts file)

First, the user name and ID must be unique. There are many different ways to create these users. Choose user names that make sense to you. It may simplify things to create a group called IBM Tivoli Workload Scheduler and make all Tivoli Workload Scheduler users members of this group. This would enable you to use group access to files to grant access to all Tivoli Workload Scheduler users.

When installing Tivoli Workload Scheduler on UNIX, the Tivoli Workload Scheduler user is specified by the -uname option of the UNIX customize script. It is important to specify the Tivoli Workload Scheduler user because otherwise the customize script will choose the default user name maestro. Obviously, if you plan to install multiple Tivoli Workload Scheduler engines on the same computer, they cannot both be installed as the user maestro.

Second, the home directory must be unique. In order to keep two different Tivoli Workload Scheduler engines completely separate, each one must have its own home directory.

Note: Previous versions of Tivoli Workload Scheduler installed files into a directory called unison in the parent directory of the Tivoli Workload Scheduler home directory. Tivoli Workload Scheduler 8.2 simplifies things by placing the unison directory inside the Tivoli Workload Scheduler home directory. The unison directory is a relic of the days when Unison Software's Maestro program (the direct ancestor of IBM Tivoli Workload Scheduler) was one of several programs that all shared some common data. The unison directory was where the common data shared between Unison's various products was stored. Important information is still stored in this directory, including the workstation database (cpudata) and the NT user database (userdata). The Tivoli Workload Scheduler Security file is no longer stored in the unison directory; it is now stored in the Tivoli Workload Scheduler home directory.
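As a minimal sketch of the first two items (Linux user-management commands; the group name, IDs, and home directories are illustrative, and the remaining customize arguments are omitted here), creating two such users and then installing an engine for each could look like this:

groupadd tws
useradd -u 31111 -g tws -d /tivoli/tws/tws-a -m tws-a
useradd -u 31112 -g tws -d /tivoli/tws/tws-b -m tws-b

./customize -uname tws-a ...
./customize -uname tws-b ...

On other UNIX platforms the user-management commands differ (for example, mkuser on AIX), and on tier-2 platforms you would also pass a unique -group value, as described below.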
Figure 4-18 should give you an idea of how two Tivoli Workload Scheduler engines might be installed on the same computer. You can see that each engine has its own separate Tivoli Workload Scheduler directory.

Figure 4-18 Two separate Tivoli Workload Scheduler engines on one computer (a directory tree under /tivoli/tws with two homes, tws-a and tws-b, each containing its own network, Security, bin, and mozart contents, including cpudata, userdata, mastsked, and jobs)

Example 4-12 shows the /etc/passwd entries that correspond to the two Tivoli Workload Scheduler users.

Example 4-12 Excerpt from /etc/passwd: two different Tivoli Workload Scheduler users
tws-a:!:31111:9207:TWS Engine A User:/tivoli/tws/tws-a:/usr/bin/ksh
tws-b:!:31112:9207:TWS Engine B User:/tivoli/tws/tws-b:/usr/bin/ksh

Note that each Tivoli Workload Scheduler user has a unique name, ID, and home directory.

On tier-2 platforms only (Linux/PPC, IRIX, Tru64 UNIX, Dynix, HP-UX 11i/Itanium), Tivoli Workload Scheduler still uses the /usr/unison/components file to keep track of each installed Tivoli Workload Scheduler engine. Each Tivoli Workload Scheduler engine on a tier-2 platform computer must have a unique component group name. The component group is arbitrary; it is just a name that is used by Tivoli Workload Scheduler programs to keep each engine separate. The name of the component group is entirely up to you. It can be specified using the -group option of the UNIX customize script during installation on a tier-2 platform machine. It is important to specify a different component group name for each instance of the Tivoli Workload Scheduler engine installed on a computer.
Component groups are stored in the file /usr/unison/components. This file contains two lines for each component group. Example 4-13 shows the components file corresponding to the two Tivoli Workload Scheduler engines.

Example 4-13 Sample /usr/unison/components file for tier-2 platforms
netman 1.8.1 /tivoli/TWS/TWS-A/tws TWS-Engine-A
maestro 8.1 /tivoli/TWS/TWS-A/tws TWS-Engine-A
netman 1.8.1.1 /tivoli/TWS/TWS-B/tws TWS-Engine-B
maestro 8.1 /tivoli/TWS/TWS-B/tws TWS-Engine-B

The component groups are called TWS-Engine-A and TWS-Engine-B. For each component group, the version and path for netman and maestro (the Tivoli Workload Scheduler engine) are listed. In this context, maestro refers simply to the Tivoli Workload Scheduler home directory.

Important: The /usr/unison/components file is used only on tier-2 platforms. On tier-1 platforms (such as AIX, Linux/x86, Solaris, HP-UX, and Windows XP), there is no longer a need to be concerned with component groups because the new ISMP installer automatically keeps track of each installed Tivoli Workload Scheduler engine. It does so by writing data about each engine to a file called /etc/TWS/TWS Registry.dat.

Important: Do not edit or remove the /etc/TWS/TWS Registry.dat file because this could cause problems with uninstalling Tivoli Workload Scheduler or with installing fix packs. Do not remove this file unless you intend to remove all installed Tivoli Workload Scheduler 8.2 engines from the computer.

Finally, because netman listens for incoming TCP link requests from other Tivoli Workload Scheduler agents, it is important that the netman program for each Tivoli Workload Scheduler engine listens on a unique port. This port is specified by the nm port option in the Tivoli Workload Scheduler localopts file. If you change this option, you must shut down netman and start it again to make the change take effect.

In our test environment, we chose, for each engine, a netman port number that is the same as the user ID of the engine's user. This makes the numbers easier to remember and simplifies troubleshooting. Table 4-4 on page 211 shows the names and numbers we used in our testing.
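For illustration, the corresponding nm port entries in the two localopts files would look like this (a sketch using the values from Table 4-4):

nm port = 31111     (in /tivoli/tws/tws-a/localopts, for engine A)
nm port = 31112     (in /tivoli/tws/tws-b/localopts, for engine B)

After changing nm port, stop netman (for example, with conman "shut;wait" as the engine's user) and start it again with the StartUp script in the engine's home directory so that the change takes effect.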
Table 4-4 If possible, choose user IDs and port numbers that are the same

User name    User ID    Netman port
tws-a        31111      31111
tws-b        31112      31112

4.3.2 Verify the Tivoli Workload Scheduler installation

Start Tivoli Workload Scheduler and verify that it starts without any error messages. Note that if there are no active workstations in Tivoli Workload Scheduler for z/OS for the Tivoli Workload Scheduler agent, only the netman process will be started. You can, however, verify that the netman process is started and that it listens on the IP port number that you have decided to use in your end-to-end environment.

4.4 Define, activate, verify fault-tolerant workstations

To be able to define jobs in Tivoli Workload Scheduler for z/OS to be scheduled on FTWs, the workstations must be defined in the Tivoli Workload Scheduler for z/OS controller. The workstations that are defined via the CPUREC keyword should also be defined in the Tivoli Workload Scheduler for z/OS workstation database before they can be activated in the Tivoli Workload Scheduler for z/OS plan.

The workstations are defined the same way as computer workstations in Tivoli Workload Scheduler for z/OS, except that they need a special flag: fault tolerant. This flag indicates to Tivoli Workload Scheduler for z/OS that these workstations should be treated as FTWs.

When the FTWs have been defined in the Tivoli Workload Scheduler for z/OS workstation database, they can be activated in the Tivoli Workload Scheduler for z/OS plan by running either a plan replan or a plan extend batch job. The process is as follows:

1. Create a CPUREC definition for the workstation as described in "CPUREC statement" on page 187 (see also the sketch in 4.4.1).
2. Define the FTW in the Tivoli Workload Scheduler for z/OS workstation database. Remember to set it to fault tolerant.
3. Run Tivoli Workload Scheduler for z/OS plan replan or plan extend to activate the workstation definition in Tivoli Workload Scheduler for z/OS.
4. Verify that the FTW gets active and linked.
5. Define jobs and job streams on the newly created and activated FTW as described in 4.5, "Creating fault-tolerant workstation job definitions and job streams" on page 217.

Important: Note that the order of the operations in this process is important.

4.4.1 Define fault-tolerant workstation in Tivoli Workload Scheduler controller workstation database

A fault-tolerant workstation can be defined either from the Tivoli Workload Scheduler for z/OS legacy ISPF dialogs (use option 1.1 from the main menu) or in the JSC. In the following steps, we show how to define an FTW from the JSC (see Figure 4-19 on page 213):

1. Open the Actions Lists, select New Workstation, then select the instance for the Tivoli Workload Scheduler for z/OS controller where the workstation should be defined (TWSC-zOS in our example).
2. The Properties - Workstation in Database window opens.
3. Select the Fault Tolerant check box and fill in the Name field (the four-character name of the FTW) and, optionally, the Description field. See Figure 4-19 on page 213.

Note: It is a good standard to use the first part of the description field to list the DNS name or host name for the FTW. This makes it easier to remember which server or machine the four-character workstation name in Tivoli Workload Scheduler for z/OS relates to. You can add up to 32 alphanumeric characters in the description field.

4. Save the new workstation definition by clicking OK.
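Step 1 of the process in 4.4 referred to the CPUREC topology definition. For orientation, a minimal CPUREC for a workstation such as F100 might look like the following sketch (the host name, port, and user are illustrative; see "CPUREC statement" on page 187 for the complete syntax and keyword list):

CPUREC  CPUNAME(F100)
        CPUOS(AIX)
        CPUNODE(f100.example.com)
        CPUTCPIP(31111)
        CPUDOMAIN(DM100)
        CPUTYPE(FTA)
        CPUUSER(twstest)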
  • Note: When we used the JSC to create FTWs as described, we sometimes received this error: GJS0027E Cannot save the workstation xxxx. Reason: EQQW787E FOR FT WORKSTATIONS RESOURCES CANNOT BE USED AT PLANNING If you receive this error when creating the FTW from the JSC, then select the Resources tab (see Figure 4-19 on page 213) and un-check the Used for planning check box for Resource 1 and Resource 2. This must be done before selecting the Fault Tolerant check box on the General tab. Figure 4-19 Defining a fault-tolerant workstation from the JSC4.4.2 Activate the fault-tolerant workstation definition Fault-tolerant workstation definitions can be activated in the Tivoli Workload Scheduler for z/OS plan either by running the replan or the extend plan programs in the Tivoli Workload Scheduler for z/OS controller. Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 213
  • When running the replan or extend program, Tivoli Workload Scheduler for z/OS creates (or recreates) the Symphony file and distributes it to the domain managers at the first level. These domain managers, in turn, distribute the Symphony file to their subordinate fault-tolerant agents and domain managers, and so on. If the Symphony file is successfully created and distributed, all defined FTWs should be linked and active. We run the replan program and verify that the Symphony file is created in the end-to-end server. We also verify that the FTWs become available and have linked status in the Tivoli Workload Scheduler for z/OS plan.4.4.3 Verify that the fault-tolerant workstations are active and linked First, it should be verified that there is no warning or error message in the replan batch job (EQQMLOG). The message log should show that all topology statements (DOMREC, CPUREC, and USRREC) have been accepted without any errors or warnings. Verify messages in plan batch job For a successful creation of the Symphony file, the message log should show messages similar to those in Example 4-14. Example 4-14 Plan batch job EQQMLOG messages when Symphony file is created EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPDOMAIN IS: 0000 EQQZ013I NOW PROCESSING PARAMETER LIBRARY MEMBER TPUSER EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPUSER IS: 0000 EQQQ502I SPECIAL RESOURCE DATASPACE HAS BEEN CREATED. EQQQ502I 00000020 PAGES ARE USED FOR 00000100 SPECIAL RESOURCE RECORDS. EQQ3011I WORKSTATION F100 SET AS DOMAIN MANAGER FOR DOMAIN DM100 EQQ3011I WORKSTATION F200 SET AS DOMAIN MANAGER FOR DOMAIN DM200 EQQ3105I A NEW CURRENT PLAN (NCP) HAS BEEN CREATED EQQ3106I WAITING FOR SCP EQQ3107I SCP IS READY: START JOBS ADDITION TO SYMPHONY FILE EQQ4015I RECOVERY JOB OF F100DJ01 HAS NO JOBWS KEYWORD SPECIFIED, EQQ4015I THE WORKSTATION F100 OF JOB F100DJ01 IS USED EQQ3108I JOBS ADDITION TO SYMPHONY FILE COMPLETED EQQ3101I 0000019 JOBS ADDED TO THE SYMPHONY FILE FROM THE CURRENT PLAN EQQ3087I SYMNEW FILE HAS BEEN CREATED Verify messages in the end-to-end server message log In the Tivoli Workload Scheduler for z/OS end-to-end server message log, we see the messages shown in Example 4-15. These messages show that the Symphony file has been created by the plan replan batch jobs and that it was possible for the end-to-end server to switch to the new Symphony file.214 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Example 4-15 End-to-end server messages when Symphony file is created
EQQPT30I Starting switching Symphony
EQQPT12I The Mailman process (pid=Unknown) ended successfully
EQQPT12I The Batchman process (pid=Unknown) ended successfully
EQQPT22I Input Translator thread stopped until new Symphony will be available
EQQPT31I Symphony successfully switched
EQQPT20I Input Translator waiting for Batchman and Mailman are started
EQQPT21I Input Translator finished waiting for Batchman and Mailman
EQQPT23I Input Translator thread is running

Verify messages in the controller message log
The Tivoli Workload Scheduler for z/OS controller shows the messages in Example 4-16, which indicate that the Symphony file was created successfully and that the fault-tolerant workstations are active and linked.

Example 4-16 Controller messages when Symphony file is created
EQQN111I SYMNEW FILE HAS BEEN CREATED
EQQW090I THE NEW SYMPHONY FILE HAS BEEN SUCCESSFULLY SWITCHED
EQQWL10W WORK STATION F100, HAS BEEN SET TO LINKED STATUS
EQQWL10W WORK STATION F100, HAS BEEN SET TO ACTIVE STATUS
EQQWL10W WORK STATION F101, HAS BEEN SET TO LINKED STATUS
EQQWL10W WORK STATION F102, HAS BEEN SET TO LINKED STATUS
EQQWL10W WORK STATION F101, HAS BEEN SET TO ACTIVE STATUS
EQQWL10W WORK STATION F102, HAS BEEN SET TO ACTIVE STATUS

Verify that fault-tolerant workstations are active and linked
After the replan job has completed and output messages have been displayed, the FTWs are checked using the JSC instance pointing to the Tivoli Workload Scheduler for z/OS controller (Figure 4-20).

The Fault Tolerant column indicates that it is an FTW. The Linked column indicates whether the workstation is linked. The Status column indicates whether the mailman process is up and running on the FTW.

Figure 4-20 Status of FTWs in the Tivoli Workload Scheduler for z/OS plan
The F200 workstation is Not Available because we have not installed a Tivoli Workload Scheduler fault-tolerant workstation on this machine yet. We have prepared for a future installation of the F200 workstation by creating the related CPUREC definitions for F200 and defining the FTW (F200) in the Tivoli Workload Scheduler controller workstation database.

Tip: If the workstation does not link as it should, the cause could be that the writer process has not initiated correctly or that the run number for the Symphony file on the FTW is not the same as the run number on the master. Mark the unlinked workstations and right-click to open a pop-up menu where you can click Link to try to link the workstation. The run number for the Symphony file in the end-to-end server can be seen from the legacy ISPF panels in option 6.6 from the main menu.

Figure 4-21 shows the status of the same FTWs, as shown in the JSC, when looking at the Symphony file at domain manager F100. Note that much more information is available for each FTW. For example, in Figure 4-21 we can see that jobman and writer are running and that we can run 20 jobs in parallel on the FTWs (the Limit column). Also note the information in the Run, CPU type, and Domain columns.

The information shown in Figure 4-21 is read from the Symphony file and generated by the plan programs based on the specifications in the CPUREC and DOMREC definitions. This is one of the reasons why we suggest activating support for JSC when running end-to-end scheduling with Tivoli Workload Scheduler for z/OS.

Note that the status of the OPCMASTER workstation is correct; also remember that the OPCMASTER workstation and the MASTERDM domain are predefined in Tivoli Workload Scheduler for z/OS and cannot be changed. Jobman is not running on OPCMASTER (in USS in the end-to-end server), because the end-to-end server is not supposed to run jobs in USS. So the information that jobman is not running on the OPCMASTER workstation is OK.

Figure 4-21 Status of FTWs in the Symphony file on domain manager F100
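You can also read the same information directly from the Symphony file on a domain manager by running conman locally (a sketch; the exact columns vary by platform and fix pack level):

conman "sc @!@"

This lists each workstation with its state, run number, and job limit, and should agree with what the JSC displays in Figure 4-21.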
4.5 Creating fault-tolerant workstation job definitions and job streams

When the FTWs are active and linked in Tivoli Workload Scheduler for z/OS, you can run jobs on these workstations. To submit work to the FTWs in Tivoli Workload Scheduler for z/OS, you should:

1. Define the script (the JCL or the task) that should be executed on the FTW (that is, on the server). When defining scripts in Tivoli Workload Scheduler for z/OS, it is important to remember that the script can be placed centrally in the Tivoli Workload Scheduler for z/OS job library or non-centralized on the FTW (on the Tivoli Workload Scheduler server). Definitions of scripts are found in:
– 4.5.1, "Centralized and non-centralized scripts" on page 217
– 4.5.2, "Definition of centralized scripts" on page 219
– 4.5.3, "Definition of non-centralized scripts" on page 221
– 4.5.4, "Combination of centralized script and VARSUB, JOBREC parameters" on page 232
2. Create a job stream (application) in Tivoli Workload Scheduler for z/OS and add the job (operation) defined in step 1. It is possible to add the job (operation) to an existing job stream and create dependencies between jobs on FTWs and jobs on the mainframe. Definition of FTW jobs and job streams in Tivoli Workload Scheduler for z/OS is found in 4.5.5, "Definition of FTW jobs and job streams in the controller" on page 234.

4.5.1 Centralized and non-centralized scripts

As described in "Tivoli Workload Scheduler for z/OS end-to-end database objects" on page 69, a job can use two kinds of scripts: centralized or non-centralized.

A centralized script is a script that resides in the controller job library (EQQJBLIB dd-card, also called JOBLIB) and that is downloaded to the FTW every time the job is submitted. Figure 4-22 on page 218 illustrates the relationship between the centralized script job definition and the member name in the job library (JOBLIB).
Figure 4-22 Centralized script defined in controller job library (JOBLIB). (The figure shows a job definition pointing to member AIXHOUSP in the JOBLIB; the member contains //*%OPC SCAN and //*%OPC RECOVER directives, an echo of the occurrence plan date, and a call to rmstdlist -p 10.)

A non-centralized script is a script that is defined in the SCRPTLIB and that resides on the FTW. Figure 4-23 on page 219 shows the relationship between the job definition and the member name in the script library (EQQSCLIB).
Figure 4-23 Non-centralized script defined in controller script library (EQQSCLIB). (The figure shows a job definition pointing to member AIXHOUSP in the EQQSCLIB; the member contains VARSUB, JOBREC, and RECOVERY statements.)

4.5.2 Definition of centralized scripts

Define the centralized script job (operation) in a Tivoli Workload Scheduler for z/OS job stream (application) with the centralized script option set to Y (Yes). See Figure 4-24 on page 220.

Note: The default is N (No) for all operations in Tivoli Workload Scheduler for z/OS.
Figure 4-24 Centralized script option set in ISPF panel or JSC window

A centralized script is a script that resides in the Tivoli Workload Scheduler for z/OS JOBLIB and that is downloaded to the fault-tolerant agent every time the job is submitted. The centralized script is defined the same way as a normal job JCL in Tivoli Workload Scheduler for z/OS.

Example 4-17 Centralized script for job AIXHOUSP defined in controller JOBLIB
EDIT TWS.V8R20.JOBLIB(AIXHOUSP) - 01.02 Columns 00001 00072
Command ===> Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 //*%OPC SCAN
000002 //* OPC Comment: This job calls TWS rmstdlist script.
000003 //* OPC ======== - The rmstdlist script is called with -p flag and
000004 //* OPC with parameter 10.
000005 //* OPC - This means that the rmstdlist script will print
000006 //* OPC files in the stdlist directory older than 10 days.
000007 //* OPC - If rmstdlist ends with RC in the interval from 1
000008 //* OPC to 128, OPC will add recovery application
000009 //* OPC F100CENTRECAPPL.
000010 //* OPC
000011 //*%OPC RECOVER JOBCODE=(1-128),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
000012 //* OPC
000013 echo OPC occurrence plan date is: &ODMY1.
000014 rmstdlist -p 10
****** **************************** Bottom of Data ****************************

In the centralized script in Example 4-17 on page 220, we are running the rmstdlist program that is delivered with Tivoli Workload Scheduler. In the centralized script, we use Tivoli Workload Scheduler for z/OS Automatic Recovery as well as JCL variables.

Rules when creating centralized scripts
Follow these rules when creating the centralized scripts in the Tivoli Workload Scheduler for z/OS JOBLIB:
– Each line starts in column 1 and ends in column 80.
– A backslash (\) in column 80 can be used to continue script lines with more than 80 characters.
– Blanks at the end of a line are automatically removed.
– Lines that start with //* OPC, //*%OPC, or //*>OPC are used for comments, variable substitution directives, and automatic job recovery. These lines are automatically removed before the script is downloaded to the FTA.

4.5.3 Definition of non-centralized scripts

Non-centralized scripts are defined in a special partitioned data set, EQQSCLIB, that is allocated in the Tivoli Workload Scheduler for z/OS controller started task procedure and used to store the job or task definitions for FTA jobs. The script (the JCL) resides on the fault-tolerant agent.

Note: This is the default behavior in Tivoli Workload Scheduler for z/OS for fault-tolerant agent jobs.

You must use the JOBREC statement in every SCRPTLIB member to specify the script or command to run. In the SCRPTLIB members, you can also specify the following statements:
– VARSUB to use the Tivoli Workload Scheduler for z/OS automatic substitution of variables when the Symphony file is created or when an operation on an FTW is added to the current plan dynamically.
– RECOVERY to use the Tivoli Workload Scheduler recovery.
  • Example 4-18 shows the syntax for the VARSUB, JOBREC, and RECOVERY statements. Example 4-18 Syntax for VARSUB, JOBREC, and RECOVERY statements VARSUB TABLES(GLOBAL|tab1,tab2,..|APPL) PREFIX(’char’) BACKPREF(’char’) VARFAIL(YES|NO) TRUNCATE(YES|NO) JOBREC JOBSCR|JOBCMD (’task’) JOBUSR (’username’) INTRACTV(YES|NO) RCCONDSUC(’success condition’) RECOVERY OPTION(STOP|CONTINUE|RERUN) MESSAGE(’message’) JOBCMD|JOBSCR(’task’) JOBUSR (’username’) JOBWS(’wsname’) INTRACTV(YES|NO) RCCONDSUC(’success condition’) If you define a job with a SCRPTLIB member in the Tivoli Workload Scheduler for z/OS database that contains errors, the daily planning batch job sets the status of that job to failed in the Symphony file. This change of status is not shown in the Tivoli Workload Scheduler for z/OS interface. You can find the messages that explain the error in the log of the daily planning batch job. If you dynamically add a job to the plan in Tivoli Workload Scheduler for z/OS whose associated SCRPTLIB member contains errors, the job is not added. You can find the messages that explain this failure in the controller EQQMLOG. Rules when creating JOBREC, VARSUB, or RECOVERY statements Each statement consists of a statement name, keywords, and keyword values, and follows TSO command syntax rules. When you specify SCRPTLIB statements, follow these rules: Statement data must be in columns 1 through 72. Information in columns 73 through 80 is ignored. A blank serves as the delimiter between two keywords; if you supply more than one delimiter, the extra delimiters are ignored. Continuation characters and blanks are not used to define a statement that continues on the next line.222 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Values for keywords are contained within parentheses. If a keyword can have multiple values, the list of values must be separated by valid delimiters. Delimiters are not allowed between a keyword and the left parenthesis of the specified value.
Type /* to start a comment and */ to end a comment. A comment can span record images in the parameter member and can appear anywhere except in the middle of a keyword or a specified value.
A statement continues until the next statement or until the end of records in the member.
If the value of a keyword includes spaces, enclose the value within single or double quotation marks, as in Example 4-19.

Example 4-19 JOBCMD and JOBSCR examples
JOBCMD('ls la')
JOBSCR('C:/USERLIB/PROG/XME.EXE')
JOBSCR("C:/USERLIB/PROG/XME.EXE")
JOBSCR("C:/USERLIB/PROG/XME.EXE 'THIS IS THE PARAMETER LIST' ")
JOBSCR('C:/USERLIB/PROG/XME.EXE "THIS IS THE PARAMETER LIST" ')

Description of the VARSUB statement
The VARSUB statement defines the variable substitution options. This statement must always be the first one in the members of the SCRPTLIB. For more information about the variable definition, see IBM Tivoli Workload Scheduler for z/OS Managing the Workload, Version 8.2 (Maintenance Release April 2004), SC32-1263.

Note: VARSUB can be used in combination with a job that is defined with centralized script.

Figure 4-25 shows the format of the VARSUB statement. (The syntax diagram does not reproduce here; Example 4-18 on page 222 gives the equivalent plain-text syntax.)

Figure 4-25 Format of the VARSUB statement
  • VARSUB is defined in the members of the EQQSCLIB library, as specified by the EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan extend, replan, and Symphony renew batch job JCL. Description of the VARSUB parameters The following describes the VARSUB parameters: TABLES(GLOBAL|APPL|table1,table2,...) Identifies the variable tables that must be searched and the search order. APPL indicates the application variable table (see the VARIABLE TABLE field in the MCP panel, at Occurrence level). GLOBAL indicates the table defined in the GTABLE keyword of the OPCOPTS controller and BATCHOPT batch options. PREFIX(char|&) A non-alphanumeric character that precedes a variable. It serves the same purpose as the ampersand (&) character that is used in variable substitution in z/OS JCL. BACKPREF(char|%) A non-alphanumeric character that delimits a variable to form simple and compound variables. It serves the same purpose as the percent (%) character that is used in variable substitution in z/OS JCL. VARFAIL(NO|YES) Specifies whether Tivoli Workload Scheduler for z/OS is to issue an error message when a variable substitution error occurs. If you specify NO, the variable string is left unchanged without any translation. TRUNCATE(YES|NO) Specifies whether variables are to be truncated if they are longer than the allowed length. If you specify NO and the keywords are longer than the allowed length, an error message is issued. The allowed length is the length of the keyword for which you use the variable. For example, if you specify a variable of five characters for the JOBWS keyword, the variable is truncated to the first four characters. Description of the JOBREC statement The JOBREC statement defines the fault-tolerant workstation job properties. You must specify JOBREC for each member of the SCRPTLIB. For each job this statement specifies the script or the command to run and the user that must run the script or command.224 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Note: JOBREC can be used in combination with a job that is defined with centralized script.

Figure 4-26 shows the format of the JOBREC statement.

Figure 4-26 Format of the JOBREC statement

JOBREC is defined in the members of the EQQSCLIB library, as specified by the EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan extend, replan, and Symphony renew batch job JCL.

Description of the JOBREC parameters
The following describes the JOBREC parameters:

JOBSCR(script name)
Specifies the name of the shell script or executable file to run for the job. The maximum length is 4095 characters. If the script includes more than one word, it must be enclosed within single or double quotation marks. Do not specify this keyword if the job uses a centralized script.

JOBCMD(command name)
Specifies the name of the shell command to run the job. The maximum length is 4095 characters. If the command includes more than one word, it must be enclosed within single or double quotation marks. Do not specify this keyword if the job uses a centralized script.

JOBUSR(user name)
Specifies the name of the user submitting the specified script or command. The maximum length is 47 characters. If you do not specify the user in the JOBUSR keyword, the user defined in the CPUUSER keyword of the CPUREC statement is used. The CPUREC statement is the one related to the workstation on which the specified script or command must run. If the user is not specified in the CPUUSER keyword, the tws user is used.

If the script is centralized, you can also use the job-submit exit (EQQUX001) to specify the user name. This user name overrides the value specified in the JOBUSR keyword. In turn, the value that is specified in the JOBUSR keyword
overrides that specified in the CPUUSER keyword of the CPUREC statement. If no user name is specified, the tws user is used.

If you use this keyword to specify the name of the user who submits the specified script or command on a Windows fault-tolerant workstation, you must associate this user name with the Windows workstation in the USRREC initialization statement.

INTRACTV(YES|NO)
Specifies that a Windows job runs interactively on the Windows desktop. This keyword is used only for jobs running on Windows fault-tolerant workstations.

RCCONDSUC("success condition")
An expression that determines the return code (RC) that is required to consider a job as successful. If you do not specify this keyword, a return code equal to zero corresponds to a successful condition. A return code different from zero corresponds to a job abend.

The success condition maximum length is 256 characters, and the total length of JOBCMD or JOBSCR plus the success condition must be 4086 characters. This is because the TWSRCMAP string is inserted between the success condition and the script or command name. For example, the dir command together with the success condition RC<4 is translated into:
dir TWSRCMAP: RC<4

The success condition expression can contain a combination of comparison and Boolean expressions:
– Comparison expression specifies the job return codes. The syntax is: (RC operator operand), where:
• RC is the RC keyword (type RC).
• operator is the comparison operator. It can have the values shown in Table 4-5.

Table 4-5 Comparison operators

Example    Operator   Description
RC < a     <          Less than
RC <= a    <=         Less than or equal to
RC > a     >          Greater than
RC >= a    >=         Greater than or equal to
RC = a     =          Equal to
RC <> a    <>         Not equal to

• operand is an integer between -2147483647 and 2147483647.
For example, you can define a successful job as a job that ends with a return code less than or equal to 3 as follows:
RCCONDSUC "(RC <= 3)"
– Boolean expression specifies a logical combination of comparison expressions. The syntax is: comparison_expression operator comparison_expression, where:
• comparison_expression: The expression is evaluated from left to right. You can use parentheses to assign a priority to the expression evaluation.
• operator: Logical operator. It can have the following values: and, or, not.
For example, you can define a successful job as a job that ends with a return code less than or equal to 3 or with a return code not equal to 5, and less than 10 as follows:
RCCONDSUC "(RC<=3) OR ((RC<>5) AND (RC<10))"

Description of the RECOVERY statement
The RECOVERY statement defines the Tivoli Workload Scheduler recovery for a job whose status is in error, but whose error code is not FAIL. To run the recovery, you can specify one or both of the following recovery actions:
– A recovery job (JOBCMD or JOBSCR keywords)
– A recovery prompt (MESSAGE keyword)

The recovery actions must be followed by one of the recovery options (the OPTION keyword): stop, continue, or rerun. The default is stop with no recovery job and no recovery prompt. For more information about recovery in a distributed network, see Tivoli Workload Scheduler Reference Guide Version 8.2 (Maintenance Release April 2004), SC32-1274.

The RECOVERY statement is ignored if it is used with a job that runs a centralized script.

Figure 4-27 on page 228 shows the format of the RECOVERY statement.
Figure 4-27 Format of the RECOVERY statement

RECOVERY is defined in the members of the EQQSCLIB library, as specified by the EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan extend, replan, and Symphony renew batch job JCL.

Description of the RECOVERY parameters
The following describes the RECOVERY parameters:

OPTION(STOP|CONTINUE|RERUN)
Specifies the option that Tivoli Workload Scheduler for z/OS must use when a job abends. For every job, Tivoli Workload Scheduler for z/OS enables you to define a recovery option. You can specify one of the following values:
– STOP: Do not continue with the next job. The current job remains in error. You cannot specify this option if you use the MESSAGE recovery action.
– CONTINUE: Continue with the next job. The current job status changes to complete in the z/OS interface.
– RERUN: Automatically rerun the job (once only). The job status changes to ready, and then to the status of the rerun. Before rerunning the job for a second time, an automatically generated recovery prompt is displayed.

MESSAGE("message")
Specifies the text of a recovery prompt, enclosed in single or double quotation marks, to be displayed if the job abends. The text can contain up to 64 characters. If the text begins with a colon (:), the prompt is displayed, but no reply is required to continue processing. If the text begins with an exclamation mark (!), the prompt is not displayed but a reply is required to proceed. You cannot use the recovery prompt if you specify the recovery STOP option without using a recovery job.
  • JOBCMD(command name)Specifies the name of the shell command to run if the job abends. Themaximum length is 4095 characters. If the command includes more than oneword, it must be enclosed within single or double quotation marks.JOBSCR(script name)Specifies the name of the shell script or executable file to be run if the jobabends. The maximum length is 4095 characters. If the script includes morethan one word, it must be enclosed within single or double quotation marks.JOBUSR(user name)Specifies the name of the user submitting the recovery job action. Themaximum length is 47 characters. If you do not specify this keyword, the userdefined in the JOBUSR keyword of the JOBREC statement is used.Otherwise, the user defined in the CPUUSER keyword of the CPURECstatement is used. The CPUREC statement is the one related to theworkstation on which the recovery job must run. If the user is not specified inthe CPUUSER keyword, the tws user is used.If you use this keyword to specify the name of the user who runs the recoveryon a Windows fault-tolerant workstation, you must associate this user name tothe Windows workstation in the USRREC initialization statementJOBWS(workstation name)Specifies the name of the workstation on which the recovery job or commandis submitted. The maximum length is 4 characters. The workstation mustbelong to the same domain as the workstation on which the main job runs. Ifyou do not specify this keyword, the workstation name of the main job is used.INTRACTV(YES|NO)Specifies that the recovery job runs interactively on a Windows desktop. Thiskeyword is used only for jobs running on Windows fault-tolerant workstations.RCCONDSUC(“success condition”)An expression that determines the return code (RC) that is required toconsider a recovery job as successful. If you do not specify this keyword, thereturn code equal to zero corresponds to a successful condition. A returncode different from zero corresponds to the job abend.The success condition maximum length is 256 characters and the total lengthof the JOBCMD or JOBSCR plus the success condition must be 4086characters. This is because the TWSRCMAP string is inserted between thesuccess condition and the script or command name. For example, the dircommand together with the success condition RC<4 is translated into: dir TWSRCMAP: RC<4 Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 229
The success condition expression can contain a combination of comparison and Boolean expressions:
– Comparison expression: Specifies the job return codes. The syntax is: (RC operator operand), where:
• RC is the RC keyword (type RC).
• operator is the comparison operator. It can have the values in Table 4-6.

Table 4-6 Comparison operator values

Example    Operator   Description
RC < a     <          Less than
RC <= a    <=         Less than or equal to
RC > a     >          Greater than
RC >= a    >=         Greater than or equal to
RC = a     =          Equal to
RC <> a    <>         Not equal to

• operand is an integer between -2147483647 and 2147483647.
For example, you can define a successful job as a job that ends with a return code less than or equal to 3 as follows:
RCCONDSUC "(RC <= 3)"
– Boolean expression: Specifies a logical combination of comparison expressions. The syntax is: comparison_expression operator comparison_expression, where:
• comparison_expression: The expression is evaluated from left to right. You can use parentheses to assign a priority to the expression evaluation.
• operator: Logical operator (it can be and, or, or not).
For example, you can define a successful job as a job that ends with a return code less than or equal to 3 or with a return code not equal to 5, and less than 10 as follows:
RCCONDSUC "(RC<=3) OR ((RC<>5) AND (RC<10))"
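To tie the RECOVERY keywords together, the following is a hypothetical SCRPTLIB member (the script path, user, and message text are invented for illustration) that pauses on a prompt and then continues with the next job:

JOBREC JOBSCR('/opt/app/cleanup.sh') JOBUSR(appuser)
RECOVERY OPTION(CONTINUE) MESSAGE('Cleanup failed - OK to continue with next job?')

Contrast this with Example 4-20 below, which uses OPTION(RERUN) together with a recovery job and a prompt.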
Example VARSUB, JOBREC, and RECOVERY
For the test of VARSUB, JOBREC, and RECOVERY, we used the non-centralized script member shown in Example 4-20.

Example 4-20 Non-centralized AIX script with VARSUB, JOBREC, and RECOVERY
EDIT TWS.V8R20.SCRPTLIB(F100DJ02) - 01.05 Columns 00001 00072
Command ===> Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 /* Definition for job with "non-centralized" script */
000002 /* ------------------------------------------------ */
000003 /* VARSUB - to manage JCL variable substitution */
000004 VARSUB
000005 TABLES(E2EVAR)
000006 PREFIX(&)
000007 BACKPREF(%)
000008 VARFAIL(YES)
000009 TRUNCATE(YES)
000010 /* JOBREC - to define script, user and some other specifications */
000011 JOBREC
000012 JOBCMD(rm &TWSHOME/demo.sh)
000013 JOBUSR (%TWSUSER)
000014 /* RECOVERY - to define what FTA should do in case of error in job */
000015 RECOVERY
000016 OPTION(RERUN) /* Rerun the job after recover*/
000017 JOBCMD(touch &TWSHOME/demo.sh) /* Recover job */
000018 JOBUSR(&TWSUSER) /* User for recover job */
000019 MESSAGE (Create demo.sh on FTA?) /* Prompt message */
****** **************************** Bottom of Data ****************************

The member F100DJ02 in Example 4-20 was created in the SCRPTLIB (EQQSCLIB) partitioned data set. In the non-centralized script F100DJ02, we use VARSUB to specify how we want Tivoli Workload Scheduler for z/OS to scan for and substitute JCL variables. The JOBREC parameters specify that we will run the UNIX (AIX) rm command for a file named demo.sh.

If the file does not exist (it does not exist the first time the script is run), we run the recovery command (touch) that creates the missing file, so we can rerun (OPTION(RERUN)) the JOBREC JOBCMD() without any errors. Before the job is rerun, an operator has to reply yes to the prompt message: Create demo.sh on FTA?

Example 4-21 on page 232 shows another example. The job will be marked complete if the return code from the script is less than 16 and different from 8, or equal to 20.
  • Example 4-21 Non-centralized script definition with RCCONDSUC parameter EDIT TWS.V8R20.SCRPTLIB(F100DJ03) - 01.01 Columns 00001 00072 Command ===> Scroll ===> CSR ****** ***************************** Top of Data ****************************** 000001 /* Definition for job with "distributed" script */ 000002 /* -------------------------------------------- */ 000003 /* VARSUB - to manage JCL variable substitution */ 000004 VARSUB 000005 TABLES(IBMGLOBAL) 000006 PREFIX(%) 000007 VARFAIL(YES) 000008 TRUNCATE(NO) 000009 /* JOBREC - to define script, user and some other specifications */ 000010 JOBREC 000011 JOBSCR(/tivoli/tws/scripts/rc_rc.sh 12) 000012 JOBUSR(%DISTUID.) 000013 RCCONDSUC(((RC<16) AND (RC<>8)) OR (RC=20)) Important: Be careful with lowercase and uppercase. In Example 4-21, it is important that the variable name DISTUID is typed with capital letters because Tivoli Workload Scheduler for z/OS JCL variable names are always uppercase. On the other hand, it is important that the value for the DISTUID variable is defined in Tivoli Workload Scheduler for z/OS variable table IBMGLOBAL with lowercase letters, because the user ID is defined on the UNIX system with lowercase letters. Remember to type with CAPS OFF when editing members in SCRPTLIB (EQQSCLIB) for jobs with non-centralized script and members in Tivoli Workload Scheduler for z/OS JOBLIB (EQQJBLIB) for jobs with centralized script.4.5.4 Combination of centralized script and VARSUB, JOBRECparameters Sometimes it can be necessary to create a member in the EQQSCLIB (normally used for non-centralized script definitions) for a job that is defined in Tivoli Workload Scheduler for z/OS with centralized script.232 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
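As a quick check of the RCCONDSUC expression in Example 4-21, ((RC<16) AND (RC<>8)) OR (RC=20) evaluates as follows for a few sample return codes: RC=0, RC=5, and RC=15 are treated as successful; RC=8 is an abend (excluded explicitly); RC=16 and RC=17 are abends (not less than 16); and RC=20 is successful (matched by the OR branch).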
  • This can be the case if: The RCCONDSUC parameter will be used for the job to accept specific return codes or return code ranges. Note: You cannot use Tivoli Workload Scheduler for z/OS highest return code for fault-tolerant workstation jobs. You have to use the RCCONDSUC parameter. A special user should be assigned to the job with the JOBUSR parameter. Tivoli Workload Scheduler for z/OS JCL variables should be used in the JOBUSR() or the RCCONDSUC() parameters (for example).Remember that the RECOVERY statement cannot be specified in EQQSCLIB forjobs with centralized script. (It will be ignored.)To make this combination, you simply:1. Create the centralized script in Tivoli Workload Scheduler for z/OS JOBLIB. The member name should be the same as the job name defined for the operation (job) in the Tivoli Workload Scheduler for z/OS job stream (application).2. Create the corresponding member in the EQQSCLIB. The member name should be the same as the member name for the job in the JOBLIB.For example:We have a job with centralized script. In the job we should accept return codesless than 7 and the job should run with user dbprod.To accomplish this, we define the centralized script in Tivoli Workload Schedulerfor z/OS the same way as shown in Example 4-17 on page 220. Next, we createa member in the EQQSCLIB with the same name as the member name used forthe centralized script.This member should only contain the JOBREC RCCONDSUC() and JOBUSR()parameters (Example 4-22).Example 4-22 EQQSCLIB (SCRIPTLIB) definition for job with centralized scriptEDIT TWS.V8R20.SCRPTLIB(F100CJ02) - 01.05 Columns 00001 00072Command ===> Scroll ===> CSR****** ***************************** Top of Data ******************************000001 JOBREC000002 RCCONDSUC(RC<7)000003 JOBUSR(dbprod) Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 233
****** **************************** Bottom of Data ****************************

4.5.5 Definition of FTW jobs and job streams in the controller

When the script is defined either as centralized in the Tivoli Workload Scheduler for z/OS job library (JOBLIB) or as non-centralized in the Tivoli Workload Scheduler for z/OS script library (EQQSCLIB), you can define job streams (applications) to run the defined scripts.

Definition of job streams (applications) for fault-tolerant workstation jobs is done exactly the same way as for normal mainframe job streams: The job is defined in the job stream, and dependencies are added (predecessor jobs, time dependencies, special resources). Optionally, a run cycle can be added to run the job stream at a set time.

When the job stream is defined, the fault-tolerant workstation jobs can be executed and the final verification test can be performed. Figure 4-28 shows an example of a job stream that is used to test the end-to-end scheduling environment. There are four distributed jobs (seen in the left window in the figure), and these jobs will run on workdays (seen in the right window).

Figure 4-28 Example of a job stream used to test end-to-end scheduling
  • It is not necessary to create a run cycle for job streams to test the FTW jobs, as they can be added manually to the plan in Tivoli Workload Scheduler for z/OS.4.6 Verification test of end-to-end scheduling At this point we have: Installed and configured the Tivoli Workload Scheduler for z/OS controller for end-to-end scheduling Installed and configured the Tivoli Workload Scheduler for z/OS end-to-end server Defined the network topology for the distributed Tivoli Workload Scheduler network in the end-to-end server and plan batch jobs Installed and configured Tivoli Workload Scheduler on the servers in the network for end-to-end scheduling Defined fault-tolerant workstations and activated these workstations in the Tivoli Workload Scheduler for z/OS network Verified that the plan program executed successfully with the end-to-end topology statements Created members with centralized script and non-centralized scripts Created job streams containing jobs with centralized and non-centralized scripts It is time to perform the final verification test of end-to-end scheduling. This test verifies that: Jobs with centralized script definitions can be executed on the FTWs, and the job log can be browsed for these jobs. Jobs with non-centralized script definitions can be executed on the FTWs, and the job log can be browsed for these jobs. Jobs with a combination of centralized and non-centralized script definitions can be executed on the FTWs, and the job log can be browsed for these jobs. The verification can be performed in several ways. Because we would like to verify that our end-to-end environment is working and that it is possible to run jobs on the FTWs, we have focused on this verification. We used the Job Scheduling Console in combination with legacy Tivoli Workload Scheduler for z/OS ISPF panels for the verifications. Of course, it is possible to perform the complete verification only with the legacy ISPF panels. Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 235
Finally, if you decide to use only centralized scripts or only non-centralized scripts, you do not have to verify both cases.

4.6.1 Verification of job with centralized script definitions

Add a job stream with a job defined with centralized script. The job from Example 4-17 on page 220 is used in this example. Before the job was submitted, the JCL (script) was edited, and the parameter on the rmstdlist program was changed from 10 to 1 (Figure 4-29).

Figure 4-29 Edit JCL for centralized script, rmstdlist parameter changed from 10 to 1

The job is submitted, and it is verified that the job completes successfully on the FTA. The output is verified by browsing the job log. Figure 4-30 on page 237 shows only the first part of the job log. See the complete job log in Example 4-23 on page 237.

From the job log, you can see that the centralized script that was defined in the controller JOBLIB is copied to (see the line with the = JCLFILE text):
/tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK.sh
  • The Tivoli Workload Scheduler for z/OS JCL variable &ODMY1 in the “echo” line(Figure 4-29) has been substituted by the Tivoli Workload Scheduler for z/OScontroller with the job stream planning date (for our case, 210704, seen inExample 4-23 on page 237).Figure 4-30 Browse first part of job log for the centralized script job in JSCExample 4-23 The complete job log for the centralized script job================================================================ JOB : OPCMASTER#BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK= USER : twstest= JCLFILE :/tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK.sh= Job Number: 52754= Wed 07/21/04 21:52:39 DFT===============================================================TWS for UNIX/JOBMANRC 8.2AWSBJA001I Licensed Materials Property of IBM5698-WKB(C) Copyright IBM Corp 1998,2003US Government User Restricted RightsUse, duplication or disclosure restricted by GSA ADP Schedule Contract with IBMCorp.AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc/tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK.sh Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 237
  • TWS for UNIX (AIX)/JOBINFO 8.2 (9.5) Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp 1998,2001 US Government User Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. Installed for user . Locale LANG set to "C" Now we are running the script /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8C FD2B8A25EC41.J_005_F100CENTHOUSEK.sh OPC occurrence plan date is: 210704 TWS for UNIX/RMSTDLIST 8.2 AWSBJA001I Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp 1998,2003 US Government User Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. AWSBIS324I Will list directories older than -1 /tivoli/tws/twstest/tws/stdlist/2004.07.13 /tivoli/tws/twstest/tws/stdlist/2004.07.14 /tivoli/tws/twstest/tws/stdlist/2004.07.15 /tivoli/tws/twstest/tws/stdlist/2004.07.16 /tivoli/tws/twstest/tws/stdlist/2004.07.18 /tivoli/tws/twstest/tws/stdlist/2004.07.19 /tivoli/tws/twstest/tws/stdlist/logs/20040713_NETMAN.log /tivoli/tws/twstest/tws/stdlist/logs/20040713_TWSMERGE.log /tivoli/tws/twstest/tws/stdlist/logs/20040714_NETMAN.log /tivoli/tws/twstest/tws/stdlist/logs/20040714_TWSMERGE.log /tivoli/tws/twstest/tws/stdlist/logs/20040715_NETMAN.log /tivoli/tws/twstest/tws/stdlist/logs/20040715_TWSMERGE.log /tivoli/tws/twstest/tws/stdlist/logs/20040716_NETMAN.log /tivoli/tws/twstest/tws/stdlist/logs/20040716_TWSMERGE.log /tivoli/tws/twstest/tws/stdlist/logs/20040718_NETMAN.log /tivoli/tws/twstest/tws/stdlist/logs/20040718_TWSMERGE.log =============================================================== = Exit Status : 0 = System Time (Seconds) : 1 Elapsed Time (Minutes) : 0 = User Time (Seconds) : 0 = Wed 07/21/04 21:52:40 DFT =============================================================== This completes the verification of centralized script.238 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
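As an illustrative side check (not a required step): the = JCLFILE line in Example 4-23 shows that downloaded centralized scripts are written under the centralized directory in the Tivoli Workload Scheduler home directory on the FTA, so you can inspect the copies there, for example:

ls -l /tivoli/tws/twstest/tws/centralized

The path shown here is from our test environment; substitute your own Tivoli Workload Scheduler home directory.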
4.6.2 Verification of job with non-centralized scripts

Add a job stream with a job defined with non-centralized script. Our example uses the non-centralized job script from Example 4-20 on page 231. The job is submitted, and it is verified that the job ends in error. (Remember that the JOBCMD will try to remove a non-existing file.) Reply to the prompt with Yes, and the recovery job is executed (Figure 4-31):

– The job ends in error with RC=0002.
– Right-click the job to open a context menu (1).
– In the context menu, select Recovery Info to open the Job Instance Recovery Information window.
– The recovery message is shown, and you can reply to the prompt by clicking the Reply to Prompt arrow.
– Select Yes and click OK to run the recovery job and rerun the failed F100DJ02 job (if the recovery job ends successfully).

Figure 4-31 Running F100DJ02 job with non-centralized script and RECOVERY options

The same process can be performed in Tivoli Workload Scheduler for z/OS legacy ISPF panels. When the job ends in error, type RI (for Recovery Info) for the job in the Tivoli Workload Scheduler for z/OS Error list to get the panel shown in Figure 4-32 on page 240.
  • Figure 4-32 Recovery Info ISPF panel in Tivoli Workload Scheduler for z/OS To reply Yes to the prompt, type PY in the Option field. Then press Enter several times to see the result of the recovery job in the same panel. The Recovery job info fields will be updated with information for Recovery jobid, Duration, and so on (Figure 4-33). Figure 4-33 Recovery Info after the Recovery job has been executed. The recovery job has been executed successfully and the Recovery Option (Figure 4-32) was rerun, so the failing job (F100DJ02) will be rerun and will complete successfully. Finally, the job log is browsed for the completed F100DJ02 job (Example 4-24 on page 241). The job log shows that the user is twstest ( = USER) and that the twshome directory is /tivoli/tws/twstest/tws (part of the = JCLFILE line).240 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Example 4-24 The job log for the second run of F100DJ02 (after the RECOVERY job)

===============================================================
= JOB       : OPCMASTER#BB8D04BFE71A3901.J_010_F100DECSCRIPT01
= USER      : twstest
= JCLFILE   : rm /tivoli/tws/twstest/tws/demo.sh
= Job Number: 24100
= Wed 07/21/04 22:46:33 DFT
===============================================================
TWS for UNIX/JOBMANRC 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc rm
TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for user .
Locale LANG set to "C"
Now we are running the script rm /tivoli/tws/twstest/tws/demo.sh
===============================================================
= Exit Status            : 0
= System Time (Seconds)  : 0        Elapsed Time (Minutes) : 0
= User Time (Seconds)    : 0
= Wed 07/21/04 22:46:33 DFT
===============================================================

If you compare the job log output with the non-centralized script definition in Example 4-21 on page 232, you can see that the user and the TWS home directory were defined as Tivoli Workload Scheduler for z/OS JCL variables (&TWSHOME and %TWSUSER). These variables have been substituted with values from the Tivoli Workload Scheduler for z/OS variable table E2EVAR (specified in the VARSUB TABLES() parameter).

This variable substitution is performed when the job definition is added to the Symphony file, either during normal Tivoli Workload Scheduler for z/OS plan extension or replan, or when a user adds the job stream to the plan ad hoc.

This completes the test of the non-centralized script.
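To make the substitution mechanism concrete, a scriptlib member using these variables might look like the following sketch. It assumes the VARSUB and JOBREC scriptlib syntax used for non-centralized scripts in this chapter; because Example 4-21 itself is on page 232 and not reproduced here, the member below is an illustrative reconstruction rather than a copy, and the comments are included only for explanation:

   VARSUB
     TABLES(E2EVAR)                   /* resolve variables from table E2EVAR */
   JOBREC
     JOBCMD('rm &TWSHOME./demo.sh')   /* &TWSHOME. substituted at plan time  */
     JOBUSR(%TWSUSER)                 /* %TWSUSER substituted at plan time   */

The trailing period after &TWSHOME marks the end of the variable name so that the rest of the path can be concatenated. Because substitution happens when the Symphony file is built, a changed value in E2EVAR takes effect only at the next plan extension or replan, or when the job stream is added to the plan again.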
4.6.3 Verification of centralized script with JOBREC parameters

We verified a job whose centralized script is combined with a JOBREC statement in the script library (EQQSCLIB). The verification uses a job named F100CJ02 with the centralized script shown in Example 4-25. The centralized script is defined in the Tivoli Workload Scheduler for z/OS JOBLIB.

Example 4-25 Centralized script for test in combination with JOBREC

EDIT       TWS.V8R20.JOBLIB(F100CJ02) - 01.07            Columns 00001 00072
Command ===>                                             Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 //*%OPC SCAN
000002 //* OPC Here is an OPC JCL Variable OYMD1: &OYMD1.
000003 //* OPC
000004 //*%OPC RECOVER JOBCODE=(12),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
000005 //* OPC
000006 echo Todays OPC date is: &OYMD1
000007 echo Unix system date is:
000008 date
000009 echo OPC schedule time is: &CHHMMSSX
000010 exit 12
****** **************************** Bottom of Data ****************************

The JOBREC statement for the F100CJ02 job is defined in the Tivoli Workload Scheduler for z/OS scriptlib (EQQSCLIB); see Example 4-26. It is important that the member name for the job (F100CJ02 in our example) is the same in the JOBLIB and the SCRPTLIB.

Example 4-26 JOBREC definition for the F100CJ02 job

EDIT       TWS.V8R20.SCRPTLIB(F100CJ02) - 01.07          Columns 00001 00072
Command ===>                                             Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 JOBREC
000002 RCCONDSUC(RC<7)
000003 JOBUSR(maestro)
****** **************************** Bottom of Data ****************************
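As an aside, RCCONDSUC() accepts a logical expression over the job's return code, so richer success conditions than a single comparison are possible. The following sketch is illustrative only, assuming the comparison and logical operators documented for the 8.2 scriptlib syntax; the comment is included only for explanation:

   JOBREC
     RCCONDSUC('(RC<7) OR (RC=12)')   /* treat RC 0-6 and RC 12 as success */
     JOBUSR(maestro)

With a definition like this sketch, even the first run of F100CJ02 (which exits with code 12) would be marked as successful; the simple RC<7 condition we actually used is what allows the recovery path below to be demonstrated.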
The first time the job is run, it abends with return code 12 (because of the exit 12 line in the centralized script). Example 4-27 shows the job log. Note the = JCLFILE line: here you can see TWSRCMAP: RC<7, which is added because we specified RCCONDSUC('RC<7') in the JOBREC definition for the F100CJ02 job.

Example 4-27 Job log for the F100CJ02 job (ends with return code 12)

===============================================================
= JOB       : OPCMASTER#BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01
= USER      : maestro
= JCLFILE   : /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh TWSRCMAP: RC<7
= Job Number: 56624
= Wed 07/21/04 23:07:16 DFT
===============================================================
TWS for UNIX/JOBMANRC 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh
TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for user .
Locale LANG set to "C"
Todays OPC date is: 040721
Unix system date is:
Wed Jul 21 23:07:17 DFT 2004
OPC schedule time is: 23021516
===============================================================
= Exit Status            : 12
= System Time (Seconds)  : 0        Elapsed Time (Minutes) : 0
= User Time (Seconds)    : 0
= Wed 07/21/04 23:07:17 DFT
===============================================================

The job log also shows that the user is set to maestro (the = USER line). This is because we specified JOBUSR(maestro) in the JOBREC statement.

Next, before the job is rerun, the JCL (the centralized script) is edited, and the last line is changed from exit 12 to exit 6. Example 4-28 on page 244 shows the edited JCL.
Example 4-28 The script (JCL) for the F100CJ02 job, edited so that the exit code is changed to 6

****** ***************************** Top of Data ******************************
000001 //*>OPC SCAN
000002 //* OPC Here is an OPC JCL Variable OYMD1: 040721
000003 //* OPC
000004 //*>OPC RECOVER JOBCODE=(12),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
000005 //* OPC MSG:
000006 //* OPC MSG: I *** R E C O V E R Y  A C T I O N S  T A K E N ***
000007 //* OPC
000008 echo Todays OPC date is: 040721
000009 echo
000010 echo Unix system date is:
000011 date
000012 echo
000013 echo OPC schedule time is: 23021516
000014 echo
000015 exit 6
****** **************************** Bottom of Data ****************************

Note that the line with the Tivoli Workload Scheduler for z/OS automatic recovery statement has changed: the % sign has been replaced by the > sign. This means that Tivoli Workload Scheduler for z/OS has performed the recovery action by adding the F100CENTRECAPPL job stream (application).

After the edit, the job is rerun and completes successfully. (It is marked as completed with return code = 0 in Tivoli Workload Scheduler for z/OS.) The RCCONDSUC() parameter in the scriptlib definition for the F100CJ02 job sets the job to successful even though the exit code from the script was 6 (Example 4-29).

Example 4-29 Job log for the F100CJ02 job with script exit code = 6

===============================================================
= JOB       : OPCMASTER#BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01
= USER      : maestro
= JCLFILE   : /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh TWSRCMAP: RC<7
= Job Number: 41410
= Wed 07/21/04 23:35:48 DFT
===============================================================
TWS for UNIX/JOBMANRC 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh
TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for user .
Locale LANG set to "C"
Todays OPC date is: 040721
Unix system date is:
Wed Jul 21 23:35:49 DFT 2004
OPC schedule time is: 23021516
===============================================================
= Exit Status            : 6
= System Time (Seconds)  : 0        Elapsed Time (Minutes) : 0
= User Time (Seconds)    : 0
= Wed 07/21/04 23:35:49 DFT
===============================================================

This completes the verification of a centralized script combined with JOBREC statements.

4.7 Activate support for the Tivoli Workload Scheduler Job Scheduling Console

To activate support for the Tivoli Workload Scheduler Job Scheduling Console (JSC), perform the following steps:

1. Install and start a Tivoli Workload Scheduler for z/OS JSC server on the mainframe.
2. Install Tivoli Management Framework 4.1 or 3.7.1.
3. Install Job Scheduling Services in Tivoli Management Framework.
4. To be able to work with Tivoli Workload Scheduler for z/OS (OPC) controllers from the JSC:
   a. Install the Tivoli Workload Scheduler for z/OS connector in Tivoli Management Framework.
   b. Create instances in Tivoli Management