
Tivoli Workload Scheduler and System Automation for z/OS Integration in an Enterprise of Multiple Plexes



September 2013


IBM Advanced Technical Skills


Art Eisenhour, Certified I/T Specialist: arteisen@us.ibm.com

Introduction

This paper discusses the areas of consideration when multiple Sysplexes that are running NetView for z/OS and System Automation for z/OS are integrated to work with a single Tivoli Workload Scheduler for z/OS system. It is the assumption of the authors that NetView for z/OS and System Automation for z/OS (SA) have already been installed in each Sysplex and that the users know how to customize SA policies. It is also assumed that Tivoli Workload Scheduler for z/OS (TWS) has been installed and is scheduling operations across the multiple Sysplexes, and that the users know how to customize, administer and schedule operations using TWS.


Special Notices

This document reflects the IBM Advanced Technical Skills understanding on many of the questions asked about the integration of System Automation and Tivoli Workload Scheduler for z/OS. It was produced and reviewed by the members of the IBM Advanced Technical Skills organization and additional subject matter experts. This document is presented “As-Is” and IBM does not assume responsibility for the statements expressed herein. It reflects the opinions of the IBM Advanced Technical Skills organization. If you have questions about the contents of this document, please direct them to Art Eisenhour (arteisen@us.ibm.com).



Trademarks

The following terms are trademarks or registered trademarks of International Business Machines Corporation in the United States and/or other countries: IBM, NetView, Parallel Sysplex, System z, Tivoli, z/OS and zSeries. A full list of U.S. trademarks owned by IBM may be found at http://www.ibm.com/legal/copytrade.shtml .


Other company, product and service names may be trademarks or service marks of others.

Acknowledgements


Co-authors:

John Cross, IBM Global Services

Adam Palmese, System z™ Technical Software Sales
Special thanks to Mike Sine, IBM Advanced Technical Skills for reviewing this document and providing valuable feedback.

Contents


Introduction
Special Notices
Trademarks
Acknowledgements
Overview
Move a Tivoli Workload Scheduler for z/OS (TWS) Controller to an alternate plex using System Automation for z/OS (SA)
Prerequisites
How to coordinate the management of a TWS Controller across an enterprise
Objectives
Steps
Examples of STARTUP and SHUTDOWN procedures
How to query and control a remote TWS Controller with SA
How to send requests from TWS to SA using Conventional workstations
How to send requests from TWS to SA using Automation workstations
How to send requests from TWS to SA using a Batch Job
How to use TWS to initiate a planned move
How to bring up TWS in disaster recovery mode
SA Application STARTUP policy to start the controller with alternate options
REXX exec example to start the controller with alternate options
How to deliver Netview z/OS commands through TSO
Security
Summary and Further Information
Appendix A - AOFRYCMD setup
Appendix B - TWS and SA for z/OS Communication Flow

An enterprise consisting of multiple z/OS sysplexes or mono-plexes can have work scheduled by a single Tivoli Workload Scheduler for z/OS (TWS). However, it may be necessary at times to change the location of the Controller from one plex to another. This whitepaper describes how to automate the move of a Controller in a multi-plex environment using System Automation for z/OS (SA), and how to send requests from TWS on any plex to SA on any plex.



Overview



This figure illustrates an enterprise with one active TWS Controller that can be moved from one Sysplex to another through the coordinated operation of the separate SA systems. The enterprise consists of multiple separate Sysplexes, and each Sysplex consists of one or more z/OS images. Each Sysplex has an SA Automation Manager, and each z/OS image has a TWS Tracker and an SA Netview Agent. The disk storage that contains the TWS databases and plan files is switchable from one Sysplex to another, as are the network connections related to the TWS Controller.
Overview of a planned move

Operations management has scheduled that SYSPLEX1 will be removed from service from 3 a.m. Sunday until 3 a.m. Monday to allow upgrades to the hardware of the z/OS images, and the TWS Controller is to be moved to SYSPLEX2. TWS will send a request before 3 a.m. Sunday to SA on SYSPLEX1 to stop the Controller started task on z/OS image MVS1A. SA will stop the TWS Controller and related server tasks. When the shutdown is complete, SA will order the network and disk storage switching to occur. When the switching operations are complete, SA on SYSPLEX1 will send a request to the SA Automation Manager on SYSPLEX2 to start the TWS Controller subsystems on MVS2A. When the TWS startup on MVS2A is complete, TWS will resume job scheduling automatically, and System Automation on SYSPLEX2 will notify the SA systems on the other plexes of the current location of the TWS Controller using global variables. The procedure to move the TWS Controller from SYSPLEX2 back to SYSPLEX1 can proceed in a similar manner or be left for another time.


Overview for unplanned outages

When the TWS controller cannot be restarted on the primary z/OS image or a hot standby backup within the local sysplex, then TWS disaster recovery procedures must be followed as described in the TWS Customization and Tuning documentation in the section on Disaster recovery planning. Otherwise, events might be lost and the files may not be in a consistent state. Dual job-tracking should be utilized and the alternate Controller started with JTOPTS CURRPLAN(NEW). Many of the techniques described for planned moves can be utilized, but preparations have to be made and procedures documented as to when and how the move will be executed.

The following sections describe how to use SA and TWS functions to enable these moves.


Move a Tivoli Workload Scheduler for z/OS (TWS) Controller to an alternate plex using System Automation for z/OS (SA)

Prerequisites:





  • Netview-to-Netview connections must exist between the plexes for RMTCMDs.

  • Every z/OS image must have a TWS Tracker subsystem that is network connected to the Controller through either TCP/IP or SNA.

  • The SA Application for the TWS Tracker should be defined to be active on all z/OS Systems.

  • Every plex where the Controller may be located must have an SA system.

  • The SA Application Group for the TWS Controller must be defined in a z/OS system group in each plex. Since only one instance may be active at a time, the Application Group should be ONDEMAND and the desired availability of the Applications should be ALWAYS.

  • The TWS Controller options must enable the SA-provided exit EQQUXSAZ for Automation workstations, and/or EQQUX007 for Conventional (non-Automation) workstations.

  • The IP addresses or SNA LUs associated with the Controller must have a dynamic switch capability to move with the Controller.

  • The TWS databases and Controller related files must be on DASD volumes that can be switched or quickly replicated to the target system when the Controller is moved.

  • Documented procedures and REXX Execs / CLISTS must be defined to switch the DASD and Network connections when the TWS Controller is stopped on the active plex.

  • The TWS administrator must be familiar with Disaster recovery planning as documented in the Customization and Tuning guide (Chapter 12), and TechNote 1104339, http://www-01.ibm.com/support/docview.wss?uid=swg21104339

Note: An unplanned move of the TWS Controller to another plex requires Disaster recovery procedures unless the Controller is stopped normally.

How to coordinate the management of a TWS Controller across an enterprise

Objectives:





  • SA Policy definitions will support the move of a TWS Controller in a single policy.

  • All SA systems will know the location of the active Controller by Netview domain name.

  • All SA systems will know where to move the Controller when the active controller stops.

Steps:





    1. Code a REXX exec to perform a Netview SETCGLOB command to set common variables in the startup and shutdown procedures described in the next section. Here is an example named “SETVAL” to customize to your needs.

/* REXX */

/* SETVAL: Issue the SETCGLOB command */

PARSE ARG VAR1 VAL

'SETCGLOB 'VAR1' TO ' VAL

/* */


    2. Define where to move the Controller when it is stopped. Here are two options using variables:

      • Store the location to move the Controller in the SA Systems AUTOMATION SYMBOLS. For example, using the Overview figure earlier in this paper, set AOCCLONE9 for System MVS1A to CNM21 as the "move to" domain for the Controller when it is active on MVS1A, and set AOCCLONE9 to CNM31 for System MVS2A. The value in AOCCLONE9 can be retrieved within procedures as the variable &AOFAOCCLONE9.

      • Let TWS set a global variable that defines the move to location domain and submit the request to stop the Controller for the move. This will be illustrated later.




    3. Define procedures in the SA policy for the Controller Applications for STARTUP and SHUTDOWN to set a common variable across the multiple Netview domains where the Controller may be located. This variable will indicate the location of the TWS Controller task when it is UP and have the value DOWN when it is stopped. This variable can then be queried via a PIPE or NCCF command or REXX EXEC, as in the sketch below.
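
For illustration, here is a minimal query exec. It is a sketch only: the exec name QRYTWSC is arbitrary, and it assumes the common global variable name used by the SETVAL exec above and the NetView GLOBALV command; adjust the names to your installation.

/* REXX - QRYTWSC: display the current TWS Controller location   */
/* Sketch only; assumes the common global set by the SETVAL exec */
'GLOBALV GETC GLOBAL1.TWS.CONTROL.ACTIVE'
SAY 'TWS Controller location:' GLOBAL1.TWS.CONTROL.ACTIVE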



Examples of STARTUP and SHUTDOWN procedures:



STARTUP, phase POSTSTART:

Set a common global variable in all SA systems to indicate where the Controller has started.


SA Application policy - Command Processing: POSTSTART

----------------------------------------------------------

Command Text

CNM11: SETVAL GLOBAL1.TWS.CONTROL.ACTIVE CNM&SYSCLONE.

CNM21: SETVAL GLOBAL1.TWS.CONTROL.ACTIVE CNM&SYSCLONE.

CNM31: SETVAL GLOBAL1.TWS.CONTROL.ACTIVE CNM&SYSCLONE.


Replace CNMxxx with the appropriate Netview domain names.

SYSCLONE values can be used to distinguish the domains and are resolved at ACF load time.


How to perform an initial startup of the TWS Controller with a NetView command:

The TWS Controller Group should be the object that is managed. This ensures that all related applications and automation tasks are active where the Controller is located. An initial startup command could look like this, in which you specify the full resource name and target domain:


INGREQ {TWS_CONTROL/APG/SYSNAME} REQ=START SCOPE=ALL OUTMODE=LINE VERIFY=NO TARGET={DOMAINID}
Remember, only one instance of the Controller and related tasks may be active at a time in the enterprise.
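
Before issuing the start, you can verify that no other instance of the group is active anywhere. As a sketch, assuming the group name used in this paper, a line-mode query of the Controller group could be:

INGLIST TWS_CONTROL/APG/* OUTMODE=LINE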

SHUTDOWN, phase FINAL:

Set the common global variable value to "DOWN" to indicate that the Controller is stopped, then execute network and DASD switching, and request that the controller be started in the next domain.


SA Application policy - Command Processing: SHUTFINAL

----------------------------------------------------------

Command Text

CNM11: SETVAL GLOBAL1.TWS.CONTROL.ACTIVE DOWN

CNM21: SETVAL GLOBAL1.TWS.CONTROL.ACTIVE DOWN

CNM31: SETVAL GLOBAL1.TWS.CONTROL.ACTIVE DOWN



exec DASD and Network switching commands

INGREQ TWS_CONTROL/APG/&SYSNAME. REQ=START SCOPE=ALL OUTMODE=LINE VERIFY=NO TARGET={DOMAINID}


Where DOMAINID is the move-to location variable, such as &AOFAOCCLONE9, or a variable that is set by TWS, e.g. &GLOBAL1.TWS.CONTROL.MOVE2LOC. See the section "How to use TWS to initiate a planned move" later in this whitepaper.
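
The DASD and network switching commands themselves are installation specific. The following fragment is only a sketch of the shape such an exec might take; the device range, the display command, and the comment placeholders are assumptions to be replaced by your documented procedures.

/* REXX - SWITCHIO: sketch of a site-specific switching exec        */
/* All device numbers below are examples only                       */
'MVS VARY (0A20-0A3F),OFFLINE'    /* release the TWS DASD volumes    */
'MVS D U,DASD,OFFLINE,0A20,32'    /* verify the devices went offline */
/* Issue your installation's commands here to move the dynamic VIPA */
/* or SNA LUs associated with the Controller to the target plex.    */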
How to initiate shutdown of the active TWS Controller with a NetView command:

INGSET CANCEL TWS_CONTROL/APG/* REQUEST=MAKEAV* SOURCE=* OUTMODE=LINE



How to query and control a remote TWS Controller with SA


Here are two options:

  1. Execute the SA command INGOPC with the option TARGET={domainid} to send requests to a remote Controller. The target domainid could be the variable set by the STARTUP POSTSTART procedure described above.

  2. If the Controller is defined to have an SNA LU, then the LU name can be defined in the SA OPC control specifications for the local Tracker resource.



How to send requests from TWS to SA using Conventional workstations


The Conventional (NVxx) non-Automation command workstations have been available for a long time and many applications use this interface. However, this interface submits requests only to the SA system that is located on the same sysplex as the TWS Controller and may have limited applicability in a multi-plex environment.
The following SA Product Automation for OPC definitions, OCS and ODM, allow the TWS Controller to send requests to the local SA system using Conventional workstations.

Note: rather than use specific domain names, specify a value of SYSPLEX to let SA identify the active controller anywhere within the sysplex.
OCS:

TWS_CONTROLLER TWS Controller TWSC

OPCA PCS OPC Controller details

Entry Type : Controller Details PolicyDB Name : HASL34

Entry Name : TWS_CONTROLLER Enterprise Name : HASL

OPC PCS entry name: TWS_CONTROLLER

Enter or update the following table names:

Netview domain name. . . SYSPLEX

OPC controller subsystem TWSC  <-- replace TWSC with your TWS Controller MVS subsystem name as defined in IEFSSNxx.

ODM:


DOMAINID Conventional (NVxx) Workstation Domain Map

AOFGDYN9 Code Processing : DOMAINID

Cmd Code 1 Code 2 Code 3 Value Returned

NV01 SYSPLEX

NV02 SYSPLEX
In the following example, the TWS workstation named NV01 is defined as a Conventional non-Automation workstation:

Work station name : NV01

WORK STATION TYPE ===> G General

DESTINATION ===> ________  <-- The destination name must be blank.

Options: AUTOMATION ===> N
The following TWS operation will submit a request from TWS to the local SA system to stop the resource named PCICS5 using a Conventional workstation:

NV01 010 00.00.01 PCICS5__ STOP_  <-- PCICS5 is the resource name, STOP is the Op TEXT

How to send requests from TWS to SA using Automation workstations


TWS operations assigned to Automation workstations can be used to issue SA requests against resources located anywhere within the enterprise regardless of where the TWS Controller is located. The OPC OCS definitions should be the same as for Conventional workstations. The Automation workstation destination must name a valid active Netview domain or be blank.

In the following example, the TWS workstation named SA01 has been defined with a specific Netview domain name destination and does not require an SA OPC ODM entry.


Work station name : SA01

WORK STATION TYPE ===> G General

DESTINATION ===> HVNFA___  <-- The destination name must also be defined in the Controller ROUTOPTS, e.g. USER(HVNFA)

Options: AUTOMATION ===> Y


The SA01 Operations INGREQ commands are defined within the Operation AUTOMATION INFO, Command Text. This application will stop a database task, run a batch update, and restart the database task by canceling the stop request.
SA01 015 STOPDBC: INGREQ DSNTSADB/APL/HCB$

REQ=STOP,TYPE=NORM,OUTMODE=LINE,TARGET=HVNCB  <-- TARGET= the Netview domain to process the request

CPU1 040 UPDATEDB  <-- UPDATEDB is a database update batch job

SA01 055 STARTDBC: INGSET CANCEL DSNTSADB/APL/HCB$

REQUEST=MAKEUN* SOURCE=* OUTMODE=LINE TARGET=HVNCB


In the above example, TWS sends the request to SA through the local Netview PPI to the domain named in the Workstation destination. SA in turn sends the request to the domain specified in TARGET=, and the target domain can be in a remote plex.
A more versatile solution is to leave the Workstation Destination name blank and to use a TWS variable to specify the target domainid. The variable table name can be specified in the TWS Application run cycle. When the Destination name is blank, there must be an entry for the workstation name in the SA policy Product Automation, under the OPC components, in the ODM - Workstation DomainID so that the domain can be resolved. For example, for the TWS Workstation name SAAO, the ODM entry looks like the definitions for Conventional workstations:
ODM:

AOFGDYN9 Code Processing : DOMAINID

Cmd Code 1 Code 2 Code 3 Value Returned

SAAO SYSPLEX  <-- Automation workstation; the name can be anything, even NVxx

NVSA SYSPLEX  <-- Automation workstation


Here is a simple TWS variable example:

Variable table SATABLE

Variable   Subst.     Setup   Val   Default
Name       Exit               req   Value
DOMAINID              No      Yes   HVNCB
Here is an example of the previous job stream using the variable.

SAAO 015 STOPDBC: INGREQ DSNTSADB/APL/HCB$

REQ=STOP,TYPE=NORM,OUTMODE=LINE,TARGET=&DOMAINID

CPU1 040 UPDATEDB

SAAO 055 STARTDBC: INGSET CANCEL DSNTSADB/APL/HCB$

REQUEST=MAKEUN* SOURCE=* OUTMODE=LINE TARGET=&DOMAINID


Note 1: The INGREQ requests are defined in the AUTOMATION INFO for the SAAO Automation workstation operations.

Note 2: If the workstation destination name is blank and there is not an ODM entry for the workstation name with a value of SYSPLEX or the Netview domain name local to the Controller z/OS system, then the operation will fail with return code "U003" and reason "no DOMAINID".

How to send requests from TWS to SA using a Batch Job


This solution has the advantage that SA requests can be submitted via a TWS CPU workstation batch job through any TWS Tracker workstation in any plex running SA.

SA provides sample JCL in SINGSAMP member EVJSJ001.

This example reports status for automated subsystems and lists STCs not controlled by SA.
//REPORTSA JOB (0),'TWS SCHEDULED WORK',CLASS=A,MSGCLASS=O

//*%OPC SCAN Resolve the TWS workstation id variable, &OWSID

//COMMAND1 EXEC PGM=IKJEFT01,DYNAMNBR=30,REGION=4M,

// PARM='AOFRYCMD &OWSID SERVER=* HIGHRC=4'

//STEPLIB DD DISP=SHR,DSN=NETVIEW.V6R1M0.SCNMLNKN DSIPHONE REXX Func

//SYSPROC DD DISP=SHR,DSN=SA4ZOS.V3R4M0.SINGTREX SA AOFRYCMD

//EQQMLIB DD DISP=SHR,DSN=TWS.V8R6M0.SEQQMSG0 TWS MSG LIBRARY

//CONCAT DD DISP=MOD,DSN=TWS.V8R6M0.STCRPT.LIST

//SYSTSPRT DD SYSOUT=*

//SYSTSIN DD DUMMY

//SYSIN DD *

DATE >CONCAT

DISPSTAT * OUTMODE=LINE >CONCAT

WRITE >CONCAT

INGLKUP REQ=JOB QUAL=STC OUTMODE=LINE >CONCAT

/*
Note: the SA v3.4 version of EVJSJ001 calls AOFRYCMD, which requires the SA TSO REXX function package INGTXFPG. It also requires that command receivers be defined.

Reference Appendix A for AOFRYCMD setup.

How to use TWS to initiate a planned move


This is an example of a TWS Application to initiate a planned move by combining the use of a CPU batch job and an Automation workstation:
Application : MOVECONTROLLER   Set vars and cancel TWSC

Oper ws   no.   Duration   Job name   Operation text
                HH.MM.SS
CPU1      010   00.01.47   UPDATVAR   Update move-to variables
SA01      020   00.00.02   STOPTWSC   Cancel active Controller

Note: SA01 is an Automation workstation

JCL Variable Table: SAMVSTBL1 used by application MOVECONTROLLER

The value can be updated dynamically and can be dependent on other variables.

Variable   Subst.     Setup   Val   Default
Name       Exit               req   Value
MOVTWSC2   ________   N       N     CNM21___
UPDATVAR is a batch job to inform SA where to start the Controller.
//UPDATVAR JOB (0),'UPDATE MOVE-TO VARS ',CLASS=A,MSGCLASS=O

//*%OPC SCAN Resolve the TWS workstation id variable, &OWSID

//COMMAND1 EXEC PGM=IKJEFT01,DYNAMNBR=30,REGION=4M,

// PARM='AOFRYCMD &OWSID SERVER=* HIGHRC=4'

//STEPLIB DD DISP=SHR,DSN=NETVIEW.V6R1M0.SCNMLNKN DSIPHONE REXX Func

//SYSPROC DD DISP=SHR,DSN=SA4ZOS.V3R4M0.SINGTREX SA AOFRYCMD

// DD DISP=SHR,DSN=NETVIEW.V6R1USER.HVNFA.CNMCLST SETVAL Cmd

//EQQMLIB DD DISP=SHR,DSN=TWS.V8R6M0.SEQQMSG0 TWS MSG LIBRARY

//CMDLOG DD DISP=MOD,DSN=TWS.V8R6M0.MOVETWSC.LOG

//SYSTSPRT DD SYSOUT=*

//SYSTSIN DD DUMMY

//SYSIN DD *

DATE >CMDLOG

CNM11: SETVAL GLOBAL1.TWS.CONTROL.MOVE2LOC &MOVTWSC2. >CMDLOG

CNM21: SETVAL GLOBAL1.TWS.CONTROL.MOVE2LOC &MOVTWSC2. >CMDLOG

CNM31: SETVAL GLOBAL1.TWS.CONTROL.MOVE2LOC &MOVTWSC2. >CMDLOG

/*
Automation job to cancel the START request for the active TWS Controller and thereby cause SA to stop the controller:

SA01 020 STOPTWSC: INGSET CANCEL TWS_CONTROL/APG/* REQUEST=MAKEAV* SOURCE=* OUTMODE=LINE


Note 1: The Automation request is defined in the AUTOMATION INFO for STOPTWSC.

Note 2: TARGET is not necessary because the request is local to the Controller location.


How to bring up TWS in disaster recovery mode


TWS disaster recovery procedures must be followed as described in the TWS Customization and Tuning documentation in the section on Disaster recovery planning.

SA can issue the TWS Start command. The normal STC JCL must be overridden to specify an alternate Controller options file with two, and possibly three, significant differences in the JTOPTS statement (a sketch of such a member follows this list):

CURRPLAN(NEW)

JOBSUBMIT(NO)

FTWJSUB(NO)  <-- if End-to-End Fault-tolerant Agents are used.
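
The following is a sketch of what such an alternate parameter member could contain. Only the JTOPTS keywords listed above come from this paper; the member name CONOPFDR matches the {CONOPFDR} placeholder used in the STARTUP policy example below, and the remaining statements are assumptions that must mirror your production options.

/* CONOPFDR - sketch of an alternate EQQPARM member for DR startup  */
/* Copy the production OPCOPTS, ROUTOPTS, etc. statements unchanged */
JTOPTS CURRPLAN(NEW)        /* rebuild the current plan              */
       JOBSUBMIT(NO)        /* hold job submission until verified    */
       FTWJSUB(NO)          /* only if end-to-end FT agents are used */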

SA Application STARTUP policy to start the controller with alternate options:

AOFGDYN9 Command Processing : STARTUP

Cmd Type AutoFn/* Command Text

DR MVS S &SUBSPROC,PARM='{CONOPFDR}'


REXX exec example to start the controller with alternate options:

/* REXX - STRTWSDR */

/* START THE TWS CONTROLLER WITH STARTUP OPTION "CONOPFDR" */

'INGSET SET {TWSCTRL/APL/SYSNAME} STARTTYPE=DR TARGET={DOMAINID}'

'INGREQ {TWS_CONTROL/APG/SYSNAME} REQ=START SCOPE=ALL OUTMODE=LINE VERIFY=NO TARGET={DOMAINID}'

How to deliver Netview z/OS commands through TSO


This solution is somewhat off topic, but NETVCMD is a handy tool provided by the Netview product itself to send simple ad hoc requests to Netview or MVS. It does not require SA or TWS.
Set up the Netview-provided mechanism, NETVCMD, as described under option "A)" in the Technote at
http://www-01.ibm.com/support/docview.wss?uid=swg21328426

1) On TSO, copy CNMSAMP(CNMS8029) to a TSO SYSPROC library, naming it NETVCMD



2) On Netview, run the CMDSERV command on an autotask to start the PPI receiver DSICMDSV

EXCMD AUTO2,CMDSERV AUTHSNDR=N,NAME=DSICMDSV

3) From TSO option 6, or from a batch job running the IKJEFT01 program:

//NETVCMD EXEC PGM=IKJEFT01,DYNAMNBR=30,REGION=4M

//STEPLIB DD DISP=SHR,DSN=NETVIEW.V6R1M0.SCNMLNKN NETVIEW V6 LIB

//SYSPROC DD DISP=SHR,DSN=SYSU.COMMON.SYSEXEC USER REXX LIBRARY

//OUTPUT DD SYSOUT=*

//SYSTSPRT DD SYSOUT=*

//REALRC DD SYSOUT=*

//SYSIN DD DUMMY

//SYSTSIN DD *

NETVCMD D NET,ID=NDM4APPL

/*
Note: the status of Netview PPI receiver tasks can be displayed using the Netview command DISPPI.


Security


Another key area of consideration is the security method used by the SA z/OS NetView regions. Various security options are available, and the applicable ones depend on your settings for SECOPTS.OPERSEC and SECOPTS.CMDAUTH in DSIPARM CNMSTYLE (or the 'user' style members CNMSTGEN/CNMSTUSR). Check your local DSIPARM CNMSCAT2 (if using the Command Authorization Table, "CAT") and/or your SAF database settings (if using RACF, Top Secret, or ACF2) to ensure that all desired operator tasks and AUTOTASKs are defined, as well as their command permissions.

Likewise, there are various security options available for use by TWS. Queries and updates from SA will depend on your TWS AUTHDEF settings and security permissions.
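
As a sketch, if the Controller runs with the default AUTHDEF CLASS(IBMOPC), the user ID under which SA requests and queries reach TWS needs access to the protected fixed resources, such as the current plan (CP). The RACF commands below are illustrative only; the class, the resource, and the user ID SAAUTO are assumptions to be matched to your AUTHDEF statement and ESM conventions, and the class should be refreshed afterwards in the usual way.

RDEFINE IBMOPC CP UACC(NONE)
PERMIT CP CLASS(IBMOPC) ID(SAAUTO) ACCESS(UPDATE)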



Summary and Further Information

This paper has only scratched the surface of what is possible using System Automation with Tivoli Workload Scheduler for z/OS in a multi-plex environment. Many more functions are available for exploiting and integrating these products.


The IBM publications most relevant to the integration of SA and TWS are the following:
The System Automation for z/OS Information Center for version 3.4 is at:

http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/index.jsp?topic=/com.ibm.sazos.doc_3.4/welcome.html
System Automation for z/OS Version 3 Release 4 TWS Automation Programmer’s Reference and Operator’s Guide, SC34-2651-00

System Automation for z/OS V3R4.0 Customizing and Programming guide, SC34-2644-00, Command Receivers, Chapter 10.

The Tivoli Workload Scheduler for z/OS Information Center for version 9.1 and previous versions is at:

http://pic.dhe.ibm.com/infocenter/tivihelp/v47r1/index.jsp?topic=%2Fcom.ibm.tivoli.itws.doc_9.1%2Fwelcome_TWA.html
Relevant documentation includes:

IBM Tivoli Workload Scheduler for z/OS Managing the Workload, SC32-1263-08

IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265-08,

Disaster recovery planning, Chapter 12


TechNote 1395390, Disaster Recovery and TWS

http://www-01.ibm.com/support/docview.wss?uid=swg21395390
Redbook:

Integrating IBM Tivoli Workload Scheduler with Tivoli Products, SG24-6648-00, Integrating with OPC Automation extension of System Automation, Chapter 4



http://publib-b.boulder.ibm.com/abstracts/sg246648.html?Open

Appendix A - AOFRYCMD setup

Review: System Automation for z/OS V3R4.0 Customizing and Programming guide, SC34-2644-00, Chapter 10, Command Receivers.

http://publibfi.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/ing4p500/10.0
SA provides two function packages that are necessary for the use of AOFRYCMD and other SA capabilities in TSO and Netview: INGTXFPG and INGRXFPG. INGRXFPG is mandatory for SA. Here are the steps to implement these function packages:

    1. Set up the Command Receiver as described in the SA Customizing and Programming guide referenced above.

    2. Define the INGTXFPG function package to the TSO/E function package table, IRXTSPRM:

      • Add a definition for INGTXFPG into a copy of the TSO/E function package table IRXREXX2 from SYS1.SAMPLIB. IRXREXX2 is a function package table that has the CSECT name IRXTSPRM. A snippet including IRXTSPRM is below.

      • Assemble and link-edit the updated IRXREXX2 (IRXTSPRM) into LPALIB as IRXTSPRM. We recommend that this is done using SMP/E.

      • Ensure that the function package INGTXFPG resides in the LinkList, e.g. SINGMOD1.

    3. Define the INGRXFPG function package to the Netview function package table, DSIRXPRM. INGRXFPG is added by running Netview sample job CNMSJM11 to recompile DSIRXPRM into an APF-authorized Netview load library.


IRXREXX2 - snippet edited to include IRXTSPRM. Note lines 361, 363, 375, 376, 377:

000361 PACKTB_SYSTEM_TOTAL DC F'4' /* Total number of @P1C*/

000362 * /* System entries @P1C*/

000363 PACKTB_SYSTEM_USED DC F'4' /*Number of System @P1C*/

000364 * /* entries in use @P1C*/

000365 PACKTB_LENGTH DC F'8' /* Length of each PACKTB entry */

000366 PACKTB_FFFF DC X'FFFFFFFFFFFFFFFF' /* Set the PACKTB end marker */

000367 PACKTB_ENTRIES EQU * /* System Package Table entries */

000369 PACKTB_ENTRY_MVS EQU * /* The MVS PACKTB entry @PG10210*/

000370 PACKTB_NAME_MVS DC CL8'IRXEFMVS' /* Set the function package name*/

000371 PACKTB_NEXT_MVS DS 0C /* Point to the next entry */

000372 PACKTB_ENTRY_TSO EQU * /* The TSO PACKTB entry @PG10210*/

000373 PACKTB_NAME_TSO DC CL8'IRXEFPCK' /* Set the function package name*/

000374 PACKTB_NEXT_TSO DS 0C /* Point to the next entry */

000375 PACKTB_ENTRY_SAM EQU * /* The SAM PACKTB entry */

000376 PACKTB_NAME_SAM DC CL8'INGTXFPG' /* Set SA function package name */

000377 PACKTB_NEXT_SAM DS 0C /* Point to next entry */

000378 PACKTB_ENTRY_FTP EQU * /* The EZAFTPKR PACKTB entry@P1A*/

000379 PACKTB_NAME_FTP DC CL8'EZAFTPKR' /* Set the function package name +

000380 for the FTP API @P1A*/


Appendix B - TWS and SA for z/OS Communication Flow



The flow is as follows:



  • A TWS controller calls exit EQQUXSAZ for Automation command workstations and exit EQQUX007 for Conventional workstations to send commands and requests to SA.

Note: these exits must be loaded from SINGMOD1, and not SEQQLMD0.

  • The exits pass their information via Netview PPI to the EVJTOPPI receiver task. EVJTOPPI validates the PPI buffer that it has received and sends the command to the appropriate routine: EVJESPVY for Conventional request types or EVJESCVY for Automation workstation command request types.

  • After processing the request, the OPCAPOST or EVJRYPST routine (for conventional requests or command requests, respectively) posts the success or failure of the request or command to TWS.

Note that because this is posted to all trackers that are running on the SA z/OS that processes the request, the job name for the operation is passed as an additional qualifier for the request to OPCAPOST or EVJRYPST. It is the responsibility of the installation to ensure that the TWS operation is uniquely identified. SA z/OS uses the following attributes to identify the TWS operation:

  • Application name

  • Workstation name

  • Operation number

  • Job name (if present)

  • IA time

  • EVJRYPST formats an EQQUSIN buffer and sends this to the z/OS Master Subsystem.

  • The z/OS Master Subsystem sends a copy of this buffer to every z/OS subsystem that has registered for the data.

  • The tracker (and possibly also the controller) will have registered for the TWS buffer data that EQQUSIN created. These address spaces receive a copy of the buffer.

  • If the tracker receives the data, it sends it on to its owning controller, which might be on another system. The controller now marks the operation complete or in error.

©IBM Corporation, 2013

(www.ibm.com/support/techdocs)



