Moving Legacy Mainframe Batch Processing to the SDM

10 October 2017

Batch processing, long an architectural mainstay of legacy mainframe environments, continues to operate even today in a world of web and mobile computing. This workload style is representative of the architecture of mainframe applications, matched to the capacity of these platforms from their very beginning, and it eventually became a defining factor in the many business processes that came to depend on the legacy mainframe platform. As organizations grew and the volume of this workload grew with them, significant IT processes evolved to support it. Batch processing windows, enforced through job scheduling rules and procedures, continued to grow. With the increase in online transaction processing, technological limitations forced this batch processing to run at night, within a fixed, and often insufficient, block of time. As organizations have struggled to deliver all of this workload within the time allocated, they have considered alternatives.

In conjunction with the LzLabs Software Defined Mainframe® (SDM), LzBatch™ provides support for executing legacy mainframe batch jobs in a modern x86 Linux environment. LzBatch provides binary-compatible execution for batch applications written in COBOL or PL/I. This means that an application can run without modification in a Linux environment on x86 hardware. Standard mainframe Job Control Language (JCL) support is provided, enabling batch jobs to be submitted locally to the SDM or, for example, from Network Job Entry (NJE)-connected mainframes. LzBatch also lets you run interface-compatible versions of many commonly used utilities, such as IEBGENER, IEBDG, IEBCOPY, IDCAMS, and SORT.
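As an illustration, a job using one of these utilities might look like the following sketch. The data set names are hypothetical, and the utility is assumed to accept the same statements as its mainframe counterpart:

```jcl
//COPYJOB  JOB ACCT,'COPY SAMPLE',CLASS=A,MSGCLASS=A
//*
//* Copy a sequential data set with the IEBGENER utility.
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DISP=SHR,DSN=MY.INPUT.DATASET
//SYSUT2   DD DISP=(NEW,CATLG,DELETE),DSN=MY.OUTPUT.DATASET,
//            UNIT=SYSDA,SPACE=(CYL,(1,1),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//SYSIN    DD DUMMY
```

With SYSIN pointing at a dummy data set, IEBGENER performs a straight copy from SYSUT1 to SYSUT2 with no editing control statements.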

Batch jobs in a legacy mainframe environment were usually associated with a particular class of work. The operating system’s job scheduling component then scheduled this work into legacy mainframe initiators defined to support the particular class. These initiators may have been allocated different performance resources as defined by the operations team. LzBatch provides support for class-based batch workload scheduling in the SDM. When used in conjunction with a job scheduling solution, such as SMA Solutions OpCon, sophisticated distributed and coordinated workflow processing can be defined.

Batch processing in a legacy mainframe environment was often designed to process large volumes of data overnight, producing reports or manipulating data while the online transactional system was no longer operating. These processes used customer-developed applications as well as third-party programs. When printed output was produced, a spooling function stored it until it was ready to be printed. LzBatch provides support for spooled printed output.

The architecture of legacy mainframe initiators provided some additional capabilities, such as data set allocation and creation. LzBatch provides equivalent capabilities, supporting both unchanged customer programs and many utilities. Legacy mainframe data processed by these batch job streams was implemented in a variety of mainframe storage types. LzBatch provides support for a variety of mainframe data storage types without change, including:

  • DB2® Access
  • VSAM (Virtual Storage Access Method) data sets
  • Sequential data sets
  • Partitioned data sets
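For example, a VSAM data set of the kind listed above is typically defined with the IDCAMS utility. The following sketch uses hypothetical names and assumes the SDM's IDCAMS accepts the same control statements as the mainframe original:

```jcl
//DEFVSAM  JOB ACCT,'DEFINE VSAM',CLASS=A,MSGCLASS=A
//*
//* Define a key-sequenced VSAM cluster with IDCAMS.
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=A
//SYSIN    DD *
  DEFINE CLUSTER (NAME(MY.VSAM.KSDS)   -
                  INDEXED              -
                  KEYS(10 0)           -
                  RECORDSIZE(80 80)    -
                  CYLINDERS(1 1))
/*
```

Here INDEXED requests a KSDS, and KEYS(10 0) declares a 10-byte key starting at offset 0 of each record.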

Batch processes could also leverage an early mainframe scripting language, Restructured Extended Executor (REXX). LzBatch provides support for the usage of REXX language for the definition of batch processing workflow.
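A REXX exec can be invoked from a batch job through the standard batch REXX interpreter, IRXJCL. The sketch below uses hypothetical library and exec names, and assumes IRXJCL-style invocation is available in the target environment:

```jcl
//REXXJOB  JOB ACCT,'RUN REXX',CLASS=A,MSGCLASS=A
//*
//* Run the REXX exec MYEXEC (a member of MY.REXX.LIBRARY),
//* passing it the argument string 'ARG1'.
//STEP1    EXEC PGM=IRXJCL,PARM='MYEXEC ARG1'
//SYSEXEC  DD DISP=SHR,DSN=MY.REXX.LIBRARY
//SYSTSPRT DD SYSOUT=A
```

Output written by the exec (for example via SAY) is routed to the SYSTSPRT spool data set.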

In most mainframe organizations, batch jobs were submitted locally through individuals using TSO or by an automated job scheduling system which may have instructions to execute jobs in a particular order or at a particular time of day. LzBatch also provides support for jobs submitted remotely, from other mainframes, using the Network Job Entry (NJE) protocol. An instance of SDM can operate as part of an NJE network that both sends and receives workload from any other node in the network. Consequently, existing IBM legacy mainframe batch workload can be re-routed to run on an LzLabs SDM by the simple addition of a JCL routing card.
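In practice, such a routing card is a single JES2-style control statement placed after the JOB statement. The node name below is hypothetical:

```jcl
//JOBNAME  JOB ACCT,'ROUTED JOB',CLASS=A,MSGCLASS=A
/*ROUTE XEQ SDMNODE1
//* The /*ROUTE XEQ card sends the job across the NJE network
//* to execute on the node named SDMNODE1 (e.g. an SDM instance).
//STEP1    EXEC PGM=PROGRAM
//SYSOUT   DD SYSOUT=A
```

The rest of the job remains unchanged; only the routing card decides where it runs.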

The LzLabs Management Console provides a set of menu functions that allow you to create, modify, delete, and run batch jobs, execute SQL against a relational database, and create and execute REXX scripts. It can also be used for file and library (data set/PDS) management, as well as spool job handling.

An example of a Job:

//JOBNAME JOB ACCT,'Description',
// CLASS=A,MSGCLASS=A
//*
//STEP1 EXEC PGM=PROGRAM
//STEPLIB DD DISP=SHR,DSN=MY.OWN.LOADLIB
//SYSOUT DD SYSOUT=A
//INPUT DD DISP=SHR,DSN=MY.OWN.DATASET
//OUTPUT DD DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
// DCB=(RECFM=FB,LRECL=256,BLKSIZE=2560),
// SPACE=(CYL,(1,1),RLSE),DSN=MY.NEW.DATASET
//SYSIN DD DISP=SHR,DSN=MY.OWN.PDS(PARM)
//

The following table explains the JCL statements used in the job example above:

JOBNAME Name of the Job (cannot be longer than eight characters).
JOB Keyword to tell the operating system that the following instructions belong to this job until the next JOB keyword, or the end of the member.
ACCT Account Code of the Enterprise (a user-defined string).
Description Optional short description of the Job.
CLASS Job class, used to route the job to initiators configured for that class (note: different classes can have different priorities).
MSGCLASS Output File for the Job Log Messages (can be a printer or a spool).
NOTIFY Optional JOB parameter (not used in the example above) that sends a message to the named user telling when (and how) the job ends.
STEP1 Keyword for the first Step in this job. A Job can contain more than one step; each step must have a unique name (which can be blank).
EXEC PGM=PROGRAM Tells the system to execute the binary called PROGRAM.
STEPLIB Partitioned Data Set (MY.OWN.LOADLIB) where the program called PROGRAM is stored as a member.
SYSOUT Output file for messages from PROGRAM.
INPUT Sequential Data Set (MY.OWN.DATASET) which contains data that is used by the PROGRAM.
OUTPUT Sequential Data Set (MY.NEW.DATASET) which is going to be created and contains the result of the processed data.
SYSIN Member PARM (stored in the PDS called MY.OWN.PDS) contains instruction parameters which are used by the PROGRAM.
DD Keyword for a Data Definition statement. Each DD statement describes an input or output for the step, such as SYSOUT (spooled output, e.g., a printer) or a DSN (Data Set Name), among many other options.

Clients that wish to re-host mainframe batch applications on the SDM must provide the load modules for the application programs, the batch JCL, and access to the required data files. Using the LzLabs Centerpiece function, customers can select all of the legacy mainframe artifacts needed to support proper execution of the batch application. The artifacts are bundled in standard DSS dump format, with a manifest describing the content. This file is then transmitted to the SDM, where the Centerpiece Import function installs the content into the SDM operating environment.

Moving batch workload to the SDM is greatly eased by three features of the product:

  1. Only application load modules are required.
  2. Support for legacy mainframe batch interfaces, the operational environment, and more is provided.
  3. CPX/CPI automates the collection and movement of the required pieces.

These features are the result of LzLabs' three driving philosophies of product development:

  • The power of modern x86 computing paradigms can be leveraged to run enterprise class workload, including cloud infrastructure deployment models.
  • Our Software Defined Mainframe approach ensures the lowest migration cost and risk. You move existing workload, unchanged, and “It just runs!”
  • The SDM is designed to provide a container environment to run mainframe applications, but in all other ways to leverage the power of Linux, open source, and cloud environments.

DB2® is a registered trademark of International Business Machines Corporation.
