DevOps for Agile Mainframe Legacy Maintenance: not an Oxymoron!

9 July 2019

Recently, I heard: "DevOps was created to fix the issue caused by Agile." Even though "issue" is probably not the right word, it is true that DevOps was born from the need to roll out new software releases faster: Agile, in essence, is all about reducing time to market for new software.

Indeed, faster software delivery is more strategic than it seems, as "Software is eating the world"[1]. What would be the point of writing software much more quickly through Agile methodologies if the final stages of the delivery pipeline weren't any faster?

This new velocity requirement, brought about by wider adoption of Agile methodologies (Scrum, etc.), is becoming ubiquitous: according to Gartner, 50% of development projects are currently run in DevOps mode, and 90% of IT organizations are using the methodology at large scale or experimenting with it.

Graph showing the percentage of organizations running projects in DevOps mode. Source: Gartner 2019, "The Future of Agile and DevOps"

Many applications are multi-tiered: more often than not, a back-end server, the gatekeeper of the "system of record", is involved. Such an application cannot be more agile than the slowest of its tiers. This means that "Agile" initiatives cannot be truly successful if one of the tiers is not moving fast enough, or if its constraints negatively impact the other tiers.

Triple Pain for Agile 

A triple pain for Agile is likely to be present in many mainframe shops:

  1. Intrinsic sluggishness and long delivery timeframes: new version roll-out is usually extremely slow in mainframe shops. The standard quarterly (or even less frequent, in some cases) update cycle for legacy systems is not what you would call agile when compared with, for example, Amazon, which deploys software upgrades to production every second[2] of every day (around 50 million updates per year).
  2. Multi-tier testing hurdle: "comfortable" test environments that require a mainframe back-end are always hard to obtain. Legacy computing resources are so expensive that teams evolving the front-end component of mainframe applications (often Java on Linux with a browser-based UI) face a tough negotiation – often unsuccessful – before they can obtain a mainframe environment matching their needs for the continuous testing that DevOps best practices demand.

    To put "Shift-Left Testing" into practice, it is natural for these teams to set up an end-to-end test harness that runs on each code commit (a minimal sketch of such a per-commit run follows this list): it is hard for them to understand why they can't get the mainframe environment required to run their tests as often as they wish. Let's not even talk about recurring tests at scale with tools like JMeter to simulate thousands of users: the bill for the MIPS (millions of instructions per second) that would be consumed makes this almost impossible to achieve!
  3. Test flexibility: in fact, Agile teams require not one but several test environments, as they need to validate their new developments in several distinct contexts in parallel. Item #2 described how hard it is to obtain one test environment; no need to explain the problems in obtaining several!
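To make the shift-left aspiration concrete, here is a minimal sketch of the per-commit, end-to-end run such teams would like to execute – the very thing that is prohibitively expensive against a real mainframe back-end. Every name is an illustrative assumption, not an actual interface: the back-end image, the port, and the `--backend-url` option (which the test suite would have to define itself, e.g. in a pytest conftest.py).

```python
#!/usr/bin/env python3
"""Illustrative per-commit, end-to-end test run ("shift-left" in practice)."""
import subprocess
import sys

BACKEND_IMAGE = "legacy-backend:test"   # hypothetical disposable back-end image
CONTAINER = "backend-under-test"

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode

def main():
    # Start a throwaway back-end for this commit's test run.
    if run(["docker", "run", "-d", "--rm", "--name", CONTAINER,
            "-p", "8080:8080", BACKEND_IMAGE]) != 0:
        return 1
    try:
        # Run the front-end's end-to-end suite against it.
        return run(["pytest", "tests/e2e", "--backend-url=http://localhost:8080"])
    finally:
        run(["docker", "stop", CONTAINER])  # '--rm' then removes the container

if __name__ == "__main__":
    sys.exit(main())
```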

Development Utopia

Now, imagine a situation where your Agile teams could have as many mainframe test environments as they wish, in parallel, because such environments are essentially free when compared with environments on an actual mainframe. These environments would be fully independent and isolated from each other, so they could have totally different configurations. And what a cornucopia if those environments could also scale up as needed, because the underlying hardware computing resources are scalable and incredibly cheap compared with the mainframe!

"Utopia!"" you say? "No, reality!"". Take a careful look at the image below to get all details about the development environment that could be opened to your legacy applications, thanks to the LzLabs Software Defined Mainframe® (SDM):

  • SDM allows the transparent rehosting of mainframe applications (data & programs) from their legacy source environment to x86/Linux in their original mainframe binary form. No need for source code changes, recompilation or data re-encoding! This gives direct access to the cost-efficient economics of x86 and Linux/OSS: the application runs as-is, either to be tested itself or to serve as a mainframe-equivalent back-end for testing the front-ends, and this can be repeated automatically as needed. The pain point of test environment costs is now gone!
  • An application, its data, the underlying SDM and any required Linux libraries can all easily be containerized together (see "GTH" below) into a single Docker image that runs independently, isolated from the other workloads on the host server executing the container. The application can then be containerized under various configurations for testing purposes. Given enough x86 computing power, these tests can run in parallel to ensure no impact on time to delivery for the current agile sprint. The pain point of testing flexibility is also removed!
  • A fraction of an application – say, a transaction or a batch job – can be enclosed separately, with its data, SDM and its test harness, in a dedicated Docker container[3]. As soon as the application team has developed the proper test harness for this function of limited scope, the tests can be run recurrently with a single click (see the sketch just after this list). The cost of testing thus drops almost to zero, and the delivery pace can leverage this: instead of accumulating disparate changes in the application code for months and testing them all at once – as mainframe shops usually do, because the cost of testing forces them to minimize the number of test campaigns – each change can now be tested by itself, on the fly, and moved to production in an incremental upgrade process. The pain point of intrinsic slowness and long time to delivery is then also removed!
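As a concrete illustration of that "single click", here is a minimal sketch: one Docker image packaging a scoped batch job, its data, SDM and its harness, built and run on demand. The image name, build context and entrypoint behavior are invented for illustration, not LzLabs artifacts; the entrypoint is assumed to run the harness and exit non-zero on failure.

```python
#!/usr/bin/env python3
"""Illustrative "single-click" scoped test for one batch job (names invented)."""
import subprocess
import sys

IMAGE = "payroll-batch-tests:latest"   # hypothetical: job + data + SDM + harness

def main():
    # Build the image from a Dockerfile that packages the scoped job and harness.
    if subprocess.run(["docker", "build", "-t", IMAGE,
                       "tests/payroll-batch"]).returncode != 0:
        return 1
    # The "single click": run the harness in an isolated, disposable container.
    return subprocess.run(["docker", "run", "--rm", IMAGE]).returncode

if __name__ == "__main__":
    sys.exit(main())
```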

Tools and reference architecture

Additionally, LzLabs provides the tools and reference architecture to easily remap the mainframe maintenance process to this Agile DevOps environment. We want to help our customers reach this stage of higher productivity as quickly as possible:

Development Pipeline Integration diagram

  • We suggest our customers replace the mainframe source code management system with Subversion and Git, where they can now host their COBOL or PL/I source code.
  • Similarly, Jenkins replaces the mainframe batch scheduler for back-end compilation (a minimal pipeline sketch follows this list). A developer's workstation can also be used to iterate quickly through initial compilation cycles without going through Jenkins.
  • The Docker runtime is installed on the Jenkins servers but can also be loaded onto a developer's workstation, providing a personal containerized mainframe on their own machine. In fact, thanks to the isolation features of Docker technology, Jenkins and the developers can have as many mainframe environments as needed, all powered by the SDM, giving every team the flexibility and independence required for its debugging and testing activities.
  • The "tip of the iceberg" for this construction is an Eclipse-based Integrated Development Environment, LzWorkbench, which replaces the 3270 "green screen" interface the mainframe offers to legacy developers. It allows teams to edit and compile source code, debug components, and access and manipulate test data and other resources loaded in their own private mainframe (i.e., the Docker container packaging SDM and their newly compiled programs as part of the global application).
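To make the pipeline concrete, below is a minimal sketch of the steps a Jenkins job (or a developer workstation) could execute on each commit. The `sdm-compile` command is a pure placeholder for the actual, unspecified SDM toolchain, and every image and file name is invented for illustration.

```python
#!/usr/bin/env python3
"""Illustrative per-commit pipeline, as a plain script a CI job could run."""
import subprocess
import sys

IMAGE = "claims-app:ci"   # hypothetical image: compiled programs + data + SDM

STEPS = [
    ["git", "pull", "--ff-only"],                       # 1. fetch the commit
    ["sdm-compile", "src/claims.cbl", "-o", "build/"],  # 2. placeholder compile
    ["docker", "build", "-t", IMAGE, "."],              # 3. package app + SDM
    ["docker", "run", "--rm", IMAGE, "run-tests"],      # 4. test in isolation
]

def main():
    for step in STEPS:
        print("+", " ".join(step))
        if subprocess.run(step).returncode != 0:
            return 1   # fail fast, as a CI stage would
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A Jenkinsfile would express the same stages declaratively; the script form simply keeps the sketch tool-agnostic.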

With this containerized architecture, having a virtualized and independent mainframe environment – and even several – per person is just business-as-usual thanks to SDM. Isn’t that a quantum leap in efficiency and convenience when compared to a "physical" mainframe environment?

To keep our "feet on the ground", all this is a daily reality for LzLabs' Development and QA teams. As we like to "eat our own dog food"[4], we run hundreds of thousands of tests weekly through our Global Test Harness[5] (GTH), 100% automatically, based on this containerized architecture. Each new piece of code that we push into Git toward the next release of SDM is run against a myriad of such Docker containers, each representing a different testing context, to ensure the continuous quality of our product. In fact, we raise the bar constantly: one of the QA team's top priorities is to add more containers to GTH, reflecting more use and test cases encountered by customers. And from a resource standpoint, this is almost cost-free: all tests run day and night as Docker containers on a growing cluster of standard servers, thanks to the efficiency of x86 and Linux/OSS economics!
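As a rough illustration of how such a test matrix can be driven, here is a hedged sketch that runs several containerized test contexts in parallel and reports pass/fail. The context image names are invented; each image is assumed to execute its harness and exit non-zero on failure.

```python
#!/usr/bin/env python3
"""Illustrative GTH-style matrix: parallel containerized test contexts."""
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical images, one per customer or test context.
CONTEXTS = [
    "gth/cics-cobol-context:latest",
    "gth/ims-pli-context:latest",
    "gth/batch-vsam-context:latest",
]

def run_context(image):
    # Each container is isolated, so contexts can run side by side.
    rc = subprocess.run(["docker", "run", "--rm", image]).returncode
    return image, rc

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=len(CONTEXTS)) as pool:
        for image, rc in pool.map(run_context, CONTEXTS):
            print(f"{image}: {'PASS' if rc == 0 else 'FAIL'}")
```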

If you want this efficient maintenance and testing context to become your own, just get in touch with us today: we’ll be happy to discuss its implementation on your premises with your teams!

The SDM solution goes beyond the rehosting of the production platform: it encompasses the implementation of a development and maintenance environment that allows identical Agile/DevOps processes to be applied to legacy applications.

As we demonstrated above, many mission-critical applications have a mainframe component at their back-end. Applying the most modern Agile methods would be pointless if they covered only the modern half of those assets: LzLabs SDM®, combined with technologies like Docker, allows these best practices to become ubiquitous and deliver their full value!

[1] "Software is eating the world": a seat at the banquet table for mainframe shops! – LzLabs Blog
[2] How Amazon handles a new software deployment every second – ZDNet
[3] Using Containers to Deliver Microservices from Legacy Systems – Bringing The Power of Modern to Mainframe Application Workload! – LzLabs Blog
[4] Eating your own dog food – Wikipedia
[5] The advantage of using Docker containers for LzLabs Global Test Harness (GTH) – LzLabs Blog



