What determines the difference between a successful and a failed mainframe migration? Having followed mainframe migration projects for 18 years (and over 10,000 inquiries) at Gartner, I have seen some disasters. At LzLabs, we have developed a technology that we believe mitigates risk significantly – more on that later. Nevertheless, we would suggest a few key considerations for any mainframe migration project to ensure success. Mainframe migrations are complicated by the myriad application technologies that have accumulated over the decades. Application portfolios, much like coral reefs, have grown slowly. The loss of knowledge of the business rules and processes, as well as of the implementation technology, adds risk to the migration process.
For a successful migration, LzLabs recommends the following steps to mitigate risk:
- Portfolio understanding
- Reference Architecture
- Skills & Culture Change
- Data planning
IT Portfolio Understanding – Figure out where you are and where you’re going
IT modernization is a CIO issue. It requires a perspective that remains above the “Sturm und Drang” of daily IT operations. Application owners need to accept responsibility for how they intend to manage a legacy portfolio over the next 5-10 years. The decline in mainframe development and operational skills is accelerating, and a continued belief that life will be fine is a sign of someone hoping to retire before the proverbial portfolio hits the fan! Some parts, or perhaps all, of your application portfolio can run off the mainframe. The question is how you get there. How much are you willing to spend? How long do you want to take? What risks are you willing to incur?
The easy answer is to say we want a modern application, completely rewritten in Java or implemented with an off-the-shelf software package. An admirable destination, but not a cheap, easy or quick one to reach. CIOs need to decide which applications are worth that effort and which are better undertaken incrementally. Re-hosting applications to a modern infrastructure environment provides the immediate benefit of lower costs and a solid base from which to begin an iterative modernization effort.
The predominant paradigm of application development on the mainframe has been a portfolio of bespoke, custom-developed applications. This process has gone on for decades, and it boggles the mind to think that any organization really understands all of it – they don’t! The loose relationship between source code, documentation, and the programs actually in use has been a growing problem for 50 years, and the difficulty of identifying precisely which source code was used to build long-running applications is central to the challenges of mainframe workload rehosting. As an IT executive in one large enterprise lamented, “we’ve got about 100 million lines of COBOL in our source code repository, but it’s OK as we only have 10 million lines active. We just don’t know which 10 million.” Recompiling or rewriting such applications for a new platform requires the daunting task of correctly identifying that source code and making it available for migration.
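One common way to estimate which programs are actually live is to trace static call chains from known entry points. The sketch below is a deliberately simplified illustration of that idea, assuming a hypothetical in-memory catalog of COBOL sources and following only static CALL 'PROG' statements; real portfolio analysis tools must also handle dynamic calls, JCL, and load-module contents.

```python
import re

# Hypothetical example portfolio: COBOL sources keyed by program name.
# Real analysis would scan a source repository and load libraries.
sources = {
    "PAYROLL":  "PROCEDURE DIVISION. CALL 'TAXCALC'. CALL 'PRINTRPT'.",
    "TAXCALC":  "PROCEDURE DIVISION. CALL 'RATES'.",
    "RATES":    "PROCEDURE DIVISION. DISPLAY 'RATES'.",
    "PRINTRPT": "PROCEDURE DIVISION. DISPLAY 'REPORT'.",
    "ORPHAN":   "PROCEDURE DIVISION. DISPLAY 'NEVER CALLED'.",
}

# Matches static calls of the form: CALL 'PROGNAME'
CALL_RE = re.compile(r"CALL\s+'([A-Z0-9-]+)'")

def reachable(entry_points):
    """Return the set of programs reachable from the given entry points."""
    seen, stack = set(), list(entry_points)
    while stack:
        prog = stack.pop()
        if prog in seen or prog not in sources:
            continue
        seen.add(prog)
        stack.extend(CALL_RE.findall(sources[prog]))
    return seen

# Everything not reachable from a known entry point is a candidate for
# exclusion from the migration scope.
live = reachable(["PAYROLL"])
dead = set(sources) - live
```

Even this crude approach narrows the "which 10 million lines" question: programs outside the reachable set can be parked for investigation rather than migrated by default.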
Define a Reference Architecture for Your Destination
In a recent Mainframe Modernization Survey, conducted by research agency Vanson Bourne, we found that our prospects regard open-source solutions as the primary factor in their modernization decisions. By moving application workload to a modern server or cloud-based Linux environment, organizations can begin their journey without incurring major upfront risks. Migrating applications off the mainframe is one thing, but the operational differences between any modern runtime infrastructure and the mainframe must also be understood. Your goal should NOT be to migrate your entire mainframe-architected portfolio to a modern x86 or cloud environment while replicating the operational approaches of the last 50 years! Take advantage of open-source solutions wherever the gain is worth the risk. You shouldn’t have to drag along an entire portfolio recompilation just to get this benefit – and with the LzLabs Software Defined Mainframe® you don’t have to. No recompilation of COBOL, PL/1 or Assembler applications is required to migrate legacy mainframe applications to the SDM, and you can still take advantage of many modern open-source solutions to operate your new infrastructure in a meaningful way.
- Leverage modern workflow processing solutions, rather than replicate mainframe schedulers.
- Create modern DevOps environments to support the maintenance and enhancement of re-hosted applications.
- Take advantage of the openness of PostgreSQL to expose previously locked legacy data to a new world of reporting and data analytics.
- Implement modern workload scaling solutions that utilize Docker containers, for example, in a cloud deployment environment.
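On the first of these points, the core of what a mainframe scheduler does – running jobs in dependency order – maps directly onto modern workflow tools. As a minimal illustration (with hypothetical job names, and Python's standard library standing in for a real scheduler such as a workflow engine), a nightly batch chain is just a dependency graph:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical nightly batch chain: each job lists the jobs it depends on,
# much as a mainframe scheduler would chain batch steps.
jobs = {
    "extract-accounts": set(),
    "extract-trades": set(),
    "merge-feeds": {"extract-accounts", "extract-trades"},
    "post-ledger": {"merge-feeds"},
    "nightly-report": {"post-ledger"},
}

# A valid execution order: every job appears after its dependencies.
order = list(TopologicalSorter(jobs).static_order())
```

Expressing the chain this way, rather than replicating the mainframe scheduler's configuration, is what lets a modern workflow engine add retries, parallelism and monitoring for free.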
Organizations must define a reference architecture that leverages as many modern solutions as possible to replace aging mainframe Independent Software Vendor (ISV) solutions in a way that provides equivalent capability without the software tax of these vendors.
Invest in cultural change and new skills training
Migrating legacy mainframe applications is one thing. Migrating the “hearts and minds” of the mainframe staff is something else! Many of them may be ready for retirement anyway, but for those who remain, it’s key to help them see the value of the new environment. One of our key customers treated their mainframe migration effort as a “change project” and identified several requirements for its success:
- Senior management must have the courage to support such a project and have the support of a technical leader – usually in the mainframe team – who has the confidence to drive it, as well as to inspire others to follow along.
- The migration criteria should be defined with the mainframe team (and regularly reviewed for validity as the project progresses).
- Dedicated training programs – both technical and cultural – are required to support a change of mindset.
- IT staff with Linux, Open-Source and cloud skills should be involved to guide the transition to a new platform.
Plan to leverage old data in new ways
The performance of the many mainframe pre-relational data stores is well understood, but the price paid is that the data is locked into an environment that limits access. Moving legacy mainframe data to a modern environment opens it up to a wide variety of reporting and analytics tools. When migrating legacy mainframe applications, understand where these new solutions can provide equivalent capability without some parts of the migrated application. Eliminate pieces of the application that depend on legacy mainframe technologies if they can be replaced with modern alternatives; don’t try to move every piece of an application if you can help it. For example, using SDM, printed output can be imported directly into Splunk, where it becomes immediately available for analysis. Relational data in PostgreSQL is likewise readily available to any modern reporting or analytics tooling.
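Much of that "locked" data lives in fixed-width records whose layout is defined by a COBOL copybook. A minimal sketch of unlocking it, assuming a hypothetical 26-byte customer record (PIC X(10) name, PIC 9(8) account, PIC 9(6)V99 balance with an implied decimal point), looks like this; real conversions must also handle EBCDIC encoding, packed and zoned decimals, and REDEFINES:

```python
# Hypothetical copybook layout: (field name, start offset, end offset).
LAYOUT = [("name", 0, 10), ("account", 10, 18), ("balance", 18, 26)]

def parse_record(line):
    """Turn one fixed-width record into an analytics-ready dict."""
    row = {name: line[start:end].strip() for name, start, end in LAYOUT}
    # PIC 9(6)V99 has an implied decimal: the last two digits are cents.
    row["balance"] = int(row["balance"]) / 100
    return row

# Two sample records in the hypothetical layout above.
records = [
    "SMITH     0000123400012550",
    "JONES     0000567800000075",
]
rows = [parse_record(r) for r in records]
```

Once records are in this form, loading them into PostgreSQL or a reporting tool is routine, and the data becomes queryable by people who have never seen a copybook.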
Mainframe Migrations – A Recipe for Success
As the decline in mainframe skills accelerates, procrastination is no longer a viable option. Spend the time to understand your existing portfolio and leverage the power of open-source solutions to create an architecture that maximizes the value of these environments, while reducing the amount of change necessary to rehost your portfolio. Not all the change is technical. Every successful migration requires a strong leader with the courage of their convictions. Help those who are willing to learn to become knowledgeable in the new technologies of Linux and cloud. Finally, help your business understand the benefits of much easier and broader data access and analytics. Yes, the new tools may be different from the legacy applications written 20 years ago, but the benefits greatly outweigh the change.
White Paper: The Evolution of Mainframe Transactional Processing Through Containerization & the Cloud
Reduce risk of mainframe re-hosting whilst gaining scalability, cost and agility benefits of container environments
Read our whitepaper to understand how to:
- Evolve from existing workload architectures to container and cloud-based models, and finally microservices
- Reduce the scale of testing through containerized applications and data
- Roll out new products and services in continuous delivery mode, with new applications hyper-connected to legacy applications and data
- Automate the build, delivery and updating of microservices through seamless integration of modern DevOps toolkits