In 2011, Marc Andreessen, creator of the seminal Mosaic web browser and co-founder of Netscape, wrote a now-famous article in the Wall Street Journal, "Why Software Is Eating the World", in which he stated: "my own theory is that we are in the middle of a dramatic and broad shift in which software companies are poised to take over large swathes of the economy".
The eight years since have amply proven how right he was! Back then, Uber was just a two-year-old startup and Airbnb barely a year older. Both were relatively unknown to the general public: look where they are eight years later! They have transformed the taxi and hospitality businesses from services based on material assets into global digital platforms where convenience and ubiquitous access are the strategic values. Neither owns a single car or a single room, yet both reached a strategic position in their markets through very sophisticated, multi-channel software hosted in the cloud. Fintech companies are trying to reach the same dominant position in the financial industry by replacing physical bank branches with smart and friendly mobile software services. No need here to detail the success of Amazon and how it moved enormous chunks of the global retail business to the internet, away from physical stores, using an amazing software platform!
"Epic fight between incumbents and software-powered insurgents"
In his article, Andreessen further explains that this software revolution is the result of hardware and software progress, the ubiquity of high-speed broadband telecommunications and the deep penetration of the internet into our society, all achieved over just a few decades.
He also said that "companies in any industry need to assume that a software revolution is coming". He was right: in 2019, we all must admit that this revolution has arrived and that his predicted "epic fight between incumbents and software-powered insurgents" is clearly taking place!
The LzLabs customers (those who, for much of their existence, have run mainframe systems to control core operations) are clearly among the incumbents: they have existed for a long time in their markets and have a long love-hate history with these systems, on which their mission-critical applications reside. Our mission is to help them get a seat at the “software banquet” table!
How can they obtain this seat and, at the very least, maintain current market share, when an increase seems out of reach given the fierce competition from the insurgents? The answer remains the same as in any previous (and, probably, any future) market: through innovation!
Of course, the next question is: how does an organization innovate in a software-driven market? The answer is quite simple: by running operations on a platform that allows continuous delivery of new versions of its applications! This approach lets users quickly adopt and enjoy the new features that embody this innovation.
"The power of small wins"
Amazon leads this approach by releasing software updates almost every second, 24 hours per day, 7 days per week. Of course, these are not like the massive releases of traditional mainframe shops, usually delivered in a “one go” quarterly cycle, but incremental steps toward significant improvements visible in the longer term: at one update per second, that amounts to roughly 7.8 million (3,600 x 24 x 30 x 3) updates in a given quarter, so Amazon achieves as much as, if not far more than, a monolithic release, even though each individual update is quite nimble! Some call it the “power of small wins”.
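The quarterly figure is easy to verify with a quick back-of-the-envelope calculation (assuming 30-day months, as in the estimate above):

```python
# One update per second, around the clock, for a quarter of three 30-day months.
SECONDS_PER_HOUR = 3_600
updates_per_quarter = SECONDS_PER_HOUR * 24 * 30 * 3
print(updates_per_quarter)  # 7776000, i.e. roughly 7.8 million updates
```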
This incremental strategy is successful for all the behemoths of our new digital world: Amazon, Google, Facebook, Twitter, Netflix, etc. They continuously innovate to remain ahead of the pack and do so with a permanent cycle of high-frequency software upgrades.
But this will not happen overnight for established organizations in major industries: it can only be built upon a carefully designed and implemented IT architecture able to support it. All the companies mentioned above share a common view of system design: their IT architecture is based 100% on containerized microservices. They all “divide and conquer”! For most of them, this did not happen by chance: as they scaled, they realized that the monolithic applications they were building would eventually paralyze their need for permanent innovation. So they refactored their initial systems (many with significant investment) to reach their current microservice architecture, giving them so much more power in the digital arena!
Limited risks associated with small microservice updates
The advantage of those microservices is their small functional perimeter: they can be exhaustively (and automatically) tested far more easily than huge monolithic application structures, and then deployed independently. As a benefit, the “fear factor” of this approach is much smaller: when small microservices are updated, the associated risks are very limited. Organizations can worry less about the consequences of a lingering bug and so are less inclined to postpone upgrades, especially when these are done with the latest DevOps techniques (A/B or canary deployments). Upgrading your software becomes BAU (Business As Usual) and a vector of success for your enterprise!
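In practice, canary deployments are usually handled by the platform itself (a service mesh or load balancer), but the core idea of routing a small, adjustable fraction of traffic to the new version can be sketched in a few lines of Python; the handler names and the 5% weight below are purely illustrative:

```python
import random

def canary_router(stable_handler, canary_handler, canary_weight=0.05):
    """Return a routing function that sends a small fraction of requests
    to the canary version while the rest stay on the stable version."""
    def route(request):
        # Roll the dice per request; raise canary_weight as confidence grows.
        if random.random() < canary_weight:
            return canary_handler(request)
        return stable_handler(request)
    return route

# Serve v2 of a microservice to ~5% of requests while metrics are watched.
route = canary_router(lambda r: f"v1 handled {r}", lambda r: f"v2 handled {r}")
```

If the canary's error rate stays flat, the weight is gradually raised to 100%; if not, it is dropped back to zero, which is exactly what makes each small update low-risk.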
Based on these positive experiences by digital leaders, LzLabs proposes solutions for a gradual migration of the monolithic mainframe applications toward containerized microservices:
- They are rehosted as-is on our Software-Defined Mainframe (SDM) with no disruption, through a fully risk-mitigated approach
- The full application (batch & transactional) is then analyzed by ad hoc tools to obtain a global and exhaustive caller-callee graph of all involved programs
- A subtree of this graph is then extracted for each program representing the entry point of a given task (transaction or batch job)
- The nodes of this subtree, i.e. all the subprograms potentially called during the execution of the task at hand, still in their original mainframe binary form, are packaged in a Docker image together with the corresponding technical components of the LzLabs SDM: LzCore and LzLanguages for all, LzOnline for transactions, LzBatch for batch jobs, etc.
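The subtree-extraction step can be pictured as a straightforward traversal of the caller-callee graph: starting from a task's entry point, collect every program reachable from it. The graph and program names below are hypothetical, for illustration only:

```python
from collections import deque

def extract_subtree(call_graph, entry_point):
    """Collect every program reachable from `entry_point` in a
    caller-callee graph (program -> list of called subprograms).
    The resulting set is the content of one Docker image for that task."""
    reachable = {entry_point}
    queue = deque([entry_point])
    while queue:
        caller = queue.popleft()
        for callee in call_graph.get(caller, []):
            if callee not in reachable:
                reachable.add(callee)
                queue.append(callee)
    return reachable

# Hypothetical graph: TRN001 is a transaction entry point.
graph = {
    "TRN001": ["SUBA", "SUBB"],
    "SUBA":   ["SUBC"],
    "SUBB":   [],
    "SUBC":   ["SUBA"],   # cycles are handled by the visited set
    "BATCH9": ["SUBD"],   # belongs to a different task's image
}
print(sorted(extract_subtree(graph, "TRN001")))
# ['SUBA', 'SUBB', 'SUBC', 'TRN001']
```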
After this restructuring, which, by the way, requires no change or recompilation, the initial legacy monolith is replaced by a myriad of Docker images, each representing a unitary process of the application. From then on, each of these items can follow its own evolution path at its own pace, satisfying new business needs and innovating much faster than in the past.
Standard mainframe mechanisms around resource sharing
Of course, these Docker containers are not independent from a data perspective: they must share resources (databases, transient queues, etc.) to provide the same results as before. This is made possible by the standard mainframe mechanisms around resource sharing, also provided by the LzLabs Software-Defined Mainframe®. Read our whitepaper on this topic, “The Evolution of Mainframe Transactional Processing Through Containerization and the Cloud”, to better understand how this can be achieved.
However, the microservice approach must be properly equipped: it replaces the problem of slow evolution due to a monolithic architecture with the new requirement of sophisticated software management for a distributed system. The mainframe monolith, with all components in one software bucket, is replaced by a myriad of smaller buckets (the Docker images), each containing only a fraction of the components. So, proper DevOps tooling must be in place to know into which of these buckets a newly updated application program must go in order to create the next version of the corresponding container(s).
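One way to picture such tooling is a reverse index built from the image manifests, mapping each program to the containers that package it, so the pipeline knows which images to rebuild when a program changes. Image and program names below are hypothetical:

```python
def build_reverse_index(image_manifests):
    """Map each program to the set of Docker images that package it.
    `image_manifests` maps image name -> list of programs it contains
    (in a real pipeline this would come from the build system)."""
    index = {}
    for image, programs in image_manifests.items():
        for program in programs:
            index.setdefault(program, set()).add(image)
    return index

manifests = {
    "trn001-image": ["TRN001", "SUBA", "SUBC"],
    "batch9-image": ["BATCH9", "SUBA"],
}
index = build_reverse_index(manifests)
# A shared subprogram may live in several buckets: when SUBA is updated,
# both images must be rebuilt to produce their next container versions.
print(sorted(index["SUBA"]))  # ['batch9-image', 'trn001-image']
```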
LzLabs provides the corresponding bricks of this advanced global DevOps platform: these bricks can be integrated with other standard facilities of a microservice cluster (Kubernetes, alone or within OpenShift, for container orchestration; an image registry for upgrade deployment; etc.) to make the full approach efficient and sustainable over the long term.
Get in touch with us if you want to discover all the details of the LzLabs recipe for modernization and book your organization a seat at this “digital feast”!
White Paper: The Evolution of Mainframe Transactional Processing Through Containerization & the Cloud
Reduce risk of mainframe re-hosting whilst gaining scalability, cost and agility benefits of container environments
Read our whitepaper to understand how to:
- Evolve from existing workload architectures to container and cloud-based models, and finally microservices
- Reduce the scale of testing through containerized applications and data
- Roll out new products and services in continuous delivery mode, with new applications hyper-connected to legacy applications and data
- Automate the build, delivery and updating of microservices through seamless integration of modern DevOps toolkits