Cloud-Only for Legacy Mainframes: The LzLabs Software Defined Mainframe

29 November 2016

Big Iron, Light as a Cloud: Flexibility, Cost Savings, AND Innovation

Over the past few months, we’ve seen some global household names decide to adopt a “cloud-only” strategy: international brands like Condé Nast, Coca-Cola, General Electric and Johnson & Johnson have announced their intent to “turn out the lights” in their datacenters. Elsewhere, in France, Veolia has made the same bold move with a global migration to AWS.

The decisions of these leading enterprises confirm that cloud leaders like Microsoft Azure and AWS have outgrown the “teething problems” common to infrastructure services in their infancy.

The fact that a company like Netflix runs 100% of its IT in the AWS cloud demonstrates that video-on-demand services can be delivered at global scale from cloud infrastructure, even though video services are among the most demanding in terms of IT requirements (network throughput, raw computing power, storage capacity, etc.).

If it is possible for Netflix, why shouldn't it also be possible for most traditional incumbent corporations, many of which have less stringent technical requirements? Of course, some fears regarding confidentiality may persist for a few more years among those incumbents, particularly in the financial industry. Nevertheless, many are entering the era of “cloud-only” IT, in which private or outsourced custom datacenters look set to disappear, replaced by standard “IT power plants”: swarms of strictly identical commodity components, i.e. x86 servers, in architectures optimised to deliver the most efficient ratio of compute power to consumed electricity.

So, why do corporations go for “cloud-only”?

The standard advantages of the cloud, leveraged to their fullest when applied to a company's entire IT system, can be summarised as follows:

  • Savings: The costs of using clouds are difficult to match in private datacenters, not only because major cloud providers buy their building blocks (servers, storage, etc.) at competitive prices thanks to their scale of purchase (hundreds of thousands, even millions of units), but also because they fully automate the administration of their systems to make the best use of a datacenter's most valuable resource: human beings.
  • Budget structure: By using clouds, CapEx can be significantly reduced, since no on-premises infrastructure needs to be maintained, a welcome relief in times of budgetary pressure. Spending shifts instead to OpEx, which the CIO can easily charge back to the corresponding business units in proportion to their usage.
  • Flexibility and Agility: Supply can scale alongside demand, as the infrastructure made accessible by cloud service providers absorbs almost any change in load. Need a test environment? Just clone the corresponding live instances and the tests can start!
  • Innovation: Organizations can test new versions of application services in production: by deploying an instance of the improved service alongside the original and routing a fraction of user traffic to it, companies can measure how users react and compare the results.
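The canary-style rollout described in the last bullet can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea (the function and parameter names are our own, not part of any cloud provider's API):

```python
import random  # not used for routing; users are assigned deterministically


def route_request(user_id: int, canary_fraction: float = 0.05) -> str:
    """Assign a user to the 'canary' (new) or 'stable' (original) version.

    Hashing the user id makes the assignment deterministic, so each
    user consistently sees the same version across requests.
    """
    bucket = hash(user_id) % 100
    return "canary" if bucket < canary_fraction * 100 else "stable"


# Roughly 5% of users should land on the canary instance.
assignments = [route_request(uid) for uid in range(10_000)]
share = assignments.count("canary") / len(assignments)
```

Once the canary serves a slice of real traffic, business metrics from the two populations can be compared before deciding whether to roll the new version out to everyone.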

The appeal of a “cloud-only” strategy is self-evident, but implementing the cloud across the full landscape of internal IT often presents challenges.

A partial approach, limited to low-hanging fruit such as internal x86 servers, could prove dangerous, however. Compactness is a key feature of resilient and efficient IT systems. From an availability and performance standpoint, it is risky to dismantle a system of closely coupled machines, for example x86 clients in close, synchronous interaction with a mainframe server, by moving only half of it to the cloud in pursuit of a fraction of the benefits above.

Furthermore, the road toward “cloud-only” for the entire IT ecosystem cannot take too long: all applications, together with their interdependent components, must be moved to the cloud in an ordered manner and within a short timeframe in order to ensure system uptime.

Many of the 71% of top Fortune 500 corporations that use mainframes for their core business activities feel unable to take advantage of any of these strategies: they are stuck with (tens of) millions of lines of COBOL code on “big iron”, facing the daunting prospect of rewriting it all in a modern language such as Java running on Linux-powered x86 machines.

As of 2016, this perception is outdated: platforms such as the LzLabs Software Defined Mainframe® enable legacy applications to run seamlessly on x86 clouds like Azure, directly from the existing mainframe binaries, without any changes or recompilation required.

For a direct glimpse into this lift-and-shift process, click on the video link below, in which Christian Wehrli, Head of Post Sales at LzLabs, demonstrates how to move a transactional mainframe application to Microsoft Azure in a matter of minutes (July 5th 2016 video @ Microsoft Headquarters Zurich):


Don't procrastinate any longer, giving the nascent Google of your industry more room to “eat your lunch”: get in touch with us. Together we'll find efficient ways to execute the technological side of your digital transformation quickly, so that your corporation can be “digitally reborn” even if its age prevents it from being a true “digital native”.

We'll be providing full details on what “lift-and-shift” really means, and on the technical aspects and benefits of the Software Defined Mainframe®, in future posts.
