When implementing the LzLabs Software-Defined Mainframe® (a container-based re-hosting technology that enables organizations’ mainframe applications to run unchanged on x86 and in the cloud), our team requires a testing environment that leaves nothing to chance. Our customers must be fully assured that the SDM delivers uptime and security just as reliable as their mainframe’s, together with even greater performance.
As part of our QA endeavour, we try to be as thorough and as universal as Kirkaldy’s testing machine when further developing the SDM. Less “heavy” and much more agile than the impressive machine in the title picture of this article, we make maximum use of Docker containers in our Global Test Harness (GTH) to build a comprehensive testing environment that ensures the quality of our product.
The GTH is our way of practising “Shift-Left Testing” (SLT): ensuring that the full set of automated tests runs on every new Git commit by every developer, high upstream in the Software Development Life-Cycle (SDLC) of the SDM. Naturally, the number of tests must scale just as readily as the frequency at which they run.
SLT is the best way to identify (and then fix) bugs as early as possible, minimizing their cost and potential impact. The cost of the additional hardware infrastructure needed to support this continuous stream of tests pales in comparison to the support and customer-management costs incurred if a development team chooses the alternative: fixing bugs later. And that’s just the financial cost; corporate reputation is also preserved, because SLT acts as a shield against problems reaching the customer and damaging the image of the product.
The pure cost benefits of SLT are clearly demonstrated in the chart below, published by Barry Boehm, distinguished professor at the University of Southern California and globally renowned expert in software engineering: software vendors do not want to pay the hefty “late discovery tax” of more than 150x when they can avoid it by running tests as early as possible in the development cycle.
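To put that multiplier in concrete terms, here is a back-of-the-envelope calculation. The dollar figures and bug counts are purely illustrative assumptions, not LzLabs data; only the 150x multiplier comes from Boehm’s rule of thumb:

```python
# Illustrative comparison of bug-fix costs under Boehm's "late discovery tax".
# BASELINE_COST and the bug count below are hypothetical figures.
BASELINE_COST = 100    # assumed cost (in $) to fix a bug caught at commit time
LATE_MULTIPLIER = 150  # lower bound of the late-discovery multiplier

def fix_cost(num_bugs: int, caught_early: bool) -> int:
    """Total cost of fixing num_bugs, depending on when they are found."""
    per_bug = BASELINE_COST if caught_early else BASELINE_COST * LATE_MULTIPLIER
    return num_bugs * per_bug

early = fix_cost(20, caught_early=True)    # 20 bugs caught by shift-left tests
late = fix_cost(20, caught_early=False)    # the same 20 bugs found in production
print(early, late, late // early)          # → 2000 300000 150
```

Even with these modest assumptions, twenty escaped defects cost two orders of magnitude more than the same defects caught at commit time.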
Our GTH is produced, through our standard DevOps build chain (Git, Jenkins, etc.), as a set of Docker containers, each packaging one piece of the test components that make up the overall GTH.
The build chain is triggered after each branch merge. This holds for every new feature, no matter how small, whenever a team moves new code up our release hierarchy. Like most other software vendors, we structure our Git schema around the official release branch. Each build produces a new set of SDM software packages (RPMs), which are containerized together with various kinds of QA tests (regression, functional, etc.). The Docker images run dynamically to install the changed SDM components and trigger the tests automatically. Results are collated and distributed to everyone who needs to know.
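The containerization step described above could look roughly like the following Dockerfile. The base image, paths, package layout and test script here are hypothetical, sketched only to illustrate the pattern of baking freshly built RPMs and one test group into a single runnable image:

```dockerfile
# Hypothetical GTH test-container image for one test group.
# All names and paths are illustrative, not the actual LzLabs build.
FROM centos:7

# Install the SDM packages just produced by the build chain
COPY rpms/ /tmp/rpms/
RUN yum install -y /tmp/rpms/*.rpm && rm -rf /tmp/rpms

# Add one group of QA tests (regression, functional, ...)
COPY tests/regression/ /opt/gth/tests/

# Running the container installs nothing further; it simply
# executes the test group and emits results for collation
ENTRYPOINT ["/opt/gth/run-tests.sh", "--suite", "regression"]
```

Building one such image per test group keeps each container small and single-purpose, so many of them can be scheduled in parallel after every merge.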
The advantages of Docker are well known. Our DevOps team is able to exploit many of these in a test context:
- Isolation: the Docker image is fully isolated from the software configuration of the Linux machine or instance on which it runs. It is also fully isolated from other containers running simultaneously. This means that we can very rapidly scale to run similar or different test groups, but with software components (system packages, SDM, etc.) at different version levels. In the Virtual Machine (VM) world, each set of tests would require a different VM, which is much less scalable, far more painful to support and much more expensive to provision and execute.
- Efficiency: containers are widely known to be much more efficient than VMs. You don’t have to pay the so-called “hypervisor / virtualization tax”. The result is that we can execute our “Shift-Left Testing” on fewer machines, i.e., at significantly reduced cost in terms of time and physical hardware, or perform many, many more tests at the same cost.
- Reproducibility / immutability: when a GTH test fails, the developer can easily reproduce and diagnose the problem without tricky remote debugging on our DevOps platform, which also enhances security. Developers simply pull the faulty image down to their laptop, start it under their own Docker engine and reach the same failure point, which is much easier to debug in the local environment.
- Scalability / orchestration: the SLT approach generates large numbers of new Docker container images to execute when the GTH runs on every commit and branch merge across a large team of developers. Scheduling this large set of images in parallel becomes much easier when the task is delegated to a dedicated container orchestration platform such as Kubernetes, now offered by all major CaaS platforms (Docker EE, OpenShift).
- Elasticity / portability: the main advantage of containers, in fact one of the seminal foundations on which they were created, is portability. The same container image can be run on bare metal, on virtualized developer/support instances, or in a public cloud. This means that test harnesses like the GTH can easily be delegated to the cloud services offered by public providers (Microsoft Azure, Amazon AWS, etc.). As an added benefit, the test harness runs not only where it is most efficient, but also in very heterogeneous technical testing environments, validating them with no additional effort. It really is “killing two birds with one stone”.
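The orchestration and portability points above can be sketched as a Kubernetes Job, the natural primitive for a run-to-completion test container. The image name, labels and resource figures below are hypothetical, shown only to illustrate how one test group per commit could be scheduled:

```yaml
# Hypothetical Kubernetes Job scheduling one GTH test container.
apiVersion: batch/v1
kind: Job
metadata:
  name: gth-regression-commit-abc123
  labels:
    suite: regression
spec:
  backoffLimit: 0          # a failing test run must surface, not be retried silently
  template:
    spec:
      containers:
      - name: gth-tests
        image: registry.example.com/gth/regression:commit-abc123
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
      restartPolicy: Never
```

The same manifest runs unchanged on an on-premises cluster or a managed Kubernetes service in a public cloud, which is exactly the portability benefit described above.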
Why would you need any more compelling reasons to run your own tests in Docker containers? The advantages mentioned are already delivering impactful results for our product, and hence for our customers. Further, through an ever-present “dogfooding” culture, our DevOps team is able to demonstrate to other teams, from shared experience, just how beneficial these technologies are.
So, the LzLabs GTH has been containerized for dynamism, agility and efficiency. Visit our web site or get in touch to understand more about how we’re developing new approaches that disrupt current enterprise computing practices.