The current leading-edge form of cloud computing is serverless. Wikipedia describes “serverless computing” as “a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity”. The name "serverless computing" is used because server management and capacity-planning decisions are completely hidden from the developer or operator. The focus is on “compute capacity”, NOT on specific servers.
According to this definition, all Infrastructure-as-a-Service (IaaS) offerings like Amazon AWS EC2 or Microsoft Azure Virtual Machines are excluded because they are based on pre-purchased capacity (CPU hours for different types of processors).
The current canonical serverless offerings are clearly services like Azure Functions or AWS Lambda: you develop and upload some business-oriented code delivering a computing service, and you define in your cloud console the URL at which this code will respond. That’s it! AWS or Azure takes care of the rest for you.
No Linux instance to provision, manage or monitor. Pure focus on business code! If you are successful with your application, the capacity growth is fully managed on your behalf by the cloud provider.
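To make this concrete, here is a minimal sketch of such a function, assuming the AWS Lambda Python runtime behind an API Gateway proxy integration. The `greet` business logic and the `name` parameter are hypothetical; only the handler signature and the `statusCode`/`body` response shape follow the documented Lambda conventions.

```python
# Sketch of a serverless function for the AWS Lambda Python runtime.
# There is no server to provision, manage or monitor: the provider
# invokes `handler` for each request arriving at the configured URL.
import json

def greet(name: str) -> str:
    # Placeholder for the "business-oriented code" described above.
    return f"Hello, {name}!"

def handler(event, context):
    # API Gateway proxy integrations pass query parameters in the event.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": greet(name)}),
    }
```

Scaling, patching and capacity planning for whatever runs this handler are entirely the cloud provider’s problem, not yours.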
AWS Fargate or Azure Container Instances are steps in this “serverless” direction: you only build Docker images unrelated to any specific Linux instance and ask for their execution. The service will then take care of the container orchestration and scheduling (with some limitations). You pay according to what you use. This is a bit “less serverless” (it implies more low-level system knowledge) than the strictly business-oriented lambdas, but it complies with the definition: you do not pre-purchase computing capacity tied to some precisely predefined computing machinery.
Canonical serverless computing reminds me of the prediction of one of my IT professors during engineering school, which became seared in my memory: “someday, computing energy will be provided based on consumption in a very standard shape and form like electricity”. It was visionary 30 years ago: the Internet was nascent and cloud computing was not even a concept. With the concept of serverless computing, we are almost there.
Serverless computing has advantages:
- cost directly linked to usage,
- provisioning headaches are eliminated,
- time-to-market shrinks, etc.
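The first advantage, cost tracking usage, is easy to quantify. The back-of-the-envelope sketch below uses illustrative unit prices (roughly the published AWS Lambda list prices; treat them as assumptions and check your provider) to show how a modestly used function costs almost nothing, because idle time is never billed.

```python
# Illustrative pay-per-use cost model. Both unit prices are assumptions
# approximating AWS Lambda list prices, not quotes.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD per 1M invocations (assumed)
PRICE_PER_GB_SECOND = 0.0000166667  # USD per GB-second (assumed)

def monthly_cost(invocations: int, avg_seconds: float, memory_mb: int) -> float:
    """Estimated monthly bill for one serverless function."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * avg_seconds * (memory_mb / 1024)
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# 2 million invocations of a 128 MB function averaging 100 ms:
cost = monthly_cost(2_000_000, 0.1, 128)
print(f"${cost:.2f} per month")  # under a dollar; idle time costs nothing
```

Contrast this with a pre-purchased virtual machine, which bills every hour whether requests arrive or not.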
Of course, it also has drawbacks: loss of control over infrastructure, fewer tuning or optimization capabilities, etc.
IaaS has an impact on operations (Ops) teams: they are reduced in size because hardware is provisioned by the cloud provider. In parallel, they are often merged with the development (Dev) teams, leading to so-called “DevOps” units that increase agility and velocity. But serverless computing – as it grows – will have an even bigger impact on the Ops team: it may even lead to their quasi-extinction, as systems software such as operating systems and middleware moves under the responsibility of the cloud provider. The sole focus of corporate IT becomes application code and nothing else.
So, time will tell whether serverless computing is likely to move corporate IT organisations from DevOps to “DevNoOps”.
Either way, serverless computing will become a very important computing paradigm as our world becomes increasingly about both interactions AND transactions:
- We now make more credit-card purchases than ever, even for very small amounts. The corresponding electronic transactions replace the physical cash exchanges of the past.
- Most often via smartphones, we interact all day long with online services provided by retailers and other kinds of merchants: for example, people check prices on the Internet for goods that they are about to buy in a physical shop. Additionally, nobody uses paper schedules for public transportation anymore: real-time information and electronic ticket purchase are the norm.
- Traditional corporations fuel this trend by achieving their digital transformation: an IT manager at Marriott explains that their “look-to-book” ratio has increased a few hundred times since their booking system became widely available on the Internet. And with no real additional business, which remains dictated by the number of rooms available in a given hotel!
- The fast-expanding ‘Internet of Things’ around us further increases this trend: your domestic security system reports in real time to the company monitoring your home, your connected car reports metrics to its manufacturer as you drive, etc.
Those electronic interactions/transactions (i.e. individual, indivisible IT interactions as per the Wikipedia definition) are growing at blazing speed.
Serverless computing is the architecture of choice to cope with this growth. It enables corporations to remain focused on continuous innovation via fast-paced application updates. If you have to cope with infrastructure provisioning in such a (positive) storm, you risk turning into a “box mover”, never fast and efficient enough to install all the newly required servers, and totally missing the opportunity passing you by.
The LzLabs Software Defined Mainframe® (SDM) enables legacy applications to execute in a binary-compatible form, in an x86 Linux OS environment or in the cloud, by providing two key components:
- The mainframe APIs (binary signature and exact semantics) required by the application.
- A container that wraps around the legacy code using the LzLabs’ Dynamic Instruction Set Architecture translation (DIT) to make it appear like a standard Linux application (regular processes and threads etc.) to the new hosting environment.
The aim of this sophisticated technology is to keep the SDM as unobtrusive as possible. Our philosophy can be expressed as:
- The power of modern x86 computing paradigms can be leveraged to run enterprise-class workloads, including cloud infrastructure deployment models.
- Our SDM approach ensures the lowest re-hosting cost and risk. You can move your existing workloads, with no requirement for recompilation, and they just run.
- The SDM is designed to provide a container environment to run mainframe applications, but, in all other ways, it leverages the power of Linux and open source environments.
What this means is that LzLabs is aiming to make its Software-Defined Mainframe fit within such serverless architectures. We aren’t there yet, but we are moving in this direction, trying to relieve our customers of the pain of infrastructure management:
- we make widespread and continuous use of containers via our Global Test Harness (GTH): the SDM is ready for a container-only world.
- we propose solutions to break mainframe monoliths into microservices, getting one step closer to lambda functions.
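To illustrate that last point, the sketch below is purely hypothetical (plain Python standard library, unrelated to any actual SDM API): it wraps a stand-in legacy routine, `legacy_interest`, behind a small HTTP endpoint — the kind of single-purpose microservice that is one step away from becoming a lambda function.

```python
# Hypothetical sketch: exposing one routine carved out of a monolith
# as a tiny HTTP microservice, using only the Python standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def legacy_interest(principal: float, rate: float) -> float:
    # Stand-in for business logic formerly buried in a monolithic batch job.
    return round(principal * rate, 2)

class InterestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        principal = float(qs.get("principal", ["0"])[0])
        rate = float(qs.get("rate", ["0"])[0])
        body = json.dumps({"interest": legacy_interest(principal, rate)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_server(port: int = 8080) -> HTTPServer:
    # Call make_server().serve_forever() to actually serve requests.
    return HTTPServer(("", port), InterestHandler)
```

Once the logic is isolated behind an interface this narrow, re-packaging it as a serverless function is mostly a deployment decision.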
Recently published figures report that 70% of the world’s financial transactions are still processed on a mainframe. Solutions like the SDM are here to make sure that the corresponding mainframe workloads can best adapt to new leading-edge IT paradigms. We want our customers to further leverage the massive investments initially made in the mission-critical applications currently crunching that data.
Our design philosophy is to provide just enough mainframe capabilities to support the seamless execution of customer applications on the SDM, but in all other ways to leverage modern computing paradigms, including serverless computing. Easy access to unlimited compute capacity, delivered in highly reliable cloud environments, enables organizations to achieve the power and reliability often associated with the mainframe without remaining dependent on legacy architectures and legacy pricing models. Cloud computing continues to move toward becoming “IT dialtone” – as ubiquitous and available as the dialtone of voice communication for the last 100 years.