Thilo Rockmann is chairman of LzLabs, a Swiss startup that launched a software-defined mainframe in March this year to help relegate the mainframe to the history books.
Many of the world’s largest organisations rely on mainframe computer systems that run COBOL – a programming language dating back to the late 1950s.
There are hundreds of billions of lines of COBOL code still in use, with more than 3,000 of the world’s largest companies still dependent on legacy mainframe applications, which in turn power over 70 percent of the world’s transactions.
This is no work of science fiction: mainframes and the COBOL code they run still play a huge role in global technology infrastructure.
The problem, though, is that this technology is outdated, sluggish and incredibly expensive to maintain. Here are just a few reasons why:
- Mainframes make the effective use of modern technologies complicated and highly costly, meaning organisations that refuse to relinquish their dependence are losing ground to competitors.
- The price-performance profile of legacy mainframe hardware has been eclipsed by cloud and on-premise x86 computing.
- Mainframe skills are in critically short supply and are fast disappearing altogether.
- Mainframe application architectures are stubbornly resistant to the agile, DevOps models that born-on-the-web companies can take advantage of.
Taken in isolation, any one of these issues is a reason to migrate, but taken together they represent a perfect storm of existential proportions.
Mainframe IT workers are a dying breed
For 40 years, mainframe application customisation has been delivered by professionals who are now mostly retired. With no tangible influx of mainframe-trained IT staff, the world’s largest and oldest organisations are understandably worried:
- Who is going to maintain this tech, when there are so many other avenues for future innovation in technology?
- Who is going to have the expertise to migrate these applications from our mainframe, when the availability of these services is absolutely business critical?
- How much is it going to cost to move enterprise IT away from the mainframe, and how difficult will this be?
Then there is the risk of losing competitiveness. In the dark IT rooms of the world’s financial institutions, it is widely believed that mainframe systems are reliable and capable of handling migration to new platforms when necessary.
The reality is that the price-performance gap between mainframes and more modern IT architectures, such as x86 servers, is widening rapidly, and within a few years the difference will be staggering.
Any bank that clings to the mainframe is going to be left in the dust by competitors taking advantage of the enormous price and performance benefits enabled by modern systems.
At the same time, a new breed of competitor is confronting the world’s banks: companies born on the web. These companies have never been shackled to ageing IT infrastructure, and have the freedom to exploit modern application architectures such as open-source software and the cloud.
These new architectures are more flexible, enable companies to change their IT approaches and let them integrate new solutions rapidly. This puts older businesses dependent on mainframes at a serious competitive risk within their respective markets.
Why aren’t companies doing it?
The mainframe has a deep-rooted legacy in a number of industries – particularly finance and insurance – as these industries were among the earliest adopters of computing technology.
This has, however, turned out to be a double-edged sword. These large organisations still spend millions of dollars each year maintaining and investing in mainframe technology. For this reason, boards are hesitant to ditch the technology, and those running the systems are apprehensive about a change in infrastructure because of the risks involved.
While companies have started running mainframe applications in new environments, the risks of migrating them entirely through conventional methods – rebuilding, repackaging and recompiling those applications – are extremely high.
Such projects typically involve recompiling COBOL code, which is expensive, can take years to complete, and requires intensive testing of application behaviour that even the skilled workers now approaching retirement may no longer fully understand.
How can it be done?
For the reasons mentioned, companies approach mainframe migration with extreme caution, and rightly so – until recently migration projects have been synonymous with failed attempts and lengthy, expensive recompilation projects.
To avoid such issues, one approach being taken by the industry is to replace the architectural foundation of the mainframe with a container-based software solution, which enables old COBOL applications to run on new architectures such as x86.
With a software-defined approach, migration away from the mainframe requires no recompilation of application code and no changes to company data or job control language, and lets companies run most major legacy application sets on Red Hat Enterprise Linux and in the cloud.
This means CIOs no longer have to grapple with the dilemma of keeping mission-critical applications running on old systems at the expense of their ability to innovate: large-scale redevelopment of the application architecture can be pursued after the migration, future-proofing their IT infrastructure.
A software-defined approach, correctly applied, can significantly reduce the risks involved in migrating away from the mainframe. Not every organisation will remove legacy systems from its enterprise altogether, but CIOs at these organisations now have the option to adopt the cloud and open up the IT infrastructure that will be crucial to business performance in the future.
The shackles of mainframe are coming off, and we expect companies to sprint to freedom in the coming years.
NS Tech’s guest opinions are an opportunity for expert and interesting people to put their views to the test. They do not necessarily represent the views of NS Tech.