As Mark Twain once said, “the report of my death was an exaggeration.” The same is true of the mainframe in the enterprise. As far back as the early 1990s, consultants, advisors, and executives have called the mainframe an outdated computing infrastructure that is both expensive and inflexible. While those descriptions may be fair, they miss the finer point: a significant percentage of commerce-based workloads still have their home on the mainframe. Applications from Federal and State governments to banks, credit processors, and insurance companies continue to survive and even thrive there.
While pro-mainframe programmers and administrators would have you believe this is because of the platform's robustness and high level of security, the reality is that it is difficult and expensive to rewrite applications that are as much as 50 years old for modern architectures.
Acting in its own best interest, IBM has improved hardware and services functionality to address one common complaint, namely inflexibility. Workload resiliency is being built into modern mainframes like the z16. Features such as capacity on demand provide quick and easy upgrades as well as support for disaster recovery scenarios. New IBM license metrics have reduced the software run rate for many clients. But IBM is only part of the equation, and inflexibility is only one problem.
The perception is that the mainframe is expensive because of Independent Software Vendors (ISVs) who have invested heavily in the platform and want to see positive returns on that investment. While that perception may be true, diligence in contract and vendor management can reduce costs and help maintain the value of the platform. And even if a company plans to reduce or eliminate its mainframe, managing well in the final years will still pay dividends.
There are three major activities in reducing software costs on any platform, including the mainframe. The principles of software savings are simple: to save money, you need to buy less of it or pay less for it, and you can't save money that you have already spent. With that in mind, the three activities of software savings are benchmarking your costs, optimizing your licenses and products, and then negotiating on the optimized, benchmarked software footprint.
The first step in gaining mainframe savings is to benchmark your costs against some control. This can be a do-it-yourself exercise using publicly available costs, either list prices or already negotiated discounts. An example is the General Services Administration (GSA) contracts, which are pre-negotiated. While these discounts are minimal, they do indicate the direction of the discounts available from that vendor. Another option is to contract with a benchmarking organization whose data is fit for the purpose.
These benchmarks should include actual client prices and a normalized view of pricing. For the mainframe, calculating cost per MIPS provides a normalized view across all data center sizes and even various workloads across the platform. To calculate, divide the total cost for the vendor, category, or entire data center by the number of deployed MIPS. A good starting point is about $1,000 per MIPS. Below that, a company is headed in the right direction; above it, there is work to be done.
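The cost-per-MIPS calculation above can be sketched in a few lines. The dollar figures and MIPS counts here are hypothetical, for illustration only; the $1,000-per-MIPS benchmark is the rough starting point from the text, not an industry standard.

```python
# Hypothetical figures -- substitute your own vendor totals and deployed MIPS.
annual_software_cost = 14_500_000  # total annual mainframe software spend, USD
deployed_mips = 12_000             # deployed MIPS across the data center

cost_per_mips = annual_software_cost / deployed_mips
benchmark = 1_000  # rough starting benchmark from the text, USD per MIPS

print(f"Cost per MIPS: ${cost_per_mips:,.2f}")
if cost_per_mips <= benchmark:
    print("At or below benchmark: headed in the right direction.")
else:
    print("Above benchmark: there is work to be done.")
```

The same division works at any granularity, per vendor, per software category, or for the whole data center, which is what makes the metric comparable across installations of different sizes.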
A big reason why anyone benchmarks anything is to understand the opportunities that are available for reducing costs. For instance, if a company finds that its IBM costs are higher than expected based on a normalized benchmark, there is an opportunity to review and reduce. Another example is within a category of software. Say that the database category is higher than expected, then the company can look at the opportunity to reduce its database costs. Once a benchmark is complete and opportunities are identified, optimization can take place.
Optimization is a simple process: ensure that you own what you need and use what you own. The most difficult task is verifying the business value of software spend. If a company has purchased a utility to ease the burden of database management, then optimization asks whether that burden has actually been eased. Is time being saved? Are quality outcomes being achieved? Human time is easy to measure; quality is more difficult. Regardless, measurement needs to take place to ensure that the company is using what it owns.
There are two simple optimization tasks that need to be understood and undertaken. The first is to eliminate redundancy. For a given task or function, does the organization own more than one product that performs it? If so, the products are redundant. No company ever sets out to duplicate functionality; it just creeps in. Have you undergone a merger? Often when two companies merge, two cultures with different values and different standards come together. Many ignore the obvious redundancy and justify it by the elimination of the need to retrain. That is a good example of burying your head in the sand and hoping no one notices. Redundancy elimination starts with product categorization, usage measurement, and contract review.
Once product categories are understood, products that share the same category can be reviewed for elimination. A byproduct of categorization is uncovering opportunities to replace a category with lower-cost technology; this starts in benchmarking but can continue into alternative identification. If two products occupy the same category but one is not used, the elimination candidate becomes obvious. Reviewing the contracts of the redundant products may show that one is under a long-term contract and shouldn't be eliminated yet.
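The categorize-measure-review sequence above can be sketched as a simple inventory pass. The product names, categories, usage flags, and contract terms below are entirely hypothetical, stand-ins for what a real software asset inventory would hold.

```python
from collections import defaultdict

# Hypothetical inventory -- names, categories, usage, and contract status
# are illustrative, not real vendor data.
products = [
    {"name": "DB Utility A", "category": "database tools",
     "in_use": True,  "long_term_contract": False},
    {"name": "DB Utility B", "category": "database tools",
     "in_use": False, "long_term_contract": False},
    {"name": "Scheduler X",  "category": "job scheduling",
     "in_use": True,  "long_term_contract": True},
]

# Step 1: categorize.
by_category = defaultdict(list)
for p in products:
    by_category[p["category"]].append(p)

# Steps 2 and 3: within redundant categories, flag unused products
# that are not locked into a long-term contract.
for category, group in by_category.items():
    if len(group) < 2:
        continue  # only one product in the category: no redundancy
    candidates = [p["name"] for p in group
                  if not p["in_use"] and not p["long_term_contract"]]
    print(f"{category}: elimination candidates -> {candidates}")
```

In this toy inventory, only the database-tools category is redundant, and the unused product without a contract lock-in surfaces as the candidate, exactly the "obvious" case the text describes.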
The other simple optimization task is to ensure that as systems are downsized on the way to elimination, capacity-based software is reduced along the way. Most mainframe software is licensed by MIPS or MSU quantities. As systems are downgraded, these licenses and contracts should be reviewed. In fact, a good negotiation technique is to ensure that a true-down capability exists so that the savings can be harvested immediately.
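The value of a true-down clause is easy to quantify. The MIPS quantities and per-MIPS rate below are hypothetical, for illustration only.

```python
# Hypothetical figures for illustration.
licensed_mips_before = 10_000   # licensed capacity before the downgrade
licensed_mips_after = 7_500     # capacity after the system is downsized
rate_per_mips = 85              # assumed annual license rate, USD per MIPS

# With a true-down clause, the license quantity follows the smaller
# system and the savings can be harvested immediately; without one,
# the company keeps paying for capacity it no longer runs.
annual_savings = (licensed_mips_before - licensed_mips_after) * rate_per_mips
print(f"Annual savings with true-down: ${annual_savings:,}")
```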
This brings us to step three, negotiation. A benchmark has been completed to identify opportunities. Product sets have been optimized so that redundancy has been eliminated and license quantities match system sizes. Negotiation reaps the benefits of these exercises by contracting for less and paying less. A simple, often overlooked step for mainframe software is to ensure that processes exist for upgrades, so you don't buy upgrades that you don't need or already have. The best outcomes of a negotiation can be wiped out by unanticipated upgrade fees.
Benchmarking, optimizing, and negotiating are the three steps that will turn an aging mainframe software stack into an asset to the company. The benefits of good optimization and negotiation will last for years to come. The mainframe's remaining life may be limited, but it's not dead yet. Don't ignore the obvious opportunities in software cost elimination. Rumors of the death of the mainframe have been exaggerated.