The myth of the expensive mainframe
Content Copyright © 2010 Bloor. All Rights Reserved.
Also posted on: The Norfolk Punt
Mainframes are expensive, aren’t they?
Well, that’s a piece of accepted wisdom I’ve never really bought into. I think that mainframes are a remarkably cheap platform if they are properly managed, partly because getting 80%-plus utilisation out of a mainframe processor, and out of the electricity it eats, isn’t hard.
Of course, there are valid reasons for people thinking that mainframes are expensive. Vendor pricing policies are partly responsible: you tend to be charged for the potential processing power of a mainframe platform, whether you use it or not. And mainframes are pretty big, expensive cabinets, so if you exceed the capacity of your mainframe by 5% at month-end, say, buying that extra “increment” of capacity can be pretty horrendous. Similarly, moving 95% of your mainframe processing onto distributed platforms doesn’t make a lot of financial sense, as you’re then paying for both a distributed platform and a woefully underutilised mainframe.
Still, things don’t have to be as bad as they sometimes look. Vendors seldom charge list price for mainframe technology, especially if you look seriously interested in (and capable of) moving to a different platform. And there are better vendor pricing schemes these days, under which you pay only for the MIPS you actually use: you can install a bigger mainframe than you currently need and pay for the extra capacity only when you use it. In fact, a mainframe vendor really has an interest in delivering cheap mainframe computing (whether it realises it or not), because it can then play strongly in the “green computing” space, and it would make more money from lots of people using cheap mainframe power in a green way than it does from a few people buying expensive mainframe power and feeling exploited.
The fly in the ointment is that mainframe vendors are neither stupid nor mindlessly altruistic. If you want to buy MIPS and software licences for applications that no-one needs or uses, they’ll be happy to sell them to you. If you don’t know how much capacity you need at month-end, say, and want to buy 100% overcapacity “just in case”, they’ll sell that to you too. If you assume that your hardware vendor understands what it is selling and so is the best partner to help you optimise hardware utilisation, well, it just might take a pessimistic (from your point of view) view of how much spare hardware capacity you should have. And if you design batch-oriented applications with very peaky demands, instead of applications which spread the load better and can throttle back peak demand, you’ll probably find that your utilisation-based charges are driven by the peak utilisation encountered on only a few days a month.
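To make that last point concrete, here is a minimal Python sketch of the effect. The numbers, and the simple four-hour rolling-average model of the billable peak, are my own illustrative assumptions rather than any vendor’s actual charging formula, but they show why the same total work tends to cost less when it is spread out rather than crammed into a short batch window.

    def rolling_peak(hourly_usage, window=4):
        """Highest average utilisation over any `window` consecutive hours."""
        return max(
            sum(hourly_usage[i:i + window]) / window
            for i in range(len(hourly_usage) - window + 1)
        )

    # Two hypothetical 24-hour profiles doing the same total work (1000 arbitrary units).
    spiky = [10] * 20 + [200] * 4           # month-end batch crammed into a 4-hour window
    smoothed = [10] * 12 + [880 / 12] * 12  # the same extra work spread over 12 hours

    print(f"Spiky workload, billable peak:    {rolling_peak(spiky):.0f}")
    print(f"Smoothed workload, billable peak: {rolling_peak(smoothed):.0f}")

In this toy model the two profiles do identical work, but the peak that drives the bill is nearly three times higher for the spiky one.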
Mainframes are cheap for organisations that manage their mainframe utilisation effectively and can negotiate with vendors from a position of full knowledge of their workload and its processing needs.
To do this you need tools that help you monitor actual processor utilisation, optimise SQL queries for efficiency, determine the actual concurrent utilisation of software licences, and so on. Perhaps you even need software that helps you maximise utilisation of the “free” zIIP and zAAP mainframe special-purpose processors. You need to be able to negotiate licence and hardware charges with your vendor from a position of power; that is, from a position of full knowledge of what processing power you actually need.
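As one small example of the sort of measurement involved, here is a sketch of working out the peak concurrent use of a single licensed product from check-out and check-in times. The record format and figures are made up for illustration; in practice the raw data would come from SMF records or the product’s own logs.

    from datetime import datetime

    # Hypothetical (check-out time, check-in time) pairs for one licensed product
    sessions = [
        (datetime(2010, 6, 1, 9, 0),  datetime(2010, 6, 1, 11, 30)),
        (datetime(2010, 6, 1, 9, 45), datetime(2010, 6, 1, 10, 15)),
        (datetime(2010, 6, 1, 10, 0), datetime(2010, 6, 1, 12, 0)),
        (datetime(2010, 6, 1, 14, 0), datetime(2010, 6, 1, 15, 0)),
    ]

    # Sweep-line: +1 at each check-out, -1 at each check-in, track the running maximum.
    events = sorted(
        [(start, +1) for start, _ in sessions] + [(end, -1) for _, end in sessions]
    )
    concurrent = peak = 0
    for _, delta in events:
        concurrent += delta
        peak = max(peak, concurrent)

    print(f"Peak concurrent licence use: {peak}")  # 3 here; paying for 10 seats would be waste

Knowing that only three seats are ever in use at once is exactly the kind of fact that changes a licence negotiation.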
And one more thing. If this mainframe monitoring software is key to cost-effective mainframe computing, don’t you want everybody in the organisation to be using it, wherever it is appropriate? Wouldn’t it be a good idea if such software monitored its own usage patterns and the savings it is making for you? Then you can be confident that you’re getting value out of it, and you can direct training resources where they’re needed, to address any underutilisation of your efficiency aids. If you look, you can find software that works like this, from Compuware for example, and it is a pity that more vendors don’t adopt a similar approach.
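Conceptually, that self-monitoring amounts to the tool instrumenting its own invocations and the savings it claims. The sketch below is my own illustration of the idea, not a description of Compuware’s (or anyone else’s) actual mechanism; the names and figures are hypothetical.

    from collections import defaultdict
    from functools import wraps

    usage_log = defaultdict(lambda: {"runs": 0, "estimated_mips_saved": 0.0})

    def self_monitoring(tool_name):
        """Wrap a tuning function that returns an estimated saving, and log both facts."""
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                saving = func(*args, **kwargs)
                usage_log[tool_name]["runs"] += 1
                usage_log[tool_name]["estimated_mips_saved"] += saving
                return saving
            return wrapper
        return decorator

    @self_monitoring("sql_tuner")
    def tune_query(query):
        # Placeholder: a real tool would rewrite the query and measure the difference.
        return 1.5  # hypothetical MIPS saved

    tune_query("SELECT * FROM ORDERS")
    tune_query("SELECT * FROM INVOICES")
    print(dict(usage_log))  # {'sql_tuner': {'runs': 2, 'estimated_mips_saved': 3.0}}

A usage log like this tells you both how much the tool is saving and which teams aren’t using it at all.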
This has only been a brief overview of the issues around cost-effective mainframe computing. The point to take away, however, is that mainframe computing can only be cost-effective if you know what your computing needs really are and can baseline your utilisation of the platform today, with the aim of continually improving your utilisation efficiency. To do this, you need monitoring and tuning tools; they should be able to pay for themselves through the increased efficiencies they deliver, and they should provide feedback to help you confirm that they are, in fact, improving efficiency.