Data centre schizophrenia
Also posted on: The Norfolk Punt
I’m coming down with a bout of schizophrenia. I know that IT is now taking a further lurch in its continuing journey towards abstraction, with cloud computing. The business is beginning to care much less about the technology itself, only whether its SLAs are being met, and business managers really don’t have to care much about technical details.
Part of me is entirely happy with this idea – I’ve experienced business managers trying to micromanage technology – but part of me screams that someone needs to actually understand the technology or the business will sign up to infeasible SLAs. And by the time it discovers this it will be too late, and any penalty clauses will be largely unenforceable anyway.
Perhaps data centre automation currently exemplifies this schizophrenia best. An important underpinning of Bloor’s Adaptive Organisation story, for enterprise-scale organisations, is the Dynamic Data Centre, supporting dynamic provisioning and seamless deployment, management and billing. Nevertheless, achieving the Dynamic Data Centre must be a journey from where we are now to where we’d like to be – and service levels to the business from “legacy” systems must not be compromised on this journey.
This means that data centre management tools must provide transparency between business goals and the technology underlying them. If the business wants business services supported by its data centre technologies, it must be able to manage at the service level – and feedback on performance and emerging issues must be available at the business service level, independently of the hardware and technology platform, or platforms, involved. At the same time, technicians must be allowed to manage the “bits and bytes” of legacy hardware and applications (although they should now start to report back to management in business service terms), because in many cases, if this legacy stops working, so does the business.
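To make that translation concrete, here is a minimal sketch – in Python, with entirely hypothetical service and component names – of the sort of mapping such a tool needs underneath: technical components are catalogued against the business services they support, so that a low-level alert can be surfaced in business service terms. This is an illustration of the idea, not any vendor’s actual mechanism.

```python
# A minimal sketch of mapping technical components onto business services,
# so that low-level alerts can be reported in business-service terms.
# All names here (components, services) are hypothetical.

from collections import defaultdict

# Which business services depend on which technical components.
SERVICE_CATALOGUE = {
    "order-processing": ["mainframe-db2", "cics-region-A", "web-tier"],
    "customer-billing": ["mainframe-db2", "billing-batch"],
}

# Invert the catalogue: component -> affected business services.
COMPONENT_TO_SERVICES = defaultdict(list)
for service, components in SERVICE_CATALOGUE.items():
    for component in components:
        COMPONENT_TO_SERVICES[component].append(service)

def report_component_alert(component: str, detail: str) -> list[str]:
    """Translate a technician-level alert into business-service impact."""
    affected = COMPONENT_TO_SERVICES.get(component, [])
    for service in affected:
        print(f"Business service '{service}' at risk: {component} reports {detail}")
    return affected

# A technician-level event, surfaced to management as service impact:
report_component_alert("mainframe-db2", "lock contention on order tables")
```

The technician still sees (and fixes) the lock contention; management sees which business services are at risk, without needing to know what DB2 is.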
This means, I think, looking at the tools that support the data centre status quo: how they are evolving to support a business service view, and how they are extending their capabilities into dynamic provisioning and so on. Many business-critical applications run from very large data centres today, and ripping out old data centres and their applications to replace them with, say, new cloud applications is not going to be feasible without risking a major discontinuity in service delivery to customers – entailing, at best, significant reputational risk.
In essence, existing data centre management tools, managing existing applications, will be needed for a long time to come. However, they will, increasingly, also need to operate across technology boundaries (managing the end-user experience by monitoring transactions at all stages of their life, from mainframe database to user desktop – or mobile phone) and to supply management feedback at an aggregated whole-service level (via a dashboard, perhaps), possibly in terms of exceeded thresholds and exception events. And, at the same time, they must support the technicians with as much detail as they need to make the bits and bytes work.
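As an illustration of the shape of that aggregation (and again not any particular product’s API), the sketch below rolls per-hop timings for a transaction – hypothetical stage names, and an assumed 2-second service-level target – up into a whole-service response time, emitting an exception event for the dashboard only when the threshold is exceeded, while keeping the per-stage detail the technician needs.

```python
# Sketch: rolling per-hop transaction timings up into a whole-service view.
# Stage names and the 2-second threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class HopTiming:
    stage: str      # e.g. "mainframe-db", "app-server", "desktop-render"
    millis: float   # time spent in this stage

SLA_THRESHOLD_MS = 2000.0  # assumed end-to-end service-level target

def check_transaction(txn_id: str, hops: list[HopTiming]) -> None:
    """Aggregate hop timings; raise a dashboard exception event on breach."""
    total = sum(h.millis for h in hops)
    if total <= SLA_THRESHOLD_MS:
        return  # within SLA: no event, the dashboard stays quiet
    worst = max(hops, key=lambda h: h.millis)
    # The exception event carries both the business-level fact (SLA breached)
    # and the technician-level detail (which stage dominated).
    print(f"EXCEPTION txn={txn_id}: {total:.0f}ms > {SLA_THRESHOLD_MS:.0f}ms "
          f"(worst stage: {worst.stage} at {worst.millis:.0f}ms)")

check_transaction("txn-0042", [
    HopTiming("mainframe-db", 1500.0),
    HopTiming("app-server", 400.0),
    HopTiming("desktop-render", 350.0),
])
```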
And what of the move to externally hosted cloud services, if or when they arrive in this world? Well, if reporting and end-user experience management at the aggregated service level is already in place, cloud services should simply slide in underneath. However, it will be important that continuity of management information is maintained. Today, it is unacceptable if a badly performing transaction falls into an information “black hole” when it hits, say, mainframe CICS; but it will be equally unacceptable if an information black hole resides in some cloud service vendor’s dynamic data centre!
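One plausible way to keep the lights on across that boundary – sketched below purely as an assumption about how the plumbing might work, with a hypothetical header name and endpoint – is to propagate a correlation identifier with each transaction into the cloud provider, so the externally hosted leg can be stitched back into the same end-to-end view. Real systems would typically use a standard such as W3C Trace Context for this.

```python
# Sketch: propagating a correlation ID across an organisational boundary so a
# transaction's cloud-hosted leg stays visible in the end-to-end view.
# The header name and endpoint are hypothetical.

import uuid
import urllib.request

def call_cloud_service(url: str, txn_id: str) -> str:
    """Attach the transaction's correlation ID to the outbound request."""
    request = urllib.request.Request(url)
    request.add_header("X-Correlation-ID", txn_id)  # hypothetical header name
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")

# Each transaction carries one ID from desktop to cloud and back; the cloud
# vendor's monitoring must log against the same ID, or the black hole returns.
txn_id = str(uuid.uuid4())
# body = call_cloud_service("https://cloud.example.com/api/orders", txn_id)
```

The point is contractual as much as technical: the SLA with the cloud vendor needs to require that management information keyed to that identifier comes back across the boundary, or the aggregated service view stops at the vendor’s front door.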