SOA Innovation and Metrics
Our article on SOA Governance provoked a rather wide-ranging conversation with Dr Jerry Smith, CTO of Symphony Services (an outsourcing supplier of software engineering services to ISVs and the like). Not surprisingly, Smith is interested in innovation and metrics: if you’re a “professional” outsourcer with your software engineers in India, you need to be able to deliver innovation (presumably your customers can manage the status quo well enough) and, just as important, you need to be able to prove that you deliver value. Without metrics, including baseline metrics for the status quo, how can you do that?
Interestingly, Smith is in full agreement with us when we say that what the business needs these days is the delivery of holistic business services, not just programs or automated applications. And we both recognise that this implies cultural change both in the IT group and in the business that pays its wages.
Smith is strong on the “organisational psychology” aspects of this: why do some organisations reward behaviours that perpetuate the status quo, even when they’re trying to innovate? Many factors are involved: managers often got where they are by being good at operating the status quo; immature “blame cultures” are common, in which identifying opportunities for improvement may just get you blamed for the failures you identify; you get what you measure, and we’re much better at measuring costs (such as those associated with innovation) than value; and we’re often bad at managing skills and capabilities, and the acquisition of new ones, without making the people being ‘reskilled’ feel inadequate. Smith advocates a “doctoring” approach, in which continuing external support, in conjunction with both technology fixes and lifestyle changes, results in a healthier organism. This sounds about right to us, as long as we distinguish “doctoring” from “therapy”: if you ever cure a patient in therapy, your cash flow walks out of the door, and we have met big IT consultants who seem to have adopted a therapy approach to their clients. If you take on consultants, it’s always well to think about the exit strategy up front, before you find that they’re a continuing drain on your budget; although, to be fair, this dysfunctional model is becoming less common of late.
We found that we’ve come to a similar place to Smith, but by a different route. Most organisations represent a system (possibly in the sense discussed here and here, although applying General Systems Theory to organisational behaviours has its issues) in a relatively steady state, even if this isn’t (or soon won’t be) delivering the business results the organisation wants. If we want to change the way such an organisation works (in the present context, if we want to introduce true service orientation), we have to overcome inertia; this usually involves sponsorship from the top and investment in training and new tools, and it is all many organisations worry about. However, once inertia is overcome and change has been adopted, you face a new problem: homeostasis, the tendency of a system in equilibrium to return to its initial state after it has been disturbed (an idea put forward by Mark Flynn in his presentation at the recent itSMF conference in Brighton). This is a long-term effect; in a world driven by short-term stock market performance, with executives moving on every couple of years, many organisations slip back into the dysfunctional pre-change status quo once management gets interested in something else.
Our solution to this is to institutionalise innovation: by providing continuing mentoring from charismatic teachers in the innovative world, by providing “feedback paths” from the workers at the sharp end of innovation to management, and by changing the metrics management uses to drive rewards and punishments in the organisation. Our concept of mentoring change sounds very like Smith’s doctoring. And both approaches bring us back to the realm of metrics for innovation and, in particular, for service-oriented innovation.
From his observation of innovative organisations, Smith has come up with 10 possible service delivery metrics, grouped by organisational area:
Corporate Metrics:
- Revenue per service;
- Service vitality index (the amount of revenue from new services over the last 12 months as a proportion of total service revenue; a worked sketch follows this list);
Management Metrics:
- Number of new services generated and used as a percentage of total services;
- Mean Time to Service Development (MTTSD);
- Mean Time to Service Change (MTTSC);
- Service availability;
Project Metrics:
- Service reuse;
- Cost of not using or stopping a service;
Service Development Metrics:
- Service complexity;
- Service quality assurance based on systems-level tests that examine the behaviour of service-oriented use cases across possible choreographies.
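To make the arithmetic behind the vitality index concrete, here is a minimal sketch in Python. The service names, revenue figures and record structure are invented for illustration; only the definition (revenue from services launched in the last 12 months as a proportion of total service revenue) comes from Smith:

```python
from datetime import date

# Hypothetical per-service revenue records; the structure is our
# assumption, purely for illustration.
services = [
    {"name": "credit-check",   "launched": date(2005, 3, 1),   "revenue": 400_000},
    {"name": "order-status",   "launched": date(2006, 9, 1),   "revenue": 150_000},
    {"name": "address-verify", "launched": date(2006, 11, 15), "revenue": 50_000},
]

def vitality_index(services, today, window_days=365):
    """Revenue from services launched within `window_days` of `today`,
    as a proportion of total service revenue (Smith's 12-month window)."""
    total = sum(s["revenue"] for s in services)
    new = sum(s["revenue"] for s in services
              if (today - s["launched"]).days <= window_days)
    return new / total if total else 0.0

print(f"Service vitality index: {vitality_index(services, date(2007, 1, 1)):.0%}")
# 33% here: 200,000 of 600,000 in revenue comes from services under a year old.
```

The point of the exercise is that the inputs are business-level figures (revenue by service), not technical delivery statistics; which is why, as we note below, this metric speaks to the business.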
According to Smith, “these aren’t the traditional measures one thinks about when discussing SOA implementations; however, they all provide some level of transparency into operational issues that impact either SOA agility or cost reduction business drivers”. That seems fair enough to us, and they’re certainly a good starting point.
Just to pick out some of these metrics:
- “Service vitality” seems, to us, particularly useful, as it addresses any slide back into the status quo driven by homeostasis. It is based on the value of services at the business-outcome level, not at the technical-delivery level, and the former is what the business should worry about.
- “Service reuse” hides a whole can of worms; remember how little real reuse at the business level we got out of object orientation? The issue with service reuse will come down to design for reuse, at appropriate levels of granularity. If a service is cohesive, if it does one thing (at some level) and does it well, then it will probably be reused. If it does several things, some of which aren’t needed and may even be implemented badly enough to impact service levels, it probably won’t be. And you can only reuse things you can find, so you will want to search on service-level attributes such as SLA and warranty, as well as on name and interface. SOA standards have some way to go before they support this fully in practice.
- “Service complexity” is particularly important because a loosely coupled, asynchronous SOA environment is inherently more complex (especially as regards error handling), it seems to us, than a tightly coupled, synchronous transaction-processing environment. A company should adopt SOA when (and only when) its benefits (to agility and business alignment) outweigh any increased complexity costs. We do already have accepted complexity metrics (see the first sketch after these bullets), but are they adequate in an SOA world?
- “Service quality assurance”: as with functional testing, with finite resources you can’t execute all possible use cases across all possible choreographies, and the possibility of ‘emergent behaviours’ in real-world, high-volume situations means that you will probably have to examine statistical outcomes. The quality metric for a service might then be that we are 95% confident of the system behaving within specified requirements parameters (see the second sketch below). This should, it seems to us, fit well with the concepts of risk-based testing.
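On complexity: one accepted metric that could be carried over is McCabe’s cyclomatic complexity, applied to the choreography graph rather than to code. This is our extrapolation, not Smith’s; the graph below and its representation are invented for illustration:

```python
# Model a service choreography as a directed graph: nodes are services,
# edges are the message flows between them (a hypothetical example).
choreography = {
    "order-entry":  ["credit-check", "inventory"],
    "credit-check": ["order-entry"],              # reply path
    "inventory":    ["fulfilment", "order-entry"],
    "fulfilment":   [],
}

def cyclomatic_complexity(graph):
    """McCabe's metric, E - N + 2P, on the choreography graph.
    P (connected components) is 1 here, as the example is connected."""
    nodes = len(graph)
    edges = sum(len(targets) for targets in graph.values())
    return edges - nodes + 2

print(cyclomatic_complexity(choreography))  # 5 - 4 + 2 = 3
```

Whether a path count like this adequately captures the cost of asynchronous error handling is, of course, exactly the open question raised above.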
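And on quality assurance: the “95% confident” formulation suggests treating sampled test runs statistically. Here is a minimal sketch using the standard Wilson score interval (our choice of method, not Smith’s) to put a lower confidence bound on the proportion of choreography runs that met their requirements:

```python
from math import sqrt

def wilson_lower_bound(successes, trials, z=1.96):
    """Lower bound of the Wilson score interval (z=1.96 for ~95%
    confidence) on the true proportion of in-spec runs."""
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = p + z**2 / (2 * trials)
    margin = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (centre - margin) / denom

# Suppose 980 of 1,000 sampled choreography runs met their requirements:
print(f"{wilson_lower_bound(980, 1000):.3f}")  # ~0.969
```

On these (invented) numbers, we could report roughly 95% confidence that at least about 97% of runs behave within the specified parameters; a statement that risk-based testing can actually act on.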
As we said, Smith’s 10 metrics seem like a reasonable starting point. However, the devil is in the detail, and this is just a short article pointing out some issues around SOA innovation. To be accepted widely, any such metrics need to be formulated and agreed by the industry as a whole, and we need case studies showing that a company with, say, a high service vitality index really is more agile than companies with a lower one.
However, on the principle of (as Smith puts it) “governance as a ‘necessary condition’ for SOA”, and on the principle that “you can’t manage what you can’t measure” (as just about everyone says, whether they mean it or not), this would be work well worth doing, if we want SOA to be a business success rather than just another passing acronym.