The evolution of testing, from a Bloor POV - Testing is an aspect of good governance and should be done early.
Content Copyright © 2013 Bloor. All Rights Reserved.
Testing has always been a bit of an afterthought for most developers, but it now sits at the point of intersection between new ways of developing SaaS-based, service-oriented business automation and new people-centric, app-based ways of doing business. This is becoming apparent in the face of high-profile failures to deliver the expected business outcomes from major automation projects, such as the US healthcare.gov rollout – as John Michelsen, Chief Technology Officer at CA Technologies, blogs: “I find it ironic that it is difficult to get our industry to focus on the inadequacy of our test engineering and yet it is now the subject of congressional hearings and nightly newscasts”.
As Michelsen points out, when even highly skilled people can’t get things right, we need a radically new way of doing things. Fortuitously, this comes at a time when a few recent conversations have encouraged me to look at testing in the context of my overall coverage of governance and development. How should testing evolve in an agile world of SaaS, people-centric computing using apps, and loosely coupled globalised services?
Testing as a governance story
Firstly, is testing part of good governance? Yes, of course it is! Or, at least, it sits at the very intersection of good governance and systems development.
Good governance is, to me, about ensuring that the money and resources the business invests in automation aren’t wasted. Part of this is making sure that what you develop contributes to the strategic outcomes of the business. Obviously, from this point of view, developing solutions to problems that the business doesn’t have, or solutions that try to facilitate business outcomes but fail to do so – by giving wrong answers or by crashing – are examples of poor governance. Testing is part of what gives management assurance that the development of its business automation is being well governed, and it gives advance warning of potential or actual problems – so they can be fixed cheaply before they impact the business.
Testing as waste avoidance
Nothing is more wasteful than building an excellent, high-quality solution that addresses a problem that isn’t quite the one the business needs to address. Not only is the rewrite to meet the real business need wasteful, it reduces the morale of those involved in building the solution; and, because of this and the time pressures around last-minute changes, it probably reduces quality too.
Traditional “test when we’ve built everything” approaches are no longer appropriate (if they ever were)
In the worst case, a product development process that overlooks, say, the security or performance needs of the business or of a particular class of customer may adopt a fundamentally dysfunctional architecture. Moreover, rewriting parts of a finished product at the last moment can introduce serious security, performance and maintainability problems. Then, time and resource constraints in the real world sometimes make addressing test failures difficult and let poor-quality products reach production – at which point the role of testing is reduced to identifying areas of functionality to avoid, or to helping business users develop workarounds (which implies a continuing drain on business resources from using a sub-optimal solution).
Testing good practice
Current testing good practice can be summarised as test early and fail early (preferably before you invest in too much code, and certainly before anything reaches production). The earlier you find defects, the cheaper they are to fix, partly because the fixes are then less coupled to the evolving system technology. In agile terms, deliver the “minimum useful functionality” (built with a test-driven process) and get feedback from the people using it (and who were involved in developing it) before addressing more functions or making it more widely available; don’t try to deliver all the possible functionality in a ‘big bang’ after a massive last-minute testing programme.
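As a minimal sketch of ‘test early’ in practice (the business rule and names here are invented for illustration, not drawn from any of the tools discussed), a test-driven fragment might look like this, with the test written first so that it fails until the behaviour exists:

```python
# Hypothetical test-first sketch: the test below is written before the
# production code exists, so the first run fails and drives the implementation.
import unittest


def bulk_discount(order_value: float) -> float:
    """Illustrative business rule: 5% off orders of 1,000 or more."""
    return order_value * 0.95 if order_value >= 1000 else order_value


class TestBulkDiscount(unittest.TestCase):
    def test_discount_applied_at_threshold(self):
        self.assertAlmostEqual(bulk_discount(1000), 950.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(bulk_discount(999), 999)


if __name__ == "__main__":
    unittest.main()
```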
There are plenty of tools available to help you with testing and its automation on a wide range of platforms. See, for example, the Bloor articles on popular HP and Embarcadero tools here and on CA’s mainframe application quality and testing suite here; and our test data management page covers tools for selecting and anonymising test data. That points to another piece of good practice: test automation tools are available – use them; you don’t have the time and resources to spare that manual testing would require.
Automated testing is, however, necessary but not sufficient. It stops you wasting resources on routine testing, which lets you redirect them towards the necessary testing that’s hard to automate (testing whether the product implements business strategy, for example). You can even get external help with testing, although there are several gotchas with this approach (discussed in the Bloor articles here and here). Shipping your code out to a lot of cheap off-shore testers just before you go live is not recognised as good practice – it is the complete antithesis of ‘test early’, and whatever your cheap testers (and they are usually chosen for being cheap rather than effective) do find is likely to be expensive to fix and to involve system rewrites. On the other hand, early input from external experts who really have the testing mindset, and who can report uncomfortable truths without ruining their careers, can be very useful.
As an aside, early testing, early involvement of stakeholders, and ‘fail early’ are all things achieved by eXtreme Programming done properly (although it often isn’t) – the issue is how you scale this up to large global projects.
Radically new approaches to testing
However, more than all this, you should widen the scope of what you think of as testing. Questioning and prioritising requirements in discussion with all the stakeholders – not only business sponsors and analysts, but regulators, auditors, security specialists and so on – is, to my mind, the first stage of testing considered as ‘defect removal’. Fixing errors in the developers’ understanding of the business process before they even start designing and writing code is extremely cost-effective; likewise, this is the best time to make sure that the architecture will support the security, performance and regulatory requirements of the business. Also, not building something that sits near the bottom of the business’s priorities and will probably hardly be used saves all its resources and eliminates all its potential defects.
We all want agile delivery, but if business analysis or testing is allowed to interrupt agile delivery, we are no longer agile. This implies a more mature approach to development – identifying a business outcome, designing a test programme that validates what the developed system does in the context of that desired outcome, and then looking at what the new system actually achieves in production and at what can be learnt from this for the next development. This is what a general process improvement initiative such as CMMI can help to achieve (see here for example), but some testing organisations are developing specialised maturity models just for testing (such as TMMi, Test Maturity Model integration) – Richard Sykes at Bloor chairs the TMMi Foundation – although there is some question as to whether testing maturity means much in the absence of true development and process maturity.
Testing as programming
Testing is potentially either an enabler for agile development or a barrier to it. Testing can be the place where agility stops, as you work out why what you are continuously delivering is disrupting the business; but automated early testing can be thought of as another kind of programming, incorporated into an agile development approach. You analyse a test scenario, determine its expected outcome, and write ‘code’ (test scripts) to invoke the component under test and verify the results. You can make testing agile by using the techniques of modern programming: reusing pre-built components; exploiting ‘testing as a service’ and using simulations of the external services that the component under test communicates with (so you can test even before related services are available); and using advanced analytics to feed progress back to management (and highlight any emerging issues).
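To make that concrete, here is a small, hypothetical sketch using Python’s standard unittest.mock; the component and the pricing service are invented for the example (this is not any vendor’s tooling), but it shows a test script invoking a component under test against a simulated external service:

```python
# Hypothetical sketch: the component under test calls an external pricing
# service; the test replaces that service with a simulation so it can run
# before the real service exists.
import unittest
from unittest.mock import Mock


class QuoteComponent:
    """Illustrative component under test."""
    def __init__(self, pricing_service):
        self.pricing_service = pricing_service

    def quote(self, sku: str, quantity: int) -> float:
        unit_price = self.pricing_service.price_for(sku)  # external service call
        return unit_price * quantity


class TestQuoteComponent(unittest.TestCase):
    def test_quote_uses_simulated_pricing_service(self):
        fake_pricing = Mock()
        fake_pricing.price_for.return_value = 2.50  # canned response
        component = QuoteComponent(fake_pricing)

        self.assertEqual(component.quote("SKU-1", 4), 10.0)
        fake_pricing.price_for.assert_called_once_with("SKU-1")


if __name__ == "__main__":
    unittest.main()
```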
This programmed approach to test automation is an integral part of IBM’s approach to DevOps (described further here) – continuous delivery of automated business outcomes, at scale. With this, defect removal can start extremely early, with impact analyses and with business-level feedback from simulations of the behaviour of the systems being tested, based on requirements, even before you start coding. What this achieves is an overall feedback loop – from business vision, through requirements, through continuous build and delivery, through to production monitoring – and back to an improved business vision and the next continuous delivery cycle.
This isn’t just for “big iron” development shops. I came across an interesting approach recently from useMango, for example. This exemplifies the possibilities of a programmed approach to testing in ERP applications:
- useMango has an inspection tool that scans the application under test and creates components to get values, set values and verify values on a form.
- These components are stored in a library and can be used, with simple drag and drop, to build business process tests efficiently.
- Business process tests can be consolidated into single components, for reuse in other consolidated tests (this speeds execution and is a bit like making use of an object-oriented development framework).
- Regression testing is simply a case of re-running the appropriate componentised test cases and, presumably, keeping the regression tests in sync with an evolving system is simply a matter of re-running the inspector.
This is an interesting approach, especially as useMango provides comprehensive management reports and audit trails for the testing process, although there might be a devil in the detail if you try to expand it to general environments, which may have less well-defined APIs.
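To illustrate the general idea of componentised, reusable test steps (this is a generic sketch, not useMango’s actual API), small ‘set value’ and ‘verify value’ components might be composed into business process tests, and those tests reused as components in turn:

```python
# Hypothetical sketch of componentised testing: small reusable steps are
# composed into business-process tests, and those tests can themselves be
# reused as single components (roughly the idea described above, though
# not useMango's real interface).
from typing import Callable, Dict, List

Step = Callable[[Dict[str, str]], None]


def set_value(field: str, value: str) -> Step:
    def step(form: Dict[str, str]) -> None:
        form[field] = value
    return step


def verify_value(field: str, expected: str) -> Step:
    def step(form: Dict[str, str]) -> None:
        assert form.get(field) == expected, f"{field!r}: {form.get(field)!r} != {expected!r}"
    return step


def process_test(steps: List[Step]) -> Step:
    """Consolidate a sequence of steps into a single reusable component."""
    def step(form: Dict[str, str]) -> None:
        for s in steps:
            s(form)
    return step


# Build a business-process test from library components...
create_order = process_test([
    set_value("customer", "ACME"),
    set_value("quantity", "3"),
    verify_value("status", "draft"),
])

# ...and reuse it inside a larger, consolidated test.
if __name__ == "__main__":
    form = {"status": "draft"}
    process_test([create_order, verify_value("customer", "ACME")])(form)
    print("componentised test passed")
```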
Testing as simulation
Finally, CA Technologies is talking (back to Michelsen, in his blog) about removing testing from the real world into a cloud-based simulated world, built using its CA LISA virtualisation technology, where you can explore system behaviours even before you’ve written a line of code! This could help to address my increasing concern that the complexity of loosely coupled, web-based applications, with (potentially) millions of global users and interactions with external services, makes them fundamentally untestable – using conventional testing approaches, at least.
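The underlying idea of service virtualisation can be illustrated with a very small, generic sketch (this is not CA LISA, just an assumed stand-in service returning canned responses) that lets dependent components be exercised before the real service exists:

```python
# Hypothetical illustration of service virtualisation: a stand-in HTTP service
# that returns canned responses, so dependent components can be exercised
# before the real service (or any of its code) exists.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/customers/42": {"id": 42, "status": "active"},  # invented test data
}


class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "unknown path"}).encode())


if __name__ == "__main__":
    # Components under test are pointed at http://localhost:8080 instead of
    # the real, possibly not-yet-built, external service.
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```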
A loosely coupled, long-term interaction is more complicated to code if you include the possibility of reversing erroneous or abandoned interactions (something else may have used something that you are trying to undo, and produced outcomes based on something that no longer exists) and allow for global variations in requirements; and there is always the possibility of ‘emerging behaviours’.
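A tiny, hypothetical sketch of that reversal problem (the booking domain and names are invented for illustration): an earlier step cannot simply be deleted once something downstream has consumed its results, so it has to be compensated instead:

```python
# Hypothetical sketch of compensating a long-running interaction: an earlier
# booking cannot simply be deleted if later activity has already built on it;
# it has to be compensated (e.g. refunded and flagged) instead.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Booking:
    booking_id: str
    downstream_uses: List[str] = field(default_factory=list)  # e.g. invoices raised
    state: str = "confirmed"


def reverse_booking(booking: Booking) -> str:
    if not booking.downstream_uses:
        booking.state = "cancelled"  # nothing depends on it: safe to undo outright
        return "cancelled"
    # Something else has already consumed this booking, so it cannot just
    # vanish: issue a compensating action and leave an audit trail instead.
    booking.state = "compensated"
    return f"compensated (downstream: {', '.join(booking.downstream_uses)})"


if __name__ == "__main__":
    print(reverse_booking(Booking("B-1")))                                     # cancelled
    print(reverse_booking(Booking("B-2", downstream_uses=["invoice INV-9"])))  # compensated
```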
With enough interactions, even extremely unlikely situations are statistically likely to occur several times a day and must be allowed for; and the system as a whole may deliver unexpected, possibly dysfunctional, outcomes that no-one anticipated. This means that I’m developing a real interest in radically new ways of testing development against business outcomes (my more off-the-wall thoughts, prompted by systems-engineering technology I saw at IBM Innovate, are here).