The Impact of Data Governance on M&A
Here is the short answer to the question of how much data governance matters in mergers and acquisitions: if you want to be acquired, it would be a good idea to have good data governance in place. Similarly, if you are in the market to make acquisitions, you should be more interested in companies that have good data governance than in those that don't. And the shoe is on the other foot too: if you plan to go shopping for other companies, you would do well to have data governance practices well established throughout your own organisation first.
The corollary, of course, is that if you want to remain independent then you should either not implement data governance at all or implement it poorly. But then you will have poor data quality and an inflexible infrastructure that does not support reuse, you will not understand your customers as well as your competitors do, and you will suffer on the bottom line as a result. That will depress your share price, which means you can be acquired for less money, which compensates the acquirer for the extra money it has to spend to get your systems into decent shape.
So, that’s the short answer. Let me explain.
Consider what happens after an acquisition is made (I will discuss what happens beforehand later). In the first instance, the new company is likely to be looking to reduce costs: removing duplication in staff, equipment, buildings and so forth. That much is relatively easy. However, probably before this happens, or at any rate before it can be completed, the new company is going to have to report its combined financial results. The way this is typically done is to create a (temporary) financial data mart into which the relevant information from each of the contributing sub-organisations can be loaded.
The problem, of course, is that you have just one quarter to complete this process. Moreover, you absolutely have to get it right. You would probably use a data integration tool for this purpose, not so much because it is faster and easier but because of the audit trail it provides, which, at least in the States, will be needed for Sarbanes-Oxley compliance. However, it is not loading the data that is the biggest issue. Suppose that the acquired company had no data governance in place and, more specifically, no data quality checking. It is entirely possible that some of the relevant data is held in spreadsheets. Now you could have a real problem loading the data: there may be hidden errors that have to be identified and fixed, and you have to make sure that you have actually identified all of the relevant data sources (spreadsheets and Access databases are classic examples of sources that the IT department may not know about). If the acquired company has fully implemented data governance then these things will already be known.
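To make that concrete, here is a minimal sketch, in Python with pandas, of the sort of pre-load checks I have in mind. The file, sheet and column names (entity, account, period, amount) are purely hypothetical, and a real consolidation would use a proper data quality tool rather than hand-rolled code.

```python
# Minimal sketch of pre-load checks for a consolidation data mart.
# Assumes pandas (plus an Excel reader such as openpyxl); the column
# names entity, account, period and amount are hypothetical.
import pandas as pd

REQUIRED_COLUMNS = {"entity", "account", "period", "amount"}

def check_source(path, sheet=0):
    """Return a list of data quality problems found in one spreadsheet."""
    problems = []
    df = pd.read_excel(path, sheet_name=sheet)

    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        problems.append(f"{path}: missing columns {sorted(missing)}")
        return problems  # cannot check further without the key columns

    # Amounts that will not load cleanly into the mart
    non_numeric = pd.to_numeric(df["amount"], errors="coerce").isna() & df["amount"].notna()
    if non_numeric.any():
        problems.append(f"{path}: {int(non_numeric.sum())} non-numeric amounts")

    # Blank keys and duplicated entity/account/period combinations
    if df[["entity", "account", "period"]].isna().any(axis=None):
        problems.append(f"{path}: rows with blank entity, account or period")
    dupes = int(df.duplicated(subset=["entity", "account", "period"]).sum())
    if dupes:
        problems.append(f"{path}: {dupes} duplicate entity/account/period rows")

    return problems
```

The point is not the code itself but the fact that, without governance, nobody even knows which spreadsheets these checks would need to be run against.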
In the medium term the new company will want to co-ordinate new business processes that work across the new organisation, it will want to improve the efficiency of the back office and it will want to cross-leverage the two companies' customer bases. There are basically three approaches to integrating the product and customer bases: you can leave them separate, in which case you will gain no efficiency savings, or additional sales, in this area; you can consolidate them, which will require a major migration effort (which I will discuss under long-term plans below); or you can synchronise the two sets of operational data, typically by implementing a master data management (MDM) solution. Given that you can implement MDM in a lightweight fashion via a registry, this is the approach I would recommend in the first instance, as you should be able to get real benefits within a few weeks or months at most. You can evolve this into a hub-based approach later if you want to.
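To illustrate the registry idea (this is only a sketch, with made-up record shapes and a deliberately naive matching rule), a registry-style MDM solution keeps no golden copy of the customer data at all: it simply maps a master identifier to the source records, wherever they live.

```python
# Minimal sketch of a registry-style MDM cross-reference.
# Record shapes, the matching rule and the system names are hypothetical;
# a real registry would use proper matching and survivorship logic.
from dataclasses import dataclass, field

@dataclass
class MasterRecord:
    master_id: str
    # Source records stay where they are; the registry only keeps
    # pointers of the form (source system, local key).
    source_keys: list = field(default_factory=list)

class CustomerRegistry:
    def __init__(self):
        self._masters = {}   # master_id -> MasterRecord
        self._index = {}     # match key -> master_id

    @staticmethod
    def _match_key(name, postcode):
        # Naive match rule, for illustration only.
        return f"{name.strip().lower()}|{postcode.replace(' ', '').upper()}"

    def register(self, system, local_key, name, postcode):
        """Link a source record to a master id, creating one if needed."""
        key = self._match_key(name, postcode)
        master_id = self._index.get(key)
        if master_id is None:
            master_id = f"M{len(self._masters) + 1:06d}"
            self._masters[master_id] = MasterRecord(master_id)
            self._index[key] = master_id
        self._masters[master_id].source_keys.append((system, local_key))
        return master_id

# e.g. both companies' extracts end up pointing at the same master customer:
registry = CustomerRegistry()
registry.register("acquirer_crm", "C-1001", "Acme Ltd", "SW1A 1AA")
registry.register("acquired_erp", "98765", "ACME LTD ", "SW1A1AA")
```

Because nothing is physically moved, this kind of registry can be stood up quickly and then evolved into a fuller hub later, which is exactly why I recommend it as the first step.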
However, whatever sort of MDM solution you adopt, data quality considerations, and indeed the whole data integration environment, are going to be of major significance. Put simply, there is no point in implementing MDM without good data quality, or without appropriate mechanisms to load the MDM system initially and keep it synchronised thereafter. On the data quality front, if data governance is already in place then putting MDM in place will be much faster and easier than if you have to start with a major data quality programme.
In the long term, the new organisation will want to consolidate its ERP and other major systems. For large enterprises this is a project that can take years and cost hundreds of millions of dollars. A major element of such migrations is the data. Worse, recent research we have conducted suggests that only around one in six such projects brings the data migration portion in on time and on budget. The main reason behind such overruns is a failure to fully understand the data that is to be migrated. In other words, the data sources are neither fully known (too many spreadsheets and Access databases) nor completely understood (data profiling tools have not been used). Moreover, hand-coded approaches to integration add significant cost and risk to migration projects because they cannot support the highly iterative approach to migration that we recommend. Once again, if data governance were fully in place then, by definition, this information would already be available and documented, and the right integration infrastructure would already exist, thereby eliminating this particular issue.
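For illustration, here is a minimal sketch, again assuming pandas, of the kind of per-column summary a data profiling tool automates across every candidate source before a migration is scoped; any real tool does far more than this.

```python
# Minimal profiling sketch: the sort of per-column summary a profiling
# tool produces for every source table before a migration is scoped.
# Assumes pandas; works on any DataFrame, whatever columns it contains.
import pandas as pd

def profile(df):
    """Return one row of profile statistics per source column."""
    rows = []
    for col in df.columns:
        s = df[col]
        rows.append({
            "column": col,
            "inferred_type": str(s.dtype),
            "rows": len(s),
            "nulls": int(s.isna().sum()),
            "distinct": int(s.nunique(dropna=True)),
            "sample_values": s.dropna().astype(str).unique()[:3].tolist(),
        })
    return pd.DataFrame(rows)

# e.g. profile(pd.read_csv("customers_extract.csv")) gives a quick view of
# nulls, cardinality and stray values that would otherwise surface mid-migration.
```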
How much does all of this cost? Well, how long is a piece of string? The average spend on the data aspects of an application migration is 13% of the total project budget, so if ERP consolidation spend is in the hundreds of millions then we can certainly say that the data-related aspects of a merger or acquisition can run into the tens of millions.
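To put purely illustrative numbers on that (the consolidation budget here is hypothetical): 13% of a $200m ERP consolidation programme is 0.13 × $200m = $26m of data-related spend, which is comfortably into the tens of millions.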
So, let's go back to the beginning. To what extent should you be able to predict, during the due diligence process, how much all of this is going to cost before you buy someone? Well, the truth is that most executives dealing with M&A are wildly over-optimistic when it comes to estimating the IT-related costs of acquisitions. As a result (or at least partly as a result), most mergers and acquisitions are failures. According to the Boston Consulting Group, more than half of all mergers and acquisitions between 1992 and 2006 were failures in the sense that they reduced rather than grew shareholder value.
While there are plenty of non-IT issues involved in successful M&A, IT costs do play a significant part, and the data aspects of these are not insignificant either. If the company to be acquired already uses data quality and profiling tools, that is good; if it has implemented MDM, that is better; but what you really want is a full programme of data governance, so that you know where the data is, what it is used for, that it is of suitable quality, and so on. If this is the case then merging the two companies' data (assuming that the buying company also has a robust data governance programme in place, which of course is by no means a given) will be much easier than otherwise, especially if a flexible data integration infrastructure is in place. In particular, there should be no unexpected glitches. In turn, your estimates of these costs during the due diligence phase should be more accurate and reliable. And, last but not least, the consultants carrying out that due diligence should have an easier time of it, so you should be charged less for the due diligence in the first place.
To summarise, and to slightly extend my argument, good governance (including process as well as data governance) makes M&A go more smoothly, for less money. Such acquisitions are therefore more likely to achieve their stated aim of increasing shareholder value.