Addressing the lack of decent tools for data mapping: Mappenhaven by AptiMap
The most difficult thing in any activity is getting started, and nowhere is this more true than in building a Business Intelligence solution. BI solutions are complex and very difficult to articulate clearly until you have built up an agreed lexicon and been through several iterations of straw men, each one letting the users tell you enough about what they do not want for you to start gaining an insight into what they might like. We have been building BI solutions for years, and on most projects the sophistication of the tool support stretches no further than spreadsheets used as cross-tab repositories of source-to-target lists.
Over the years I have heard a lot of promises from the big vendors that this is an area they will cover at some point, but I have yet to see anything that really meets the needs of the business analyst. At present there is a lot of activity from the ETL vendors around building metadata hubs, and these produce very good reverse-engineered diagrams showing the lineage, from source through to report or analytical application, of what has been implemented, which is really useful for impact assessment and for managing the asset. But it really whets my appetite for something that helps us move from a green-field opportunity and blank sheets of paper to a documented, understood set of requirements deliverables: deliverables that identify and evaluate sources of data, and detail the transformations required to capture and normalise the source data so as to maximise its value across the data warehouse as a whole.
It was against this background that I was introduced to Mappenhaven by AptiMap. These guys are a niche developer based out of Texas, and it is immediately obvious that they know what they are trying to address with this solution. They have felt the same pain that I have in trying to use spreadsheets to track data for a BI solution, and have rejected the spreadsheet in favour of a proper graphical tool that tackles the whole issue of identifying data sources, their contents and the rules that apply to them. The biggest failure of spreadsheets is this: while a spreadsheet holds information in a fashion readily understood by a broad audience, it falls down when changes are detected and have to be rippled through the various tabs to reflect the new understanding. That is not only unbelievably arduous and tedious; the slow, painful change process inevitably introduces error, and as we all know, in business intelligence garbage in produces garbage out with one hundred percent certainty.
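To make that contrast concrete, here is a minimal sketch, entirely my own illustration rather than anything of AptiMap's, of source-to-target mappings held as a structured model instead of spreadsheet tabs. All the table and column names are invented. Because every mapping references one shared definition of each column, a change such as a rename is made once and is reflected everywhere at once.

```python
from dataclasses import dataclass

@dataclass
class Column:
    """A single, shared definition of a source or target column."""
    table: str
    name: str

@dataclass
class Mapping:
    """One source-to-target rule; it references shared Column objects
    rather than carrying its own copied-and-pasted text."""
    source: Column
    target: Column
    rule: str = "direct copy"

# Hypothetical column definitions, invented for illustration.
cust_id = Column("CRM.CUSTOMER", "CUST_ID")
dim_key = Column("DW.DIM_CUSTOMER", "CUSTOMER_KEY")
mappings = [Mapping(cust_id, dim_key)]

# A rename is applied once, to the shared definition...
cust_id.name = "CUSTOMER_ID"

# ...and every mapping that references it is up to date at once;
# in a spreadsheet the same change must be repeated tab by tab.
for m in mappings:
    print(f"{m.source.table}.{m.source.name} -> "
          f"{m.target.table}.{m.target.name} ({m.rule})")
```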
With this tool you can see how people are freed from the mundane and able to do what we all want to do, which is to focus on the value-adding aspects of analysis, explanation and exploitation. Those of us who have worked on data warehouses know that this whole process is not only painful but unbelievably slow, and slow means expensive; nor, given the nature of the work, is this a set of tasks that can be sent offshore if you want a quality outcome. So with this tool you can see not only that the right things are being done, in a thoughtful and clear fashion, but also that it should make a compelling business case, because it should speed up the process by a significant margin.
So what is the tool able to do? It can import data definitions, build visual lineage maps and provide full impact analysis (a rough sketch of that idea follows below). That makes it the perfect complement to reverse-engineering tools such as the Ab Initio metadata hub, providing similar coverage on the business analyst's side to feed the process with quality input, whether for BI solutions, data warehouses, big data projects or data migration projects. I think this is a super tool and wish them every success in promoting it. I just wish I had had a tool like this on many of my projects in the past.
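For those who like to see the idea in concrete terms, here is a minimal sketch of impact analysis over a lineage map. It is entirely my own illustration, with invented table and column names, and says nothing about how AptiMap have actually built it: the mappings form a directed graph, and assessing a change to a source field amounts to walking everything downstream of it.

```python
from collections import defaultdict, deque

# Lineage edges: each field feeds the fields listed against it.
# All table and column names are invented for illustration.
lineage = defaultdict(list, {
    "CRM.CUSTOMER.CUST_ID": ["STG.CUSTOMER.CUST_ID"],
    "STG.CUSTOMER.CUST_ID": ["DW.DIM_CUSTOMER.CUSTOMER_KEY"],
    "DW.DIM_CUSTOMER.CUSTOMER_KEY": ["RPT.SALES_BY_CUSTOMER.CUSTOMER"],
})

def impact(field):
    """Breadth-first walk of the lineage graph: everything downstream
    of the given field, i.e. everything a change to it could affect."""
    seen, queue, affected = {field}, deque([field]), []
    while queue:
        for nxt in lineage[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                affected.append(nxt)
                queue.append(nxt)
    return affected

print(impact("CRM.CUSTOMER.CUST_ID"))
# ['STG.CUSTOMER.CUST_ID', 'DW.DIM_CUSTOMER.CUSTOMER_KEY',
#  'RPT.SALES_BY_CUSTOMER.CUSTOMER']
```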