IBM, Big Data, and its analytics solutions
Content Copyright © 2012 Bloor. All Rights Reserved.
Last year I looked at how IBM had been successful in acquiring and integrating the multiple components of its portfolio in the Business Analytics and Optimisation space; this time I have been given the opportunity to look at how those acquisitions are being deployed to solve clients' problems. The first thing to note is that, in addressing the issues of Big Data, with its volume, variety and volatility, most of IBM's solutions are created by amalgamating strands from its portfolio and using them in a coordinated fashion. So, for those who think that Big Data is just about Hadoop, IBM is quite rightly saying no: it takes more than that to address many of the problems at hand. Hadoop may be used as the centrepiece to handle the volume and variety of the data required to obtain an insight, but IBM's streaming platform is used to detect patterns in volatile data, and to remove much of the noise, while that data is still streaming into the analytics hub. The data warehouse, which is now more often than not a Netezza one, still collates the various indicators and remains the place where the really deep, actionable insights are produced on an ongoing basis.
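To make that division of labour concrete, here is a minimal sketch in Python of the stream-side step: discarding the steady-state noise and passing only notable deviations on to the warehouse. The data, threshold and function names are entirely illustrative; a real streaming platform works with far richer operators than this.

```python
from statistics import median

def filter_noise(readings, threshold=2.0):
    """Yield only readings that deviate notably from the recent baseline.

    A stand-in for the pattern detection a streaming platform performs
    before data reaches the analytics hub; the median is used as the
    baseline because it is robust to the very spikes we want to keep.
    """
    history = []
    for value in readings:
        # Compare each value against the median of what came before;
        # pass through only the "signal", not the steady-state noise.
        if history and abs(value - median(history)) > threshold:
            yield value
        history.append(value)

# A noisy stream with two genuine spikes (25.0 and 30.0).
stream = [10.0, 10.1, 9.9, 25.0, 10.0, 10.2, 30.0, 9.8]
signal = list(filter_noise(stream))
```

Only the two spikes survive the filter; the warehouse downstream then has far less, and far more meaningful, data to collate.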
Another thing that struck me was that, in last year's briefing, the one component that did not seem especially prominent was Cognos. This year I am pleased to report that Cognos 10 featured prominently in many of the solutions presented. Cognos Insight, Express and Enterprise now look like a well thought through and well executed family of tools, able to report, analyse, model, plan and offer collaboration capabilities across all of the platforms and devices required to expose the data to the widest range of end users.
I also saw, for the first time, how Watson, as well as winning quiz shows, is now being applied to real problems, with the same astonishing breadth of data and the same ability to assign credibility to sources in its search for support for a given hypothesis. Watson is now being used to assist healthcare with vital tasks such as cancer diagnosis and treatment. For those of us with money purchase pensions, I was pleased to see that IBM is tackling investment planning, and it is also looking at applications in the health insurance market. There are Watson applications for the contact centre too and, by the end of the year, for industry.
What impressed me most about IBM's approach to Watson deployment is the recognition that this is a solution that will only work once an organisation has reached a certain level of maturity in its ability to use and exploit what Watson can provide. IBM has therefore devised a maturity assessment for candidates and, for those who have not yet reached a level at which deploying Watson would make sense, a roadmap to help them focus on the actions required to drive up the maturity of their use of informatics so that they can exploit it meaningfully in the future.
At present, much of the activity in the Big Data space is being conducted on the far side of the chasm from full-scale commercial adoption. There is a Heath Robinson air about much of what is being done to deliver value, and many of the early adopters come from anything but a traditional BI and enterprise IT background. IBM, more than any other vendor I have spoken to, is taking very large steps to help this whole extension of analytics capability cross the chasm into mainstream usability, whilst ensuring that the drive and enthusiasm of the innovation is not lost along the way. I would point to its drive to add veracity to the three Vs of Big Data (volume, variety and volatility) as key: if things like Master Data Management are important in traditional solutions, they are even more critical with Big Data, where so much data comes from outside the enterprise. The development of accelerators to automate the donkey work of getting to value is another important innovation. People talk glibly about exploiting Facebook and Twitter, but building the lexicon to identify what is being said, and the sentiment behind it, is far from trivial, and it is things like that that IBM is automating. Another example is how IBM ensures that the deployment of data on a Hadoop cluster is managed so that it is dynamically optimised for the workload.
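To see why the lexicon work is non-trivial, consider the simplest possible lexicon-based sentiment scorer, sketched below in Python. The six-word lexicon and its weights are purely illustrative; real lexicons of the kind the accelerators help build run to thousands of weighted, domain-specific entries.

```python
# A toy sentiment lexicon: word -> weight. Entirely illustrative;
# production lexicons are large, weighted and domain-specific.
LEXICON = {
    "love": 2, "great": 2, "good": 1,
    "slow": -1, "bad": -2, "hate": -3,
}

def sentiment(text):
    """Score a message by summing the lexicon weights of its words.

    Note what this naive approach gets wrong: negation ("not good"),
    sarcasm and domain slang all defeat it, which is exactly why
    building a usable lexicon is far from trivial.
    """
    words = text.lower().split()
    return sum(LEXICON.get(w, 0) for w in words)

messages = ["I love this product", "support was slow and bad"]
scores = [sentiment(m) for m in messages]  # one score per message
```

Even this toy version shows the shape of the problem: the value is not in the summation, which is trivial, but in assembling and tuning the lexicon itself, which is the donkey work being automated.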
Big Data has the potential to be highly disruptive, and those with the most to lose are the established players. Of those big players, the one that appears best placed to ride out the wave of disruption, because it is being equally innovative and cost effective, is IBM.