BLU Acceleration
Content Copyright © 2013 Bloor. All Rights Reserved.
BLU Acceleration has been introduced with the latest version of IBM DB2 for Linux, UNIX and Windows (10.5), and some elements of the technology have also been included in the Informix Data Warehouse Accelerator.
It is based on a combination of parallel vector processing, dynamic in-memory caching, columnar storage, a query technique known as data skipping, and extended compression. It eliminates the need for indexes and aggregates (and therefore the tuning those artifacts require), and it does so without any change to existing SQL or to the schema. It operates on the data while it is still compressed, saving the CPU time that would otherwise be spent on decompression: not only predicates but also joins, grouping, in-lists and LIKE predicates are evaluated against the compressed data. IBM claims the technology is “better than in-memory” because whatever data resides in the cache is in-memory optimised, yet, as in a traditional database, the data can be larger than the cache and is pre-fetched on demand while queries run. That helps for large marts and warehouses where the most active part of the data is small enough to fit in memory (such as the most recent year) but a larger volume (perhaps 10 years) needs to be available for occasional access.
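To make the idea of operating on compressed data concrete, here is a minimal sketch in Python (purely illustrative; the names and encoding scheme are my own, not IBM's) showing how an equality predicate can be evaluated against a dictionary-encoded column without decompressing a single value:

```python
# Illustrative sketch (not IBM's code): evaluating an equality predicate
# against a dictionary-encoded column without decompressing the values.

# Each distinct value in the column gets a small integer code; the column
# itself stores only the codes.
dictionary = {"London": 0, "Paris": 1, "Toronto": 2}
encoded_column = [2, 0, 1, 2, 2, 0]   # stands in for Toronto, London, Paris, ...

def select_rows_equal(encoded, dictionary, literal):
    """Return the row positions where column = literal, comparing codes only."""
    code = dictionary.get(literal)
    if code is None:          # literal not in the dictionary: nothing can match
        return []
    return [pos for pos, c in enumerate(encoded) if c == code]

print(select_rows_equal(encoded_column, dictionary, "Toronto"))  # -> [0, 3, 4]
```

If the encoding is order-preserving, range predicates and joins can be handled the same way, which is the kind of trick that lets an engine avoid paying the decompression cost.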
IBM is reporting some big speedups with significantly reduced DBA tuning: database memory, workload management and other configuration details adapt to your server automatically. At its launch event on April 3, IBM reported typical performance gains of 8x-25x, with multiple reference customers and partners standing up with even larger numbers (25x-74x), though the company admits that results will vary.
Basically, what this all means is that the relational storage engine in DB2 (as opposed to the XML and graph storage engines) will now be able to store data in either of two types of table: a conventional row-based relational table or a compressed, encoded columnar table. Row-based tables are also compressed, of course, but you would expect better compression on columns because all the data in a single column has the same datatype, so the compression algorithms can be optimised more effectively. As noted, you do not have to change your schema to implement column-based storage.
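As a rough illustration of why column-organised data compresses so well, consider the following Python sketch (again illustrative only; it uses a simple per-column dictionary, not IBM's actual encoding):

```python
# Illustrative sketch: all the values in one column share a type and often
# repeat, so a per-column dictionary encoding is very effective; a
# row-organised layout interleaves heterogeneous values and offers far
# fewer such opportunities.

rows = [
    (1001, "GOLD",   "UK"),
    (1002, "SILVER", "UK"),
    (1003, "GOLD",   "DE"),
    (1004, "GOLD",   "UK"),
]

# Row-organised: one record after another, types interleaved.
row_store = [value for row in rows for value in row]
print(len(row_store), "interleaved values in the row-organised layout")

# Column-organised: each column stored (and encoded) separately.
for col in zip(*rows):
    dictionary = {v: code for code, v in enumerate(dict.fromkeys(col))}
    codes = [dictionary[v] for v in col]
    print(len(dictionary), "distinct values ->", codes)
```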
The advantages of columns have been well rehearsed: they reduce the need for indexes and often mean reading far less data. However, they are not a panacea: there are some types of query for which it may still make sense to have indexes even when using columnar storage. Just look at Sybase IQ, which supports a variety of index types despite being exclusively columnar. In the case of DB2 you will have a choice: row-based data with indexes or column-based data without indexes, which means you will need to think carefully about which data to store in columns and which in rows. Queries, incidentally, can span both row- and column-based storage, and IBM claims that it is easy to migrate tables from rows to columns. However, you only get BLU Acceleration for the columnar data.
Data skipping is similar, in theory, to Netezza's zone maps. That is, it allows queries to skip over data that is not required to answer the query in hand. The difference is that, whereas zone maps on the Netezza-based appliances (PureData System for Analytics) apply to data on disk, BLU Acceleration provides data skipping regardless of whether the data is coming from disk or from memory.
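In spirit, data skipping works something like the following Python sketch (my own simplification, not IBM's implementation): keep min/max metadata for each block of a column and skip any block whose range cannot possibly satisfy the predicate, wherever that block happens to live.

```python
# Illustrative sketch of data skipping in the spirit of zone maps: per-block
# min/max metadata lets the scan skip blocks that cannot contain a match.

BLOCK_SIZE = 4
column = [2013, 2012, 2013, 2011,   # block 0
          2004, 2005, 2003, 2004,   # block 1
          2013, 2013, 2012, 2013]   # block 2

blocks = [column[i:i + BLOCK_SIZE] for i in range(0, len(column), BLOCK_SIZE)]
zone_map = [(min(b), max(b)) for b in blocks]   # built once, kept as metadata

def scan_equal(blocks, zone_map, target):
    """Scan only the blocks whose [min, max] range could contain target."""
    hits = []
    for block, (lo, hi) in zip(blocks, zone_map):
        if target < lo or target > hi:
            continue                      # whole block skipped, never touched
        hits.extend(v for v in block if v == target)
    return hits

print(scan_equal(blocks, zone_map, 2013))  # block 1 is skipped entirely
```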
The parallelism I mentioned is actually what I would call cross-core parallelism (because I think that is easier to understand: you can parallelise across the cores within a single CPU as well as across sockets). Columnar processing and operating on compressed data in memory are useful tricks, but with workload speedups in the 8x-74x range there is probably more to the technology than that. IBM claims that one of its strengths is the efficiency of this parallelism, made possible by some deep engineering to reduce “memory access latency”, in other words the time it takes to get data from RAM into the CPU where it can be processed. Most systems suffer a fair amount of memory access latency, and it becomes particularly disruptive across sockets; BLU Acceleration is doing something special to keep those latencies low.
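The flavour of that cross-core parallelism can be sketched as follows (an illustrative Python example, nothing like the real engine, which also relies on vector instructions and careful memory layout): partition the column, let each core scan its partition independently, and combine the partial results.

```python
# Illustrative sketch of a cross-core columnar scan: each worker process
# counts matches in its own partition; the partial counts are then summed.

from concurrent.futures import ProcessPoolExecutor

def count_matches(partition, target):
    return sum(1 for v in partition if v == target)

def parallel_count(column, target, workers=4):
    size = max(1, len(column) // workers)
    partitions = [column[i:i + size] for i in range(0, len(column), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(count_matches, partitions, [target] * len(partitions))
    return sum(partials)

if __name__ == "__main__":
    column = [2013, 2012, 2011, 2013] * 250_000   # a million values
    print(parallel_count(column, 2013))           # -> 500000
```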
The first big question is how this will stack up in performance terms against the likes of SAP HANA and other purely in-memory approaches. Certainly, IBM's approach should mean that you need less memory for the same tasks and, other things being equal, a lower-cost offering, because there is no need to buy enough memory for all of the data to fit in it. The question will be how performance compares. IBM's quoted figures are that things like cube loads and query performance should improve by an order of magnitude, and I understand that, for some queries, performance may improve considerably more than this. At its product launch on April 3, IBM quoted some individual query speedups of over 1,000x.
The second big question surrounds what used to be Netezza. With these features DB2 is starting to look like something that can compete directly with the PureData System for Analytics for analytic workloads. Of course, the latter has hardware acceleration (FPGAs) that DB2 does not, but what is memory if not a hardware accelerator? On the other hand, I daresay that the developers working on the Netezza platform will also be looking to see how they can leverage memory (and SSDs) so that they can stay ahead of the DB2 game.