Kx Systems and event processing
Kx Systems is not an event processing vendor, at least not in the conventional sense, but it does compete with event processing vendors, especially at investment banks, hedge funds and so forth, for applications such as (but not limited to) algorithmic trading.
So, how is Kx the same as, and how is it different from, the event processors? Well, it is similar in that you can process streams of events. The way that kdb+ (the product) works is that you use q (the company’s programming language, which has embedded SQL capability) to parse events as they reach the processing environment, so that they can be stored in time-series tables. In practice you can then query these tables, either using q or SQL (although the latter is limited), before the data is committed to the database. This isn’t precisely the way that event processing works, but it certainly does something similar, in the sense that you can query streams of data before that data is persisted.
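To make the pattern concrete, here is a minimal sketch of that idea: events are parsed as they arrive, appended to an in-memory time-series table, and queried before anything is persisted. It is in Python rather than q, and the table layout, column names and function names are my own illustrative choices, not anything from kdb+.

```python
from datetime import datetime

# In-memory "table": an ordered list of rows, standing in for kdb+'s
# intra-day in-memory table (column names are illustrative only).
trades = []

def on_event(raw):
    """Parse an incoming event (here a CSV string) into a row and
    append it to the in-memory table as it arrives."""
    sym, price, size = raw.split(",")
    trades.append({
        "time": datetime.now(),
        "sym": sym,
        "price": float(price),
        "size": int(size),
    })

def query(sym):
    """Query rows already captured from the stream -- note that
    nothing has been written to disk at this point."""
    return [row for row in trades if row["sym"] == sym]

on_event("IBM,101.5,200")
on_event("MSFT,29.1,500")
on_event("IBM,101.7,100")

print(len(query("IBM")))  # two IBM rows captured so far
```

In real kdb+ deployments this role is played by q itself, operating on proper columnar tables; the sketch only shows the shape of the query-before-persistence idea.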
Where kdb+ is different is that, despite this streaming capability, the feature is rarely used (according to Kx Systems, though it isn’t necessarily aware of all the applications its customers build); more usually the data is committed to the in-memory (intra-day) kdb+ database before queries are run.
Originally, kdb (the version of the product before kdb+) was a column-based relational database that stored data in memory and on disk. kdb+ retains all of those original elements and adds 64-bit processing and multi-threading.
I have extolled the benefits of column-based databases enough here in the past that I will not repeat them. Suffice it to say that Kx Systems’ customers do not seem to have performance problems and the company seems to be able to compete effectively with event processing vendors.
Having said that, I have not yet described any particular benefit that Kx can offer. No doubt the company would claim that there are many, but the standout example is that with Kx it is practical to run queries that combine streamed data with historic data (because retrieval from a column-based database is so fast). This contrasts with the event processing vendors, where data is really only stored to enable playback and back-testing rather than to provide proactive query capability.
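The kind of query I mean can be sketched as follows: today’s streamed prices sit in memory while prior days’ data sits on disk, and one query spans both. The Python below is only a caricature of that pattern, with made-up data and my own function names; in kdb+ the two stores would be the in-memory intra-day table and the on-disk historic database.

```python
# Historic daily closes: stand-in for kdb+'s on-disk historic database
# (figures are invented for illustration).
history = {
    "IBM": [100.2, 100.9, 101.1, 100.7, 101.0],
}

# Today's streamed trades so far: stand-in for the in-memory table.
intraday = [
    {"sym": "IBM", "price": 101.5, "size": 200},
    {"sym": "IBM", "price": 101.7, "size": 100},
]

def vs_history(sym):
    """Compare the latest streamed price with the average of the
    historic closes -- a single query spanning both stores."""
    hist_avg = sum(history[sym]) / len(history[sym])
    latest = [r["price"] for r in intraday if r["sym"] == sym][-1]
    return latest - hist_avg

print(round(vs_history("IBM"), 2))  # latest price vs historic average
```

The point is not the arithmetic, which is trivial, but that a column store retrieves the historic side fast enough for this to be usable in live decision-making.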
Now there are lots of times and places where this is not a necessary, or even a “nice to have”, requirement. For example, if you are monitoring a network of some type (railway, pipeline, IT and so on) then historic information is not a lot of use. Even in financial services (where Kx Systems focuses) there are many occasions when it is not needed: you don’t need it for VWAP (volume-weighted average price), for example, and probably not even for 30-day moving averages. However, that doesn’t mean there aren’t occasions when a hedge fund, say, might want to make use of such facilities. And it only takes a small advantage to make a lot of money.
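For reference, VWAP illustrates why no historic store is needed: it is computed entirely from the day’s own trades as sum(price × size) / sum(size). A minimal Python sketch of the arithmetic, with invented example trades:

```python
def vwap(trades):
    """Volume-weighted average price: sum(price*size) / sum(size)."""
    total_value = sum(t["price"] * t["size"] for t in trades)
    total_size = sum(t["size"] for t in trades)
    return total_value / total_size

# Illustrative intra-day trades for one symbol.
trades = [
    {"price": 101.5, "size": 200},
    {"price": 101.7, "size": 100},
    {"price": 101.6, "size": 300},
]
print(round(vwap(trades), 3))  # prints 101.583
```

Everything the calculation needs arrives on the stream itself, which is why a pure event processor handles it perfectly well without any historic database behind it.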
So the rapid retrieval of historic data is the big advantage that Kx can offer (indeed, its average user has a 5TB database) and this will determine where it can most effectively compete with conventional event processing vendors. Primarily this is in financial services, but one can imagine it also applying to telecommunications, as well as to certain applications within the security and defence industries, amongst others.