The problem with SIEM 3
In the third article in this series on the deficiencies of SIEM architectures I want to turn my attention to the front-end. The situation is this: you have a bunch of events occurring all at once and you need to determine whether they constitute an attack. If it is an attack, you want to know what sort of attack it is (low-and-slow, man-in-the-middle, denial of service: there are lots of them) or whether it is an insider threat of some sort. Once you know that, you want to take appropriate remedial action as quickly as possible.
The first problem with the typical SIEM approach is that query processes (and in this case we are talking about real-time query processes) are slow. This is because the approach is the traditional one: you parse and normalise the data, and store it, along with the relevant indexes, in a database or file store; only when you have done all of that can you query the data. This creates unnecessary delays. Wouldn't it be much more intelligent to run the data through a CEP (complex event processing) engine, so that you can run all of your queries (pattern analysis, anomaly detection and so on) against the stream before you commit the data? And given that the largest installations the SIEM vendors have in place, typically running across multiple servers, handle no more than 250,000 events per second, this should be a walk in the park for the CEP vendors, who are well used to handling twice that volume or more.
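To make the distinction concrete, here is a minimal sketch in Python of the CEP-style ordering: a detection query (in this case a sliding-window check for brute-force login attempts) runs against each event as it arrives, and persistence happens afterwards. The event fields, thresholds and alert action are illustrative assumptions on my part, not any vendor's actual schema or engine.

```python
from collections import defaultdict, deque

# A minimal sketch of the CEP idea: evaluate detection queries on events
# while they are still in flight, and persist them afterwards - the reverse
# of the parse/normalise/store/index/query sequence. The event fields and
# thresholds below are illustrative assumptions, not any vendor's schema.
WINDOW_SECONDS = 60
FAILED_LOGIN_THRESHOLD = 20

failures = defaultdict(deque)  # source IP -> timestamps of failed logins

def on_event(event, store):
    """Run the in-stream query first, then commit the event to storage."""
    now = event["timestamp"]
    if event["type"] == "login_failure":
        window = failures[event["source_ip"]]
        window.append(now)
        # Evict timestamps that have fallen outside the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= FAILED_LOGIN_THRESHOLD:
            raise_alert(event["source_ip"], len(window))
    store.append(event)  # persistence happens after the query, not before

def raise_alert(source_ip, count):
    print(f"ALERT: {count} failed logins from {source_ip} in {WINDOW_SECONDS}s")
```

The point is the ordering: the detection runs before the write, so the parse-store-index latency no longer sits between an event arriving and a query seeing it.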
The second problem at the front-end is that the standard method for recognising patterns and anomalies is to employ one or more correlation engines within your software architecture, running against rules that you have developed for recognising these various attacks and threats and for determining what the appropriate actions should be. However, as with all rules-based systems, building the rules typically takes consulting time in the first instance, is always reactive (in other words, it doesn't cater for new types of attack you haven't seen before) and requires continuous maintenance. In short, it is slow, incomplete and expensive.
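To illustrate why this is inherently reactive, here is a hypothetical sketch of what such correlation rules look like in practice: each rule fires only on the precise pattern its author anticipated. The rule names, event types, thresholds and actions are all invented for illustration.

```python
from collections import Counter

# A hypothetical rule set for a correlation engine. Each rule encodes one
# pattern its author anticipated; anything the rules do not describe goes
# undetected until a new rule is written, tested and deployed.
RULES = [
    {
        "name": "possible port scan",
        "match": lambda e: e["type"] == "connection_refused",
        "threshold": 100,  # matching events from one source before firing
        "action": "block source at firewall",
    },
    {
        "name": "repeated privilege escalation failures",
        "match": lambda e: e["type"] == "priv_escalation_failure",
        "threshold": 5,
        "action": "notify security analyst",
    },
]

def correlate(events):
    """Count matching events per (rule, source) and fire at the threshold."""
    counts = Counter()
    for event in events:
        for rule in RULES:
            if rule["match"](event):
                key = (rule["name"], event["source_ip"])
                counts[key] += 1
                if counts[key] == rule["threshold"]:
                    print(f"{rule['name']} from {event['source_ip']}: "
                          f"{rule['action']}")
```

Every new attack type means another entry in that rule set, which is exactly the consulting time and maintenance burden described above.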
I am glad to say that I have talked with one vendor that appreciates the advantage of using CEP and is likely to move in this direction, and there is one company, Tier-3, that already uses event stream processing. Moreover, rather than using a rules-based approach, it detects behavioural anomalies by building up a baseline of expected activity over time and then comparing actual events with those that are expected. In effect, the solution is self-learning, which is a much easier approach to maintain and keep up-to-date than a rules-based one.
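By way of contrast with the rules sketch above, here is a rough sketch of the baseline-and-compare idea: learn a statistical profile of normal activity per entity and flag deviations from it. The metric (hourly event counts per user) and the three-sigma test are my own illustrative choices; this is not a description of Tier-3's actual implementation.

```python
import math

# Sketch of the baseline-and-compare idea: learn what "normal" looks like
# per entity, then flag activity that deviates from it. The metric and the
# 3-sigma test are illustrative choices, not Tier-3's implementation.
class Baseline:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford's method)

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x, sigmas=3.0):
        if self.n < 30:  # not enough history yet to judge
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(x - self.mean) > sigmas * std

baselines = {}  # e.g. one profile of hourly event counts per user

def observe(user, hourly_count):
    b = baselines.setdefault(user, Baseline())
    anomalous = b.is_anomalous(hourly_count)
    b.update(hourly_count)  # the baseline keeps learning with every observation
    return anomalous
```

Because the profile updates itself with every observation, there is no rule set to commission or maintain; the trade-off is that the system needs enough history before it can judge anything.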
There doesn't seem to be as much momentum behind a move to using CEP as there is towards analytic warehousing, but there is some awareness of its benefits. With Microsoft's introduction of StreamInsight (as a part of SQL Server 2008 R2) we can expect to see downward pressure on prices in the CEP market, and this may help to drive wider adoption in a number of areas including, I hope, SIEM.