The requirements of a security analytics platform
In order to be effective, a security analytics system needs to integrate relevant information from multiple sources, determine the relative importance of the events seen in that information, and project the state of the system based on those events. The model must be accurate and updated in real time to reflect current events, so that an organisation can gauge the status of every system and device on the network as it stands and gain the situational awareness needed for better decision making.
Security analytics platforms should be based on a data-centric architecture. All systems should contribute data to a central collection point, with the data normalised so that it appears to come from a single source. The central collection point should be a centralised database that every application requiring the data can access, regardless of the distributed nature of the system. It should deploy data-centric middleware that aggregates, correlates, cleanses and processes all sensor data in real time, since situational awareness depends on the timeliness of the information collected.
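As a minimal sketch of this kind of normalisation, the following Python fragment maps events from two differently formatted feeds onto a single common record before they reach the central store. The feed names, field names and schema are illustrative assumptions, not a reference to any particular product.

```python
from datetime import datetime, timezone

# Hypothetical common schema used at the central collection point.
COMMON_FIELDS = ("source", "sensor_id", "metric", "value", "timestamp")

def normalise_building_feed(raw):
    """Map an event from an assumed building-management feed onto the common schema."""
    return {
        "source": "building_mgmt",
        "sensor_id": raw["deviceId"],
        "metric": raw["reading"]["type"],          # e.g. "temperature"
        "value": float(raw["reading"]["value"]),
        "timestamp": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc),
    }

def normalise_scada_feed(raw):
    """Map an event from an assumed SCADA feed onto the same schema."""
    return {
        "source": "scada",
        "sensor_id": raw["tag"],
        "metric": raw["measurement"],
        "value": float(raw["val"]),
        "timestamp": datetime.fromisoformat(raw["time"]),
    }
```

Once every feed has been reduced to the same record shape, downstream analytics can treat the distributed sensors as though they were a single source, which is the property the data-centric middleware is there to provide.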
Event processing technology provides a way of tracking and analysing the event information sent from sensors to the central database, so that action can be taken when an event indicates a problem, such as a sensor reporting that a temperature threshold has been exceeded. For distributed systems such as sensor networks, a fairly new technology is complex event processing, which can collect event feeds from multiple sources and analyse large volumes of data in real time to provide a fast response when problems are encountered.
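A very simple event-processing rule of the kind described above might look like the following sketch. The threshold value, event fields and alert handler are assumptions made for illustration; a real complex event processing engine would express such rules in its own query language.

```python
TEMP_THRESHOLD = 85.0  # assumed threshold in degrees Celsius

def process_event(event, raise_alert):
    """Inspect a single normalised sensor event and act when a threshold is exceeded."""
    if event["metric"] == "temperature" and event["value"] > TEMP_THRESHOLD:
        raise_alert({
            "type": "threshold_exceeded",
            "sensor_id": event["sensor_id"],
            "value": event["value"],
            "timestamp": event["timestamp"],
        })

# In a live deployment this rule would be applied to a continuous stream, e.g.
#   for event in event_stream:
#       process_event(event, alert_queue.put)
```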
Complex event processing technologies not only perform traditional database and data mining tasks such as data validation, cleaning, enrichment and analysis, but can also query, filter and transform data from multiple sensors so that events are detected in real time. They automate pattern monitoring, allowing events to be correlated so that response mechanisms can be developed and critical events isolated for efficient remediation. Events can then be prioritised according to their criticality, so that those with the highest impact are dealt with first, while many routine tasks are automated to reduce the burden on human operators.
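The sketch below shows, in much simplified form, how such correlation and prioritisation might work: events from the same sensor within a short time window are grouped, and each group is ranked by the highest criticality it contains. The window size and criticality scale are assumptions for illustration only.

```python
from collections import defaultdict

CRITICALITY = {"info": 1, "warning": 2, "critical": 3}  # illustrative scale
WINDOW_SECONDS = 60                                      # illustrative correlation window

def correlate_and_prioritise(events):
    """Group events by sensor within a time window, then rank groups by criticality."""
    groups = defaultdict(list)
    for event in sorted(events, key=lambda e: e["timestamp"]):
        window = int(event["timestamp"].timestamp()) // WINDOW_SECONDS
        groups[(event["sensor_id"], window)].append(event)

    # Highest-impact groups first, so the most critical incidents surface at the top.
    return sorted(
        groups.values(),
        key=lambda grp: max(CRITICALITY.get(e.get("severity", "info"), 1) for e in grp),
        reverse=True,
    )
```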
For those events to be understandable to human operators, the technology should provide visualisation capabilities that give comprehensive visibility over all events that occur, allow specific events of interest to be examined in greater detail, and present data at a high level for overall awareness of the situation. It should give an aggregated view of all events from all sources on the network and add context to events, such as time and location.
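One example of the kind of aggregation that might feed such a view is sketched below: events are counted by location and severity for the high-level picture, while a drill-down returns the full detail for a single location of interest. The field names carry over from the earlier sketches and are assumptions, not a prescribed schema.

```python
from collections import Counter

def summarise_for_dashboard(events):
    """Aggregate events by (location, severity) for a high-level overview."""
    return Counter(
        (e.get("location", "unknown"), e.get("severity", "info")) for e in events
    )

def drill_down(events, location):
    """Return the full detail for a single location of interest."""
    return [e for e in events if e.get("location") == location]
```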
In providing context, metadata is as important as the data itself and should be captured by the system, as it offers a far richer source of context than the raw readings alone. It also allows data to be compared directly across heterogeneous networks, so that like is compared with like. Not only should the system capture all associated metadata, it should also include that information in its analysis of the data.
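A minimal illustration of capturing metadata alongside the reading itself might look like this; the particular metadata fields are assumptions, and the point is simply that they travel with the event and remain available to the analysis rather than being stripped out on ingestion.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SensorEvent:
    sensor_id: str
    metric: str
    value: float
    timestamp: datetime
    # Metadata captured with the reading, e.g. location, network, firmware, units.
    metadata: dict = field(default_factory=dict)

def comparable(a: SensorEvent, b: SensorEvent) -> bool:
    """Only compare like with like: same metric measured in the same units."""
    return a.metric == b.metric and a.metadata.get("units") == b.metadata.get("units")
```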
One further requirement of security analytics platforms is that, although they should collect event data in real time, they should also store historical data in an easily retrievable form. The value of correlating real time event information with historical information can be seen in the use of such data for predictive maintenance of equipment on which sensors have been mounted. Continuous real time monitoring shows the current status of the equipment, but only through correlation with historical information can an operator determine whether or not it is operating within normal bounds. For example, a sensor reporting a high temperature may indicate that the equipment is malfunctioning, but historic trends may show that the temperature tends to rise when the output of that equipment is increased to meet spikes in demand. This helps operators make better informed decisions about when maintenance is actually required or other action needs to be taken. Correlating historic and real time information will also enable the organisation to prevent unexpected equipment failure by spotting long-term trends in usage, aiding asset utilisation and even extending the lifetime of the equipment by ensuring that it is kept in good working order.
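A simple sketch of that correlation follows: the current reading is compared with a rolling historical baseline rather than a fixed limit, so that a temperature which is high in absolute terms but normal for the current level of demand is not flagged as a fault. The window size and tolerance are illustrative assumptions.

```python
from statistics import mean, stdev

HISTORY_WINDOW = 500   # number of historical readings to keep (assumed)
TOLERANCE = 3.0        # standard deviations treated as abnormal (assumed)

def is_abnormal(current_value, history):
    """Flag a reading only if it falls well outside the historical pattern."""
    if len(history) < 2:
        return False  # not enough history to judge
    baseline = mean(history)
    spread = stdev(history) or 1e-9
    return abs(current_value - baseline) / spread > TOLERANCE

# `history` would typically be the last HISTORY_WINDOW readings taken under
# comparable operating conditions (e.g. similar output levels), retrieved
# from the historical store described above.
```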
By correlating historic and real time information, the data can reveal trends that can be checked against best practice policies and security controls, so that refinements can be made to the system and compliance with regulatory requirements can be demonstrated over time rather than as a snapshot. Operators will also be better able to detect and respond to security incidents, with all information cross-correlated for early breach detection and notification, while historic information will allow forensic investigation of incidents that have occurred, so that the organisation can learn from them and take steps to remedy such situations.
Security analytics platforms that collect, monitor, analyse and report on information from throughout the organisation will be a great aid in providing the visibility that organisations need across extended networks in order to make more informed decisions and better manage the overall risks that they face.