Monday 13 May 2013

The Big Security Data Challenge


Big Data is a challenge not only for customer-facing organizations but for security teams as well. Over the past decade, the demand for stronger security has driven the collection and analysis of ever-larger volumes of event and security-context data. Security Information and Event Management (SIEM) has long been the core tool security teams depend on to manage and process this information. As security data volume has grown, however, the relational and time-indexed databases underpinning SIEM have struggled under the event and analytics load. Slow performance, an inability to manage data effectively, and the extremely high cost of scaling legacy SIEM systems have raised doubts about the potential success of SIEM implementations. This paper addresses the Big Security Data challenge and highlights the key criteria organizations need to consider for processing security information in light of today's dynamic threat landscape.

Big Security Data

Why security data has become a Big Data problem is obvious to anyone who has tried to manage a legacy SIEM, particularly in light of the definition of Big Data: data sets that grow so large that they become awkward to work with using existing database management tools. Challenges include capture, storage, search, sharing, analytics, and visualization.
With this in mind, it's easy to see that IT and IT security have repeatedly wrestled with Big Data challenges. In fact, SIEM itself was invented to address a fundamental lack of data-processing capability. In the early 2000s, the volume of security information, and the accuracy its analysis demanded, exceeded the capability of existing technologies, and the lack of centralized visibility created a strong need for automated data analysis.
Enter the early SIEM tools, which were designed to handle firewall, vulnerability assessment, and intrusion detection system (IDS) data, with the primary purpose of reducing IDS false positives and enabling log investigation. These early SIEM vendors built on existing database management tools and layered specialized analytics on top of event data, enabling organizations to eliminate a large number of IDS false positives.
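The core of that analytic was conceptually simple. As a hedged illustration only (the data shapes, field names, and triage function below are hypothetical sketches, not any vendor's actual engine), the following Python sketch shows the classic cross-check: an IDS alert for an exploit targeting a host that a vulnerability scan shows is not exposed to that exploit can be down-ranked as a probable false positive.

```python
# Minimal sketch of IDS false-positive reduction by correlating alerts
# with vulnerability-assessment data. Data shapes are hypothetical;
# real SIEMs normalize many formats into a common event schema.

# Vulnerability scan results: host -> set of CVEs the host is exposed to.
vuln_scan = {
    "10.0.0.5": {"CVE-2002-0649", "CVE-2003-0352"},
    "10.0.0.9": set(),  # patched host, no known exposures
}

# Raw IDS alerts: each references the CVE its signature targets.
ids_alerts = [
    {"dst": "10.0.0.5", "sig": "MS-SQL Slammer", "cve": "CVE-2002-0649"},
    {"dst": "10.0.0.9", "sig": "MS-RPC DCOM",    "cve": "CVE-2003-0352"},
]

def triage(alerts, scan):
    """Keep alerts whose target host is actually vulnerable to the exploit."""
    confirmed, likely_fp = [], []
    for alert in alerts:
        exposures = scan.get(alert["dst"], set())
        (confirmed if alert["cve"] in exposures else likely_fp).append(alert)
    return confirmed, likely_fp

confirmed, likely_fp = triage(ids_alerts, vuln_scan)
print(f"escalate: {len(confirmed)}, suppress as probable FPs: {len(likely_fp)}")
```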
While SIEM was initially adopted by security-conscious industries, such as large financial services and government, the broader market did not take off until the mid-2000s, when Sarbanes-Oxley audits became a reality. Overnight, event management became a core component of the "control framework" in Sarbanes-Oxley section 404, and internal and external auditors were requiring it. Sarbanes-Oxley was quickly followed by PCI DSS for retail organizations and card processors, another major regulation that required log review to pass audit, automation that SIEM promised to provide. Then the regulatory explosion began, and the SIEM market grew with it into a billion-dollar market.
Compliance not only increased SIEM adoption but also drove the deployment of additional security instrumentation and higher logging levels, simultaneously swelling the flood of data SIEM had to manage and further stretching its analytic capabilities. Legacy SIEM systems had always struggled with increases in the volume and correlation of security data; this dramatic growth in data and correlation requirements further exposed the inherent scale and analytic limitations of these solutions.
Fast forward to 2012. The demands on SIEM systems continue to intensify. Devastating data breaches at organizations that had passed purportedly stringent compliance-based security audits have pushed IT security to move from "check-the-box" compliance to comprehensive security programs spanning perimeter, insider, data, and system security. In response to these increased security controls, innovative and persistent attackers have made their attack methods more sophisticated, creating a need for SIEM to detect low-and-slow attacks, rapidly spot anomalies in event flow, and gain contextual information about data, applications, and databases.
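To make "low-and-slow" concrete: an attacker who spreads failed logins thinly over weeks never trips an hourly rate rule, so detection has to aggregate per source over a much longer horizon. The sketch below is a minimal, hypothetical illustration of that idea; the event shape, thresholds, and function name are assumptions for the example, not a real SIEM's API.

```python
# Minimal sketch of low-and-slow detection: a per-source count that never
# exceeds an hourly threshold can still be anomalous when accumulated over
# days. Event shape and thresholds here are hypothetical.

from collections import defaultdict
from datetime import datetime, timedelta

HOURLY_THRESHOLD = 20   # what a naive per-hour rule would alert on
WINDOW_DAYS = 14        # long horizon where slow attacks surface
SLOW_THRESHOLD = 200    # cumulative failures that warrant review

def find_low_and_slow(events, now):
    """events: iterable of (timestamp, source_ip) failed-login records."""
    cutoff = now - timedelta(days=WINDOW_DAYS)
    hourly = defaultdict(int)    # (source, hour bucket) -> count
    total = defaultdict(int)     # source -> count over the whole window
    for ts, src in events:
        if ts < cutoff:
            continue
        hourly[(src, ts.replace(minute=0, second=0, microsecond=0))] += 1
        total[src] += 1
    # Sources that already trip the hourly rule are caught by it anyway;
    # report only the quiet-but-persistent ones a rate rule would miss.
    noisy = {src for (src, _), n in hourly.items() if n > HOURLY_THRESHOLD}
    return [src for src, n in total.items()
            if n >= SLOW_THRESHOLD and src not in noisy]

# Example: one source makes two failed logins per hour for two weeks;
# no single hour looks alarming, but the aggregate does.
now = datetime(2013, 5, 13)
events = [(now - timedelta(hours=h, minutes=m), "203.0.113.7")
          for h in range(24 * WINDOW_DAYS) for m in (0, 15)]
print(find_low_and_slow(events, now))  # -> ['203.0.113.7']
```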
