10 June 2013

The yin and yang of metrics


Many aspects of information security that would be good to measure are quite complex.  There are often numerous factors involved, and various facets of concern.  Take ‘security culture’ for example: it is fairly straightforward to measure employees’ knowledge of and attitudes towards information security using a survey approach, and that is a useful metric in its own right.  It becomes more valuable if we broaden the scope to compare and contrast different parts of the organization, using the same survey approach and the same survey data but analyzing the numbers in more depth.  We might discover, for instance, that one business unit or department has a very strong security culture, whereas another is relatively weak.  Perhaps we can learn something useful from the former and apply it to the latter.  This is what we mean by ‘rich’ metrics.  Basically, it involves teasing out the relevant factors and getting as much useful information as we can from individual metrics, analyzing and presenting the data in ways that facilitate and suggest security improvements.
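
By way of illustration, here is a minimal sketch in Python of slicing the same survey data by business unit.  The data layout, scores and thresholds are entirely made up for the example; the point is simply that one dataset, analyzed a little more deeply, yields a richer picture:

# A minimal sketch of a 'rich' culture metric: the same survey data,
# broken down by business unit to expose strong and weak pockets.
# The data layout, scores and thresholds are hypothetical.
from statistics import mean

# (business_unit, score) pairs, e.g. averaged Likert responses per employee
responses = [
    ("Finance", 4.2), ("Finance", 3.8), ("Finance", 4.5),
    ("Operations", 2.1), ("Operations", 2.9), ("Operations", 2.4),
    ("IT", 3.9), ("IT", 4.1),
]

by_unit = {}
for unit, score in responses:
    by_unit.setdefault(unit, []).append(score)

for unit, scores in sorted(by_unit.items()):
    avg = mean(scores)
    flag = "strong" if avg >= 4.0 else "weak" if avg < 3.0 else "middling"
    print(f"{unit}: {avg:.2f} ({flag}, n={len(scores)})")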

‘Complementary’ metrics, on the other hand, are sets of distinct but related metrics that, together, give us greater insight than any individual metric taken in isolation.  Returning to the security culture example, we might supplement the employee cultural survey with metrics concerning security awareness and training activities, and compliance metrics that measure actual behaviors in the workplace.  These measure the same problem space from different angles, helping us figure out why things are the way they are. 
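
To make that more tangible, here is a hedged sketch of three complementary indicators viewed side by side (the indicator names, scales and numbers are ours, purely illustrative).  Notice that a gap between what employees say in the survey and what the compliance figures show is itself a useful finding:

# Three complementary indicators per business unit, each normalized to 0-1.
# A divergence between claimed attitudes (survey) and observed behavior
# (compliance) flags a say/do gap worth investigating.
indicators = {
    # unit: (culture_survey, training_completion, compliance_rate)
    "Finance":    (0.85, 0.90, 0.88),
    "Operations": (0.80, 0.55, 0.40),  # says the right things, acts otherwise
}

for unit, (survey, training, compliance) in indicators.items():
    if survey - compliance > 0.2:
        print(f"{unit}: attitude/behavior gap - investigate why")
    else:
        print(f"{unit}: indicators broadly consistent")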

Complementary metrics are also useful in relation to critical controls, where control failure would be disastrous.  If we are utterly reliant on a single metric, even a rich metric, to determine the status of the control, we are introducing another single point of failure.  And, yes, metrics do sometimes fail.  An obvious solution (once you appreciate the issue, that is!) is to make both the controls and the metrics more resilient and trustworthy, for instance through redundancy.  Instead of depending on, say, a single technical vulnerability scanner tool to tell us how well we are doing on security patching, we might use scanners from different vendors, comparing the outputs for discrepancies.  We could also measure patching status by a totally different approach, such as patch latency or half-life (the time taken from the moment a patch is released to apply it successfully to half of the applicable population of systems), or a maturity metric looking at the overall quality of our patching activities, or metrics derived from penetration testing.  Even if the vulnerability scanner metric is nicely in the green zone, an amber or red indication from one of the complementary metrics should raise serious questions, hopefully in good time to avert disaster.
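
To pin down the half-life idea, here is a small illustrative calculation in Python.  The release date, timestamps and data layout are hypothetical:

# Patch half-life: elapsed time from the patch's release until half of
# the applicable systems have it successfully applied.  All data below
# is made up for the example.
from datetime import datetime
import math

released = datetime(2013, 6, 1, 0, 0)  # when the vendor released the patch

# When each applicable system was successfully patched; None = not yet
applied = [
    datetime(2013, 6, 1, 9, 0),
    datetime(2013, 6, 2, 14, 0),
    datetime(2013, 6, 3, 8, 0),
    datetime(2013, 6, 5, 16, 0),
    None,
    None,
]

population = len(applied)
needed = math.ceil(population / 2)  # systems that make up half the estate
latencies = sorted(t - released for t in applied if t is not None)

if len(latencies) >= needed:
    # The half-life is the latency of the system that tipped us past 50%
    print(f"Patch half-life: {latencies[needed - 1]}")
else:
    print("Half-life not reached yet: fewer than half the systems are patched")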

A natural extension of this concept would be to design an entire suite of security metrics using a systems engineering approach.  We expand on this idea in the book, describing an information security measurement system as an essential component of, and natural complement to, an effective information security management system.
