04 December 2012

PRAGMATIC security metrics for competitive advantage

Blogging recently about Newton's three laws of motion, we mentioned that organizations using PRAGMATIC metrics have competitive advantages over those that don't.  Today, we'll expand further on that notion.

Writing in IT Audit back in 2003, Will Ozier discussed disparities in the way information security and other risks are measured and assessed.  Not much seems to have changed in the nine years since it was published.  Ozier suggested a "central repository of threat-experience (actuarial) data on which to base information-security risk analysis and assessment".  Today, privacy breaches are collated and reported fairly systematically, thanks largely to the privacy breach disclosure laws, but breaches of that kind are (probably) a tiny proportion of all information security incidents: in my experience, things such as information loss, data corruption, IP theft and fraud are far more prevalent and can be extremely damaging.  Since these are not necessarily reportable incidents, most never become public knowledge, hence we don't have reliable base data from which to calculate the associated risks with any certainty.

"In my experience" is patently not a scientific basis however.  I doubt that adding "Trust me" would help much either.

Talking of unscientific, there is no shortage of surveys, blogs and other sources of anecdotal information about security incidents.  However, the statistics are of limited value for making decisions about information security risks.  The key issue is bias: entire classes of information security incident may not even be recognized as such.  Take human errors, for instance.  Errors that lead to privacy breaches may be reported, but for all sorts of reasons there is a reluctance to blame individuals, so the cause is often left unstated or ascribed to something else.  Most such incidents probably remain undetected altogether, although some errors are noticed and quietly corrected.

However, while we lack publicly-available data about most information security incidents, organizations potentially have access to a wealth of internal information, provided that information security incidents are reported routinely to the Help Desk or wherever.  Information security reviews, audits and surveys within the organization can provide yet more data, especially on relatively serious incidents, and especially in large, mature organizations.

OK, so where is this rambling assessment leading us in relation to information security metrics?  Well in case you missed it, that "wealth of internal information" was of course a reference to security metrics.

And what have security metrics, PRAGMATIC security metrics specifically, got to do with competitive advantage?  Let me explain.

Aside from selecting or designing information security metrics carefully from the outset, management should review the organization's metrics from time to time to confirm and, where necessary, improve, supplement or retire them.  This should ideally be a systematic process, using metametrics (information about metrics) to examine the metrics, rationally comparing their value against the information needs they are meant to serve.  Fair enough, but why should they use PRAGMATIC metametrics?  Won't SMART metrics do?
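To make that less abstract, here is a minimal sketch in Python of how such a metametric review might be mechanized.  The nine criteria follow the PRAGMATIC acronym; the candidate metrics and all the scores are hypothetical, invented purely to show the mechanics:

    # A minimal sketch of PRAGMATIC-style metametric scoring.
    # The candidate metrics and ratings below are hypothetical.

    CRITERIA = [
        "Predictiveness", "Relevance", "Actionability", "Genuineness",
        "Meaningfulness", "Accuracy", "Timeliness", "Independence",
        "Cost-effectiveness",
    ]

    # Each candidate metric is rated 0-100 against each criterion.
    candidates = {
        "Patch latency (days)":      [80, 90, 85, 70, 75, 60, 80, 65, 70],
        "Count of AV signatures":    [10, 20, 15, 60, 25, 80, 70, 50, 90],
        "Awareness quiz pass rate":  [60, 75, 70, 55, 85, 65, 60, 45, 80],
    }

    def pragmatic_score(ratings):
        """Overall score: the simple mean of the nine criterion ratings."""
        return sum(ratings) / len(ratings)

    # Rank the candidates so strong metrics float to the top and weak
    # ones become candidates for improvement, supplementing or retirement.
    for name, ratings in sorted(candidates.items(),
                                key=lambda kv: pragmatic_score(kv[1]),
                                reverse=True):
        print(f"{name:26s} {pragmatic_score(ratings):5.1f}")

In practice you might weight the criteria differently for different audiences, but even a simple mean is enough to separate the keepers from the also-rans.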

The Accuracy, Independence and Genuineness of measurements are important concerns, especially if there might be systematic biases in the way the base data are collected or analyzed, or even deliberate manipulation by someone with a hidden agenda and an ax to grind.  This hints at the possibility of analyzing the base data or measurement values for patterns that might indicate bias or manipulation (Benford's law springs immediately to mind) as well as for genuine relationships that may have Predictive value.  It also hints at the need to check the quality and reliability of individual data sources: the variance or standard deviation of a source's measurements, for instance, is a guide to its variability and, perhaps, its integrity or trustworthiness.  Do you routinely review and reassess your security metrics?  Do you actually go through the process of determining which ones worked well, and which didn't?  Which ones were trustworthy guides to reality, and which ones lied?  Do you think through whether there are issues with the way the measurement data are gathered, analyzed, presented, interpreted and used - or do you simply discard hapless metrics that haven't earned their keep, without truly understanding why?
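To make the Benford's law suggestion concrete, here is a small illustrative sketch in Python.  The sample values are invented; in practice you would feed in your own raw measurements, ideally ones spanning several orders of magnitude (where Benford's law applies best):

    import math
    from collections import Counter

    def first_digit(x):
        """Return the leading non-zero digit of a number."""
        return int(str(abs(x)).lstrip("0.")[0])

    def benford_deviation(values):
        """Compare observed first-digit frequencies with Benford's law.

        Returns (digit, observed, expected) triples; large, systematic
        deviations may hint at biased or manipulated base data.
        """
        counts = Counter(first_digit(v) for v in values if v)
        n = sum(counts.values())
        return [(d, counts.get(d, 0) / n, math.log10(1 + 1 / d))
                for d in range(1, 10)]

    # Hypothetical raw measurements, e.g. per-incident cost estimates.
    sample = [1204, 1830, 2450, 1122, 980, 1376, 3210, 1499, 1045, 2760]
    for digit, obs, exp in benford_deviation(sample):
        print(f"digit {digit}: observed {obs:.2f}, expected {exp:.2f}")

Deviations from the expected frequencies don't prove manipulation, of course, but they are a cheap prompt to go and look harder at how the numbers were produced.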

Relevance and Timeliness are both vital considerations for all metrics when you think about it.  How many security situations have been missed because some droplet of useful information was submerged in a tsunami of junk?  How many times have things been neglected because the information arrived too late to make the necessary decisions?  To put that another way, how much more efficiently could you direct and control information security if you had a handle on the organization's real security risks and opportunities, right now?  

In respect of competitive advantage, Cost-effectiveness pretty much speaks for itself.  It's all very well 'investing' in a metrics dashboard gizmo with all manner of fancy dials and glittery indicators, but have you truly thought through the full costs, not just of generating the displays but of using them?  Are the measurements merely nice to know, in a coffee-table National Geographic kind of way, or would you be stuffed without them?  What about the opportunity cost of being unable to use, or simply discounting, other perfectly valid and useful metrics that, for some reason, don't look particularly sexy in the dashboard format?  Notice that we're not railing against expensive dashboards per se, provided they more than compensate for their costs in terms of the value they generate for the organization - more so than other metrics options might have achieved.  Spreadsheets, rulers and pencils have a lot going for them, particularly if they help focus attention on the information content rather than its form.

In contrast to the others, Meaningfulness is a fairly subtle metametric.  We interpret it specifically as a measure of the extent to which a given information security metric 'just makes sense' to its intended audience.  Is the metric self-evident, smack-the-forehead blindingly obvious even, or does it need to be painstakingly described, at length, by a bearded bloke in a white lab coat with frizzy hair, attention-deficit disorder and wild, staring eyes?  A metric's inherent Meaningfulness is a key factor in its perceived value, relevance and importance to the recipient, which in turn affects the influence the numbers truly have over what happens next.  A Meaningful metric is more likely to be believed, trusted and hence actually used as a basis for decisions than one which is essentially meaningless.  Let the competitors struggle valiantly on with their voluminous management reports, tedious analyses and, frankly, dull appendices stuffed with numbers that nobody values.  We'll settle for the Security Metrics That Truly Matter, thanks.

The Timeliness criterion is also quite subtle.  In the book we explain how the concept of feedback and hysteresis applies to all forms of control, although we have not seen it described before in this context.  A typical manifestation of hysteresis involves temperature controls using relatively crude electromechanical or electronic sensors and actuators.  As the temperature reaches a set-point, the sensor triggers an actuator such as a valve or heating element to change state (opening, closing, heating or cooling as appropriate).  The temperature then gradually changes until it reaches another set-point, whereupon the sensor triggers the actuator to revert to its original state.  The temperature therefore cycles constantly between those set-points, which can be markedly far apart in badly designed or implemented control systems.  Hysteresis loops apply to information security management as well as temperature regulation: for instance, adjusting a firewall's settings between "too secure" and "too insecure" works far better if the metrics on firewall traffic and security exceptions are available and used in near-real-time, rather than on the basis of, say, a monthly firewall report, especially if the report takes a week or three to compile and present!  The point is that network security incidents may exploit that gap or delay between "too secure" and "too insecure", so Timeliness can have genuine security and business consequences.
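As a toy illustration of that cycling behavior, here is a minimal two-set-point controller in Python.  The set-points and heating/cooling rates are arbitrary, chosen purely to show the hysteresis loop:

    # Toy simulation of a two-set-point (hysteresis) controller,
    # like a crude thermostat.  All numbers are illustrative only.

    LOW_SETPOINT = 18.0    # heater switches ON at or below this
    HIGH_SETPOINT = 22.0   # heater switches OFF at or above this

    temperature = 20.0
    heating = False

    for minute in range(30):
        # State only changes at the set-points, so the temperature
        # cycles endlessly between them.
        if temperature <= LOW_SETPOINT:
            heating = True
        elif temperature >= HIGH_SETPOINT:
            heating = False

        temperature += 0.8 if heating else -0.6  # gain vs. loss per minute
        state = "ON " if heating else "off"
        print(f"t={minute:2d}  {temperature:5.1f} C  heater {state}")

The wider apart the set-points, or the slower the measurements arrive, the bigger the swings - which is precisely the Timeliness point: near-real-time metrics tighten the loop.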

Finally for today, spurious precision is a factor relating to several of the PRAGMATIC criteria (particularly Accuracy, Predictiveness, Relevance, Meaningfulness, Genuineness and Cost-effectiveness).  We're talking about situations where the precision of reporting exceeds the precision of measurement and/or the precision needed to make decisions.  Have your competitors even considered this when designing their security metrics?  Do they obsess over marginal and irrelevant differences between numbers derived from inherently noisy measurement processes, or appreciate that "good enough for government work" can indeed be good enough, much less distracting and eminently sensible under many real-world circumstances?  A firm grasp of statistics can help here, but it's not necessary for everyone to be a mathematics guru, so long as someone who knows their medians from their chi-squareds can be trusted to spot when assumptions, especially implicit ones, no longer hold true.
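A quick back-of-the-envelope check illustrates the point.  This sketch assumes a simple binomial model for the measurement noise, and the figures are hypothetical:

    import math

    # Hypothetical spurious precision: two monthly figures for "% of
    # incidents caused by human error", each based on ~120 incidents.
    n = 120
    p1, p2 = 0.4237, 0.4381   # reported to four significant figures!

    # Standard error of the difference under a simple binomial model.
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    diff = p2 - p1

    print(f"difference: {diff:.4f}, standard error: {se:.4f}")
    # difference ~0.014 vs. standard error ~0.064: the apparent change
    # is buried in measurement noise, so those four digits are spurious.
    # "About 40%" carries all the real information.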

We'll leave you with a parting thought.  Picture yourself presenting a set of PRAGMATIC security metrics to, say, your executive directors.  Imagine the confidence you will gain from knowing that the metrics you are discussing have been carefully selected and honed for that audience because they are Predictive, Relevant, Actionable ... and all that.  Imagine the freedom to concentrate on the knowledge and meaning, and thus the business decisions about security, rather than on the numbers themselves.  Does that not give you a clear advantage over your unfortunate colleagues at a competitor across town, struggling to explain, let alone derive any meaning from, some near-random assortment of pretty graphs and tables, glossing over the gaps and inconsistencies as if they don't matter?
