10 April 2013

Security metric #52: external lighting

Security Metric of the Week #52: proportion of facilities that have adequate external lighting

This week's example metric represents an entire class of metrics measuring the implementation of information security controls.  In this particular example, the control being measured is the provision of external security lighting, intended to deter intruders and vandals from the facilities.  It is obviously a physical security control, one of many.  The metric could be used to compare and contrast facilities, for example in a large group with several operating locations.  While we've picked on external lighting for the example, the same form of metric could be used to measure almost any control.
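
As a concrete illustration of that class, measuring such a metric boils down to a simple ratio: count the facilities assessed as meeting the criterion and divide by the total.  The sketch below uses a toy list of facilities and yes/no assessments, all invented purely for illustration.

```python
# Minimal sketch of "proportion of facilities that have adequate external lighting".
# Facility names and assessments are invented for illustration only.
facilities = {
    "Head office":        True,   # assessed as having adequate external lighting
    "Regional warehouse": False,
    "Data center":        True,
    "Branch office":      False,
}

adequate = sum(1 for ok in facilities.values() if ok)
proportion = adequate / len(facilities)

print(f"{adequate} of {len(facilities)} facilities adequate ({proportion:.0%})")
# -> 2 of 4 facilities adequate (50%)
```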

The metric's PRAGMATIC score is rather low:

P     R     A     G     M     A     T     I     C     Score
2     5     70    42    11    46    35    18    31    29%
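
For anyone wanting to check the arithmetic, the overall figure in the right-hand column is simply the mean of the nine criterion ratings, as this minimal sketch (using nothing beyond the numbers in the table) confirms:

```python
# Re-doing the arithmetic behind the overall score: the PRAGMATIC score here
# is the simple mean of the nine criterion ratings shown in the table.
ratings = [2, 5, 70, 42, 11, 46, 35, 18, 31]   # P R A G M A T I C

score = sum(ratings) / len(ratings)            # 260 / 9 = 28.9
print(f"Overall PRAGMATIC score: {round(score)}%")   # -> 29%
```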


Why has ACME's management evidently taken such a dislike to this metric?  Its shortcomings are laid out in some detail in the book (for instance, what does it mean by "adequate"?) but for now let's take a quick look at those dreadful ratings for Predictability and Relevance.

The Predictability rating is a percentage on a notional scale delineated by the following five waypoints:
  • 0% = The metric is purely historical and backward-looking, with no predictive value whatsoever;
  • 33% = The metric is principally historic but gives some vague indication of the future direction such as weak trends;
  • 50% = The metric is barely satisfactory on this criterion (50% marks the transition between unsatisfactory and satisfactory);
  • 67% = The metric definitely has predictive value such as strong trends, but some doubt and apparently random variability remains;
  • 100% = Highly predictive, unambiguously indicative of future conditions with very strong cause-and-effect linkages.
ACME managers evidently believe the metric is almost entirely historical and backward-looking, with next to no predictive value.  In their experience, the proportion of facilities that have adequate external lighting is a very poor predictor of their information security status.
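
If it helps to see that scale in a more concrete form, here is a minimal sketch that records the five waypoints above and reports the one nearest a given rating; the structure and function name are ours, purely for illustration, not from the book.

```python
# The five notional Predictability waypoints described above, as a simple lookup.
PREDICTABILITY_WAYPOINTS = {
    0:   "Purely historical and backward-looking; no predictive value",
    33:  "Principally historic; some vague indication of future direction",
    50:  "Barely satisfactory (the unsatisfactory/satisfactory boundary)",
    67:  "Definite predictive value (e.g. strong trends), but some doubt remains",
    100: "Highly predictive; very strong cause-and-effect linkages",
}

def nearest_waypoint(rating: int) -> str:
    """Return the waypoint description closest to a given percentage rating."""
    closest = min(PREDICTABILITY_WAYPOINTS, key=lambda w: abs(w - rating))
    return PREDICTABILITY_WAYPOINTS[closest]

print(nearest_waypoint(2))   # ACME's 2% sits firmly at the 'purely historical' end
```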

Similarly, the metric is believed to have little Relevance to information security.  Possibly, there is some misunderstanding here about the necessity for physical security in order to secure information assets.  Perhaps physical security is managed and directed quite separately from information security within ACME.  The metric would presumably score higher for Relevance to physical security.

If for some reason someone wanted to push this particular metric, they would clearly have to address these and the other poor ratings, trying to persuade management of its purpose and value ... implying, of course, that it is actually worth the effort.  They might need to redesign the metric, for instance broadening it to take account of other physical security controls that are more obviously relevant to information security, such as physical access controls around the data center or corporate archives.  In the unlikely event that there were no better-scoring metrics on the table, the proponent would have their work cut out to rescue one as bad as this from the corporate scrapheap.

Believe it or not, that is in fact a very worthwhile PRAGMATIC outcome.  Many organizations limp along with some truly dreadful security metrics in their portfolio, metrics that get dutifully analyzed and reported every so often but have next to no value to the organization.  Occasionally, we come across metrics that are so bad as to be counterproductive: they actually harm information security!  Reporting them is a retrograde step.  The problem is that although almost everybody believes the metrics to be useless, there is a lingering suspicion that they must presumably be of value to someone since they appear without fail in the regular reports or on the dashboard.  Nobody has the motivation or energy to determine which metrics can or should be dropped.  Few except senior managers and Audit have visibility across the organization to determine whether anyone needs the metrics.

The consequence is unnecessary, avoidable cost.  That cost can be substantial once you take into account the likelihood of poor-quality metrics in all business areas, not just information security.  What a waste!

A systematic process of identifying and PRAGMATIC-scoring all the organization's information security metrics is a thorough way to identify and weed out the duds.  Less onerously, metrics that are clearly dreadful can simply be singled out for the chop.  Another possible approach is to identify or nominate "owners" or "sponsors" for every metric, and have them justify and ideally pay the associated measurement costs from their departmental budgets: suddenly, cost-effective security metrics are all the rage!  Yet another option is for the CISO or Information Security Manager to identify and cull weak metrics, either openly in collaboration with colleagues or quietly behind the scenes, perhaps swapping duds for stars.  That brings up the idea of "one in, one out": for every additional information security metric introduced into the measurement system, another has to be retired from service and put out to graze, in order to avoid information overload and contain measurement costs.
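
To make the "one in, one out" idea a little more concrete, here is a hypothetical sketch of a portfolio held at a fixed size, where adding a candidate metric always retires the weakest entry; the metric names, scores and logic are invented for illustration.

```python
# Hypothetical "one in, one out" portfolio: adding a metric retires the weakest
# one (by PRAGMATIC score), so the portfolio never grows.  Names/scores invented.
portfolio = {
    "% facilities with adequate external lighting": 29,
    "Patch latency (days)": 61,
    "Phishing simulation click rate": 74,
}

def one_in_one_out(portfolio: dict, name: str, pragmatic_score: int) -> dict:
    """Add a candidate metric, then drop the lowest-scoring metric overall."""
    updated = dict(portfolio)
    updated[name] = pragmatic_score
    weakest = min(updated, key=updated.get)   # if the newcomer is weakest, it never gets in
    del updated[weakest]
    return updated

print(one_in_one_out(portfolio, "Access recertification coverage", 82))
# The external lighting metric (29%) is put out to graze.
```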

-------------------------------------------

Since this is our fifty-second Security Metric of the Week, we will shortly announce our fourth Security Metric of the Quarter and our very first PRAGMATIC Security Metric of the Year.  Watch this space.
