19 March 2013

Security Metric of the Week #49: information security risk score

Most risk analysis/risk assessment (RA) frameworks, processes, systems, methods or packages generate numbers of some sort - scores, ratings or whatever - measuring the risks.  We're not going to delve into the pros and cons of various RA methods here, nor discuss the differences between quantitative and qualitative approaches.  We know some methods only go as far as categorizing risks into crude levels (e.g. low, medium, high, extreme), while others produce percentages or other values.  We assume that ACME has chosen and uses one or more RA methods to analyze its information security risks, and on the whole management finds the RA process useful.
  
The question is: are information security risk scores produced by RA valuable as an information security metric?


P    R    A    G    M    A    T    I    C    Score
72   60   55   70   71   40   60   60   50   60%

In the view of ACME's management, the RA scores have some merit as an information security metric.  They do quite well in terms of their Predictiveness, Genuineness and Meaningfulness, but there are concerns about their Actionability, Accuracy and Cost-effectiveness.  The overall PRAGMATIC score of 60% is somewhat disappointing.
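
A minimal sketch of how that overall figure appears to be derived - simply the unweighted mean of the nine criterion ratings from the table above, rounded to the nearest whole percent.  This is an illustrative Python fragment, not anything prescribed by the method itself:

    # Ratings in P-R-A-G-M-A-T-I-C order, copied from the table above
    ratings = [72, 60, 55, 70, 71, 40, 60, 60, 50]

    # Overall PRAGMATIC score assumed to be the plain average of the nine ratings
    overall = round(sum(ratings) / len(ratings))
    print(f"Overall PRAGMATIC score: {overall}%")  # -> 60%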


If ACME management definitely wants to measure information security risks but there are no higher-scoring metrics on the cards, they have a few choices.  They might:

  • Accept the metric as it is;
  • Weight the PRAGMATIC ratings to emphasize the factors that are most important in this specific area (e.g. Predictiveness and Relevance), then re-compare the scores and reconsider the candidate metrics (see the sketch after this list); 
  • Adopt this metric as a stopgap but, while gaining experience with it, actively search for something better;
  • Reject the metric and carry on searching for something better;
  • Make changes to the metric in order to address its PRAGMATIC weaknesses, hopefully without compromising its strengths;
  • Conduct a trial, comparing a few metrics in this area, including variants of this one, over the course of a few months;
  • Reconsider what it is that they really want to know, being more explicit about the goals or objectives of measurement in order to prompt the selection or design of better candidate metrics;
  • Review the PRAGMATIC ratings for this and other risk-related metrics, challenging their assumptions and considering more creative approaches;
  • Adopt complementary metrics or use some other approach to compensate for the weaknesses in this metric.
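
The weighting option might look something like the following illustrative Python fragment.  The weights are invented purely for this sketch - emphasizing Predictiveness and Relevance - and any real weighting scheme would be ACME's own call:

    # Ratings in P-R-A-G-M-A-T-I-C order, as in the table above
    ratings = [72, 60, 55, 70, 71, 40, 60, 60, 50]

    # Hypothetical weights emphasizing the P and R criteria
    weights = [3, 3, 1, 1, 1, 1, 1, 1, 1]

    # Weighted average of the nine ratings
    weighted = sum(r * w for r, w in zip(ratings, weights)) / sum(weights)
    print(f"Weighted PRAGMATIC score: {round(weighted)}%")  # -> 62% with these weights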
