I developed this 3x3 matrix today as part of an information security awareness module about risk management. The matrix plots the severity (or impact or consequences or costs) of various kinds of information security incident against the frequency (chance, probability or likelihood) of those same kinds of incident. It is color coded according to the level of risk.
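For concreteness, here is a minimal sketch in Python of that structure. The category names and the exact green/amber/red assignments are my illustrative assumptions (they depend on the matrix image itself), but the lookup logic is the same:

```python
# A minimal 3x3 risk matrix: risk level as a function of
# severity and frequency categories (illustrative coloring only).
SEVERITY = ["low", "medium", "high"]
FREQUENCY = ["rare", "occasional", "frequent"]

# Keys are (severity, frequency) pairs; values are risk-zone colors.
RISK = {
    ("low",    "rare"):       "green",
    ("low",    "occasional"): "green",
    ("low",    "frequent"):   "amber",
    ("medium", "rare"):       "green",
    ("medium", "occasional"): "amber",
    ("medium", "frequent"):   "red",
    ("high",   "rare"):       "amber",
    ("high",   "occasional"): "red",
    ("high",   "frequent"):   "red",
}

def risk_level(severity: str, frequency: str) -> str:
    """Look up the color-coded risk level for a kind of incident."""
    return RISK[(severity, frequency)]

# e.g. risk_level("high", "occasional") -> "red"
```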
Clearly, the incidents shown are just a few illustrative examples. Furthermore, we could probably argue all day about their positions on the matrix (more on that below).
Some might claim that it doesn't even qualify as a metric since there are no actual numbers on the matrix. "Cobblers!" I say. It is information that would be relevant to decision making, and that's good enough for me. If you feel so inclined, though, go ahead and pencil in some suitable numbers at the boundaries of those severity and frequency categories ... and be prepared to argue with management about those numbers as well!
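Penciling in boundary numbers effectively turns each axis into a set of bands. A hedged sketch, with entirely made-up boundary values of exactly the kind management might dispute:

```python
# Hypothetical boundary numbers penciled in at the category edges.
def severity_category(cost_usd: float) -> str:
    """Bucket an incident's estimated cost into a severity category."""
    if cost_usd < 10_000:        # lower boundary (assumed)
        return "low"
    if cost_usd < 1_000_000:     # upper boundary (assumed)
        return "medium"
    return "high"

def frequency_category(incidents_per_year: float) -> str:
    """Bucket an incident's expected rate into a frequency category."""
    if incidents_per_year < 0.1:   # less than once a decade (assumed)
        return "rare"
    if incidents_per_year < 2:     # up to a couple a year (assumed)
        return "occasional"
    return "frequent"
```

Combined with the lookup sketch above, an incident costing $250,000 a time and expected four times a year would land in the red zone: risk_level(severity_category(250_000), frequency_category(4)) returns "red".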
Anyway, it set me thinking about whether the matrix might form the basis of a worthwhile information security metric. The obvious approach is to score it on the PRAGMATIC scale, in the context of an imaginary organization ("ACME Enterprises" will suffice):
P  | R  | A  | G  | M  | A  | T  | I  | C  | Score
70 | 70 | 50 | 65 | 70 | 40 | 55 | 45 | 85 | 61%
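Assuming the overall score is the simple mean of the nine criterion ratings (which reproduces the 61% shown), here is a quick sketch of the arithmetic; the spelled-out criterion names are my expansion of the PRAGMATIC acronym:

```python
# PRAGMATIC criterion ratings for the risk matrix metric at ACME.
ratings = {
    "Predictiveness":     70,
    "Relevance":          70,
    "Actionability":      50,
    "Genuineness":        65,
    "Meaningfulness":     70,
    "Accuracy":           40,
    "Timeliness":         55,
    "Independence":       45,
    "Cost-effectiveness": 85,
}

# Overall score taken as the simple mean of the nine ratings.
score = sum(ratings.values()) / len(ratings)  # 550 / 9 = 61.1
print(f"{score:.0f}%")  # prints: 61%
```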
The 61% score implies that this metric has some potential for ACME, but it doesn't stand out as a must-have addition to their information security measurement system.
There is a question mark about its Independence since the people most likely to be preparing such a matrix (i.e. information security pros like me) generally have something of a vested interest in the risk management process.
I wouldn't exactly call it Actionable since it doesn't directly indicate what we need to do to reduce the severity or frequency of the incidents ... but it does at least show that reducing either aspect will reduce the risk. It is Actionable in the sense that preparing and discussing such a matrix would be a useful part of the risk analysis process, particularly if the discussion engaged other risk professionals and business managers (which would also increase its Independence rating, by the way).
Involving managers in the process of preparing, reviewing or updating the matrix would increase the cost of the metric, but at the same time would generate more value, so the Cost-effectiveness would end up more-or-less unchanged.
I mentioned earlier that we might 'argue all day' about the positions of incidents on the matrix, but in so doing we would be considering and discussing information security incidents, risks and controls in some depth, which is itself a valuable use or outcome for this metric. Understanding the risks would help management prioritize them and, in turn, allocate sufficient resources to treat them sensibly (for example, investing in better malware controls rather than, say, an expensive state-of-the-art CCTV system to reduce a risk that is already in the green zone).
So, all in all, it's an interesting security metric although it needs a bit more thinking time yet ...
UPDATE: there is a remarkably similar metric at the heart of the "Analog Risk Assessment" ARA method I describe here.