25 June 2012

SMotW #12: Firewall rule changes

Security Metric of the Week #12: count of firewall rule changes

This is one of the lowest-ranked example metrics in our collection of 150, with a pathetic PRAGMATIC score of just 9%.  What makes this one so bad?

For starters, as described, it is expressed as a simple number, a count.  What are recipients of the metric expected to make of a value such as, say, 243?  Is 243 a good number or does it indicate a security issue?  What about 0 - is that good or bad?  Without additional context, the count is close to meaningless.  

Additional context would involve knowing things such as:
  • The count from previous periods, giving trends (assuming a fixed period)
  • Expected value or ranges for the count (often expressed in practice by traffic-light color coding, as sketched below)
  • Verbal explanation for values that are outside the expected range
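
To make that context concrete, here is a minimal Python sketch of how a raw count such as the 243 above might be reported with a traffic-light status and a trend against prior periods.  The thresholds, prior-period counts and the traffic_light helper are purely illustrative assumptions, not part of the metric as proposed.

def traffic_light(count, green_max, amber_max):
    """Classify a raw count against hypothetical expected ranges."""
    if count <= green_max:
        return "GREEN"
    if count <= amber_max:
        return "AMBER"
    return "RED"

previous_periods = [180, 195, 210]  # counts from earlier, equal-length periods
latest = 243                        # this period's count

status = traffic_light(latest, green_max=200, amber_max=230)
trend = latest - previous_periods[-1]
print(f"Firewall rule changes: {latest} ({status}), {trend:+d} versus the previous period")

Even dressed up like this, of course, the count still begs the question of what anyone is supposed to do about a red result, which brings us to the next problem.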

Even assuming we have such contextual information, sufficient to recognize that the latest value of the metric is high enough to take it into the red zone, what are we expected to do about it?  Presumably the number of firewall rule changes in the next period should be reduced to bring the metric back into the green.  Therefore tightening up the change management processes that review rule changes, so that a greater proportion of changes are rejected, would be a good thing, right?  Errr, no, not necessarily.  The very purpose of most firewall rule changes is to improve network security, in which case rejecting them purely on the basis that there are too many changes would harm security ... and this line of reasoning raises serious questions about the fundamental basis of the metric.  We're going in circles at this point.  If we move on to ask whether rule changes on different firewalls are summed or averaged in some way, and what happens if some firewalls are much more dynamic than others, we are fast losing the plot.

It must be obvious at this point that we have grave doubts about the metric's Relevance, Meaning and Actionability, which in turn means it is not at all Predictive of security.  The Integrity (or Independence) rating is terrible and the Accuracy rating poor, since the person most likely to measure and report the metric is the same person responsible for making the firewall changes, and they are hardly going to recognize, let alone admit to others, that they might be harming security.  Unless the metric is much more carefully specified, they have plenty of leeway to determine whether a dozen new rules associated with, for instance, the introduction of IPv6 'count' as 12 or as 1.


The PRAGMATIC scoring table sums it up: this is a rotten metric, a lemon, almost certainly beyond redemption unless we are so totally lacking in imagination and experience that we can't think up a better way of measuring network security!

P    R    A    G    M    A    T    I    C    Score
2    1    1    10   2    33   14   4    17   9%
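
For the arithmetically minded, the overall score follows directly from the nine criterion ratings.  The short Python sketch below assumes the PRAGMATIC score is simply the unweighted mean of the nine percentages, rounded to the nearest whole point, which matches the figures in the table.

# Ratings in P-R-A-G-M-A-T-I-C order, taken from the table above
ratings = [2, 1, 1, 10, 2, 33, 14, 4, 17]

score = round(sum(ratings) / len(ratings))
print(f"PRAGMATIC score: {score}%")  # -> PRAGMATIC score: 9%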

That's almost it for this week's metric, except to leave you with a parting thought: if someone such as the CEO seriously proposed this metric or something equally lame, how would you have dealt with the proposal before the PRAGMATIC approach became available?  Being PRAGMATIC about it gives you a rational, objective basis for the analysis, but does this help in fact?  We are convinced your discussion with the CEO will be much more robust and objective if you have taken the time to think through the issues and scores.  What's more, the ability to suggest other network security metrics with substantially better PRAGMATIC scores means you are far less likely to be landed with a lemon by default.

Regards,
Gary & Krag

