So far in this series of bloggings, I have critiqued the top five metrics identified in the Hannover Research/Tripwire CISO Pulse/Insight Survey. I'll end this series now with a quick look at the remaining six metrics and an overall conclusion.
Metric 6: "Legitimate e-mail traffic analysis"
While the analysis might conceivably be interesting, isn't the metric the output or result of that analysis rather than the analysis itself? I'm also puzzled by the reference to 'legitimate' in the metric, since a lot hinges on the interpretation of the word. Is spam legitimate? Are personal emails on the corporate email system legitimate? Where do you draw the line? Working on the assumption that this metric, like the rest, is within the context of a vulnerability scanner system, perhaps the metric involves automatically characterizing and categorizing email traffic, then generating statistics. Without more information, the metric is Meaningless.
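Purely to illustrate what such a metric might involve (if that guess is right), here is a minimal sketch in Python; the categories, field names, spam-score threshold and corporate domain are all assumptions of mine, not anything defined in the survey:

```python
from collections import Counter

def categorise(message):
    """Crudely classify a message dict with 'sender' and 'spam_score' fields."""
    if message["spam_score"] >= 0.8:
        return "spam"
    if message["sender"].split("@")[-1] == "example.com":   # assumed corporate domain
        return "internal"
    return "external"

def email_traffic_stats(messages):
    """Return the percentage of traffic in each category - one possible 'metric'."""
    counts = Counter(categorise(m) for m in messages)
    total = sum(counts.values()) or 1
    return {cat: 100.0 * n / total for cat, n in counts.items()}

if __name__ == "__main__":
    sample = [
        {"sender": "alice@example.com", "spam_score": 0.1},
        {"sender": "promo@spamdomain.biz", "spam_score": 0.95},
        {"sender": "bob@partner.org", "spam_score": 0.2},
    ]
    print(email_traffic_stats(sample))
```

Even this toy version shows the problem: the numbers it spits out depend entirely on how 'legitimate' is defined in the classification rules.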
Metric 7: "Password strength"
This could conceivably be a fairly sophisticated metric that takes into account a wide variety of characteristics of passwords (such as length, complexity, character set, character mix, predictability, quality of the hashing algorithm, time since last changed, relationship to known or readily guessed factors relevant to the users, relationship to users' privilege levels or data access rights and so on) across multiple systems. More often, it is a much simpler, cruder measure such as the length of an individual password at the point it is being entered by a user, or the minimum password length parameter for servers or applications. Both forms have their uses, but again without further information, we don't know for sure what the metric is about.
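To make the distinction concrete, here is a rough sketch of the more sophisticated form: a composite score combining a few of those characteristics. The weightings, the 90-day change assumption and the 0-100 scale are arbitrary choices for illustration only, not a recommended scheme:

```python
import string
from datetime import date

def password_strength_score(password, last_changed, today=None):
    """
    A crude, illustrative composite score (0-100): rewards length and character
    variety, penalises stale passwords. All weights are arbitrary assumptions.
    """
    today = today or date.today()
    score = min(len(password), 20) * 3                      # up to 60 points for length
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    score += 10 * sum(any(c in cls for c in password) for cls in classes)  # up to 40 points for mix
    age_days = (today - last_changed).days
    if age_days > 90:                                       # assumed change policy
        score -= min((age_days - 90) // 30 * 5, 30)         # staleness penalty
    return max(0, min(score, 100))

print(password_strength_score("Tr1pw!re-metrics", date(2013, 5, 1), today=date(2013, 7, 1)))
```

Averaged across systems and users, something along these lines could be genuinely informative; a bare minimum-length parameter, by contrast, tells you very little.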
Metric 8: "Time to incident recovery" and metric 9: "Time to incident discovery"
These metrics concern different parts of the incident management process. At face value, they are simple timing measures but in practice it's not always easy to determine the precise points in time when the clock starts and stops for each one.
Metric 8 implies that incidents are recovered (not all are), and that the recovery is completed (likewise). If metric 8 were used in earnest, it would inevitably put pressure on people to close off incidents as early as possible, perhaps before the recovery activities and testing had in fact been finished. This could therefore prove counterproductive.
Metric 9 hinges on identifying when incidents occurred (often hard to ascertain without forensic investigation) and when they were discovered (which may coincide with the time they were reported but is usually earlier). The metric is likely to be subjective unless a lot of effort is put into defining the timepoints. The tendency would be to delay the starting of the timer (e.g. by arbitrarily deciding that an incident only counts if the business is impacted, and the time of that impact is the time of the incident), and to stop the timer as early as possible (e.g. by making presumptions about the point at which someone may have first 'spotted something wrong'). The accuracy and objectivity of the metric could be improved by more thorough investigation of the timing points, but that would increase the Costs at least as much as the benefits.
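By way of illustration, here is a minimal sketch of how the two timing metrics might be calculated once the timepoints have been pinned down; the record structure, the example timestamps and the handling of unrecovered incidents are assumptions of mine, and the hard part in practice is deciding what 'occurred' and 'discovered' actually mean, not the arithmetic:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; field names and values are illustrative only.
incidents = [
    {"occurred": datetime(2013, 6, 1, 9, 0),
     "discovered": datetime(2013, 6, 1, 14, 30),
     "recovered": datetime(2013, 6, 2, 11, 0)},
    {"occurred": datetime(2013, 6, 10, 22, 0),
     "discovered": datetime(2013, 6, 12, 8, 0),
     "recovered": None},                      # not all incidents are recovered
]

def hours(delta):
    return delta.total_seconds() / 3600

time_to_discovery = [hours(i["discovered"] - i["occurred"]) for i in incidents]
time_to_recovery = [hours(i["recovered"] - i["discovered"])
                    for i in incidents if i["recovered"] is not None]

print("Mean time to discovery (h):", round(mean(time_to_discovery), 1))
print("Mean time to recovery  (h):", round(mean(time_to_recovery), 1))
```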
Metric 10: "Patch latency"
On the assumption that this is some measure of the time lag between the release of [security-relevant] patches and their installation, this could be a useful metric to drive improvements in the efficiency of the patching process, provided care is taken to avoid anyone unduly short-cutting the assessment and testing of patches before they are released to production. Premature or delayed implementation could both harm security, implying that there is an ideal time to implement a given patch. Unfortunately, it's hard to ascertain when the time is just right, as it involves a complex determination of the risks, which vary with each patch and situation (e.g. it may be ideal to implement patches immediately on test or development systems, but most should be delayed on production systems, especially business-critical production systems).
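Assuming that interpretation, a simple sketch of how patch latency might be reported separately for test and production systems follows; the record fields, patch identifiers and dates are purely illustrative:

```python
from datetime import date
from statistics import median

# Hypothetical patch deployment records; field names and values are assumptions.
deployments = [
    {"patch": "MS13-051", "system_class": "test",       "released": date(2013, 6, 11), "installed": date(2013, 6, 12)},
    {"patch": "MS13-051", "system_class": "production", "released": date(2013, 6, 11), "installed": date(2013, 6, 25)},
    {"patch": "MS13-053", "system_class": "production", "released": date(2013, 7, 9),  "installed": date(2013, 7, 30)},
]

def patch_latency_days(records, system_class):
    """Median days between patch release and installation for one class of system."""
    lags = [(r["installed"] - r["released"]).days
            for r in records if r["system_class"] == system_class]
    return median(lags) if lags else None

print("Test systems:      ", patch_latency_days(deployments, "test"), "days")
print("Production systems:", patch_latency_days(deployments, "production"), "days")
```

Splitting the figure by system class at least acknowledges that 'ideal' latency differs between a development box and a business-critical production server.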
Metric 11: "Information security budget as a % of IT budget"
This is, quite rightly in my opinion, the least popular metric among survey respondents.
It presumes that security and IT budgets are or should be linked. That argument would be stronger if we were talking about IT security, but information security involves much more than IT, e.g. the physical security of the office.
In reality, there are many factors determining the ideal budget for information security, the IT budget being one of the least important.
Concluding the series
A few of the metrics in the Hannover Research/Tripwire CISO Pulse/Insight Survey only make much sense in the narrow context of measuring the performance of a vulnerability scanner, betraying a distinct bias in the survey. Others are more broadly applicable to IT or information security, although their PRAGMATIC scores are mediocre at best. Admittedly I have been quite critical in my analysis and no doubt there are situations in which some of the metrics might be worth the effort. However, it's really not hard to think of much better security metrics - just look back through the Security Metrics of the Week in this blog, for instance, or browse the book for lots more examples. Better still, open your eyes and ears: there's a world of possibilities out there, and no reason at all to restrict your thinking to these 11 metrics.

If you missed the previous bloggings in this series, it's not too late to read the introduction and parts one, two, three and four.