10 February 2015

Preventive, detective and corrective expenditure

A mediocre article, presumably based on a press release from Deloitte, hints at a financial metric concerning not the size of an organization's information security budget per se but its shape, specifically the proportions of the budget allocated to preventive, detective and corrective actions (albeit using Deloitte's versions of those labels).

The journalist and/or his source implies that Australian organizations ought to be emulating North American and British ones by spending a greater proportion of their security budgets on detection and correction. That advice runs counter to conventional wisdom, yet the article doesn't adequately explain the reasoning: one could just as easily argue that the Australians are ahead of the game in focusing more on prevention, and hence that the rest of the world ought to catch up!

Anyway, a pie chart is an obvious way to represent proportions. The example below, for instance, uses nested pies to compare the budget breakdowns for two fictional organizations, or two business units within one organization, or even this year's security budget breakdown versus last year's:

[Figure: nested pie chart comparing 'our' and 'their' security budget splits across preventive, detective and corrective controls]

According to the figure, 'they' are evidently spending a greater proportion of their security budget on preventive controls than 'we' are. Fair enough, but does that information alone tell us anything? Which is the better approach? It's hard to derive any real insight without more context.
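
If you want to try something similar yourself, here's a minimal sketch in Python using matplotlib; the budget splits are invented for illustration, not taken from the article:

```python
# Minimal sketch of a nested-pie budget comparison using matplotlib.
# The splits below are invented for illustration only.
import matplotlib.pyplot as plt

labels = ["Preventive", "Detective", "Corrective"]
theirs = [75, 15, 10]   # hypothetical 'their' split, in percent
ours   = [60, 25, 15]   # hypothetical 'our' split, in percent

fig, ax = plt.subplots()
# Outer ring shows 'their' budget, inner ring shows 'ours'.
ax.pie(theirs, radius=1.0, labels=labels, autopct="%1.0f%%",
       wedgeprops=dict(width=0.4, edgecolor="white"))
ax.pie(ours, radius=0.6, autopct="%1.0f%%",
       wedgeprops=dict(width=0.4, edgecolor="white"))
ax.set(aspect="equal",
       title="Security budget split: theirs (outer) vs ours (inner)")
plt.show()
```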

In PRAGMATIC terms, the metric doesn't score particularly well, at least according to the mythical ACME Enterprises CISO who assessed it:

  • Predictiveness: 55%. The expenditure on information security is a reasonable indicator of an organization's security status. The nested pies appear to tell us that, other things being equal, 'we' are more likely to suffer incidents than 'they' are, but 'we' are also more likely to identify, react to and recover from them than 'they' are. Unfortunately, 'other things being equal' is a serious constraint: the comparison may be completely flawed otherwise. Even if the two organizations are about the same size and in the same industry, one might be spending a fortune on its security while the other might be so tight it squeaks when it walks - and there are many more differences between organizations than that.
  • Relevance: 60%. We probably shouldn't allow preventive, detective and corrective controls to get seriously out of balance, but it is far from clear what 'balance' actually means in this context. Detective controls, for instance, tend to be relatively expensive compared to corrective controls, hence spending a markedly greater proportion of the security budget on detection rather than correction might in fact be 'balanced'.
  • Actionability: 35%. The metric doesn't prompt any obvious response from the audience, unless the proportions are seriously skewed (e.g. spending next to nothing on corrective controls would imply a risky strategy: if our preventive or detective controls were to fail in practice, we would probably be in a mess).
  • Genuineness: 50%. Security spending doesn't always fall neatly into one of the three categories, hence there are arbitrary decisions to be made when allocating dollars to categories. This is a common cost-accounting issue. If, on seeing the pies, management takes the strategic decision to 'transfer funding from detective to preventive controls', the person compiling the metric might simply re-allocate expenses to appear compliant with the decision, without making any real changes.
  • Meaningfulness: 35%. The metric is not self-evident and needs to be explained to the audience, which is somewhat challenging! The colorful graph looks simple and striking, but if anyone scratched the surface to figure out what it really means, we would struggle.
  • Accuracy: 20%. The 'other things being equal' caveat is a concern here, as is the cost-allocation issue. Unless the metric is measured independently by a competent and trustworthy person/team following strict guidelines, there is a high probability of errors. The drawback applies both to comparisons between organizations or business units, and to comparisons within the same organization over time.
  • Timeliness: 50%. The metric might be prepared and used as part of the budgetary planning process, but it would take some time to achieve any real accuracy. Alternatively, it could be drawn up more quickly as a rough-and-ready measure, at the cost of lower accuracy.
  • Integrity: 70%. Despite our comments above concerning arbitrary cost-allocation decisions, the figures could potentially be independently assessed or audited to establish whether there is a consistent and rational basis ...
  • Cost-effectiveness: 30%. ... which is just one of many ways that this could easily become an expensive metric, with uncertain business benefits.
  • Overall PRAGMATIC score: 45% - the straight mean of the nine ratings above (sanity-checked in the sketch below).
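
For the record, the overall figure is simply the arithmetic mean of the nine criterion ratings - easy to sanity-check in a few lines of Python:

```python
# Sanity-check the overall PRAGMATIC score: it is the straight
# average of the nine criterion ratings given above.
scores = {
    "Predictiveness": 55, "Relevance": 60, "Actionability": 35,
    "Genuineness": 50, "Meaningfulness": 35, "Accuracy": 20,
    "Timeliness": 50, "Integrity": 70, "Cost-effectiveness": 30,
}
overall = sum(scores.values()) / len(scores)
print(f"Overall PRAGMATIC score: {overall:.0f}%")  # prints 45%
```
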
There are loads more example security metrics in our book, including some financial and strategic metrics that out-score and outclass this one, while there is a vast array of other possibilities that we haven't even analyzed. In short, this metric is a dud as far as ACME is concerned ... but you may feel otherwise, and that's fine. Your situation and measurement needs are different, hence YMMV (Your Metrics May Vary). The point of this piece, the blog, the website and the book is not to spoon-feed you a meal of tasty information security metrics but to give you the tools to cook up your own, and to prompt you to think about them in a structured, rational way.

Kind regards,
Gary

PS  Did you notice that the article uses the phrasing 'so many cents of every dollar spent' rather than percentages? The numbers are identical of course, but cents-in-the-dollar emphasizes the financial aspect, making the presentation more businesslike - a neat little example of the value of expressing information security in business terms. Shame they picked such a dubious metric though!
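
If you fancy adopting the same framing in your own reporting, the conversion is trivial; a throwaway Python example (the 19% figure below is invented, purely for illustration):

```python
# Same number, two framings: a bare percentage versus the more
# businesslike cents-in-the-dollar phrasing. The 19% is invented.
detective_share = 0.19
print(f"{detective_share:.0%} of the security budget")             # 19% of the security budget
print(f"{detective_share * 100:.0f} cents of every dollar spent")  # 19 cents of every dollar spent
```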
