20 November 2015

Decision-led metrics

Metrics in general are valuable because, in various ways, they support decisions. If they don't, they are at best just nice to know - 'coffee table metrics' I call them. If coffee table metrics didn't exist, we probably wouldn't miss them, and we'd save the cost of producing them.

So, what decisions are being, or should be, or will need to be made, concerning information risk and security? If we figure that out, we'll have a pretty good clue about which metrics we do or don't want.

Here are a few ways to categorize decisions:
  • Decisions concerning strategic, tactical and operational matters, with the corresponding long, medium and short-term focus and relatively broad, middling or narrow scope;
  • Decisions about risk, governance, security, compliance ...;
  • Decisions about what to do, how to do it, who does it, when it is done ...;
  • Business decisions, technology decisions, people decisions, financial decisions ...;
  • Decisions about departments, functions, teams, systems, projects, organizations ...;
  • Decisions regarding strategies/approaches, policies, procedures, plans, frameworks, standards ...;
  • Decisions relating to threats, vulnerabilities and impacts - evaluating and responding to them;
  • Decisions made by senior, middle or junior managers, by staff, and perhaps by or relating to business partners, contractors and consultants, advisors, stakeholders, regulators, authorities, owners and other third parties;
  • Decisions about effectiveness, efficiency, suitability, maturity and, yes, decisions about metrics (!);
  • ... [feel free to bring up others in the comments].

Notice that these categories are not mutually exclusive: a single metric might support strategic decisions around information risks in technology involving a commercial cloud service, for instance, putting it in several of those categories at once.

If we systematically map out our current portfolio of security metrics (assuming we can actually identify them: do we even have an inventory or catalog of security metrics?) across all those categories, we'll probably notice two things. 

First, for all sorts of reasons, we will probably find an apparent surplus of metrics in some areas and a shortage elsewhere. That hints at identifying and developing additional metrics in some areas, and cutting down on duplicates or failing/coffee-table metrics where there seem to be too many - which is itself a judgement call, a decision about metrics, and not as obvious as it may appear. Simplistically aiming for a 'balance' of metrics across the categories would be naive.

Second, some metrics will pop up in multiple categories ... which is wonderful: we've just identified key metrics. They are more important than most since they evidently support multiple decisions. We clearly need to take extra care with these metrics, since data, analysis or reporting issues (such as errors and omissions, unavailability, or deliberate manipulation) are likely to affect multiple decisions.
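
By way of illustration, here is a minimal sketch in Python of how such a mapping might be tabulated to expose both findings at once. The metric names and categories are entirely made up for the example - substitute your own inventory:

from collections import defaultdict

# Hypothetical inventory: each metric mapped to the decision
# categories it supports (names are illustrative only).
inventory = {
    "patch latency":          {"operational", "technology", "vulnerabilities"},
    "policy compliance rate": {"compliance", "governance", "operational"},
    "incident cost":          {"strategic", "financial", "impacts"},
    "awareness survey score": {"people", "tactical"},
    "cloud risk score":       {"strategic", "technology", "risk"},
}

# Tabulate coverage per category to highlight surpluses and gaps.
coverage = defaultdict(list)
for metric, categories in inventory.items():
    for category in categories:
        coverage[category].append(metric)

for category, metrics in sorted(coverage.items()):
    print(f"{category:15} {len(metrics)} metric(s): {', '.join(metrics)}")

# Metrics supporting several decision categories are candidate
# 'key metrics' deserving extra care over data quality and integrity.
key_metrics = [m for m, cats in inventory.items() if len(cats) >= 3]
print("Key metrics:", key_metrics)

Running that prints the coverage per category (hinting at surpluses and gaps) and flags the metrics spanning three or more categories as candidates for that extra care.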

Overall, letting decisions and the associated demand for information determine the organization's choice of metrics makes a lot more sense than the opposite "measure everything in sight" data-supply-driven approach. What's the point in measuring stuff that nobody cares about? 

