At the risk of appearing security-obsessed, I'd like to explore the information security risks and control requirements that should be taken into account when designing an information security measurement system, particularly if (as is surely the aim) the metrics are going to materially affect the organization's information security arrangements.
I'm talking here about the measurement system as a whole, not just the elements and metrics within it. Securing the executive suite's information security dashboard, the metrics database maintained by the CISO and the monthly metrics report undoubtedly matters, but I'm taking a broader perspective.
It is not appropriate for me to propose specific information security controls for your information security measurement system since I can barely guess at your circumstances - the threats, vulnerabilities and impacts relating to your security metrics, or your business situation. However, the rhetorical questions that follow will hopefully prompt you to explore the security and related requirements for your metrics, and to think about what matters to your organization:
- How are the source data for metrics - the base measurements - obtained? Are the sources themselves (both automated and manual) sufficiently trustworthy? Where are the biases, and how severe are they? Are sufficient data points available to generate valid and useful statistics? How much variability is there, and how much should there be? Does measurement variance itself qualify as a worthwhile metric?! (The first sketch after this list illustrates a simple check on the spread of base measurements.)
- Who is gathering, storing and processing measurements? Are they sufficiently competent, diligent and trustworthy? Are they well trained? Do they follow documented procedures? Are the criteria and processes for taking measurements sufficiently well defined so as to avoid ambiguity and to reduce the potential for abuse or fraud (e.g. selective use of ‘beneficial’ or positive data and disregard of negative values)?
- What about the IT systems and programs supporting the measurement processes: has anyone actually verified that the analytic tools, spreadsheets and databases are processing measurement data correctly, accurately and completely? Are changes to the systems that generate, analyze and present security metrics properly managed - for instance, are code or design changes adequately specified and tested before release? If the measurement processes or systems change, are prior data properly re-based or normalized for trend analysis (see the second sketch after this list)?
- Is there a rational and systematic process for proposing, considering and selecting security metrics? Does it cope with changes to the information requirements or emphasis and priorities, new opportunities, newly identified information gaps or constraints, novel metric suggestions and so on? Is there a rational mechanism for specifying, testing, implementing, using, managing, maintaining and eventually retiring metrics?
- Do metrics reporting processes accurately present ‘the truth, the whole truth, and nothing but the truth’? How can we ensure sufficient objectivity and accuracy of reported data, and what do we mean by 'sufficient' anyway? Is there potential at any point in the process for malicious actors to meddle with things (the third sketch after this list illustrates one simple integrity check)? Where are the weakest points? Where might the threats originate? What's in it for them?
- Are good decisions made sensibly and rationally on the basis of the metrics? Is the information used in the best interests of the organization? Or are people intentionally or unknowingly playing games with them? Does anyone monitor this kind of thing, or indeed the other issues raised here, and act accordingly?
- How reliable are the metrics? How reliable do they need to be? Are some metrics absolutely crucial, supporting business decisions that could prove disastrous if incorrect? Are there corroborating sources for such metrics, or ways to cross-check them and correct the decisions (the fourth sketch after this list shows one such cross-check)? Are any of the metrics of limited or marginal value, making them candidates for retirement, reducing distracting noise as well as cutting costs?
- How serious would it be if the metrics turned up late? Would important meetings or decisions be delayed? Would this cause compliance issues? What if the metrics were completely missing - if, for whatever reason, they could no longer be provided? Would people be forced to limp along without them? Might there be alternative sources of information - and if so, would they be as good? Are there situations where rough estimates, provided much sooner, would be at least as good as, if not better than, more accurate and factual metrics provided later?
- Given that they concern the organization's information security, are the metrics commercially confidential? Are any of them particularly sensitive? Would anyone else be interested in them, outside the intended audience? Could they infer decisions and actions on security, incident levels and costs, vulnerabilities etc. from the metrics? Conversely, are any of the metrics suitable for wider publication, consideration and use, for example in awareness or marketing? Would any of them be beneficial and valuable for employees in general, business partners, sales prospects, authorities/regulators, auditors, owners or other stakeholders? Are any of them dangerous in the governance sense, being undeniable evidence that management has been made aware of certain issues and consequently can be held to account for their decisions?
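On the question of measurement variability, here is a minimal, purely illustrative sketch: given a series of base measurements (the patch-latency figures are made up), it reports the spread and flags any values falling outside a simple two-sigma screen. Your own thresholds, data and tooling will of course differ.

```python
# Illustrative only: summarize the spread of a set of base measurements and
# flag values falling outside simple two-sigma limits.
from statistics import mean, stdev

def summarize_measurements(values):
    """Return basic spread statistics and any values outside two-sigma limits."""
    avg = mean(values)
    sd = stdev(values)
    lower, upper = avg - 2 * sd, avg + 2 * sd
    outliers = [v for v in values if v < lower or v > upper]
    return {
        "mean": avg,
        "stdev": sd,
        "coefficient_of_variation": sd / avg if avg else None,
        "outliers": outliers,
    }

if __name__ == "__main__":
    weekly_patch_latency_days = [12, 14, 11, 13, 15, 12, 40, 13]  # made-up data
    print(summarize_measurements(weekly_patch_latency_days))
```

Even something this crude can prompt useful questions: is that 40-day value a genuine blip, a data-entry error, or a sign that the measurement process itself is unstable?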
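The second sketch concerns re-basing historical data after a change in the way a metric is measured. It assumes (hypothetically) that the old and new methods were run in parallel for a few periods, uses the ratio of their means as a conversion factor, and rescales the historical series so trend analysis can continue on a single basis.

```python
# Illustrative re-basing of a historical metric series onto a new measurement basis,
# using an overlap period during which both methods were run in parallel.
from statistics import mean

def rebase(historical, parallel_old, parallel_new):
    """Rescale historical values (old method) onto the new method's basis."""
    factor = mean(parallel_new) / mean(parallel_old)
    return [round(v * factor, 2) for v in historical]

if __name__ == "__main__":
    old_series = [55, 58, 61, 60, 64]   # metric values under the old method
    parallel_old = [63, 66, 65]         # overlap period, measured the old way
    parallel_new = [70, 74, 72]         # overlap period, measured the new way
    print(rebase(old_series, parallel_old, parallel_new))
```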
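The third sketch shows one possible integrity control against meddling between producer and consumer: the team generating the metrics attaches an HMAC computed with a shared secret, and the recipient recomputes it before trusting the figures. The field names and key handling here are assumptions for illustration, not a recommended design.

```python
# Illustrative integrity check on a metrics report using an HMAC.
import hmac, hashlib, json

SECRET = b"replace-with-a-properly-managed-key"  # assumption: key stored and managed securely elsewhere

def sign_metrics(metrics: dict) -> str:
    payload = json.dumps(metrics, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_metrics(metrics: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_metrics(metrics), signature)

if __name__ == "__main__":
    report = {"period": "2024-05", "incidents": 7, "mean_time_to_patch_days": 13.2}
    tag = sign_metrics(report)
    report["incidents"] = 2              # simulated tampering en route
    print("report still trustworthy?", verify_metrics(report, tag))  # False
```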
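Finally, a sketch of corroborating a critical metric against an independent source before it reaches decision-makers: if the two figures diverge by more than an agreed tolerance, the metric is flagged for investigation rather than reported as fact. The sources and tolerance are hypothetical.

```python
# Illustrative cross-check of a critical metric against a corroborating source.
def cross_check(primary: float, corroborating: float, tolerance: float = 0.10):
    """Return (ok, relative_difference); ok is False if divergence exceeds the tolerance."""
    baseline = max(abs(primary), abs(corroborating), 1e-9)
    rel_diff = abs(primary - corroborating) / baseline
    return rel_diff <= tolerance, rel_diff

if __name__ == "__main__":
    incidents_from_siem = 42        # figure from the SIEM (hypothetical)
    incidents_from_helpdesk = 55    # figure from help-desk tickets (hypothetical)
    ok, diff = cross_check(incidents_from_siem, incidents_from_helpdesk)
    print(f"within tolerance: {ok}, relative difference: {diff:.0%}")
```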
To close, I'll just mention that these generic considerations apply in much the same way to virtually ANY measurement system in ANY context: financial metrics, HR metrics, strategic metrics, risk metrics, product metrics, health and safety metrics, societal and political metrics, scientific metrics ... you name it. Maybe it's worth talking to your colleagues about their metrics too.