Security Metric of the Week #71: Extent to which quality assurance (QA) is incorporated in information security processes
This week's metric, randomly selected from the 150-odd examples discussed in chapter 7 of PRAGMATIC Security Metrics, doesn't look very promising: ACME management rated it mediocre on most of the PRAGMATIC criteria, giving an overall score of just 58%:
 P |  R |  A |  G |  M |  A |  T |  I |  C | Score
75 | 70 | 66 | 61 | 80 | 50 | 35 | 36 | 50 |  58%
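For anyone checking the arithmetic, the overall PRAGMATIC score is simply the mean of the nine criterion ratings, rounded:

```python
# The overall PRAGMATIC score: the arithmetic mean of the nine ratings.
ratings = [75, 70, 66, 61, 80, 50, 35, 36, 50]  # P R A G M A T I C, per the table
score = sum(ratings) / len(ratings)
print(f"Overall score: {score:.0f}%")  # -> Overall score: 58%
```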
The premise - the rationale behind the metric - is that the quality of various information security products (risk assessments, functional and technical specifications for security controls, architectures/designs, test plans, test scenarios, test results and so forth in application development projects, plus many other products from other security activities) significantly influences, but does not entirely determine, the security achieved by the corresponding information systems, business applications and associated business processes. QA efforts to improve the quality of those security processes and products should therefore have a positive effect on the organization's security, which makes measuring the QA relevant to information security.
The metric's PRAGMATIC score is held back by low ratings for Timeliness and Independence. Could anything be done to address management's obvious concerns about those two parameters?
The low rating for Timeliness was justified: as originally proposed, the metric would be analyzed, reported and hence acted upon only once or twice a year. It would involve someone retrospectively examining the records of various information security processes, looking for evidence of QA activities, checking compliance with the procedures, and somehow coming up with a value for the metric.
Many of those QA activities could also generate metrics "live", feeding process information directly to management while the processes were running rather than months later. That way, the measured processes could be tweaked and quality-improved when doing so would have the most impact. With that approach in place, however, the proposed metric would be more or less redundant unless, for some reason, management really needed to double-check that QA was happening.
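Purely as an illustration (the class and names below are hypothetical, not from the book), a "live" QA metric might take the form of a checkpoint embedded in each security process, emitting a running compliance figure as steps complete rather than months after the fact:

```python
from dataclasses import dataclass

@dataclass
class QaCheckpoint:
    """Hypothetical 'live' QA metric: each process step records whether
    its QA activity (review, sign-off, test) actually took place."""
    passed: int = 0
    total: int = 0

    def record(self, step: str, qa_done: bool) -> None:
        # Count the step and whether its QA evidence exists.
        self.total += 1
        self.passed += int(qa_done)

    @property
    def compliance(self) -> float:
        # Running percentage of steps with completed QA, available at any time.
        return 100.0 * self.passed / self.total if self.total else 0.0

qa = QaCheckpoint()
qa.record("risk assessment reviewed", True)
qa.record("test plan signed off", False)
print(f"QA compliance so far: {qa.compliance:.0f}%")  # -> 50%
```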
ACME managers were of the opinion that the proposed metric would be measured and reported by the very people who had a vested interest in the outcome, so its Independence (or Integrity) was in question. Could they be trusted to report the metric honestly if the expected QA activities were not, in fact, being performed routinely? Well, most QA activities generate evidence in the form of checklists, sign-offs/approvals and so on, so the metric's base data could be reviewed and verified independently (e.g. by Internal Audit or QA people) if there was a suspicion that management were being painted an unrealistically rosy picture. With hindsight, perhaps ACME's Independence rating was too low and should have been challenged, although those audit and QA people's time is not free, so the additional checks would further depress the metric's Cost-effectiveness rating.
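To keep that independent verification affordable, the auditors need not re-check everything; sampling the metric's base data would usually suffice. A minimal sketch (the records and helper function are hypothetical):

```python
import random

def audit_sample(records, rate=0.25, seed=None):
    """Pick a random sample of QA evidence records (checklists,
    sign-offs) for independent re-verification, e.g. by Internal Audit."""
    rng = random.Random(seed)
    k = max(1, round(len(records) * rate))
    return rng.sample(records, k)

# Hypothetical base data: (security product, QA evidence present?)
records = [("risk assessment", True), ("test plan", True),
           ("design review", False), ("pen test report", True)]
for product, evidenced in audit_sample(records, rate=0.5, seed=42):
    print(f"Re-verify {product}: evidence {'found' if evidenced else 'MISSING'}")
```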
The upshot of this analysis is equivocal. At least some of the metric's identified shortcomings could be addressed by changing its definition, data collection, analysis and/or reporting, but it may not be worth the effort, especially if other, higher-scoring metrics covering similar aspects are on the table. Management's realization that they ought perhaps to promote and support the QA activities directly, rather than periodically measuring and reporting on them, effectively sealed this metric's fate at ACME, although YMMV (Your Metrics May Vary).