08 January 2014

Measuring health risks

I think it's fair to say that metrics is a "challenging" topic across all fields, not just information security. The issues are not so much with the actual mathematics and statistics (although it is all too easy for non-experts like me to make fundamental mistakes in that area!) as with what to measure, why it is being measured, and how best to measure, report and interpret/use the information.

As a reformed geneticist, I can relate to this example: measuring and reporting the health risks suggested by off-the-shelf DNA test kits. A journalist for the New York Times took three different tests and compared the results.

Underlying the whole piece is the fact that we're talking about risks or probabilities, with inherent uncertainties. The journalist identified several factors with these tests that make things even less certain for customers.

For a start, the three test companies appear to be testing for their own unique batteries of disease markers, which immediately introduces a significant margin for error or at least differences between them. To be honest, I'm not even entirely certain that all their markers are valid. I don't know how they (meaning both the markers and the companies) are assessed, nor to what extent either of them can be trusted.

Secondly, the test results were reported relative to 'average' incidence rates for each disease, using different averages (presumably separate data sets, quite possibly means of samples from entirely different populations!). This style of metric reporting introduces the problem of 'anchoring bias': the average numbers prime customers to interpret their test results in a certain way, perhaps inappropriately.
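
To illustrate the anchoring problem with some purely invented numbers (these are not figures from any of the actual tests or companies), the very same risk estimate can read as slightly above average, below average or well above average depending on which baseline it is anchored against:

# Purely illustrative: invented numbers, not taken from any real DNA test report.
# The same lifetime-risk estimate reads quite differently depending on which
# 'average' baseline the report anchors it against.

customer_lifetime_risk = 0.12   # hypothetical estimate derived from the customer's markers

baselines = {
    "Company A (general population average)": 0.10,
    "Company B (same age/sex group average)": 0.15,
    "Company C (different reference population)": 0.08,
}

for label, average in baselines.items():
    relative = customer_lifetime_risk / average
    print(f"{label}: baseline {average:.0%}, "
          f"reported risk {customer_lifetime_risk:.0%} "
          f"({relative:.1f}x the average)")

Same customer, same markers, three quite different impressions - which is precisely the priming effect at issue.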

Thirdly, except in a few specific situations, our genes don't directly, indisputably cause particular diseases: most of those disease markers are correlated to some extent with a predisposition to the disease, rather than being directly causative. If I have a marker for heart disease, I may be more likely to suffer angina or a heart attack than if I lacked the marker, but just how much more likely is an open question, since it also depends on several other factors such as whether I smoke, over-eat or am generally unfit - and some of those factors, and more besides, are themselves genetically related. There are presumably genetic 'health markers' as well as 'disease markers', so someone carrying the former might be less prone to disease than their disease markers alone would suggest.
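
To put rough numbers on that (again purely illustrative - the relative risks below are assumptions, not real epidemiology), a marker can raise the relative risk noticeably while the absolute risk stays modest, and a single lifestyle factor can easily swamp the genetic contribution:

# Purely illustrative numbers - the relative risks below are assumptions,
# not real epidemiology. A marker correlated with heart disease raises the
# risk somewhat, but the absolute figure still depends heavily on
# non-genetic factors such as smoking and fitness.

baseline_risk = 0.04          # assumed 10-year risk with no marker, non-smoker
marker_relative_risk = 1.5    # assumed: marker carriers ~1.5x as likely
smoking_relative_risk = 2.0   # assumed: smoking roughly doubles the risk

print(f"No marker, non-smoker: {baseline_risk:.1%}")
print(f"Marker, non-smoker:    {baseline_risk * marker_relative_risk:.1%}")
print(f"No marker, smoker:     {baseline_risk * smoking_relative_risk:.1%}")
print(f"Marker and smoker:     {baseline_risk * marker_relative_risk * smoking_relative_risk:.1%}")

# 'More likely' is not the same as 'likely': here the marker shifts a 4% risk
# to 6%, while smoking alone takes it to 8% - and the simple multiplicative
# model used above is itself a simplification.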

A fourth factor, barely noted in the NY Times piece, concerns the way the results are reported. In a conventional clinical setting, diagnostic test results are interpreted by specialists who truly understand the tests, the natural variation between people, and the implications of the results given the context of the actual patient (particularly the presence/absence, nature and severity of other symptoms and contributory factors). The written lab test reports may highlight specific values that are considered outside the normal range, but what those numbers actually mean for the patient is left to the specialists to determine and explain. In cutting out the specialists, the off-the-shelf test kit companies are left giving their customers general advice, no doubt couched very carefully in terms that avoid any liability for mistakes. On top of that, they have a responsibility to avoid both over- and under-playing the risks - in other words, to strike a neutral, unbiased tone. In the doctor's surgery, the doc can respond to your reactions, give you a moment to let things sink in, and offer additional advice beyond the actual test results. That interaction is missing if you simply get a letter in the mail.

There's a fifth factor that isn't even mentioned in the report, namely that the samples and tests themselves vary somewhat. It's a shame the reporter didn't take and submit separate samples to the same labs (perhaps under pseudonyms) to test their repeatability and inherent quality.

The final comments in the NY Times are right on the mark. Instead of spending a couple of hundred dollars on these tests, buy a decent set of bathroom scales and assess the more significant health risks yourself! While I have a lot of respect for those who develop sophisticated information security risk models and systems, I'm inclined to say much the same thing. An experienced infosec or IT audit pro can often spot an organization's significant risk factors a mile off, without the painstaking risk analysis. 

2 comments:

  1. Couldn't agree more, but these particular tests are an easy target. Off-the-shelf DNA tests of the sort discussed in the NYT are particularly dodgy; however, practically all screening tests have limitations, are usually only appropriate for certain populations, and may be associated with serious harms. Yet screening tends to be widely and uncritically promoted in the media and by celebrities, with little attention paid to the risks and trade-offs inherent in screening decisions. Doctors may also be unreliable guides because they themselves don't adequately understand the tests, have conflicts of interest, or don't have the time to counsel patients properly. And groups like the USPSTF that try to provide balanced information have often been vilified in the media, by politicians, and by specialists with financial interests in screening.

    See for example:
    Woloshin, Steven, and Lisa M Schwartz. “The Benefits and Harms of Mammography Screening: Understanding the Trade-offs.” JAMA: The Journal of the American Medical Association 303, no. 2 (January 13, 2010): 164–165. doi:10.1001/jama.2009.2007.

    Also see other works on medical risk and risk communication by the same authors and their colleagues at Dartmouth:
    http://tdi.dartmouth.edu/initiatives/research/healthy-skepticism

  2. Also see “Our screening sacred cows”, which discusses a Guardian article from earlier this week:
    http://www.healthnewsreview.org/2014/01/our-screening-sacred-cows/

