Saturday, 22 December 2012

How you die vs If you die


Which is more important: how you die, or if you die? Research in medicine often uses mortality (death) as an important outcome. Death from any cause (so-called ‘all-cause’ mortality) is easy to measure, is not subject to misclassification, and is the most important outcome for many conditions and treatments. Many researchers, however, favour ‘disease-specific’ mortality (counting only the deaths from the disease being studied) rather than all-cause mortality. The argument is that this measure is more sensitive to changes from treatments that specifically target that condition, as there is less ‘noise’ from deaths from other causes. For example, it makes sense to measure deaths from heart disease if you are testing a treatment for heart disease. However, the use of disease-specific mortality can be misleading: it is arguably less important, and it results in an overestimation of the benefits and an underestimation of the harms of many interventions.

The reviews

In this 2002 review of randomised trials of cancer screening, the researchers compared disease-specific mortality with all-cause mortality. In all 12 trials they reviewed, disease-specific mortality was reduced in the groups that were screened compared with those that were not. So cancer screening is great, right? Not quite. When they looked at all-cause mortality, the reduction in the screened group of each study was either smaller than the reduction in disease-specific mortality or absent, or all-cause mortality was actually increased in the screened group.

The Cochrane review on breast cancer screening used data from multiple studies of hundreds of thousands of patients and showed a significant reduction in breast cancer (disease-specific) mortality with screening. Yet this was not the case in the studies considered to be at lowest risk of bias (the better studies). This is because using disease-specific mortality is open to bias. Furthermore, the review reports that breast cancer screening makes no difference to all-cause mortality (relative risk = 0.99).

The results of prostate cancer screening are hotly debated, but those who support screening usually point to the reduction in disease-specific mortality. The Cochrane review notes a small (but not statistically significant) improvement in this outcome, but again, there is no difference in all-cause mortality (relative risk = 1.00). That finding is not reported in the summary, however.

When you read the summary of the Cochrane review on colorectal cancer screening, which includes trials of over 300,000 patients, it states that screening reduces disease-specific deaths. You have to read a lot further to find that the relative risk of dying from any cause (all-cause mortality) with screening is 1.00; that means all-cause mortality was exactly the same in the screened and unscreened groups.

Apart from cancer screening, studies of disease treatments tend to show the same thing: disease-specific mortality looks better, but all-cause mortality is either unchanged or not much different. Or worse. A review of those fancy, expensive (and now standard-of-care) drug-eluting cardiac stents, comparing them with the old-fashioned, cheap plain metal stents, found that there was no difference in cardiac mortality with the fancy stents. And even that was an overestimation of the benefit, because overall mortality was higher with the fancy ones.

The argument

The interesting thing about the lack of difference in all-cause mortality is that the 95% confidence interval (the interval in which we are 95% confident that the true value lies) is very narrow in these large reviews. For the colorectal example above, the interval is 0.99 – 1.01. This negates the argument that, because disease-specific deaths make up only a small proportion of total deaths, you would need enormous studies to detect a difference in all-cause mortality, and that all-cause mortality should therefore not be used as an outcome. The problem with that argument is that enormous studies have been done, and they consistently show that there is no difference in all-cause mortality and that a true difference is unlikely. It is not valid to assume that there is a difference and that the studies are simply not large enough - that is wishful thinking.
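
As a rough sketch of where these numbers come from (the counts below are invented, chosen only to be of a similar size to the colorectal review), this is how a relative risk and its 95% confidence interval can be calculated:

```python
import math

def relative_risk_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Relative risk of group A vs group B, with a Wald 95% CI on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)  # SE of ln(RR)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Invented round numbers of roughly the size of the colorectal review:
# ~150,000 people per arm, ~15,000 deaths from any cause in each arm.
rr, lower, upper = relative_risk_ci(15_000, 150_000, 15_000, 150_000)
print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")  # RR = 1.00, CI about 0.98 to 1.02
```

With arms this large, even a small true difference would push the interval away from 1.00, which is why the repeated finding of exactly 1.00 is so telling.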

The problem

The main problem is that measuring the cause of death is surprisingly inaccurate, whereas measuring death itself is not. If something is inaccurate, it is open to bias. Estimates of the proportion of death certificates that are wrong range from over one third to over one half, and this rate has not changed over time. Even the address of the patient is often wrong (here).
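
To see how inaccurate cause-of-death certification can manufacture a disease-specific benefit out of nothing, here is a minimal simulation (every probability in it is invented). Both arms have identical true death rates; the only difference is that disease deaths in the screened arm are more often certified as ‘other causes’:

```python
import random

random.seed(0)

def arm(n, p_death, p_disease, misclass):
    """One trial arm: count all deaths, and deaths *certified* as disease deaths.
    misclass = chance that a true disease death is certified as 'other cause'."""
    all_cause = certified_disease = 0
    for _ in range(n):
        if random.random() < p_death:            # the person dies
            all_cause += 1
            if random.random() < p_disease:      # truly a death from the disease
                if random.random() >= misclass:  # ...and certified as such
                    certified_disease += 1
    return all_cause, certified_disease

n = 100_000
# Identical true death rates in both arms: this screening saves nobody.
ctrl_all, ctrl_dis = arm(n, p_death=0.10, p_disease=0.05, misclass=0.0)
scrn_all, scrn_dis = arm(n, p_death=0.10, p_disease=0.05, misclass=0.3)

print(f"all-cause RR:        {scrn_all / ctrl_all:.2f}")  # ~1.00
print(f"disease-specific RR: {scrn_dis / ctrl_dis:.2f}")  # ~0.70, purely a certification artefact
```

All-cause mortality is untouched by the certification step, which is exactly why it is the harder outcome to bias.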

Another problem is that the intervention may be harming patients in other (unknown) ways. A hypothetical example: if your chest CT scans are causing cancers in a small proportion of patients, then any benefit from detecting lung cancers early (and reducing disease-specific mortality from lung cancer) may be offset by the increase in deaths from other (CT-induced) cancers. Less hypothetically, screening tests lead to more diagnoses, including false positives and cancers that were never going to kill the patient anyway. This overdiagnosis leads to overtreatment, such as radiotherapy, chemotherapy and surgery, all of which may increase overall mortality.
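
The offsetting can be reduced to trivial arithmetic. A sketch with invented round numbers (not taken from any study):

```python
# Invented round numbers, per 100,000 people screened:
disease_deaths_prevented = 100  # fewer deaths from the screened-for cancer
overtreatment_deaths     = 60   # deaths caused by surgery, radiotherapy, chemotherapy
induced_cancer_deaths    = 40   # e.g. deaths from radiation-induced cancers

net_lives_saved = disease_deaths_prevented - overtreatment_deaths - induced_cancer_deaths
print(net_lives_saved)  # 0: disease-specific mortality falls, all-cause mortality does not move
```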

The bottom line

Disease-specific mortality is not as ‘specific’ as we would like it to be. It overestimates the benefit of interventions and is an outcome that is less accurate and less important than (overall) mortality. If the aim of an intervention is to prevent death, then death should be the outcome. If the intervention didn’t prevent any deaths, then I really don’t care how many disease-specific deaths were prevented, because for every one of them, there is a corresponding death that the process may have caused.

If you insist on measuring disease-specific mortality, do it accurately, and please combine it with all-cause mortality so that the benefit that you will inevitably show can be placed in perspective.

Addit: there is a good BMJ article (here) on this from 2011, part of a debate, but it is pay-only.

Addit 31 August 2014: This study is a great example, in which the drug (a beta blocker) reduced the rate of the primary outcome (cardiovascular events like heart attack and cardiovascular death) compared to placebo. However, overall you were more likely to die if you took the drug.

6 comments:

  1. That's quite interesting. My question would be: is there a difference in time to death? Or, alternatively, is there a prolongation of life if a disease is detected early, as an earlier diagnosis can theoretically lead to less spread/fewer ill effects of the disease?

    Replies
    1. These outcomes, in fact all outcomes, are measured at specific time points. Some of the studies referred to use multiple time points, like 2 years, 5 years, 10 years. Other studies have looked at rate of death (hazard ratios of survival curves). Either way, it is common to find that disease-specific survival/mortality/whatever is improved, but with little or no change in overall mortality/death rate/survival/whatever.

      The theory that you refer to sounds great. But like so many things that sound like a good idea in medicine, it ends up being a lot more complicated and having unintended consequences. There are so many biases in screening that I only had room to point out one of them. A good reference for screening biases is found here: https://onlinecourses.science.psu.edu/stat507/book/export/html/74
      Specifically look at "lead-time bias". Also try this link: http://www.cancer.gov/cancertopics/pdq/screening/overview/patient/page5#Keypoint15
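
      To make lead-time bias concrete, here is the arithmetic for a single hypothetical patient (all the years are invented):

      ```python
      # One patient's fixed natural history: the disease kills them at
      # year 10 no matter what anyone does.
      death_year   = 10
      dx_symptoms  = 5    # without screening: diagnosed when symptoms appear
      dx_screening = 2    # with screening: the same disease is found 3 years earlier

      print(death_year - dx_symptoms)    # 5 years of 'survival from diagnosis'
      print(death_year - dx_screening)   # 8 years of 'survival from diagnosis'
      # Survival-from-diagnosis jumps from 5 to 8 years, yet the patient
      # dies at exactly the same time: lead time, not benefit.
      ```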

  2. It's worth mentioning 'relative survival', a methodology commonly used in medical research that avoids the problems of trying to identify the cause of death.

    Basically, relative survival involves comparing survival in your group (e.g. cancer survivors) with the expected survival of the general population.

    Hope it helps.

    http://en.wikipedia.org/wiki/Relative_survival

    Replies
    1. Thanks, yes, relative survival or standardised mortality can make mortality rates more useful, but when you are performing an RCT (an experiment) you do not need to go beyond the direct comparison: the relative mortality is then simply a comparison of one group with the other.
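
      For what it's worth, the relative survival calculation itself is simple; a toy example with invented figures:

      ```python
      # Invented figures: 5-year survival observed in a cancer cohort, divided by
      # the expected 5-year survival of an age- and sex-matched general population.
      observed_5yr = 0.60   # 60% of the cohort alive at 5 years
      expected_5yr = 0.90   # 90% of the matched general population alive at 5 years

      relative_survival = observed_5yr / expected_5yr
      print(f"5-year relative survival: {relative_survival:.2f}")  # 0.67, no cause of death needed
      ```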

  3. Dear Dr. Skeptic,

    Congratulations on the blog.

    Regarding this post: if we consider that in an RCT the only difference between the two groups is the treatment, it seems pretty obvious that, if we find "all-cause mortality" increased in the intervention patients, this is due to the intervention itself. Of course, one may argue that we also have to take into account the "unknown unknowns" that affect the randomization, but that argument, if taken seriously, would invalidate RCTs as a valid tool.

    Kind regards,

    Marcelo Derbli Schafranski

    Replies
    1. Thanks. Yes, there are always "unknown unknowns" in trials of any type. We can't take them into account, otherwise they would be known.

      This is why it is helpful for findings to be replicated in other studies, in different populations, with other (unknown) differences. This shows that the effect is consistent, more likely to be true, and generalisable.

