Monday, 11 August 2014

Lessons from history #11: Extra- to Intra-cranial Bypass Surgery

This story is about a procedure that made sense, had supporting evidence, and became common practice, but was later discontinued because a high quality study showed it to be ineffective. The story of extracranial to intracranial bypass surgery ticks all the boxes: overestimation of benefit, seduction by the theory, unrecognised bias in studies, and just plain ineffectiveness despite our best efforts and beliefs.

The idea
The idea is appealing: if an intracranial (inside the skull) artery to the brain is clogged and causing strokes, then bringing in blood from a good extracranial (outside the skull) vessel to bypass the narrowing should prevent future problems. This extracranial to intracranial (ECIC) bypass was first performed in the 1960s and became an established procedure, without any randomised trial comparing the procedure to non-operative treatment.

The trial
The randomised trial (RCT) published in 1985 (here) showed that surgery did a little worse than non-operative treatment. The trial started in 1977 and was of a surprisingly high standard, particularly for that time. For example, it used concealed, central randomisation, a valid sample size calculation, clinically important outcomes, a large number of participants (for a surgical study), 100% follow-up, only about 2% crossover each way (again, good for a surgical trial), and long-term follow-up (average 5 years). And importantly, the surgical procedure worked: the surgical complication rate was low and the graft patency rate was 96%, so the poor results were not due to a failure to achieve the goal of bypass surgery.

The results
Fatal and non-fatal strokes occurred more frequently and earlier in the surgical group. The results were enough to reject the hypothesis that surgery improved the outcomes. The difference can be seen in the graph below.

[Graph: strokes over time in the surgical and non-operative groups]

They also played around with the analyses: adding and subtracting ineligible patients, looking at severe strokes only, the type of stroke, strokes per person, the size of the treating centre, the artery involved, and so on. In every analysis, there was no benefit to surgery. They even did an analysis excluding patients who had strokes prior to surgery (a best case scenario, biased to make surgery look good), and there was still no benefit.

So how do we explain the previous “successes”?
Simply, the previous studies were biased. They used surrogate endpoints, like cerebral blood flow, cerebral metabolism and EEG tracings, whereas the RCT used clinically important endpoints (stroke and death). The previous studies did not have an adequate (concurrent, randomised) control group; they used “historical” controls. They also relied on outcomes that fluctuate with the natural history of the disease, like transient ischaemic attacks (TIAs). For example, one previous (uncontrolled) study reported an 86% reduction in TIAs after surgery. The RCT showed a similarly high reduction of 77% in the surgical group, but the reduction was 80% in the non-operative group, which highlights the need for a control group.
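
To see why an uncontrolled, before-and-after study can show an impressive “benefit” for an ineffective treatment, here is a minimal simulation sketch in Python. Everything in it (the daily TIA rates, the tia_count helper, the patient numbers) is made up for illustration and is not data from the trial; it just models a condition in which TIAs settle down on their own.

    import random

    random.seed(1)

    def tia_count(daily_rate):
        # Number of TIAs over a year, one Bernoulli trial per day.
        return sum(random.random() < daily_rate for _ in range(365))

    # Made-up natural history: patients enrol during a bad patch and then
    # improve whether or not they are treated.
    RATE_AT_ENROLMENT = 0.02   # hypothetical TIAs per day at enrolment
    RATE_A_YEAR_LATER = 0.004  # hypothetical rate a year later, treated or not

    def mean_reduction(n_patients):
        reductions = []
        for _ in range(n_patients):
            before = tia_count(RATE_AT_ENROLMENT)
            after = tia_count(RATE_A_YEAR_LATER)
            if before > 0:
                reductions.append(1 - after / before)
        return sum(reductions) / len(reductions)

    # Both groups improve by roughly 80%, because the improvement is the
    # natural history of the condition, not an effect of the treatment.
    print(f"Apparent TIA reduction, 'treated' group:  {mean_reduction(1000):.0%}")
    print(f"Apparent TIA reduction, untreated group:  {mean_reduction(1000):.0%}")

An uncontrolled study would report the first number as the treatment effect; only a concurrent control group reveals that the “effect” is just the disease running its course.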

Basically, the randomised trial was a more objective, less biased test, and therefore more likely to provide an estimate of the effect that was closer to the true effect. The previous studies were more likely to show a positive effect, even when there wasn’t one, partly because that is what the researchers wanted and expected to find, and this influenced their interpretation of the data.

The bottom line

The ECIC bypass story is a great example of how much bias is out there in medical studies; bias that leads us to overestimate the effectiveness of treatments. But it is also a good example of how a more rigorous application of scientific principles leads us to a better estimate of the truth, and consequently to better treatment, while revealing biases that were previously unrecognised.

5 comments:

  1. Great again, thanks!

  2. Is it bias that the researchers (surgeons) are unaware of, or is it wilful misrepresentation: changing the endpoints halfway through, dropping some of the questions to be asked, and fudging the numbers/results to suit one's routine/belief/income? And what is the solution?

    Reply: Yes, good point. I raise the third alternative (1. publication bias, 2. methodological bias, and 3. fraudulent data), but I honestly believe that most of it is subconscious - a kind of confirmation bias where researchers believe something a priori, then only really see what they want to see - what fits their beliefs. Fraud I can deal with; this other kind of bias is much harder.

  3. I find the fraudulent version harder to deal with: first because of our inherent belief and trust in the profession (and the 'art'), and then because this version is harder to 'solve'. In the case of subconscious bias, simply involving an outsider not invested in the question/results can shift the balance and introduce a level of control that will challenge the fallacies being pursued. In the fraudulent version the aim is to hide the true result and make up new ones, which no doubt involves considerable clout and brain power and reveals a morally corrupted core - which I find a very confronting realisation (perhaps revealing myself to be more naive than one should be...). With such motivation the continuing effort will be to hide the true results no matter what and maintain the facade of 'research' and 'trials'. With every correcting measure there will be another clever solution to avoid being confronted with the undesirable facts/evidence, and the act of hiding will just get cleverer and more sophisticated.

    Reply: Thanks, all good points. Maybe I am naive? My estimate is that fraud is much less common than good old-fashioned bad science. And not only is it less common, it often gets found out.

