My thesis is that the effectiveness of medical interventions is overestimated and the harms are underestimated; that the perception of medicine is rosier than the reality. The reasons are multifactorial, but an important contributor is bias in the scientific record: the 'literature'. Whether it comes from studies being withheld by drug companies, from conflicted authors producing skewed results, or simply from researchers so keen to show that something works that they don't even realise their mistakes, the bias runs in the same direction: towards overestimation of benefit and underestimation of harm.
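To make that direction-of-bias claim concrete, here is a toy simulation of my own (all numbers made up for illustration; it is not taken from any of the papers below). A treatment with zero true effect is tested in many small trials, but only the trials that happen to show a statistically significant benefit get 'published'. The published literature then shows a substantial effect where none exists:

    import random
    import statistics

    random.seed(1)

    TRUE_EFFECT = 0.0   # the treatment actually does nothing
    N_TRIALS = 1000
    SE = 0.5            # standard error of each small trial's estimate

    # Each trial's estimate is the true effect plus sampling noise.
    estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_TRIALS)]

    # Publication bias: only trials with a significant positive result
    # (z = estimate/SE > 1.96) make it into the literature.
    published = [e for e in estimates if e / SE > 1.96]

    print("Mean of all trials:       %+.3f" % statistics.mean(estimates))
    print("Mean of published trials: %+.3f (n=%d of %d)"
          % (statistics.mean(published), len(published), N_TRIALS))

Run it and the full set of trials averages out near the true effect of zero, while the 'published' subset averages around +1.2: a large apparent benefit produced entirely by selective publication.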
Given that I am always asking for the evidence, where is the evidence for this bias? Here on this page is where. I will generate a collection of classic articles on this topic and post the links on this page.
Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias (K Dwan et al, PLOS One 2008)
Empirical Evidence of Bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials (KF Schulz et al, JAMA, 1995)
Empirical Evidence of Design-Related Bias in Studies of Diagnostic Tests (JG Lijmer, JAMA, 1999)
Empirical Evidence of Bias in Treatment Effect Estimates in Controlled Trials with Different Interventions and Outcomes: Meta-epidemiological Study (L Wood, BMJ, 2008)
Influence of Reported Study Design Characteristics on Intervention Effect Estimates from Randomized, Controlled Trials (J Savovic et al, Ann Intern Med, 2012)
Effects of study precision and risk of bias in networks of interventions: a network meta-epidemiological study (Chaimani et al, Int J Epidemiology 2013. Less precise [smaller] studies tend to exaggerate the effects of the active or new intervention)
Risk of bias versus quality assessment of randomised controlled trials: cross sectional study (L Hartling et al, BMJ 2009. Studies with low risk of bias had more conservative estimates of effect)
Quantitative Analysis of Sponsorship Bias in Economic Studies of Antidepressants (CB Baker, Br J Psychiatry, 2003)
Drug Development: Raise standards for preclinical cancer research (CG Begley, Nature, 2012. Only 6 of 53 papers reporting new findings in cancer research were reproducible)
Empirical Evidence for Selective Reporting of Outcomes in Randomized Trials: Comparison of Protocols to Published Articles (A-W Chan et al, JAMA 2004)
Influence of trial sample size on treatment effect estimates: meta-epidemiological study (M Egger, BMJ, 2013). An overview of biases that might influence the findings of meta-analyses; it also covers good examples of general bias in the literature.
Single-Center Trials Show Larger Treatment Effects Than Multicenter Trials: Evidence From a Meta-epidemiologic Study (A Dechartres et al, Ann Intern Med 2011)
Comparative effect sizes in randomised trials from less developed and more developed countries: meta-epidemiological assessment (Panagiotou et al, BMJ 2013)
Stopping Randomized Trials Early for Benefit and Estimation of Treatment Effects: Systematic Review and Meta-regression Analysis (D Bassler et al, JAMA 2010)
Bias in meta-analysis detected by a simple, graphical test (M Egger et al, BMJ, 1997. Bias detected by funnel plot, and when meta-analyses are contradicted by single large studies; a minimal sketch of the funnel-plot regression follows this list)
Randomisation to protect against selection bias in healthcare trials (R Kunz et al, Cochrane, 2008). This Cochrane review provides evidence that non-randomised trials overestimate the effect of interventions compared to randomised trials, and that within randomised trials, those without allocation concealment (hiding the upcoming group assignment from those enrolling participants) also overestimate treatment effects.
Association between unreported outcomes and effect size estimates in Cochrane meta-analyses (Furukawa et al, JAMA 2007)
The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews (Kirkham et al, BMJ 2010)
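For readers unfamiliar with the funnel plot mentioned in the Egger 1997 entry above, here is a minimal sketch of the idea behind Egger's regression test. This is my own illustration with invented numbers, not code from the paper. We simulate a literature in which large trials are always published but small trials only appear when strongly positive, then regress each study's standardised effect on its precision; an intercept far from zero signals funnel-plot asymmetry:

    import random

    random.seed(2)
    TRUE_EFFECT = 0.2

    # Simulate a biased literature: big trials (small SE) are always
    # published; small trials (large SE) only when strongly positive.
    studies = []
    while len(studies) < 40:
        se = random.uniform(0.05, 0.5)
        est = random.gauss(TRUE_EFFECT, se)
        if se < 0.15 or est / se > 1.96:
            studies.append((est, se))

    # Egger's test: regress z = est/se on precision = 1/se (least squares).
    zs = [est / se for est, se in studies]
    prec = [1 / se for _, se in studies]
    mx = sum(prec) / len(prec)
    my = sum(zs) / len(zs)
    slope = (sum((x - mx) * (y - my) for x, y in zip(prec, zs))
             / sum((x - mx) ** 2 for x in prec))
    intercept = my - slope * mx

    print("Slope (approximates the effect size): %.2f" % slope)
    print("Intercept: %.2f (far from 0 suggests small-study bias)" % intercept)

In an unbiased literature the intercept sits near zero; here the selective publication of small positive trials pushes it well above zero, which is exactly the asymmetry a funnel plot makes visible.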
Des Spence cites “poor regulation” as one of the phenomena that compound the profit-driven pollution of Evidence-Based Medicine (EBM).[1] He is not alone. This conception seems to be gaining in popularity.[2,3]
Indeed, the current regulation is handmaiden to the polluters, as this partial list of examples indicates:
• The regulation does not demand that the research agenda be driven strictly by patient needs rather than by corporate interests.
• It is silent about the adequacy of selection criteria, outcome measures, and statistical significance, three variables that are often used by the polluters to manipulate evidence.
• It says nothing about what should count as scientific and unscientific research. This lacuna allows the latter to take place too, provided, of course, that it labels itself as "scientific".
• The regulation introduces exceptions to the head-to-head rule, exceptions that allow the polluters to test every new drug against placebo or no treatment, thereby showing us exactly what they want: efficacy, but not necessarily superiority over the current treatment.[4]
• It does not ban regulators, health care institutions and medical professionals from having financial conflicts of interest. Worse than that, "transparency", the only thing it insists on and quite feebly so, gives both the doctor and the patient nothing but the misleading impression that they can make a truly informed choice.
• The regulation does not ban subject recruitment through financial incentives, a practice capable of introducing outcome bias.
• It does not ban seeding trials, i.e., marketing exercises concealed as scientific research.
• It does not ban manipulative advertising to both doctor and patient inside or outside "scientific" journals.
• It does not ban medicalisation and “me too” drugs.
• It does not regard polluted information, whether it involves misconduct or not, as a sufficient condition for rendering disclosure inadequate. Thus, it lets informed consent degenerate into a legal fiction and the principle of autonomy into a cynical farce.[5]
• Worst of all, it is perfectly ethical: being the codified expression of the collective conscience of our medicine, it naturally purports to be moral.
In light of these examples we should ask ourselves: If the polluters of medical knowledge can tick the ethical box, then what does that say about our ethic?
http://www.bmj.com/content/348/bmj.g22/rr/680463
Wow, and thanks. On the positive side, I think that recognition of these system faults is increasing, particularly due to some of the authors you reference, such as Goldacre and Gøtzsche.
You will like this: http://www.mayoclinicproceedings.org/article/S0025-6196%2813%2900405-9/abstract
Excellent. Thank you very much. I think I saw this before but I have not read it in full. This will be useful for a future post, I think.
I thought I had seen it before. I covered it in a previous post on replication in medical research: http://doctorskeptic.blogspot.se/2014/09/the-replication-problem.html