Sunday, 10 February 2013

Ethical double standards

Ethics committees (IRBs in the US) are now firmly entrenched in the research environment such that clinical research can only be performed with their approval. Clinical practice, however, is not subject to such approval, yet in many cases the risk of harm (individually and to society) from clinical practice is greater. Are researchers being held to a higher standard than clinicians? Has our concentration on ethical standards for clinical research led to an ethical blind spot for clinical practice?

Rightly, the ethics committee has considerable control over research. Their role is to minimise the risk of harm from clinical research. Examples of such harms exist, and these examples (including WWII atrocities) led to the formation of ethical standards for research. However, the committees only have control over what is submitted to them, and they tend not to concern themselves with clinical practice at their institution. For example, the follow up of treated patients (research) needs ethics approval, but clinicians can perform new techniques and implant new prostheses (practice) without ethics approval.

Drugs versus devices
The requirements for a drug to be approved prior to clinical use are that it be safe and at least equivalent to current treatments. Mistakes are still made, and the bias in the assessment and approval of new drugs is well documented (Ben Goldacre’s book Bad Pharma for a general overview, and the antidepressant story here). For implants and devices the requirements are lower (history here, documentation of the differences (for the US) here). Placebo trials are not necessary. Large scale equivalence trials are not necessary. Mostly, devices require theoretical and lab support to show that they perform as intended. For some procedures, like autologous stem cell injections (in Australia anyway), you don’t need anything: just a syringe and a centrifuge and you can open up for business. For techniques that do not involve devices or drugs, like new surgical techniques, you just need to try it out a few times, and there is no requirement for oversight or reporting.

Practice versus research
For research, however, the standards are different. For example, if you are doing a procedure that has not been subjected to a trial, or if there are practice variations with an intervention, ethics committee approval is not required to perform the procedure, but it is required to measure the outcomes (if any patient contact is required and publication is expected).

Shouldn’t it be the other way around?
Rather than researchers asking for ethics approval to follow up patients, shouldn’t it be the other way around? Shouldn’t those in charge of ethical standards be demanding that we measure our outcomes? To me, not measuring the outcomes is unethical. And shouldn’t those in charge of ethical standards for an institution cover all clinical activity, not just the research?

Ethics of research vs ethics of clinical practice
There is a difference between the ethics of research and the ethics of clinical practice, but it does not explain the double standard. In fact, the ethical standards for clinical practice are stricter than those for research, yet it is clinical practice that is not covered by ethics committees. For clinical practice, you have duty of care, confidentiality etc., but the guiding principle covering new or untested techniques is to “do no harm”. In other words, if you aren’t sure about the treatment, you shouldn’t be going there. The ethics for research says: if you aren’t sure about the treatment, find out: do the research to measure the outcomes. The ethics for research does not say to “do no harm”, but to “balance the (individual) harms against any (societal) benefits”.

The problem
What should happen is that clinicians should not be performing any treatments until they have been tested. What is happening is that clinicians are performing many treatments that have not been adequately tested, and researchers are being hampered from evaluating those treatments.

But wouldn’t raising the barriers to clinical practice delay the introduction of beneficial treatments?
Yes, it might. However, what often happens is the opposite: treatments become entrenched such that it then becomes “unethical” to do a controlled trial. This is how we end up with the current problems of overtreatment due to ineffective and harmful treatments. Examples of treatments that were/are common practice and were later shown to be ineffective abound in this blog (platelet rich plasma, vertebroplasty, knee arthroscopy for arthritis, spinal steroid injections, fusions for back pain, the ASR hip replacement). In fact, every surgical procedure that has been subjected to a placebo-controlled trial has failed to show a benefit (here), yet we are told that such surgical trials are unethical.

A hypothetical
What if we did a trial of back fusion surgery versus placebo for back pain, and the trial showed no difference (a real possibility)? How many people have been harmed, and how much money has been spent, in the last 50 years on that procedure alone? Would it not have been more ethical to have done trials on a few hundred patients first, if there was a possibility that we could have avoided performing millions of spine fusions in the future?

The bottom line
The ethical standards for research are higher than for clinical practice. We need to lower the ethical standards for clinical research (particularly for things like patient follow up and surveys), and raise the ethical standards for clinical practice. Otherwise we have the current situation whereby I can use the latest metal-on-metal hip replacement without any special consent or approval, but to conduct independent, objective follow up I need special consent and approval just to phone the patients afterwards. Telemarketers don’t even need that.

Addit 25 April 2014: This topic is nicely summarised in a section of the book Testing Treatments, here.


  1. Medicine by committee? I thought treatment is tailored to the individual patient, not the mathematical average.

    May I ask you a question: how long ago did you qualify in your speciality?

    1. Thanks for your comments Eugene,

      To do research, we need approval from a committee. To do a new operation or vary a technique, we don't (sometimes we are told we do, but nobody pays much attention to that and adherence is not policed). There have been many examples of sustained bad practice that could have been prevented with oversight. If you are my age, you remember Chelmsford.

      My point is that there is an ethical double standard. If properly qualified practitioners should be free to treat patients without constraint (and I am happy for you to make that argument), why can't we have the same rule for properly qualified researchers?

      Regarding treating the individual, I think this is often used to justify unproven treatments. We all treat individuals and to some extent, some tailoring of the treatment occurs. However, on what do we base our major treatment decisions (eg surgery versus non-op treatment)? It is based on probabilities. The probability of things like infection, complication, return of function, relapse, disease-free survival and mortality. These probabilities are based on statistical summaries of data from large groups. The mean survival from treatment A is better than that for treatment B. Therefore, all other things being equal treatment A is preferred.

      The argument against "mathematical averages" has been going for at least 200 years (read my post on Lessons from History). The use of statistics is a fundamental part of scientific practice. "Art" and individual differences in patients and practitioners exist, but they are often held up to override more rational decision making.

      When did I qualify? The same year as you for medicine, and about the same time as you for specialty training.

  2. I completely agree with you on this. If we're going to mess around with patients, let's keep track of what we're doing to them and be honest about the absence of safety and efficacy data.

    The latest trick, btw, is to use "retrospective chart review" to publish results of experimental clinical care. You push some experimental intervention as "my best advice" on your patients, then you acquire lots of data in charts, THEN you go to the IRB and say "I just happen to have all this interesting data. Can I publish what I see in the charts?" And the IRB says "eh, no harm will come from a chart review. Sure, publish it."

    Backdoor experimentation on completely unwitting patients. I'm seeing it all over the lit.

    1. Thanks, what you describe is a classic example of the lack of ethical safeguards over clinical practice. It seems that things only come under the ethical radar once somebody wants to publish something.

  3. Dr Skeptic, thanks for your reply. The reason I asked about your experience is that I am a little surprised that a practicing doctor wants more policies to cover his practices. The usefulness of that is debatable either way. I guess in some instances the stats are pretty straightforward. On the other hand, there are plenty of gray areas where research does not exist and is not possible to conduct.

    I think the statistics should serve as a guideline, not the law.

    I agree wholeheartedly that modern medicine overtreats patients. Hell, everyone is on a statin even though there is no evidence of their value in primary prevention! I agree with Taleb's notion that medicine should focus on high risk situations, where iatrogenics do not have significant impact, as opposed to trying to reduce the mortality from 1:100,000 to 1:101,000.

    1. Thanks Eugene,

      I think there are rules and there are rules. There are good rules and bad rules. My main aim was to highlight the difference between the control over research and the control over clinical practice. However, I do see a lot of aggressive practice and overtreatment and wonder how to reduce it. I would rather control this by educating the doctors, than by applying restrictive policies.