
Friday, 12 October 2012

The Uncertainty Principle: from Heisenberg to Hawthorne


I know that Heisenberg’s Uncertainty Principle refers specifically to physics (in that you cannot simultaneously measure both the momentum and the position of an electron),1 and I know that its interpretation has been generalised to the point where some take it to mean that nothing is certain, but at the crux of the Uncertainty Principle is the concept that you change things by measuring them. Specifically, that you will change the very thing that you are trying to measure, simply by measuring it. You have to admit, that’s pretty cool. The Uncertainty Principle can be fun in popular culture,2 but in medical research it causes problems.

In medicine it is often referred to as the Hawthorne Effect (or the Observer Effect), which (again) refers to changing an outcome (or clinical measurement) by measuring it. It explains much of the perceived effectiveness of our treatments, particularly in before-and-after studies, and you guessed it: it tends to result in an overestimation of the benefit of our treatments.

Hawthorne was the name of a Western Electric factory in the USA. From the 1920s, researchers looked at the effects of changing various aspects of the production line (lighting, work breaks, etc.) on production. For example, they shortened the work day, and production increased. Then they restored the normal work day, and production went up again. In fact, most of the things they tried increased production. Much of the research that has come out of this has focussed on why this occurs; why we seem to get good results when we do something and measure the result. The workers probably wanted to help the researchers. Or keep their jobs. Either way, the effect is not isolated to the Hawthorne plant.

Trial bias is a form of the Hawthorne Effect. When you set up a clinical trial to test something, you will plan to measure some kind of outcome. If you tell patients that you are doing a study to see how well a new drug decreases pain, you have given them an expectation of decreased pain, and they are likely to feel better and report less pain. You just changed the outcome by measuring it. The effect also shows up as observer-expectancy bias and volunteer bias. In the latter, the kinds of people who volunteer for trials tend to be more helpful and healthier than average, which can bias the results towards your treatment.

The Hawthorne Effect is a particular problem in before-and-after studies. Like the Hawthorne experiments, these studies nearly always show a benefit after you introduce a new way of doing things. You simply collect the data (usually retrospectively) from the period before your intervention (when no one knew that a study was going to be done), then introduce the intervention and closely monitor the situation. If everybody knows that you are trying to improve adherence to a protocol (for example), they will make sure that everyone adheres to the protocol and presto: you have a great result.

The few studies that have looked at what happens after a before-and-after study have shown that the improvement often fades.
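
To see how this can play out, here is a toy simulation of a before-and-after study of protocol adherence. All of the numbers are invented for illustration: a 70% baseline, a small real effect of the intervention, and a temporary ‘being watched’ boost that decays once the monitoring stops:

```python
import random

random.seed(42)

# Toy before-and-after study of protocol adherence.
# Invented numbers: 70% baseline adherence, a small real effect of
# the intervention (+2 points), and a 15-point "being watched"
# (Hawthorne) boost that decays once the monitoring stops.

def weekly_adherence(true_effect, hawthorne_boost, n_patients=100):
    """Fraction of patients treated per protocol in one week."""
    p = min(1.0, 0.70 + true_effect + hawthorne_boost)
    return sum(random.random() < p for _ in range(n_patients)) / n_patients

before = [weekly_adherence(0.00, 0.00) for _ in range(12)]             # retrospective baseline
during = [weekly_adherence(0.02, 0.15) for _ in range(12)]             # intervention, closely monitored
after = [weekly_adherence(0.02, 0.15 * 0.5 ** w) for w in range(12)]   # monitoring stops, boost fades

for label, data in [("before", before), ("during study", during), ("after study", after)]:
    print(f"{label:>12}: {sum(data) / len(data):.0%} adherence")
```

The ‘improvement’ during the study is mostly the boost, not the intervention, which is why it fades when nobody is watching any more.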

Just for fun, I searched the medical literature for before-and-after studies (yes, that is my idea of fun). I found 18,156 articles with “before and after” in the title. To make the search manageable, I limited it to “Clinical trials”, “Human” and “Last year”, which brought it down to 113 papers, but many of those were irrelevant.3 After excluding them, I was left with 64 studies, of which 54 (84%) were positive. Here is an example of a positive before-and-after study, just so you know what I am talking about.

But half of the ‘negative’ ones were looking for harms (or complications), and showed that the particular intervention being studied was NOT harmful (here, here, here, here, and here), so even they favoured the intervention. That makes 92% supportive of the intervention. Of the 5 remaining, 4 showed either no difference or an improvement that was not statistically significant.
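
For the record, the tally works out like this (a trivial check using only the counts reported above):

```python
total = 64
positive = 54    # showed a benefit from the intervention
no_harm = 5      # 'negative' studies that merely showed no harm, i.e. still favourable

print(f"positive: {positive / total:.0%}")                                # 84%
print(f"favouring the intervention: {(positive + no_harm) / total:.0%}")  # 92%
```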

This was the only study that showed that anything got worse after treatment. The risk of sustaining a hip fracture was slightly raised in the year after having a knee replacement. But don’t worry, the risk went back to normal after the first year.

I’m speculating, but if 90% of everything we tried really worked, we would be making progress in leaps and bounds (we’re not, by the way).

The bottom line
The Hawthorne Effect is just one more bias within medical research that leads to overestimation of treatment benefits, thereby providing another explanation for the difference between the perceived effect and the specific (real) effect of a treatment. The Hawthorne Effect is the manifestation of our, and our patients’, enthusiasm to see the treatment work.4


1. I like the idea that if you know the location of something (at a particular point in time), you can’t tell where it is going, and if you know where it is going (it has to cover a distance over time), you can’t pin down its location.

2. The uncertainty principle often pops up in popular culture. Professor Farnsworth from Futurama (my role model) complained when his horse lost a race in a ‘quantum finish’: “No fair. You changed the outcome by measuring it!”

3. Some, for example, just tested blood levels of something before and after a kidney transplant, some compared the results of giving a drug before or after a procedure, some were just letters, and some had the words ‘before’, ‘and’ and ‘after’ in the title, but not next to each other. Seven studies were listed twice. I also excluded studies if I had no idea what the hell they were talking about (like this one). If a study reported multiple outcomes, I counted it as positive if at least one of the outcomes was positive (well, that’s what the authors did!).

4. Another great line from Futurama: when told that her boyfriend was not cheating on her, the girl said, “Oh I would dearly love to believe that were true … so I do!”

2 comments:

  1. The placebo effect and the Hawthorne effect have become well-known concepts, but they are mostly still misunderstood. Regression to the mean explains many outcomes that are classified as those two effects, and to my untrained eye, it seems that it would also explain the ephemeral nature of the improvements in the intervention experiments that you cite.

    Replies
    1. Thanks. Regression to the mean does not explain differences due to observation alone, in the absence of sampling bias; the explanation in many cases is psychological. On the surface, these effects are interpreted as a result of the intervention, which is why it is so important to account for them when determining the true effect of medical interventions.
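
      To make the distinction concrete, here is a minimal simulation (with invented numbers) of regression to the mean acting alone: patients are enrolled on the basis of a single bad pain score and then remeasured, with no treatment and no observation effect, and the group mean still improves:

      ```python
      import random

      random.seed(7)

      # Regression to the mean in isolation: each patient has a stable
      # true pain level plus day-to-day noise. Enrol only those who
      # score badly on one screening measurement, then remeasure with
      # NO treatment and NO observation effect.

      def measure(true_level):
          return true_level + random.gauss(0, 2)   # one noisy measurement

      true_levels = [random.gauss(5, 1) for _ in range(10_000)]   # true pain (0-10 scale)
      screened = [(t, measure(t)) for t in true_levels]
      enrolled = [(t, s) for t, s in screened if s >= 8]          # selected on one bad reading

      baseline = sum(s for _, s in enrolled) / len(enrolled)
      followup = sum(measure(t) for t, _ in enrolled) / len(enrolled)
      print(f"enrolled: {len(enrolled)}, baseline pain: {baseline:.1f}, follow-up pain: {followup:.1f}")
      ```

      The Hawthorne Effect is something different again: it changes behaviour and reporting in everyone being observed, not just in those selected on an extreme measurement.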

