Sunday 17 June 2012

Doctors doctoring the research: fraud and error


I have been reading about publication retractions. A retraction is scientific-speak for “Whoops”, which can mean either “Whoops, I made a mistake” (error) or “Whoops, you caught me” (fraud). It is sometimes hard to tell which is which. Either way, it is another example of published research that is wrong, and it looks like there is little we can do to stop it.

How big is the problem? Why does it matter? Why does it happen? And how can we stop it?

How big is the problem?

The extent of the problem on an individual level can be seen on the Retraction Watch blog, but is best illustrated by the case of Dr Fujii, an anaesthetist from Japan who currently holds the record for the number of articles retracted (nearly 200), which is more than I have ever had published. But as he has not admitted any wrongdoing, we don’t know whether he is fraudulent, or whether he is a doctor who makes a LOT of mistakes. I am not sure which is worse.

A recent New York Times article covers the rise of fraud and retractions, and this chart by Neil Saunders is also very interesting. It seems that despite increased standards and information-gathering power, the number of retractions is growing much faster than the number of publications (here).

For those interested in individual cases, some notable examples are: Anil Potti, Andrew Wakefield, Hwang Woo-suk, John Sudbo, Dipak Das, and Werner Bezwoda. Quackwatch is also a useful website for this topic.

The fact that fraud is usually only discovered when somebody bothers to look for it makes me wonder how much fraudulent research is out there. Data are not routinely checked, so editors and reviewers have to take the numbers at face value; they have to trust the researchers to do the right thing. We really have no idea of the true extent of fraud or error in research, because there is no reliable way of quality-checking all published articles.

Why does it matter?

The presence of fraud is a problem for those of us who rely on an analysis of the validity (internal logic) of studies as a basis for clinical practice. Such critical appraisal does not take into account the possibility that the data were fabricated (or merely tweaked, because fraud is a spectrum) in the first place. The problem has been discussed in an article in Australian Prescriber, which points out that our appraisals of the validity of studies do not reflect the accuracy or soundness of the underlying data. We need to look beyond the study, into the context in which that study arose (including conflicts of interest).

On a bigger scale, scientific fraud can have significant ramifications for the advancement of knowledge. It can slow the eventual discovery of truths, and it can lead to harm when ineffective or dangerous treatments are given to patients. Perhaps the best example is the thousands of women with severe breast cancer who underwent high-dose chemotherapy and bone marrow transplantation in the 1990s based on fraudulent research (click here). Another example is the drop in vaccination rates associated with the Andrew Wakefield fraud.

Basically, fraud matters not because it decreases our faith in science (science doesn’t need faith). It matters because it leads us away from the truth, and seeking the truth is the whole reason we do scientific research in the first place.

Why does it happen?

Remember that we are talking about two things here: misconduct (fraud) and error. The distinction between the two is clear on paper, but it gets very blurry in real life. It is rare for all the data in a research paper to be fabricated; that would be difficult to do in a big institution, on a paper involving grant money, ethics committee oversight and several other authors. More commonly, the fraud happens within a genuine study: data that do not fit the expected results might be put aside, the statistical methods, outcome measures and time periods chosen might be the ones that give the desired results, patients might be selectively followed up, and so on. That is certainly the case in many examples where the authors have not admitted fraud. Many of these authors still believe that they have not committed fraud, and many have been cleared of fraud.

Clearly, some researchers have deliberately fabricated or manipulated data for personal gain, and it is these examples that hit the headlines. But what about an enthusiastic inventor of a technique who believes that it works, and who may have a financial interest? I think he will unconsciously influence the methods of the research (things like patient selection, which outcomes to include, how to measure them, blinding, adjusting patient expectations, and so on) so that we end up with results that deviate from the truth in a way that is not only wrong but biased in favour of the intervention. Whether you call this misconduct or error, the result is the same.

How can we stop it?

What can be done to prevent or detect fraud? Open publishing (perhaps online) of all research data may help, but only if the data are scrutinised. A ‘science police’ has been recommended to audit researchers at random, just as the tax office conducts random audits of taxpayers. Organisations like the Office of Research Integrity are helping, but they remain powerless to stop a researcher from submitting data that are wrong. Signing declarations at the time of manuscript submission is now common practice, but there is little evidence that it has reduced fraud.

What about peer review? Peer review is the sacred cow of the scientific community, supposedly guarding us against bad publications. However, in a study in which reviewers were given manuscripts with deliberately inserted errors, the reviewers picked up fewer than a third of the errors. Richard Horton (editor of the Lancet) is of the opinion that “the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong”. I review for several journals, and regularly have my own research reviewed, and the best I can say for the system is that it is far from perfect, and certainly cannot act as an effective barrier to fraud or error.

So what can you do? Be sceptical of favourable results, particularly when they sound too good to be true, and wait for the findings to be reproduced by independent research from another institution, preferably without financial conflicts.
