Trial by Error, Continued: Is PACE a Case of Research Misconduct?
24 JUNE 2017
by David Tuller, DrPH

[June 25, 2017: The last section of this post, about the PLoS One study, has been revised and corrected.]

I have tip-toed around the question of research misconduct since I started my PACE investigation. In my long Virology Blog series in October 2015, I decided to document the trial’s extensive list of flaws—or as many as I could fit into 15,000 words, which wasn’t all of them—without arguing that this constituted research misconduct. My goal was simply to make the strongest possible case that this was very bad science and that the evidence did not support the claims that cognitive behavior therapy and graded exercise therapy were effective treatments for the illness.

Since then, I have referred to PACE as “utter nonsense,” “complete bullshit,” “a piece of crap,” and “this f**king trial.” My colleague and the host of Virology Blog, Professor Racaniello, has called it a “sham.” Indeed, subsequent events have only strengthened the argument against PACE, despite the unconvincing attempts of the investigators and Sir Simon Wessely to counter what they most likely view as my disrespectful and “vexatious” behavior.

Virology Blog’s open letters to The Lancet and Psychological Medicine have demonstrated that well-regarded experts from the U.S., U.K. and many other countries find the methodological lapses in PACE to be such egregious violations of standard scientific practice that the reported results cannot be taken seriously. In the last few months, more than a dozen peer-reviewed commentaries in the Journal of Health Psychology, a respected U.K.-based academic publication, have further highlighted the international dismay at the study’s self-evident and indisputable lapses in judgement, logic and common sense.

And here’s a key piece of evidence that the trial has lost all credibility among those outside the CBT/GET ideological brigades: The U.S. Centers for Disease Control still recommends the therapies but now insists that they are only “generic” management strategies for the disease. In fact, the agency explicitly denies that the recommendations are related to PACE. As far as I can tell, since last year the agency no longer cites the PACE trial as evidence anywhere on its current pages devoted to the illness. (If there is a reference tucked away in there somewhere, I’m sure a sharp-eyed sleuth will soon let me know.)

It must be said that the CDC’s history with this illness is awful—another “bad science” saga that I documented on Virology Blog in 2011. In past years, the agency cited PACE prominently and has collaborated closely with British members of the biopsychosocial school of thought. So it is ridiculous and—let’s be frank—blatantly dishonest for U.S. public health officials to now insist that the PACE-branded treatments they recommend have nothing to do with PACE and are simply “generic” management strategies. Nevertheless, it is significant that the agency has decided to “disappear” PACE from its site, presumably in response to the widespread condemnation of the trial.

[...]

Given the logical impossibility of meeting an outcome threshold at baseline, it is understandable why the authors made no mention of the fact that so many participants were simultaneously found to be “disabled” and “within normal range”/“recovered” for physical function. Any paper on breast cancer or multiple sclerosis or any other illness recognized as a medical disease would clearly have been rejected if it featured such an anomaly.

[...]

Another patient-researcher, Tom Kindlon, pointed out in a subsequent comment that the investigators themselves chose the alternative assumptions, which they were now dismissing as unfair to caregivers. “If it’s ‘controversial’ now to value informal care at zero value, it was similarly ‘controversial’ when they decided before the data was looked at, to analyse the data in this way,” wrote Kindlon. “There is not much point in publishing a statistical plan if inconvenient results are not reported on and/or findings for them misrepresented.”

Whatever their reasons, the PACE investigators’ inclusion in the paper of the apparently false statement about the sensitivity analyses represents a serious lapse in professional ethics and judgement. So does the unwillingness to correct the paper itself, given the exchanges in the comments. Does this constitute “misrepresentation of data” within the context of the MRC/RCUK definition of research misconduct?

[...]

Next post: The Lancet’s awful new GET trial

*Explanation for the changes: In the original version, I should have made clear that my concerns involved an analysis of what the investigators called cost-effectiveness from the societal perspective, which included not only the direct health-care costs but other considerations as well, including the costs of informal care. I also mistakenly wrote that the paper only presented the results under the assumption that informal care was valued at the cost of a home-care worker. In fact, for unexplained reasons, the paper’s main analysis was based on none of the three assumptions mentioned in the statistical analysis plan but on a fourth assumption, based on the national mean wage.

In addition, I mistakenly assumed, based on the statistical analysis plan, that the sensitivity analyses conducted to assess the impact of different approaches included both the minimum-wage and zero-cost assumptions. In fact, the sensitivity analyses cited in the paper focused on the assumptions that informal care was valued at the cost of a home-care worker and at the minimum wage. The zero-cost assumption also promised in the protocol was not included at all. I apologize to Professor McCrone and his colleagues for the errors and am happy to correct them.

However, this does not change the fact that Professor McCrone’s subsequent comments contradicted the paper’s claim that, per the sensitivity analyses, changes in how informal care was valued “did not make a substantial difference to the results” and that the findings were “robust” for the alternative assumptions. This apparently false claim in the paper itself still needs to be explained or corrected. The paper also does not explain why the investigators included the zero-cost assumption in the detailed statistical analysis plan and then decided to drop it entirely in the paper itself.

[...]