On September 9, 2016, Queen Mary University of London released data from the PACE trial in compliance with a First-tier Tribunal decision on a Freedom of Information request by ME patient Alem Matthees. The day before, the PACE authors had released (without fanfare) their own reanalysis of the data using their original protocol methods. Today, Matthees and four colleagues published their analysis of the recovery data obtained from QMUL on Dr. Vincent Racaniello’s Virology Blog. These two sets of data reanalysis blow the lid off the PACE trial claims.
The bottom line? The PACE trial authors’ claims that CBT and GET are effective treatments for ME/CFS were grossly exaggerated.
First, take a look at what the PACE authors’ own reanalysis showed. When they calculated improvement rates using their original protocol, the rates of improvement dropped dramatically.
As shown in the above graph by Simon McGrath, the Lancet paper claimed that 60% of patients receiving CBT or GET improved. But the reanalysis using the original protocol showed that only 20% of those patients improved, compared to 10% who received neither therapy. In other words, half of the people who benefited from CBT or GET would likely have improved anyway. Remember, the PACE authors made changes to the protocol after they began collecting data in this unblinded trial. Those changes, used in the Lancet paper, inflated the reported improvement by three-fold.
One would think that the PACE authors would be at least slightly embarrassed by this, but instead they continue to insist:
All three of these outcomes are very similar to those reported in the main PACE results paper (White et al., 2011); physical functioning and fatigue improved significantly more with CBT and GET when compared to APT [pacing] and SMC [standard medical care].
Sure, twice as many people improved with CBT and GET compared to standard medical care. But 80% of the trial participants DID NOT IMPROVE. How can a treatment that fails with 80% of the participants be considered a success?
Not only that, but the changes in the protocol were like a magic wand, creating the impression of huge gains in function: 60% improved! The true results, however, are close to a failure of the treatment trial.
Today’s publication on Dr. Racaniello’s blog presents the analysis of the recovery outcome data obtained by Alem Matthees. Once again, the mid-stream changes to the study protocol grossly inflated the PACE results.
As the graph from the Matthees paper shows, the PACE authors claimed more than 20% of subjects recovered with CBT and GET. Using the original protocol, however, those recovery rates drop by more than three-fold. Furthermore, there is no statistically significant difference between those who received CBT or GET and those who received standard care or pacing instruction. In other words, the differences among the groups could have easily been the result of chance rather than the result of the therapy delivered.
Matthees, et al. conclude, “It is clear from these results that the changes made to the protocol were not minor or insignificant, as they have produced major differences that warrant further consideration.” In contrast, long-time CBT advocate Dr. Simon Wessely told Julie Rehmeyer that his view of the overall reanalysis was, “OK folks, nothing to see here, move along please.”
Taken together, the reanalyses of the improvement and recovery data show that the changes in the protocol resulted in grossly inflated rates of improvement and recovery. Let me state that again, for clarity: the PACE authors changed their definitions of improvement and recovery and then published the resulting three-fold or greater rates of improvement and recovery without ever reporting or acknowledging the results under the original protocol, until now. Furthermore, the PACE authors resisted all efforts by outside individuals to obtain the data, spending £250,000 to oppose Matthees’s request alone.
Tuller’s detailed examination of the PACE trial and these new data analyses raise a number of questions about why these changes were made to the protocol:
- Were the PACE authors influenced by their relationships with insurance companies?
- Did they make the protocol changes after realizing that the FINE trial had basically failed using its original protocol?
- Why did they change their methods in the middle of the trial? (Matthees, et al. note that changing study endpoints is rarely acceptable)
- Were they influenced by the fact that the National Health Service expressed support for their treatments before the trial was even completed?
- Since data collection was well underway when the changes were made, and because PACE was an unblinded trial, did the PACE authors have an idea of the outcome trends when they decided to make the changes?
- Was their cognitive bias so great that it interfered with decisions about the protocol?
- Did the PACE authors analyze the data using the original protocol at any point? If so, when? How long did they withhold that analysis?
The grossly exaggerated results of the PACE trial were accepted without question by agencies such as the Centers for Disease Control and institutions such as the Mayo Clinic. The Lancet and other journals persist in justifying their editorial processes that approved publication of these grossly exaggerated results.
The voices of patients have been almost universally ignored and actively dismissed by the PACE authors and by journals. We knew the PACE results were too good to be true. A number of patients worked to uncover the problems and bring them to the attention of scientists. Their efforts went on for years, and finally gained traction with a broader audience after Tuller and Racaniello put PACE under the microscope.
For five years, the claim that CBT and GET are effective therapies for ME/CFS has been trumpeted in the media and in scientific circles. Medical education has been based on that claim. Policy decisions at CDC and other agencies have been based on that claim. Popular views of this disease and those who suffer with it have been shaped by that claim.
But this claim evaporates when the PACE authors’ original protocol is used. Eighty percent of trial participants did not improve. Not only that, but we do not have any data on how many people in that group of 80% were harmed or got worse. CBT and GET may not be harmless therapies worth trying on the chance that you fall into the lucky 20% who improved, whether spontaneously or because of the treatment. We don’t know how many people got worse with these therapies, so we cannot assess the risks.
The end result is this: the PACE authors made changes to their protocol after data collection had begun, and published the inflated results. But when the original protocol is applied to the data, CBT and GET did not help the vast majority of participants. The PACE trial is unreliable and should not be used to justify the prescription of CBT and GET for ME patients.
As Matthees, et al., stated in their paper:
The PACE trial provides a good example of the problems that can occur when investigators are allowed to substantially deviate from the trial protocol without adequate justification or scrutiny. We therefore propose that a thorough, transparent, and independent re-analysis be conducted to provide greater clarity about the PACE trial results. Pending a comprehensive review or audit of trial data, it seems prudent that the published trial results should be treated as potentially unsound, as well as the medical texts, review articles, and public policies based on those results.