A shorter version of this article was published on STATNews on January 10, 2019.
One message dominates NIH’s talk about ME/CFS research: submit more high quality grant applications. Funding would increase if there were more high quality grant applications. Give us a chance to prove we’re serious, but send more high quality grant applications. Thank you for your petition with 50,000 signatures, but send more high quality grant applications.
In a field desperate for research funding, one might think that ME/CFS researchers would be flooding NIH with grant applications. Yet that does not seem to be happening. One significant reason is that NIH’s business-as-usual approach actually raises the barriers to success for ME/CFS grant applications.
A researcher who wants NIH funding for ME/CFS research must navigate an obstacle course that begins long before the grant application reaches reviewers, one that arises from NIH’s own broken response to ME. There are at least six questions a researcher must consider in deciding whether to apply for funding:
- Does NIH want my grant? A researcher may decide the answer is no, especially if she wants to generate hypotheses or has been discouraged by NIH’s lack of interest.
- Is there NIH funding for my grant? Given that NIH currently has no Funding Opportunity Announcements targeted at ME/CFS, researchers could very well conclude the answer is no.
- Can I write this grant? On top of the time pressure and institutional challenges that all researchers face, ME/CFS researchers may face additional barriers such as lack of support from institutions, lack of mentors, and the general stigma associated with this disease.
- Who will review my grant? Based on the SEP rosters for the last eighteen years, researchers should expect that some reviewers will be ME/CFS experts but they may not make up the majority.
- Will the SEP members review my grant fairly? Given the unique challenges of the field that are not recognized by non-experts, applicants may conclude the answer is no.
- Who is my competition? There is no set answer to this question. Depending on timing, the competition could be fellow ME/CFS researchers or a much larger and harder to define pool.
Let’s follow a hypothetical researcher as she runs this gauntlet to submit her ME/CFS proposal to NIH.
Does NIH want my grant?
Before the first word is typed on a grant application page, a researcher asks herself whether NIH will be interested in her project. There are multiple reasons why she may conclude that the answer is no.
First, NIH does not fund hypothesis-generating research. A proposal that boils down to “I’m going to look and see what I can see” is not going to succeed. Yet ME/CFS research needs these projects at this stage. This field has not just been ignored; it has been suppressed by decades of stigma and the false narrative that it is caused by deconditioning and depression. Ironically, NIH has tacitly admitted that hypothesis-generating research is needed. The Clinical Center study run by Dr. Avindra Nath is designed to collect reams of data that will then generate hypotheses for further research.
Second, a researcher may be deterred by NIH’s demonstrable lack of interest, as evidenced by low ME/CFS funding. NIH currently has no Funding Opportunity Announcements targeted at ME/CFS (see below). In addition, NIH funding has been appallingly low over time, including a 17% decrease in funding last year, so a researcher may conclude that NIH simply isn’t interested in ME/CFS projects.
Third, if the researcher wants to conduct a clinical trial of a drug treatment, she will have trouble at NIH. Dr. Nancy Klimas has tried, but she said, “There is no door to walk through at the NIH” for clinical trial funding in ME/CFS. Klimas attributed this to the fact that the ME/CFS Special Emphasis Panel (see below) does not review clinical trial applications.
Finally, a researcher may be individually discouraged from applying. I have heard multiple stories along these lines, although people are understandably reluctant to go on the record. Dr. Ron Davis went public with the rejection of two of his pre-proposals in 2015. One of the reasons given by the National Institute of Neurological Disorders and Stroke was that, “It was not clear if the proposal falls within the mission of NINDS.”
Is there NIH funding for my grant?
A researcher might decide NIH is interested in her project, but she also has to ask if there is funding available for it. To answer that question, she will look at two general types of Funding Opportunity Announcements.
First, she will look for a Program Announcement, or PA. PAs are like open house invitations. NIH says, We’re interested in seeing grant proposals in such-and-such area of research. The invitation is really important, because it tells a researcher where the open house is and what time it is happening. There is no guarantee that there will be enough food and booze to go around, but the researcher knows that if she shows up at the specified place and time then she can try to fight her way to the buffet. However, NIH’s last open house invitation for ME/CFS research was issued in 2012 and it expired in 2015. There is nothing whatsoever targeted at ME/CFS at this time.
Incidentally, on the last NIH telebriefing, Dr. Vicky Whittemore said that NIH would no longer be issuing Program Announcements. However, when I followed up with her after the call she said that her comments were premature. Apparently NIH is contemplating moving away from PAs but no announcement has been made yet.
The second type of Funding Opportunity Announcement is the Request for Applications, or RFA. Unlike a general invitation, an RFA is a specific type of funding competition. NIH says, We have X dollars set aside and we want to spend that on this specific type of research. This is more like a competitive swim meet than an open house. You have to qualify for the swim meet in order to compete, but someone is definitely going home with a gold medal. If an ME/CFS researcher is doing the kind of research the RFA wants, then she knows her application has a shot at the set aside funding.
This is why ME/CFS advocates and researchers are constantly asking for RFAs instead of PAs: someone is going to get money out of an RFA competition. This is also why NIH is very reluctant to issue RFAs: it requires NIH to decide in advance how much money it will invest and then set that money aside for the competition.
Just to be clear, NIH issues plenty of RFAs across its full research portfolio, including forty-three RFAs in October 2018 alone. NIH can do this. But NIH has been clear that it has no intention of issuing an RFA in ME/CFS research any time soon.
It’s easy to understand why a researcher might give up on applying to NIH, given the picture I’ve painted thus far. However, let’s assume that our hypothetical researcher has concluded that NIH wants her grant. And despite the fact that there is no funding opportunity targeted at ME/CFS, our researcher has concluded that there is a chance that NIH might fund her grant. Next she has to ask:
Can I write this grant?
Generally speaking, scientific researchers at academic institutions are responsible for obtaining their own funding. Universities do not fund much research themselves. Researchers know they have to write successful grant applications to get funding, and it is essential to their careers to do so.
Yet it is not that simple. An NIH grant application can take several months to write, and that is after months of planning time. The typical NIH application might be 30 pages long, but the applications for the Collaborative Research Center RFA were hundreds of pages long. The more complex the project and the more collaborators involved, the more difficult and time consuming it is to write the application. Submitting successful applications is part of the job description, but so is conducting current research, teaching a full course load, supervising graduate students, and successfully publishing study results. Oh, and there are committee meetings and other administrative duties. The average professor works sixty hours per week.
And our hypothetical researcher does not just need time. She needs support from her institution in the form of equipment, space, and staff. She needs her department head to support her ideas (or at least not actively squash them), and her application must include letters from her institution and collaborators to prove she has that support.
Obviously, this can go wrong in multiple ways, and many of these issues are not limited to ME/CFS research. However, the decades of stigma and misinformation have a unique impact in ME/CFS. Support from institutions and colleagues is harder to come by. Mentors are few and far between. All of the well-known challenges of writing successful grant proposals are multiplied in this field, increasing the difficulty of our hypothetical researcher’s obstacle course. NIH has done nothing to alleviate the challenges that have arisen from its own history with ME/CFS.
Who will review my grant?
NIH’s peer review system is at the core of its funding decisions. The Center for Scientific Review appoints reviewers with relevant expertise to Study Sections and Special Emphasis Panels. The reviewers score applications on a variety of criteria, and come up with an overall impact score. This peer review is not the final decision on an application, but it is critically important. A bad score in peer review is fatal for the application.
Given the importance of the peer review scores, it’s obvious that reviewers must have the appropriate knowledge and expertise. Yet this has not always been the case when it comes to ME/CFS research.
I have been tracking the rosters of the various incarnations of the CFS Special Emphasis Panel, or SEP, since 2000, and I’ve seen definite trends. In earlier years, the SEP covered the areas of CFS, Fibromyalgia, and Temporomandibular Joint Dysfunction. As a result, the SEP reviewers were predominantly dentists, psychologists, and pain experts. Between 2000 and 2010, the average representation of CFS experts on the SEP was 15%.
In November 2010, the SEP was assigned a narrower scope of just CFS (changed to ME/CFS in 2012). The new scope had an immediate effect on the representation of experts on the rosters. Between November 2010 and the end of 2017, ME/CFS experts made up 72% of the rosters on average.
There is one exception, and that is the roster of the SEP that evaluated the applications for the Collaborative Research Centers and Data Management Center in 2017. ME/CFS experts made up only 26% of that roster. There are several possible reasons for this. First, NIH’s conflict of interest policy meant that anyone applying for RFA funding would have been excluded from the roster, along with many of their colleagues. (Read more about the COI policy here.) Second, the nature of the applications required peer review by experts in population studies, computational biology, and other areas outside of ME/CFS research.
Then, the SEP rosters took a puzzling turn in 2018. ME/CFS experts made up 25-44% of the rosters at the three review meetings. I have no explanation as to why the rosters have shifted to include fewer ME/CFS experts. Since the SEP panel is reconstituted for every review cycle, there is also no way to predict representation on future meeting rosters.
Will the SEP members review my grant fairly?
Our hypothetical researcher should be prepared for her application to be reviewed by a variable mix of ME/CFS experts and non-experts, and so she has to wonder if she will get a fair and accurate review score. I am not assuming that non-experts will automatically trash ME/CFS grant applications, but I also do not assume that all SEP members will use the right standards.
First, it is possible that ME/CFS experts on the SEP are not able to assess all aspects of all ME/CFS grant applications. For example, a POTS researcher on the panel may not be familiar with design of genome-wide association studies or computational biology. A psychologist may not be able to critique a study with newer technology like QEEG. Expertise in ME/CFS does not automatically convey expertise in every possible study of the disease.
Second, reviewers bring their own biases with them. Sleep researchers have a different understanding of fatigue than ME/CFS experts. Reviewers may be unfamiliar with post-exertional malaise, including how it differs from fatigue and how to assess it. The worst case scenario is a reviewer who believes the lie that ME/CFS is depression and deconditioning. Dr. Ian Lipkin said that this is exactly what happened with one of his applications in 2014:
I have been in competition now twice to get funded, and the people there who reviewed me gave me abysmal scores. And the critiques of my work were unfair, and one of the people who critiqued my work said, in fact, that this is a psychosomatic illness. I was floored. I protested, and for reasons that are obscure to me this same individual wound up back on the study section, and I got a similar unfundable score.
Third, ME/CFS research has unique challenges that are well-known inside the field, but potentially not understood by scientists outside the field. Case definition is an obvious example of this. The field has used multiple case definitions over the years, some of which have fatal flaws. NIH has refused to select a single gold standard case definition, arguing that researchers should justify their chosen definition in the applications. But how is a non-expert supposed to evaluate that choice and justification? Someone outside the field is probably unfamiliar with the differences between the Oxford, Fukuda, Reeves, Canadian Consensus, and National Academy of Medicine criteria. Outside reviewers will have difficulty assessing the impact of chosen criteria on a study, and they are unlikely to appreciate the challenges of recruiting appropriately diagnosed subjects.
Fourth, peer reviewers will bring expectations from their own fields of study. A cancer or heart disease researcher is used to multi-center studies, with sample sizes in the thousands. That kind of study has been and remains impossible in ME/CFS. Reviewers may have unrealistic expectations about data quality and study design. Non-ME/CFS experts will also be unable to assess whether an area of study is a strategic priority in the field.
The peer review process is a cornerstone of funding decisions at NIH, but it is far from the only factor in play. Our hypothetical researcher faces additional barriers, including her competition.
Who is my competition?
Competition for NIH funding is fierce, no matter what the area of study. NIH’s overall application success rate was 18.7% in 2017. However, an ME/CFS grant application has to compete in ways that put our hypothetical researcher at a disadvantage.
First, an ME/CFS grant application is naturally competing against all the other applications reviewed at a specific SEP meeting. This can actually happen more than once. NIH allows researchers to revise and resubmit applications based on reviewer comments. On the second submission, an ME/CFS application will compete against an entirely new group of applications in front of a different group of reviewers. That means that an application for a proteomic study could have been scored in comparison to other -omics studies in one round, but then compared to POTS or infection studies in the next round. Given that each review meeting has a new roster, new reviewers may have different criticisms of the application than the first group. So our hypothetical researcher could revise her application based on comments from Group 1, and then get entirely new and different criticisms from Group 2.
Second, the competition pool is heavily influenced by the Funding Opportunity Announcement. Recall the open house vs. swim meet analogies I discussed earlier. Those are very different sets of competitors. With a Program Announcement, our hypothetical researcher is competing against everyone else headed for the buffet at the open house. With an RFA, our researcher is competing against just the swimmers in the pool—and someone is guaranteed to win. This will influence the peer review scores. Reviewers for an RFA know that a good score will basically guarantee funding, and select from among the applications in front of them. Normal Program Announcement review is a more diffuse competition, in part because no one is guaranteed funding and the full group of applicants might not be reviewed by the same group of reviewers.
Third, applications that score well at the SEP stage are then sent to the relevant Institute’s Council for consideration. At this level, the application is now competing against all the other applications reviewed at the Council meeting, regardless of the field. For example, an ME/CFS infection study will compete against every other grant coming before the Council of the National Institute of Allergy and Infectious Diseases. The infection study might be fabulous compared to other ME/CFS applications, but not as strong compared to hepatitis and influenza studies that have huge sample sizes, etc. In addition, ME/CFS is not a named priority at any Institute. The ME/CFS immune study might be critical in our field, but the Council (which has no ME/CFS experts on it) might see it as a much lower priority given the Institute goals.
As I pointed out before, ME/CFS has neither an active RFA nor PA. New applications are currently being submitted under very general parent announcements like this one. It invites applications (to any of twenty-three participating Institutes) for defined projects “in scientific areas that represent the investigators’ specific interests and competencies”. This is an incredibly broad net, and the competition is basically all grants being considered in a funding cycle by a particular Institute. There is no target our hypothetical researcher can aim for, other than get the best score she can and cross her fingers.
Should I stay or should I go?
NIH’s proposed fix for the dismal state of ME/CFS research funding is simply “submit more high quality grant applications.” In order to do that, our hypothetical researcher has to climb over a series of barriers created and maintained by NIH’s actions in ME/CFS research.
Should our hypothetical researcher submit another ME/CFS grant application? NIH sees no reason why she shouldn’t. As a person with ME, I desperately want her to submit one. But will she decide to invest the time and effort, roll the dice, and apply? How many times will she try? Can anyone blame her if she decides to move to another field?
NIH is responsible for erecting and maintaining this obstacle course. Yet they wash their hands of the problem and repeat the refrain, “Send more high quality grant applications.” NIH’s normal approach to encouraging more proposals will not work in ME/CFS.
There is no single silver bullet that can fix NIH’s broken response to ME. However, there are many actions NIH could take to lower the barriers and truly encourage more applications. Difficult problems require complicated solutions. It is time for NIH to tackle this problem with more than just words.
Who Reviews ME/CFS Applications for NIH?
Note: After publishing this post, I discovered that I had inadvertently missed one meeting in 2017. This post was updated on February 12, 2019 to reflect all new calculations. The changes are not significant enough to alter any conclusions.
There is no question that NIH’s funding of ME/CFS research has been minuscule relative to the size of the public health crisis. Review of ME/CFS grant applications at NIH has drawn scrutiny from the public as one contributing factor. The public perception is that the grant review panelists have not been ME/CFS experts, and that this has led to the unfair denial of qualified applications.
That first point—that grant reviewers are not ME/CFS experts—has a factual answer. The second allegation—that the lack of experts has negatively impacted funding decisions—is harder to answer with publicly available information. Nevertheless, in 2013 I embarked on a project to gather the evidence and answer these questions.
This article will focus on the first issue: who is reviewing the applications. My analysis of the data points to two main conclusions, which I lay out in the Summary at the end of this article.
Let’s begin by reviewing the basics of NIH’s grant review process.
How NIH Reviews Grant Applications
When a grant application is submitted to NIH, a multi-level review process begins. In the first stage, a review panel of non-federal scientists with relevant expertise evaluates and scores the application on a variety of criteria.
The Center for Scientific Review (CSR) at NIH is responsible for selecting reviewers for the panels. CSR manages hundreds of these panels, which fall into two general categories: standing study sections and special emphasis panels. Special emphasis panels (or SEPs) consist of temporary members, selected specifically for the applications under review at a single meeting. Most SEPs are used once and then dissolved, but there are a dozen or so recurring SEPs for areas with an ongoing need for review. ME/CFS is one of those topic areas, and its recurring SEP has a new roster for each meeting.
Each study section and SEP is managed by a Scientific Review Officer (SRO). This is not a desk jockey job; the SRO has a substantive impact on the peer review process. The SRO is responsible for selecting scientists for the panel, monitoring potential conflicts of interest, and preparing summaries of the peer review scores and critiques.
Review panel members must have substantial relevant scientific expertise and knowledge of the most current science. SROs look for reviewers who have themselves received major peer-reviewed grants, and who understand the peer review process. The quality of grant application reviews is largely dependent on selecting the right scientists to review them.
The Methods of This Project
The obvious first step for my analysis was to gather all the SEP rosters and look at who served. Study sections and SEPs are federal advisory committees, and as such their membership must be made public. You might think that getting the rosters would be easy. You would be wrong.
In 2013, I looked for the rosters online, and found very few. When I asked NIH about it, I was told that the rosters were not posted publicly “due to threats some previous panel reviewers have received” (an interesting story for another time). I was instructed to file a FOIA request for the rosters. NIH then denied that request, and to make a long story short, it took me two years of appeals to finally obtain the rosters. For several more years, NIH absurdly required me to file a FOIA request for each roster. It took intervention by Dr. Joe Breen in 2016 to finally change CSR’s policy on publishing the ME/CFS SEP roster.
Since one of my main objectives was to identify how many ME/CFS experts participated, I had to define who qualified as an expert. I did not assume that I knew all the experts and could simply rely on name recognition. For purposes of this analysis, I set the expertise bar very low. I defined an ME/CFS expert as anyone who—at the time they served on the SEP—had at least one publication on ME/CFS or had an NIH grant for ME/CFS research.
I compiled all the roster names for the SEP meetings from 2000 through 2018. I searched PubMed for each person’s ME/CFS publications at the time he or she served on a SEP. I also did my best to identify the scientific specialization of all the members by reviewing their institutional profile pages and CVs. Then I looked for the trends and patterns.
Representation As A Whole
Between January 2000 and December 2018, the ME/CFS SEP met 62 times.* A total of 327 people served as reviewers. Of those 327 panelists, 58 (or 17.7%) qualified as ME/CFS experts under my liberal definition.
Half of all reviewers served more than once, and rosters ranged from 5 to 36 members. To calculate the average number of times individuals served, I counted the combined roster seats across all the meetings: 836 seats. Across the 327 panelists, that works out to an average of 2.6 meetings per person. However, the 58 ME/CFS experts filled a combined 207 seats, or 24.7% of the total, for an average of 3.6 meetings each.
First finding: Between 2000 and 2018, 17.7% of the reviewers were ME/CFS experts, and they served 24.7% of the total roster seats.
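The arithmetic behind this finding can be reproduced with a few lines of Python. A minimal sketch, using only the counts reported above (the script simply recomputes the percentages and averages):

```python
# Roster counts reported in the article (SEP meetings, 2000-2018)
total_panelists = 327    # distinct individuals who served as reviewers
expert_panelists = 58    # individuals meeting the liberal "ME/CFS expert" definition
total_seats = 836        # combined roster seats across all meetings
expert_seats = 207       # seats filled by the 58 ME/CFS experts

expert_share = 100 * expert_panelists / total_panelists   # share of individuals who were experts
avg_meetings = total_seats / total_panelists              # average meetings served per person
expert_seat_share = 100 * expert_seats / total_seats      # experts' share of all seats (just under 25%)
expert_avg = expert_seats / expert_panelists              # average meetings served per expert

print(round(expert_share, 1))      # 17.7
print(round(avg_meetings, 1))      # 2.6
print(round(expert_avg, 1))        # 3.6
```

The experts’ seat share computes to about 24.8% when rounded; the article reports it as 24.7%, a difference that comes down to rounding versus truncation.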
The percentage of ME/CFS experts at each meeting varied between 0 and 100%. Eight meetings included no ME/CFS experts whatsoever, while four meetings were 100% experts. Over the entire time period, ME/CFS experts made up 20% or less of the rosters of 32 meetings.
Second finding: Just over half of the meetings included 20% or less ME/CFS experts, and eight of those meetings included no experts at all.
Of the 327 total individuals who served on the SEP, I identified 65 (20%) that have psychology or psychiatry degrees. Note that this includes researchers who are ME/CFS experts, such as Drs. Jarred Younger and Lenny Jason. Twenty-four people (7.3%) specialize in craniofacial diseases such as Temporomandibular Disorders. Fourteen (4.2%) are sleep researchers. There are six people who appear in more than one of these categories (such as a psychologist specializing in insomnia).
To measure the influence of these specialties, I looked at how many times these individuals served on the SEP. The 65 psychologists served a total of 214 times, or 25.6% of the total seats. Adding in the sleep and craniofacial specialists (and taking the overlaps into account), these three categories combined represent 29% of the total individuals, but 36.7% of the meeting seats.
Third finding: One-third of all reviewers specialize in psychology/psychiatry, sleep, and/or craniofacial areas, and occupied 36.7% of the meeting seats between 2000 and 2018.
As mentioned above, each reviewer served an average of 2.6 times. However, this is a bit misleading because 71% (233 people) served only once or twice, and the remaining 29% served three or more times. The reviewers who only served once or twice occupied just 36% of the review seats. That means 29% of the reviewers (experts and non-experts) occupied 64% of the seats. To be clear, this means just 94 people filled 534 seats between 2000 and 2018 because they served so many times.
Fourth finding: A minority of reviewers (29%) had a disproportional influence on the review process because they served so many times (64% of seats overall).
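The concentration numbers behind this finding can be checked the same way. All counts below come from the article; the only computed figures are the percentages:

```python
# Concentration of roster seats among repeat reviewers (2000-2018)
total_panelists = 327
total_seats = 836
once_or_twice = 233                          # panelists who served only one or two meetings
frequent = total_panelists - once_or_twice   # 94 panelists served three or more times
frequent_seats = 534                         # seats those 94 panelists filled (as reported)

print(round(100 * once_or_twice / total_panelists))   # 71 -> 71% of reviewers served once or twice
print(round(100 * frequent / total_panelists))        # 29 -> the remaining 29% served 3+ times
print(round(100 * frequent_seats / total_seats))      # 64 -> and they held 64% of all seats
```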
The ME/CFS Experts
As I stated in the Methods description above, I used a very liberal definition of ME/CFS “expert.” I classified an individual as an expert if he or she had at least one ME/CFS publication or at least one NIH grant for ME/CFS research at the time of service on the SEP. It turned out that there are a few reviewers who served on the SEP prior to having a publication or grant in ME/CFS, and then served again afterward. I adjusted my analysis to take this into account. You can read the entire list of ME/CFS expert reviewers here.
A total of 58 out of 327 reviewers (17.7%) met the expert definition for at least one meeting served. Many of the names will be immediately recognizable as experts, but others may be a surprise. For example, Dr. Ila Singh published on XMRV and then left the ME/CFS field. Dr. Jordan Dimitrakoff co-authored a paper with his colleagues from the CFS Advisory Committee, but he is a pelvic pain specialist and has done no ME/CFS research. Yet under my liberal definition, both are counted as ME/CFS experts. I was also surprised to find five people who were CDC employees when they served on the SEP: Dr. Jim Jones, Dr. Elizabeth Unger, Dr. Alison Mawle, Dr. Mangalathu Rajeevan, and Dr. Alicia Smith. I do not know if it is unusual for CDC employees to serve on NIH grant review panels.
Fifth finding: Using the most liberal definition of ME/CFS expert, only 17.7% of the reviewers qualified. Multiple people on the list were never involved in much ME/CFS research and/or left the field. Five individuals were CDC employees at the time they served on the SEP.
ME/CFS experts served an average of 3.6 meetings each, but this is misleading because 40% of the group served only once. When I removed the one-timers from the calculation, the remaining 35 reviewers served 184 times, which is 89% of the total number of expert seats. Concentrating grant review assignments among such a small number of scientists is risky. One person’s bias, expectations, preferences, and professional experience can shape the direction of NIH funded research, for better or worse. This is especially true for the reviewers who serve most frequently, led by Drs. Friedberg, Baraniuk, Biaggioni, and Hanson.
These four reviewers served a combined 49 times, which is 23.6% of the total expert seats. The heavy influence of Dr. Friedberg is an example of the inherent risk of this approach. While he has worked in this field for more than fifteen years, and has received $3.9 million in NIH grants, he is a psychologist. Proposals that rely on computational biology, cutting edge imaging, or immunology could be challenging for a behavioral psychologist to properly evaluate. There are other ME/CFS experts, including other psychologists like Dr. Jarred Younger, who may be better positioned to review these applications.
Sixth finding: Just 35 ME/CFS experts have served a combined 184 times (89% of expert seats). Just four experts (Friedberg, Baraniuk, Biaggioni, Hanson) have occupied 23.6% of those seats. They have likely wielded great influence on application scores and critiques.
Before and After November 2010
So far, I have presented my findings based on all the rosters from January 2000 to December 2018 combined. That is not the whole story, however. NIH changed its approach to reviewing ME/CFS grant applications in November 2010.
Prior to November 2010, the SEP reviewed grant applications related to Chronic Fatigue Syndrome, Fibromyalgia, and sometimes Temporomandibular Disorders (TMD). The rosters had titles like “CFS/FM SEP” and “CFS/FMS/TMD.” Beginning with the SEP meeting on November 2, 2010, NIH narrowed the focus of the panel to CFS only. The meeting titles changed to “Chronic Fatigue Syndrome” and “Myalgic Encephalomyelitis/Chronic Fatigue Syndrome.”
The name of the SEP was not the only difference. The types of reviewers appointed to the panels changed significantly. Pain researchers and dentists were out, and ME/CFS experts were in.
Beginning with the November 2010 meeting, the SEP rosters are almost the direct opposite of the earlier rosters. The expert representation went from 11% to 61%, while non-expert representation dropped from 89% to 39%. I do not know why the shift was made at that particular time, but there is no doubt that it was. It seems unlikely that this was the sole decision of the SRO at the time, but I have no documentary evidence that points to how the decision was made.
Seventh finding: Beginning in November 2010, the focus and composition of the SEP shifted dramatically and included substantially more ME/CFS experts than any meetings prior to that date.
As good as things look after November 2010, there is one troubling trend. Eight of the 25 meetings had 50% or less ME/CFS experts. Seven of those meetings were held since April 2017, including the panel that reviewed the RFA proposals in July 2017.
The roster for the RFA review went through multiple iterations. The final version included 37% ME/CFS experts. This roster must have been difficult to put together because there were so many experts participating in one or more of the fifteen proposals reviewed at that meeting. The conflict of interest policy would have excluded many of them from service on the panel.
The panels for the meetings since July 2017 may signal a dangerous shift in approach. All four had less than 50% ME/CFS experts, with the April 2018 meeting including only one expert and seven non-ME/CFS experts. All four rosters were overseen by Dr. Jana Drgonova. What her approach will be going forward remains to be seen.
Eighth finding: The SEP that reviewed the RFA proposals included only 37% ME/CFS experts, possibly due to the conflict of interest policy excluding many reviewers. The use of experts on the normal SEP panels declined to less than 50% after July 2017, for reasons unknown.
Summary
Rather than repeat the legend that ME/CFS grant applications are reviewed by dentists and psychologists, I set out to examine the data on who reviews these applications. My analysis points to two main conclusions.
First, there is an inside/outside club of reviewers. For ME/CFS experts and non-experts alike, a small subset wields great influence through service at multiple meetings. Among ME/CFS experts, 60% of the experts occupied 89% of the expert seats. The top four individuals occupied 23.6% of the seats. Among non-ME/CFS experts, 48% of the reviewers occupied 78% of the non-expert seats. Given how these subsets wield out-sized influence through repeated appearances, one hopes that this is favoring high-quality reviews and not unreasonably negative ones.
Second, these data show that NIH adjusted its approach in November 2010. The reliance on ME/CFS experts jumped overnight, and the SEP was refocused on ME/CFS applications alone. However, the trend toward using fewer experts in 2018 bears careful watching.
The real question is how these rosters impacted grant funding decisions. My next article will present that analysis.
*There was a meeting scheduled for February 22, 2011 but it was canceled. A meeting was eventually held on March 24, 2011 with a different roster. I have excluded the February meeting from this analysis.