Bias Found: Prestigious Scientists Publish Papers More Easily

By Julianna LeMieux — Oct 12, 2016
The scientific review process should be rigorous, fair and unbiased. However, a new study in JAMA indicates that none of those may be true, finding that those who author a paper influences how stringently the data are judged. 

The currency of success for scientists is the number of papers they publish, along with the quality of the journals those papers appear in.

After the experiments are done, the data analyzed, and the conclusions drawn, a manuscript is written and submitted for peer review. Scientific integrity depends on that process being rigorous, fair, and unbiased. 

But a new study finds that who wrote a paper influences how stringently its data are judged. 

To test this, the study's authors submitted the same paper to the same journal through both a single-blind and a double-blind review track and compared the outcomes. The key difference: in a single-blind review, the reviewers know who the authors of the paper are; in a double-blind review, the authors' identities are withheld from the reviewers. Single-blind review is the standard process used by journals today. 

The paper was "written" by two past presidents of the American Academy of Orthopedic Surgeons from renowned institutions. Notably, the manuscript had five subtle errors to help assess how rigorous the paper was reviewed. 

The paper was reviewed by a total of 119 reviewers: 57 through the double-blind process and 62 through the single-blind process. Strikingly, the paper was "accepted" more often through the single-blind process (87%) than through the double-blind process (68%). Because the only difference between the two tracks is that the authors' names were revealed in the single-blind process, the gap points to a bias stemming from who those authors are. 
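For readers who want a rough sense of whether a 19-percentage-point gap across roughly 120 reviewers could plausibly be chance, here is a minimal sketch of a two-proportion z-test in Python. The accepted counts are reconstructed by rounding the reported percentages, so they are approximations for illustration rather than figures taken from the study itself.

```python
# Back-of-the-envelope check (not from the paper itself): is an 87% vs. 68%
# acceptance gap across 62 and 57 reviewers statistically meaningful?
# Accepted counts are reconstructed by rounding the reported percentages,
# so the exact figures are approximations.
import math

single_total, double_total = 62, 57
single_accepted = round(0.87 * single_total)   # roughly 54 acceptances
double_accepted = round(0.68 * double_total)   # roughly 39 acceptances

p1 = single_accepted / single_total
p2 = double_accepted / double_total

# Pooled two-proportion z-test
pooled = (single_accepted + double_accepted) / (single_total + double_total)
se = math.sqrt(pooled * (1 - pooled) * (1 / single_total + 1 / double_total))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value

print(f"single-blind: {p1:.0%}, double-blind: {p2:.0%}")
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```

Under these assumptions the gap comes out well beyond what chance alone would comfortably explain, which is consistent with the study's framing of the result as a real difference between the two review tracks.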

This finding may begin to shift current practice: most papers are reviewed single-blind, on the assumption (and on previous data) that knowing who the authors are does not skew the review process. 

Of the five errors intentionally inserted into the paper, reviewers detected fewer than one on average, regardless of group, indicating a potential lack of rigor in the review process. 

These findings should make us step back, look at our current peer-review process, and ask whether it is rigorous enough. If we are letting prominent researchers publish their findings more easily than younger, lesser-known investigators, then prominence becomes self-fulfilling, and it becomes far more difficult for new, less established voices to be heard. The rich will keep getting richer in a system in which it is already incredibly hard to establish oneself. The most important point is that the science should be judged for what it is, not for who did it.