Twenty-first-century science takes money – in most cases, lots of it. In the United States, a significant portion of the funding supporting basic research in the sciences comes from the federal government, particularly the National Science Foundation (NSF) and the National Institutes of Health (NIH). These agencies disburse billions of dollars each year through a funding system that depends heavily on peer review in academic journals. Journal peer review is generally considered the “gold standard” of good science, the self-correcting aspect of the discipline that makes it uniquely qualified to discover nature’s secrets. Unfortunately, deeply rooted incentives in the funding system tend to worsen pre-existing problems in scientific publishing, and the overall process warrants a closer look.
Perhaps the most insidious of these problems is publication bias. Publication bias arises when researchers do not publish negative results, focusing instead on promoting research that indicates some sort of positive result. By and large, scientific journals only publish positive results, so researchers whose work produces a negative or inconclusive result may simply move on to a new research question, making it unlikely that anyone else will ever learn about the “unsuccessful” experiment. This practice also disadvantages those researchers themselves, since they will generate fewer publications and be less competitive for future grant funding.
Since proposals to the NSF or NIH must cite published work to make their case, they rely on literature that is already subject to publication bias. This means that researchers might unwittingly propose research that has already been done or that is based on an inaccurate scientific consensus. If the project is funded, it may well run into the same difficulties as similar previous projects – difficulties that were never communicated because they were never published. These scenarios can repeat over and over, needlessly duplicating effort and wasting money and scientists’ time.
Additionally, the length and content of papers published in journals have changed over time. Research papers have become shorter and more focused, sometimes providing less supplemental background information in order to save space. Proponents argue that these papers are easier and faster to read and write, helping scientists communicate results more quickly. However, briefer papers with a decreased emphasis on methodological details make it more difficult for researchers to replicate their peers’ work. The trend towards publishing many shorter papers also puts more pressure on journals and reviewers, driving acceptance rates down and encouraging researchers to pump out ever more papers in the hopes of placing at least a few of them in high-impact journals.
Funding agencies exacerbate this problem. The NSF, for instance, allows researchers who generate a large number of publications from a particular grant to use a streamlined “Accomplishment-Based Renewal” application for funding rather than going through the standard renewal process. If the number of publications generated per grant is the metric for success, researchers are encouraged to write shorter papers covering increasingly narrow topics. These papers often sacrifice data quantity and methodological detail to achieve their brevity, to the detriment of the overall research community.
These problems are deeply rooted in the systems we use to fund and communicate science. They are not, however, untreatable. Strategies exist and can be implemented to address both the problems themselves and their consequences.
For example, we could establish journals explicitly dedicated to publishing negative results. They would be well situated to combat publication bias and would slot easily into the current systems of proposal, publication and peer review. Such a journal has existed before: in 2002, the Journal of Negative Results in BioMedicine (JNRBM) started taking submissions. Unfortunately, it was shut down in September 2017, with its website claiming that it had “succeeded in its mission” by encouraging other journals to publish more articles reporting negative or null results. Clearly, this isn’t enough. Each field needs a journal that is widely accepted as a credible source of negative or null results, and publications in that journal should be considered just as impactful as publications in any other journal.
This culture shift would dovetail well with policy changes at major funding agencies in favor of disclosing negative results. For example, the NIH currently requires a “Research Progress Performance Report” with an “Accomplishments” section that discusses how a given research project has accomplished its major goals. If this report additionally encouraged or required researchers to describe their negative results, there would be at least some record of projects that did not succeed and of the reasons why. Researchers could even phrase their responses in these sections positively, since negative or null results would still be valuable to their fields.
We might also be able to achieve these goals by making targeted changes to the funding process. In particular, we could increase funding for high-risk, high-reward research, since this is exactly the type of research that the current paradigm tends to work against. The NIH has taken some steps in this direction: in the 2009 stimulus package, for example, Congress gave the NIH $200 million for “challenge grants” intended to funnel money into new, risky areas of research. This sounds great at first, but the NIH received over 21,000 applications for just 200 of these grants. To support high-risk, high-reward research well enough to make an actual difference, funding agencies must allocate significantly more resources to such projects.
In all, the problems with journal peer review presented above are significant but treatable. With greater acceptance of publishing and promoting negative results, researchers could reduce their odds of mistakenly duplicating others’ unsuccessful work. More importantly, a marked increase in science funding would free researchers to pursue high-risk, high-reward research that would otherwise be avoided. These actions are possible within existing frameworks and would be excellent steps towards better journal publications and better science.
Glenn is a physics and Plan II junior.