By Bruno Martin
“Publish or perish” is a popular saying in the world of academia.
There is a reason the phrase gets tossed around so cheerfully: like most clichés, it is descriptive and accurate. Before the boom in academic research that followed the Second World War, scientists numbered a few hundred thousand worldwide. Today, to forge a career in science, academics must compete with several million fellow researchers.
Science is a trusted source of truth – it has earned that trust through self-policing. But with growing numbers of graduates vying for a limited pool of scientific jobs, could research quality be giving way to quantity?
The deluge of studies submitted for publication means that leading journals must reject most manuscripts they receive. Editors are more likely to print striking findings on hot topics, which in turn tempts academics to exaggerate or cherry-pick results.
One important consequence of this careerism is that negative results are rarely shared. A disproved hypothesis doesn’t make for exciting reading, and is unlikely to be published.
And yet the collective knowledge of what is false is larger and more reliable than what we know about the truth. Or rather, what we think we know, as misreadings of statistical noise lead to even more false positives being published. By not reporting failures, scientists waste money and time chasing dead-ends that others have reached before them.
There is another crucial scientific practice that won’t advance a researcher’s career: replication of results. Verification of past studies is as important as the publication of future ones, but there simply appears to be no time for it.
What little replication does occur is usually carried out by drug companies hoping to develop lucrative products. Even then the experience is disenchanting: when Amgen Inc. tried in 2012 to recreate 53 ‘landmark’ experiments in cancer research, only six held up.
We’ve been told that reproducibility is at the heart of the scientific method, so why were the remaining 47 studies published? Were they not peer reviewed?
Well, it seems peer review may be failing too, as we found out last year: five months after its publication in Ethology, a note was discovered buried in a paper on the mating behaviour of Mexican fishes. It read: “should we cite the crappy Gabor paper here?”
The episode, while amusing, showed that all five authors of the study, the editors of Ethology and the paper’s peer reviewers had done a poor job of proofreading.
In some extreme cases, reading is dispensed with altogether. Such is the sad case of pseudo-‘journals’ that will publish anything, for a fee. A few scientists have managed, quite easily, to have nonsensical computer-generated texts accepted, proving just how unscrupulous these publications are.
Fortunately, not all hope is lost. Many journals are rolling out post-publication peer review systems in the form of online comments, adding an extra level of evaluation to already published articles. Some government funding agencies are encouraging replication of studies, although both funders and publishers need to set aside the money and the pages, respectively, for ‘unexciting’ work to go ahead.
There is still much room for improvement. Trial data from experiments should be publicly available for inspection, and research methods could be logged online in advance of studies taking place. This would prevent tampering with results or experimental design in order to liven up the data set.
Science has come a long way to its current position of authority. If it is to maintain the public’s respect, it cannot grow complacent.
Photograph: Weichao zhao on flickr