Publishing in respected journals is hard work but it also takes time. It’s very important that people – especially people in leadership positions – appreciate how long it takes.  My experience is that academic leaders want to know, but variation between disciplines means that they are never sure how long it really takes. I’m not sure either but I can speak for molecular biology and related fields – it usually takes about a year between first submission and the appearance of a paper. It’s slow.

Why is that important?

Many people will say that this delays science and is problematic. To me that isn’t the main issue. It is so easy to share results at conferences, via pre-print servers, or via other informal networks, that science marches on at a good pace, irrespective of the formal publishing cycle. Moreover, researchers need to focus relentlessly on their own work and, oddly, it is sometimes best not to be distracted too often by new developments from elsewhere!

No, to me the main problem with the delay in formal publication is the impact it has on the careers of junior researchers – students and early career post-docs, or junior lab heads applying for their first grants. It is the junior people who are most vulnerable to delays.

If a student, or indeed any staff member, is considering moving on (to a post-doctoral position, to set up their own lab, or to embark on a new career) then one has to plan carefully with an eye on the timeframes. Some things in science are under our control – it is possible for students to manage their thesis write-up very efficiently, for example, but it is much harder for anyone to control the speed at which manuscripts are refereed, accepted and published in a respected journal.

Some say it is only the science that matters, not the formal publication process. They argue that provided the work is visible in a public archive, the appointment or fellowship committee can judge the quality of the work. The selectors don’t need to, and indeed should not, consider the formal publication data – the name of the journal, the impact factor, or the citations the work is receiving.

In theory, I agree with this completely. I hate impact factors for their obvious flaws and for their overuse, and I wish the world would judge on quality, not ‘journal brand’, but sadly experience tells me a different story.

Experts can judge quality only within their area of expertise. And as science expands, individual expertise shrinks. Beyond my immediate area I am little better than any interested lay person. I can judge work on gene regulation properly, but even when it comes to related fields like signal transduction, apoptosis, cell cycle, membrane transport, or structural biology, I will do my honest best but I can’t really tell what is startlingly new and what is derivative of existing work. I cannot always judge originality or broader importance accurately.

I rely in part on the editorial boards of top journals (packed with experienced researchers) knowing their stuff. I accept that editors and carefully chosen discipline-specific referees really can judge quality better than me when they act in good faith – and mostly I believe they try to do that (though inherent biases will exist). Ultimately, all this means that where work is published counts – even though many scientists hate to admit it. It’s not perfect and there are mistakes, but in general where papers are published matters.

So, let’s illustrate the impacts with something real – my lab’s latest paper. The PhD student who led the work began the experiments four years ago. She completed them a year ago and we submitted the paper. The editor was efficient, the reviewers were fair, and as soon as we saw the first set of reviews, we were confident the paper would be accepted, so we didn’t have to try multiple journals. But nevertheless, we had already spent a few months.

The reviewers requested quite a few additional control experiments. My student set to work. She did some new in vitro binding assays, she prepared and analysed new cell lines, and she assessed complex cell sorting data. All this took about six months. Then the second reviews took a couple of months. Then there were a few minor comments that still required attention. Then there was the editorial process – checklists on formatting and completeness, proofreading and so on – to work through. And finally, actual publication. It added up to a year.

I have heard stories where the requirements of reviewers are such that publication takes several years. When one is dealing with really important results, reviewers want reassurance and tend to request more and more controls. This is not necessarily wrong, but there are costs.

The problem with the existing system is twofold.

Firstly, even if the process works well, it simply takes too long. My student, like many students, was waiting to apply for a fellowship and needed formal acceptance of some good papers to strengthen her application. Things have worked out well and she will be able to go ahead with that now – but only because both my lab and her new lab are well-established enterprises with diverse funding sources, so we could shuffle things to free up bridging funding to tide her over. In many labs bridging funds are not available, so delays can be career limiting or even career threatening.

The second problem arises when the reviewers are over-zealous, when they ask too much, or occasionally, when they inappropriately delay publication for their own reasons – reasons related to competition and self-preservation. Since reviews are anonymous, no one really knows if this happens, but human nature tells us that when competitors in the same area review work that is similar to their own, it is possible they could impose inappropriate requests and delays. Even if everyone acts properly, some reviewers are just sticklers and insist not only that every possible control is done, but also that other interesting, parallel experiments are carried out to complete the story – irrespective of the costs in terms of both time and money. Sometimes one has to give up on a top journal and move to a lower ranked one because one recognises that a reviewer has dug in and it will be impossible to convince them to accept the paper, so it is better to start the process afresh at a new journal.

Many people ask if there is a better way. The current system makes a major contribution in terms of quality control, but sometimes at too great a cost. Quality control, and knowing the quality of work, is built into our system, but the public nature of science and the way results are put up for scrutiny mean that it is not always vital that every single new experiment suggested by reviewers is included in the very first publication on a topic. Science, when done properly, truly is self-correcting. So, if loose ends are left hanging, one can be confident that other hungry researchers will quickly fill in the gaps.

Personally, I would prefer a system, which journals like eLife have been exploring, where the editors declare upfront whether the work is interesting enough for publication. Then the reviewers send criticisms to the authors, who decide whether or not to respond – perhaps carrying out more controls, or acknowledging caveats in their paper. The review report and the versions of the paper can all be made publicly available. In addition, the membership of the review committee can be revealed. It’s like a jury – one never knows who said what, since there is just one summary report, but the identities of the members of the review committee are not hidden.

In this way all the information is there but the process can, in theory, be faster. The authors quickly find out whether their work is considered groundbreaking and then they can choose the level of perfection. It is also fairer because removing the cloak of anonymity from the review committee reduces the chance of real or perceived conflict of interest.

It still all relies on a sort of “editorial judgement” of quality – one star, two stars, three stars. But that’s life really. The thing about quality is that it isn’t quantity, so by definition it is hard to measure – unless you’re a connoisseur. One wishes it were different, but quality can’t be quantified; it can only be judged by appropriate arbiters, so it’s important to choose a good, diverse set of them carefully.

Merlin Crossley is DVC Academic at the University of NSW. His Crossley Lab blog appears in CMM, Fridays.
