Why We Should Include One-Shot Revision in Our Review Process

Editor’s note: This post originally appeared on the SIGARCH blog. We cross-post it here because it is highly relevant to the SIGOPS community, and we hope it will trigger discussion on improving the SIGOPS publication process.

Including one-shot revision in our peer review process could lead to a lower overall reviewing load, better mental health for graduate students, and better science.

Most of the top conferences in systems and architecture follow a binary decision model: the outcome of the review process is either an accept or a reject. An accepted paper may be shepherded, but the extent to which a paper can change or improve during shepherding is limited: authors might optionally include new experiments or data, but the shepherd cannot require them to do so. Similarly, acceptance cannot be made conditional on a new experiment turning out a particular way. Shepherding is also limited by its time frame: sometimes there is less than a month between notification and the camera-ready deadline, which precludes any major changes to the paper.

Disadvantages. There are a number of disadvantages to our current binary review system. One big problem is: what should we do with interesting papers that have some flaws? Reviewers naturally want to maintain a high standard for the conference; this means rejecting such papers, even if the flaws could be fixed with a few more experiments or some rewriting (shepherding does not allow enough time for this). As a result, a number of papers get rejected that could have been accepted with just a bit more work.

Another result of this binary model is randomness. Some reviewers are okay with accepting an imperfect paper, as long as its flaws are clearly mentioned and discussed. But not all reviewers take this position; hence, whether the same paper gets accepted or rejected can come down to luck: which reviewers happen to be assigned to it.

What is the big deal with a paper rejection? After all, isn’t rejection a part of academic life? Shouldn’t students get used to it? Doesn’t rejection make the paper stronger?

Rejection is a problem for two main reasons. First, as a community, we review the same papers again and again. With decisions depending so strongly on who the reviewers are, authors are tempted to resubmit borderline papers to the next conference in the hope of getting accepted. A new set of three or five reviewers is selected, and they spend a significant amount of time pointing out the paper’s flaws. If the paper is rejected, we go through the cycle again, wasting more reviewer time until the paper is finally accepted. These resubmissions also increase the load on program committees, which are already overloaded, with each PC member reviewing a dozen or more papers in a short period of time.

Even when authors operate in good faith and fix the flaws pointed out in the first rejection, they might resubmit only to be unpleasantly surprised when a second set of reviewers points out another set of subjective flaws and rejects the paper. And the cycle repeats. In the field of systems security, Davide Balzarotti notes that between 30% and 40% of submissions to the top four security conferences are resubmissions!

Second, rejection takes a big mental toll on the junior members of academia, especially graduate students. It is painful to be rejected, especially after working for a year or more on a project. It is even more painful when the outcome seems random: papers of similar caliber with similar flaws get accepted, while their own paper gets rejected. We should strive to minimize rejections and randomness in the peer review process for the sake of our graduate students’ mental health. Otherwise, we will lose a lot of talented students.

So what is the solution to this problem? One-shot revisions

The best way I’ve seen this handled is by the database community. Authors submit to a hybrid journal, the Proceedings of the VLDB Endowment (PVLDB). Submissions are accepted each month, and reviews are guaranteed within two months. Apart from Accept and Reject, a paper may receive a Revise decision: authors then have three months to make the revisions requested by the reviewers and resubmit. Upon resubmission, authors are guaranteed a final decision of accept or reject within one and a half months, and the resubmission is reviewed by the same reviewers who judged the original submission. Making the revision one-shot prevents the situation found in some other sciences, where a paper is stuck endlessly in revision limbo, with authors changing it again and again to satisfy the reviewers.

By allowing revisions, we resolve the tension between accepting imperfect papers and maintaining a high standard. Almost all papers emerge stronger from revision, and randomness is reduced because the same reviewers judge the resubmission. Authors have an incentive to revise in good faith: revised papers have a good chance of being accepted if the requested changes are made.

Revisions also reduce the reviewing load on the community: it is much easier for the original reviewers to evaluate a revision than for an entirely new set of reviewers to evaluate a fresh submission. PVLDB requires that authors clearly delineate all changes and write a two-page summary describing how the changes address reviewer concerns. A common complaint is that revisions increase the load on reviewers; while it is true that reviewers must commit to reviewing a revision, the community’s overall load is reduced because resubmissions become revisions instead.

Graduate students have better mental health: papers can still get rejected, but only if they cannot be fixed within three months. Beyond a certain quality level, a paper will always be revised and accepted. This is invaluable for a graduate student, who can work knowing that small nits will not cause the paper to be rejected.

Revisions also allow us to review the work instead of reviewing the paper. Our fast-paced binary review process fosters an adversarial mindset, with reviewers hunting for reasons to reject a paper. Instead, we should judge the underlying work, the ideas in the paper, rather than the paper’s current state; the state of the paper can always be improved with more work. With revisions, reviewers can focus on the big picture, since smaller flaws can be corrected through the revision process. The result is better science under a more collaborative model, with reviewers trying to strengthen the paper through their feedback.

Experiments with one-shot revisions. Both the architecture and systems communities have experimented, or are currently experimenting, with one-shot revisions. Prof. Moin Qureshi tried one-shot revisions at MICRO-48: authors of top-ranked papers had the option of revising their paper during a three-week period. The experiment was a success: 85% of surveyed PC members said it improved the quality of PC decisions, and reviewing the revised papers took negligible extra time. Unfortunately, one-shot revisions are not yet the norm at most systems conferences. NSDI ’19 introduced a one-shot revision model in which authors could submit a revised manuscript to the next NSDI deadline (typically six months later); however, revisions were limited to a small number (four at NSDI ’19). One-shot revision is now a standard part of NSDI. EuroSys is trying one-shot revision for the first time in 2021: similar to MICRO, authors have a little over three weeks to produce a revised version.

My hope is that revisions become a standard part of the review process in systems and architecture. Almost all papers should undergo revision, emerging stronger and more polished before publication; revision should not be reserved for a small percentage of submissions. While the transition will temporarily create more work, I strongly believe that in the long run it will lead to a lower reviewing load, better mental health, and better science.

About the author: Vijay Chidambaram is an Assistant Professor in the Department of Computer Science at the University of Texas at Austin. His research group, the UT Systems and Storage Lab, works on all things related to storage.

Disclaimer: These posts are written by individual contributors to share their thoughts on the SIGOPS blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author, and do not represent those of ACM SIGARCH, ACM SIGOPS, or their parent organization, ACM.