Despite a variety of criticisms of its effectiveness (Wager and Jefferson 2001; Cooper 2009), peer review remains a fundamental mechanism for validating the quality of the research published in today’s scientific literature (Baker 2002; Ware and Monkman 2008; Mulligan et al. 2013; Ware and Mabe 2015; Nicholas et al. 2015). It is a complex, multi-phase process that remains largely understudied (Squazzoni and Takács 2011), and there are growing concerns about how to improve its functioning. Given the increasing number of submitted articles and the limited pool of reviewers, obtaining a good and timely review is becoming progressively more challenging. Several journals emphasize the speed of their review process in order to attract submissions. A review can take as long as a year, depending on the complexity of the topic, the number of reviewers involved, and the details of the editorial procedures. Conversely, a review can be very quick, for example when the editor rejects the paper outright.

In the face of these problems, many suggestions have been made to render the peer review and editorial process more efficient and equitable (Bornmann 2011). In particular, the role of editors in selecting and managing reviewers has been increasingly discussed (Schwartz and Zamboanga 2009; Kravitz et al. 2010; Newton 2010). However, these discussions focus mainly on quality, ethical issues, or qualitative recommendations for editors or reviewers (Cawley 2011; Resnik et al. 2008; Hames 2013; Wager 2006; Kovanis et al. 2015) and do not lead to measurable improvements in the efficiency of the peer review process as seen from the editor’s perspective. Do editors send out enough reviewer invitations to obtain two or three timely reviews of a manuscript? How often should they draw on the expertise of the same reviewers, consuming those reviewers’ time and energy? How long should they wait for a review before repeating the invitation or assuming that a response is unlikely? What is the statistical chance that reviewers will respond, and does it depend on whether they have previously reviewed for the same journal? Although editors probably try to answer these and other questions when optimising their workflow, they have to do so on their own, by trial and error. Without an intensive discussion that helps answer these questions more systematically, one can be sure that submission-to-publication lags will keep increasing in the years to come.

Our paper is meant to fill this gap with the help of quantitative analysis. We examine selected aspects of peer review and suggest possible improvements. To this end, we analyse a dataset containing information about 58 papers submitted to the Biochemistry and Biotechnology section of the Journal of the Serbian Chemical Society (JSCS). After separating the peer review process into stages that each review has to go through, we use a weighted directed graph to describe it in a probabilistic manner under the weak assumption that the process is Markovian. We test the impact of some modifications of the editorial policy on the efficiency of the whole process. Our quantitative findings allow us to provide editors with practical suggestions for improving their workflow.
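The modelling idea above can be sketched as a Monte Carlo walk over a weighted directed graph of review stages, where the Markov assumption means that the next stage depends only on the current one. The stage names and transition probabilities below are illustrative placeholders, not the values estimated from the JSCS dataset:

```python
import random

# Hypothetical review stages and transition weights (illustrative only).
# In the paper, stages and probabilities are estimated from the JSCS data.
TRANSITIONS = {
    "invited":         [("agreed", 0.55), ("declined", 0.30), ("no_response", 0.15)],
    "agreed":          [("review_received", 0.85), ("withdrawn", 0.15)],
    "declined":        [("done", 1.0)],
    "no_response":     [("done", 1.0)],
    "withdrawn":       [("done", 1.0)],
    "review_received": [("done", 1.0)],
}

def simulate(start="invited", seed=None):
    """Walk the weighted directed graph until the absorbing state is reached.

    Each step draws the next stage from the current stage's outgoing edges,
    weighted by the edge probabilities (the Markov property).
    """
    rng = random.Random(seed)
    state, path = start, [start]
    while state != "done":
        states, weights = zip(*TRANSITIONS[state])
        state = rng.choices(states, weights=weights)[0]
        path.append(state)
    return path
```

Repeating such walks many times yields the fraction of invitations that end in a completed review, which is the kind of quantity one can compare across editorial policy scenarios.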

The paper is organized as follows:

The "Review process and initial data analysis" section describes the dataset used in the paper as well as the methodology employed to analyse the data. The "Review time" section is devoted to the data-driven theoretical analysis of review time. Simulations of various editorial policy scenarios and their impact on the efficiency of the process are presented in the "Simulations of the review process" section. In the "Discussion with conclusion" section we give concluding remarks and point out open problems that may be investigated with the presented methodology in the future.