The current scientific publication system suffers from several serious problems: its cost, its lack of transparency and, to a lesser extent, the delay between obtaining scientific results and publishing them. In addition, we believe that the economic model on which the current publication system is based perverts the system itself.
The cost
Most scientific journals are owned by large international publishing companies that take advantage of researchers' need to publish and to read scientific articles. Research organizations and universities pay them very large sums each year, through journal subscriptions and through the fees these journals charge for publication and/or open access of articles. While the development of IT tools and the dematerialization of articles could have reduced costs, those costs have instead kept rising in recent years. The international scientific publishing market is currently worth about $20 billion, paid for in vast majority by public research institutions. The market is also highly profitable: the profit margin of the six big publishers (Elsevier, Springer-Nature, Wiley, Thomson Reuters, Informa, Wolters Kluwer) lies between 30 and 40% depending on the year, a figure comparable to Apple or Google.
In France, these costs are estimated at 150 million euros per year, which represents 25% of the budget of the national funding agency (ANR). This cost seems unjustifiable to us, given that most of the work leading to publication is carried out by the researchers themselves: writing articles, peer reviewing, making editorial decisions, proofreading and making corrections. The situation is even worse for researchers in developing countries. As most of their research institutes cannot afford such costs, the current publication system limits both their ability to publish and their access to the scientific literature.
The lack of transparency
The peer review process, which ensures the quality of articles, is generally not made public by journals. The reader of a scientific article has no access to the critical peer reviews. The reader's confidence in the validity of the article therefore rests on subjective elements disconnected from the quality of the article in question, such as the reputation of the journal – roughly estimated by its impact factor (IF).
We have all experienced reviews that were too superficial to improve the scientific quality of the articles under review. It is sometimes hard to believe that editors can make a decision based on such low-quality reviews. It would be preferable to publish editorial decisions, reviewers’ criticisms and authors’ responses. This would give readers the material to evaluate the seriousness of the work done on each article and to judge its scientific merit based on the reviews. It would also be a very strong incentive for reviewers to perform uniformly thorough reviews, since poorly written or unconstructive reports would be visible to readers of the original article.
The deadlines
Between the time a research team obtains results and their eventual publication, there may be a delay of six months to several years, due to (i) the classic back-and-forth exchanges between authors, editor and reviewers needed to reach an acceptable version of a manuscript and (ii) the “submission cycles” of papers rejected outright or after review by successive journals. The result is a very inefficient system: while one team tries to publish results acquired months earlier, other teams working on the same subject could benefit from these results without knowing that they already exist. Proof of precedence is a sensitive subject in this context – how can researchers prove the novelty of their study when others can shorten publication time thanks to factors affecting review (e.g. authors’ fame and connections to the editorial board and referees) and/or circumvent review delays by publishing in less well-known venues? Finally, time to publication, and the ways to minimize it, affect the way scientists are hired, because different publication strategies can lead to drastically different CVs, thereby favoring candidates with a tendency to game the system.
Perversion of the system
Publishers are gradually moving from subscriptions towards an author-pays system, asking authors to pay article processing charges (APCs) to ensure free reading access to their articles – in part because such articles are more likely to be read and cited, and also because public funding agencies such as the NSF and the ERC have moved towards general open access for all publicly funded research. Publishers’ revenue is thus increasingly linked to the number of articles published. As a result, it becomes tempting for them to increase the proportion of articles accepted in their journals, to the detriment of quality. This trend is not necessarily opposed by authors, who themselves have an interest (in being hired, promoted in their careers, or obtaining funding for their research…) in publishing quickly and massively.
What opportunities?
The Internet provides free web publishing tools, which make it possible to publish on a very large scale at minimal cost (e.g. OJS). In addition, raw, not-yet-evaluated articles, called preprints, are increasingly being deposited directly and freely by researchers in open archives such as bioRxiv.org or arXiv.org, making research results quickly and freely available. This immediate availability also allows the use of social networks to comment on the results, thus promoting contact between science and the public. However, preprints are not evaluated and validated by the scientific community, and this is a problem.
The proposed solution
To solve this problem, our idea is to establish communities of researchers – Peer Community In (PCI) – that evaluate, through peer review, and recommend articles in their scientific field (see the short video explaining the process). This initiative relies primarily on the deposit of preprints in open archives such as arXiv.org. The authors of a preprint deposited in one of these open archives may then request its evaluation by a PCI competent in the relevant discipline, for example Peer Community in Evolutionary Biology (PCI Evol Biol). The only condition is that the preprint must not already be published in, or under evaluation by, a journal. Recommenders (scientists playing a role similar to that of journal editors) of this PCI have access to the submission, and if one of them finds the article interesting, he/she can decide to handle its peer-review evaluation. On the basis of at least two reviewer reports, the recommender/editor in charge of the preprint may accept it, ask for modifications or reject it. If the recommender/editor accepts the preprint, the reviews, a recommendation text signed by the recommender/editor, digital object identifiers (DOIs) of the successive, corrected versions of the preprint, as well as the correspondence with the authors, are made available free of charge to readers on the PCI website. The recommendation texts themselves have a DOI and can be cited (see an example of a recommendation here). Importantly, the acceptance of a preprint by a PCI does not prevent its subsequent submission for publication in a journal. It is also noteworthy that a preprint can be recommended by several PCIs specialized in different scientific fields, a situation particularly advantageous for multidisciplinary work.
A first community, PCI Evol Biol, was launched in January 2017. It currently brings together 390 of the most eminent researchers in evolutionary biology. PCI Paleontology (79 recommenders/editors) and PCI Ecology (295 recommenders/editors) were launched in January 2018. Our aim is to rapidly increase the number of new PCIs to cover many scientific topics. The scope of future PCIs is likely to vary widely: some will be very specialized, with a narrow scope, while others will be multidisciplinary, with a wide readership.
In summary, the PCI system is based on the publication of critical assessments and recommendations of articles not yet published, but deposited – and freely accessible – in electronic form in an open archive available on the Internet. These evaluations and recommendations are carried out on a voluntary basis by researchers without any link with private publishers.
Publication costs disappear: PCI validates, distributes and makes available the articles submitted to it free of charge. Delays in access to information disappear as well: the scientific articles under evaluation are deposited in open archives as soon as they are written. The system becomes transparent: reviews, editorial decisions, authors’ responses and recommendations are published on the website of the scientific community concerned (such as PCI Evol Biol).
Transparency of article evaluations should lead to better practices, as critical evaluation work is best done when it is publicly displayed. Problems of conflict of interest in critical evaluations should also be less frequent with this system. Indeed, conflict-of-interest situations are prohibited in PCIs (recommenders/editors and reviewers must declare that they have no conflict of interest with the authors or with the content of the preprint they are handling/reviewing), recommendations are signed, and we encourage reviewers to sign their critical assessments. This mode of operation should curb any desire for “cronyism” or retaliation on the part of evaluators (see the criticisms of the system below).
Furthermore, it is not the purpose of PCIs to evaluate all articles submitted to them. Evaluations are based on the voluntary work of community members, who choose the articles they consider relevant. This will limit the number of pot-boiler articles of no interest, intended merely to inflate the authors’ publication lists.
Major criticisms of PCI
The first criticism concerns the novelty and youth of the PCI initiative and its absence of an impact factor. PCI remains little known, and researchers, funding agencies and research institutes still tend to attach great importance to traditional scientific journals and their associated impact factors. Moreover, given that researchers are currently recruited, evaluated and funded on the basis of their curricula vitae, it is understandable that they are reluctant to use this new system.
Indeed, PCI is not a publication medium and therefore has no impact factor. The consequence critics mention is that authors may be afraid to submit their manuscripts to a system without an impact factor because doing so could harm their careers (recruitment, promotion, funding). The simplest answer is that the impact factor is not a measure of the quality of scientific work, but a measure of the reputation of the medium in which it is published – which is very different. Moreover, it is possible to measure the number of citations of articles recommended by a PCI; Google Scholar, for instance, collects citations of preprints. There is therefore no obstacle to measuring the visibility of articles recommended by a PCI. In addition, it should be noted that in some disciplines (especially mathematics and physics), preprints deposited in open archives are cited at rates comparable or superior to articles in the best journals in these fields. Note also that the authors of an article recommended by a PCI can subsequently submit their article to a journal with an impact factor. It is therefore entirely possible to be “assigned an impact factor” after receiving a recommendation from a PCI. Finally, and importantly, researchers and the committees evaluating scientists’ projects and careers can change their minds and decide to give articles recommended by a PCI the same value as “classic” articles published in journals. This is currently happening, for example in France, in several evaluation committees: at the National Council of the Universities, the ‘Comité national de la recherche scientifique’ (CoNRS), and the National Institute of Agronomic Research in the field of evolution and ecology.
The second common criticism is that, in the event of success, a PCI may be unable to absorb a large number of submissions, because it operates on a low budget and has no managing editor. This criticism is unfounded for several reasons: 1) Recommenders/editors do not have to handle all submitted articles; only articles that find a recommender/editor are evaluated. Articles that would require a heavy workload from a managing editor – because they are poorly written, presented or formatted – will most likely not be handled by a PCI recommender/editor. 2) As PCI does not publish the recommended articles (they stay in open archives), authors are responsible for the formatting and, more generally, for the aesthetic quality of their article. PCI does no correction or formatting work (it focuses only on the science), which greatly reduces the workload and makes a managing editor unnecessary. 3) Each PCI has a large number of recommenders/editors. This makes it possible to handle a large number of manuscripts without overwhelming any recommender/editor with work, as can happen in traditional journals. In addition, this large number provides specialist expertise across a wide variety of topics, ensuring that interesting, high-quality articles are handled and reviewed.
The third criticism is that a PCI may resemble a closed club in which cronyism flourishes. Regarding cronyism, recommenders/editors and reviewers must sign an ethical charter that prohibits cronyism, limits non-financial conflicts of interest and prohibits all financial conflicts of interest. In addition, each PCI has a managing board responsible, among other things, for verifying the absence of such conflicts of interest. Finally, transparency – through the publication of signed editorial decisions and (possibly signed) reviews – ensures that crony situations are detected and made difficult. Regarding the alleged lack of openness of PCIs, it is sufficient to note that the number of recommenders/editors is much higher in a PCI than in a traditional journal. The number of PCI recommenders/editors is simply not limited; only a lack of expertise can prevent entry into a PCI.
The final criticism is that PCI cannot compete with well-established journals. But PCI is not a journal: it does not publish scientific articles, only recommendations of, and critical comments on, articles deposited in open archives. Thus, PCI does not compete directly with the current journal system, a property that increases its chances of success. Indeed, most journals nowadays accept the submission of articles whose preprints have been deposited in an open archive (http://www.sherpa.ac.uk/romeo/index.php). They should therefore accept and consider preprints recommended by PCIs. Some leading journals in the field of ecology and evolutionary biology (e.g. Ecology Letters, Trends in Ecology and Evolution, Molecular Ecology and Oikos) have publicly indicated that they will not only accept the submission of preprints recommended by a PCI, but also take the reviewers’ reports and the PCI recommendation texts into account to speed up, improve or complete their own evaluation process.
Conclusion: a call for institutional recognition
To sustain this momentum and ensure the management of these PCIs, we hope to obtain the support – in terms of financial, but also symbolic and intellectual, resources – of research institutions (universities, major research institutes and funding agencies) at the international level. As mentioned above, PCI needs to be recognized by leading research institutions. We call for public recognition of PCIs, and of the quality of the PCI article-evaluation process, by international research laboratories, departments, universities, research institutes and funding agencies. Only through such recognition will hiring, promotion and funding committees change their habits and read PCI recommendations instead of looking at the impact factors of the journals in which articles are published. We also call for the creation of new PCIs; the goal is to cover all scientific disciplines, from pure mathematics to history. Existing PCIs are themselves an invitation to create new ones: the more PCIs exist, the more new PCIs will be created, and the more authors will trust a PCI to evaluate their articles.
Thomas Guillemaud, Benoit Facon, and Denis Bourguet are researchers working in evolutionary ecology on pest insects at INRA, the French National Institute for Agronomic Research. They created Peer Community In. François Massol is an evolutionary ecologist working at the CNRS, France. He co-founded Peer Community in Ecology.