

Powerful peer review software designed with users in mind
A peer review management system your journal editors, authors, and reviewers will actually like using.

Get all the submission tracking and manuscript management tools you need in an intuitive system
Scholastica's peer review software is designed to help journals work smarter, not harder — with all the features editors, authors, and reviewers need for smooth submissions and manuscript management and none of the complexities they don't.

Set up your account quickly and easily
Start using Scholastica's peer review system fast with an intuitive setup process, easy data imports, migration support, and free training for all users.
Configure the system to fit your needs
Our peer review system comes with the workflow configuration options journals need, like custom submission form fields, manuscript tags, and more.
Eliminate manual work
Let Scholastica's peer review software do the heavy lifting for you when it comes to tracking manuscripts, assigning tasks, sending reminders, and more.
Improve your author and reviewer experiences
With features to make life easier for editors, authors, and reviewers and an intuitive interface, Scholastica makes peer review smoother for all involved.

Scholastica is user-friendly, well-designed, and affordable — and it has made our editorial review process much smoother. The templates are excellent and reviewers can get up and running on the system very quickly. We have been particularly pleased with service from Scholastica staff, who are knowledgeable and always available to answer questions. All in all, we've had a great experience and can recommend Scholastica!
Streamline editorial workflows to make decisions sooner

Communicate faster with built-in email and time-saving templates
Build out your reviewer list and track performance.

Easy setup and unlimited support with no added fees
Smooth migration and onboarding.
Seamlessly transition one or more journals to Scholastica's peer review system with easy software setup, data transfers, and free training.
Support and automatic updates included
Get user support for editors, authors, and reviewers and instant access to new features — no need to wait for or install upgrades.
Transparent pricing with no contracts
Always know your costs with clear pricing, no contracts, and no added software configuration, update, or support fees.
Optimize your journal workflows with data

Seamlessly process publication charges via CCC RightsLink
Screen for possible plagiarism with similarity check.

Save time with smart automations
Peer review anonymization.
Configure journals for single-anonymized or double-anonymized submissions
File versioning control
Attach new files to manuscripts as needed — we'll keep the versions straight
Reminders and notifications
Keep editors, authors, and reviewers on task with auto reminders and updates
To-do lists and activity feed
Assign personal and team member to-dos and track progress in activity feed
Easy email templates
Create easy-to-edit templates for decisions and common correspondences
Quick filters and custom tags
From manuscripts in progress to reviewers by specialty, find what you need fast
Get high-quality submissions with the rich metadata you need

Move manuscripts from peer review to publishing faster

Global Environmental Politics

Wine Business Case Research Journal

Journal of STEM Outreach

The Scandinavian Journal of Economics

Technology and Culture

Yale Journal of Biology and Medicine

Journal of Financial Education

Dumbarton Oaks Papers
Peer Review Resources
- The Wiley Network blog
- Results from our Wiley Peer Reviewer Study
- Wiley's Review Confidentiality policy
- The review models for Wiley journals
Other Peer Review Guidelines
- Peer Review: The Nuts and Bolts by Sense About Science (SAS)
- Peer review: An Introduction and Guide, PRC
- COPE’s Ethical guidelines for reviewers
- Wiley’s Best Practice Guidelines on Publishing Ethics
- A Guide to Peer Review in Ecology and Evolution, British Ecological Society
- Alan Meier's guidelines for reviewing technical papers
- The Council of Science Editors (CSE) gives guidelines on roles and responsibilities in peer review
- Reviewing Journal Manuscripts, by Charon Pierson
Studies of Peer Review
- Peer review in scholarly journals: Perspective of the scholarly community – an international study, Mark Ware & Mike Monkman, Publishing Research Consortium, 2008
- Rewarding reviewers – sense or sensibility? A Wiley study explained, Learned Publishing
Useful Organizations
Association of Learned and Professional Society Publishers (ALPSP) www.alpsp.org
International trade association for not-for-profit publishers and those who work with them. It is also the largest association of scholarly and professional publishers in the world. It provides representation of the sector, professional development activities and a wealth of information and advice. It runs many very useful and informative seminars and offers a wide variety of training courses at different levels.
Committee on Publication Ethics (COPE) www.publicationethics.org
COPE is a forum for editors of peer-reviewed journals to discuss issues related to the integrity of the scientific record. It supports and encourages editors to report, catalogue and instigate investigations into ethical problems in the publication process.
Consolidated Standards of Reporting Trials (CONSORT) www.consort-statement.org
CONSORT is a tool to improve the quality of reporting of randomized controlled trials (RCTs). It allows RCTs to be reported in a standard, transparent and evidence-based way. It comprises a checklist and a flowchart, which together are called CONSORT. The checklist contains all the things that should be addressed in a trial report.
Council of Science Editors (CSE) www.councilscienceeditors.org
The CSE’s mission is ‘to promote excellence in the communication of scientific information’ and its purpose is ‘to serve members in the scientific, scientific publishing, and information science communities by fostering networking, education, discussion, and exchange and to be an authoritative resource on current and emerging issues in the communication of scientific information’ (accessed 8 August 2006).
European Association of Science Editors (EASE) www.ease.org.uk
EASE is ‘an internationally oriented community of individuals from diverse backgrounds, linguistic traditions and professional experience who share an interest in science communication and editing’ (accessed 8 August 2006). It has an electronic forum for the exchange of ideas, and holds a major conference every 3 years. Its Science Editors’ Handbook contains much useful information, divided up into sections on: (1) Editing, (2) Standards and Style, (3) Nomenclature and Terminology, and (4) Publishing and Printing.
International Association of Scientific, Technical and Medical Publishers (STM) www.stm-assoc.org
The mission of STM is ‘to create a platform for exchanging ideas and information and to represent the interest of the STM publishing community in the fields of copyright, technology developments, and end user/library relations’ (accessed 8 August 2006). STM includes large and small publishing companies, secondary publishers, and learned societies.
International Committee of Medical Journal Editors (ICMJE) www.icmje.org
The ICMJE is made up of a group of editors from general medical journals who meet annually to discuss the ‘Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication’. These guidelines have been extended to cover more than just manuscript preparation, and now include ethical considerations and many editorial issues. All editors and authors will benefit from looking at them and will find them a very valuable resource.
International Council for Science (ICSU) www.icsu.org (Acronym derived from the previous name, the International Council of Science Unions)
The ICSU is a non-governmental organization with a global membership that includes both national scientific bodies and international scientific unions. The ICSU provides a forum for the discussion of issues relevant to international science policy and it actively advocates freedom in science, promotes equitable access to scientific data and information, and facilitates science education. It addresses global issues in partnership with other organizations and acts as an advisor on a wide range of topics from ethics to the environment.
International Publishers’ Association (IPA) www.internationalpublishers.org
The IPA is a long-standing (established in 1896) non-governmental organization that represents the publishing industry, with consultative relations with the United Nations.
International Standard Randomised Controlled Trial Number Register (ISRCTN) http://isrctn.org
The ISRCTN is a simple numeric system (based on randomly generated 8-digit numbers prefixed by ISRCTN) for the unique identification of randomized controlled trials worldwide. The website gives answers to frequently asked questions and readers are referred there for further details and up-to-date information.
Office of Research Integrity (ORI) https://ori.hhs.gov
The ORI is part of the Office of Public Health and Science within the Office of the Secretary of Health and Human Services in the US Department of Health and Human Services. It promotes integrity in biomedical and behavioral research supported by the US Public Health Service at around 4000 institutions worldwide. ORI monitors institutional investigations of research misconduct and promotes responsible conduct of research through educational, preventative and regulatory activities.
Sense about Science (SAS) https://senseaboutscience.org/
Sense about Science is a charity that challenges the misrepresentation of science and evidence in public life. With a focus on peer review and the importance of scholarly publishing to public confidence in science, Sense about Science has developed innovative programs to engage researchers, the public, policymakers, and media. Wiley has partnered with Sense about Science for a decade, sponsoring resources and activities that promote an understanding of evidence and peer review to empower people in public life to make more informed decisions. Results of our partnership include Sense about Science’s popular guide to peer review, I Don’t Know What to Believe, their regular series of peer review and Voice of Young Science workshops for early career researchers, and most recently, the 2019 Evidence Week in UK Parliament.
Society for Scholarly Publishing (SSP) www.sspnet.org
The mission of SSP is ‘to advance scholarly publishing and communication and the professional development of its members through education, collaboration and networking among individuals in this field’ (accessed 8 August 2006). It provides the opportunity for interaction among members in all aspects of scholarly publishing, including journal and book publishers, librarians, manufacturers, and web editors. It has links to many organizations and resources on its website, and includes email and telephone contact details for some of the listings.
World Association of Medical Editors (WAME) www.wame.org
WAME is ‘a voluntary association of editors from many countries who seek to foster international cooperation among editors of peer-reviewed medical journals’ (accessed 8 August 2006). WAME’s website provides many resources that will be useful to all editors, not just those from medical journals – policy statements, ethical considerations, and guidance for editors before and after taking up editorial positions. It also has a comprehensive listing of its listserv discussions, with links to the postings.

5 Tools for Easy Literature Review (With 2 Bonus Tools)

Context matters. It matters when you’re watching a movie, when you’re part of a conversation, and it certainly matters when you’re presenting a research paper. Leaving it out of your article can be not only confusing but also disingenuous to your audience.
That’s where a literature review comes into play. And we’re here to discuss what a literature review is and how you can have an easier time writing one.
What is a literature review?
Your literature review is the lore behind your research paper. It comes in two forms, systematic and scoping, both serving the purpose of rounding up previously published works in your research area that led you to write and finish your own. A literature review is vital as it provides the reader with a critical overview of the existing body of knowledge, your methodology, and an opportunity for research applications.

Some steps to follow while writing your review:
- Pick an accessible topic for your paper
- Do thorough research and gather evidence surrounding your topic
- Read and take notes diligently
- Create a rough structure for your review
- Synthesize your notes and write the first draft
- Edit and proofread your literature review
Tools to streamline your literature review
A literature review is one of the most critical yet tedious stages in composing a research paper. Many students find it an uphill task since it requires extensive reading and careful organization.
Using the tools listed here, you can make your life easier by overcoming some of the existing challenges in literature reviews. From collecting and classifying to analyzing and publishing research outputs, these tools improve your productivity without additional effort or expense.
1. SciSpace
SciSpace is a one-stop solution for an effective literature search and barrier-free access to scientific knowledge. It is a massive repository where you can find millions of peer-reviewed articles and full-text PDF files. You can use the platform in various ways to optimize your workflow.
Find the right information
The comprehensive search filter, teamed with the 'Trace' feature, does a quick and easy job of finding what you want. You can narrow down papers based on PDF availability, year of publishing, document type, and affiliated institution. Then, once you find the right paper, you can use the Trace feature to find related papers, authors, topics, and more.
Find instant explanations for papers and their abstracts
SciSpace has an AI assistant called SciSpace Copilot. Its primary function is to explain papers in simple terms. You can highlight text, clip math and tables, and ask any question you're curious about, and Copilot will give you an instant answer. While you're conducting a literature review, you can use Copilot to get better clarity on the abstract and decide how relevant the paper is to your project.
Assess credibility of papers
Since a literature review forms the foundation of your research, it should come from credible and peer-reviewed sources, or occasionally carefully selected grey literature. SciSpace Discover helps you assess the quality of a source by providing an overview of its references, citations, and performance metrics.
Get the complete picture in no time
SciSpace Discover’s personalized suggestion engine helps you stay on course and get all the information related to the topic from one place. Every time you visit an article page, it provides links to related papers. Besides that, it helps you understand what’s trending, who the top authors are, and which publishers lead on a topic.
Conveniently cite sources
To ensure you don't lose track of your sources, it’s best to make notes while doing your research. SciSpace Discover makes this step effortless. Click the 'Cite' button on an article page, and you will receive preloaded citation text in multiple styles. All you have to do is copy-paste it into your manuscript.
2. Mendeley
Mendeley Citation Manager is a free web and desktop application. It helps simplify your citation management workflow significantly. Here are some ways you can speed up your referencing game with Mendeley.
Generate citations and bibliographies
Easily add references from your Mendeley library to your Word document, change your citation style, and create a bibliography, all without leaving your document.
Retrieve references
It allows you to access your references quickly. Search for a term, and it will return results by referencing the year, author, or source.
Add sources to your Mendeley library by dragging a PDF into Mendeley Reference Manager. Mendeley will automatically extract the PDF's metadata and create a library entry.
Read and annotate documents
It helps you highlight and comment across multiple PDFs while keeping them all in one place using Mendeley Notebook. Notebook pages are not tied to a single reference and let you quote from many PDFs.
3. Zotero
Zotero is a free, open-source citation management tool that works as a plug-in for your browser. It helps you gather the information you need, cite your sources, attach PDFs, notes, and images to your citations, and create bibliographies.
Import research articles to your database
Search for research articles on a keyword, and add relevant results to your database. Then, select the articles you are most interested in, and import them into Zotero.
Add bibliography in a variety of formats
With Zotero, you don’t have to scramble for different bibliography formats. Simply use the Zotero-Word plug-in to insert in-text citations and generate a bibliography.
Share your research
You can save a paper and sync it with an online library to easily share your research for group projects. Zotero can be used to create your database and decrease the time you spend formatting citations.
4. Sysrev
Sysrev facilitates screening, collaboration, and data extraction from academic publications, abstracts, and PDF documents using machine learning. The platform is free and supports public and Open Access projects only.
Some of the features of Sysrev include:
Group labels
Group labels can be a powerful concept for creating database tables from documents. When exported and re-imported, each group label creates a new table. To make labels for a project, go into the manage -> labels section of the project.
Group labels enable project managers to pull table information from documents. It makes it easier to communicate review results for specific articles.
Track reviewer performance
Sysrev's label counting tool provides filtering and visualization options for keeping track of the distribution of labels throughout the project's progress. Project managers can check their projects at any point to track progress and the reviewer's performance.
Tool for concordance
The Sysrev tool for concordance allows project administrators and reviewers to perform analysis on their labels. Concordance is measured by calculating the number of times users agree on the labels they have extracted.
5. Colandr
Colandr is a free, open-source, web-based analysis and screening tool built on machine learning. It was designed to ease collaboration across the various stages of the systematic review process. The tool can be a little complex to use, so here are the steps involved in working with Colandr.
Create a review
The first step to using Colandr is setting up an organized review project. This is helpful to librarians who are assisting researchers with systematic reviews.
The planning stage involves setting the review's objectives along with the research questions. Any reviewer can view the details of the planning stage; however, only the review's author can modify them.
Citation screening/import
In this phase, users can upload their results from database searches. Colandr also offers an automated deduplication system.
Full-text screening
Colandr's system learns which combinations of terms and expressions are most useful to the reviewer. If an article is selected, it is moved to the final step.
Data extraction/export
Data extraction in Colandr is more efficient than manual extraction. The form fields for data extraction are created during the planning stage of the review, and users can revisit or modify the form after completing the initial screening.
Bonus Tools
6. SRDR+
SRDR+ is a web-based tool for extracting and managing systematic review or meta-analysis data. It is open and has a searchable archive of systematic reviews and their data.
7. Plot Digitizer
Plot Digitizer is an efficient tool for extracting information from graphs and images, equipped with many features that facilitate data extraction. The program comes with a free online application, which is adequate to extract data quickly.
Writing a literature review is not easy. It's a time-consuming process, which can become tiring at times. The tools mentioned in this blog do an excellent job of maximizing your efforts and helping you write literature reviews much more efficiently. With them, you can breathe a sigh of relief and give more time to your research.
Frequently Asked Questions (FAQs)
1. What is RRL in research?
RRL stands for Review of Related Literature and is sometimes used interchangeably with 'literature review.' An RRL is a body of studies relevant to the topic being researched. These studies may be in the form of journal articles, books, reports, and other similar documents. A review of related literature is used to support an argument or theory being made by the researcher, as well as to provide information on how others have approached the same topic.
2. What are some software tools available for literature review?
• SciSpace Discover
• Mendeley
• Zotero
• Sysrev
• Colandr
• SRDR+
3. How to generate an online literature review?
The SciSpace Discover tool, which offers an excellent repository of millions of peer-reviewed articles and resources, will help you generate or create a literature review easily. You can find relevant information by utilizing the filter option, checking a source's credibility, tracing related topics and articles, and citing in widely accepted formats with a single click.
4. What does it mean to synthesize literature?
To synthesize literature is to take the main points and ideas from a number of sources and present them in a new way. The goal is to create a new piece of writing that pulls together the most important elements of all the sources you read. Make recommendations based on them, and connect them to the research.
5. Should we write abstract for literature review?
Abstracts, particularly for the literature review section, are not required. However, an abstract for the research paper, on the whole, is useful for summarizing the paper and letting readers know what to expect from it. It can also be used to summarize the main points of the paper so that readers have a better understanding of the paper's content before they read it.
6. How do you evaluate the quality of a literature review?
• Whether it is clear and well-written.
• Whether the information is current and up to date.
• Whether it covers all of the relevant sources on the topic.
• Whether it provides enough evidence to support its conclusions.
7. Is literature review mandatory?
Yes. A literature review is a mandatory part of any research project. It is a critical step in the process that allows you to establish the scope of your research and provide a background for the rest of your work.
8. What are the sources for a literature review?
• Reports
• Theses
• Conference proceedings
• Company reports
• Some government publications
• Journals
• Books
• Newspapers
• Articles by professional associations
• Indexes
• Databases
• Catalogues
• Encyclopaedias
• Dictionaries
• Bibliographies
• Citation indexes
• Statistical data from government websites
9. What is the difference between a systematic review and a literature review?
A systematic review is a form of research that uses a rigorous method to generate knowledge from both published and unpublished data. A literature review, on the other hand, is a critical summary of an area of research within the context of what has already been published.
- Research article
- Open Access
- Published: 06 March 2019

Tools used to assess the quality of peer review reports: a methodological systematic review
- Cecilia Superchi ORCID: orcid.org/0000-0002-5375-6018 1,2,3,
- José Antonio González 1,
- Ivan Solà 4,5,
- Erik Cobo 1,
- Darko Hren 6 &
- Isabelle Boutron 7
BMC Medical Research Methodology volume 19, Article number: 48 (2019)
A strong need exists for a validated tool that clearly defines peer review report quality in biomedical research, as it will allow evaluating interventions aimed at improving the peer review process in well-performed trials. We aim to identify and describe existing tools for assessing the quality of peer review reports in biomedical research.
We conducted a methodological systematic review by searching PubMed, EMBASE (via Ovid) and The Cochrane Methodology Register (via The Cochrane Library) as well as Google® for all reports in English describing a tool for assessing the quality of a peer review report in biomedical research. Data extraction was performed in duplicate using a standardized data extraction form. We extracted information on the structure, development and validation of each tool. We also identified quality components across tools using a systematic multi-step approach and we investigated quality domain similarities among tools by performing hierarchical, complete-linkage clustering analysis.
We identified a total of 24 tools: 23 scales and 1 checklist. Six tools consisted of a single item and 18 had several items ranging from 4 to 26. None of the tools reported a definition of ‘quality’. Only 1 tool described the scale development and 10 provided measures of validity and reliability. Five tools were used as an outcome in a randomized controlled trial (RCT). Moreover, we classified the quality components of the 18 tools with more than one item into 9 main quality domains and 11 subdomains. The tools contained from two to seven quality domains. Some domains and subdomains were considered in most tools, such as the detailed/thorough nature of reviewers’ comments (11/18). Others were rarely considered, such as whether or not the reviewer made comments on the statistical methods (1/18).
Several tools are available to assess the quality of peer review reports; however, the development and validation process is questionable and the concepts evaluated by these tools vary widely. The results from this study and from further investigations will inform the development of a new tool for assessing the quality of peer review reports in biomedical research.
The use of editorial peer review originates in the eighteenth century [ 1 ]. It is a longstanding and established process that generally aims to provide a fair decision-making mechanism and improve the quality of a submitted manuscript [ 2 ]. Despite the long history and application of the peer review system, its efficacy is still a matter of controversy [ 3 , 4 , 5 , 6 , 7 ]. About 30 years after the first international Peer Review Congress, there are still ‘scarcely any bars to eventual publication. There seems to be no study too fragmented, no hypothesis too trivial [...] for a paper to end up in print’ (Drummond Rennie, chair of the advisory board) [ 8 ].
Recent evidence suggests that many current editors and peer reviewers in biomedical journals still lack the appropriate competencies [ 9 ]. In particular, it has been shown that peer reviewers rarely receive formal training [ 3 ]. Moreover, their capacity to detect errors [ 10 , 11 ], identify deficiencies in reporting [ 12 ] and spin [ 13 ] has been found lacking.
Some systematic reviews have been performed to estimate the effect of interventions aimed at improving the peer review process [ 2 , 14 , 15 ]. These studies showed that there is still a lack of evidence supporting the use of interventions to improve the quality of the peer review process. Furthermore, Bruce and colleagues highlighted the urgent need to clarify outcomes, such as peer review report quality, that should be used in randomized controlled trials evaluating these interventions [ 15 ].
A validated tool that clearly defines peer review report quality in biomedical research is greatly needed. This will allow researchers to have a structured instrument to evaluate the impact of interventions aimed at improving the peer review process in well-performed trials. Such a tool could also be regularly used by editors to evaluate the work of reviewers.
Herein, as a starting point for the development of a new tool, we identify and describe existing tools that assess the quality of peer review reports in biomedical research.
Study design
We conducted a methodological systematic review and followed the standard Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines [ 16 ]. The quality of peer review reports is an outcome that in the long term is related to clinical relevance and patient care. However, the protocol was not registered in PROSPERO, as this review does not contain direct health-related outcomes [ 17 ].
Information sources and search strategy
We searched PubMed, EMBASE (via Ovid) and The Cochrane Methodology Register (via The Cochrane Library) from their inception to October 27, 2017 as well as Google® (search date: October 20, 2017) for all reports describing a tool to assess the quality of a peer review report in biomedical research. Search strategies were refined in collaboration with an expert methodologist (IS) and are presented in the Additional file 1 . We hand-searched the citation lists of included papers and consulted a senior editor with expertise in editorial policies and peer review processes to further identify relevant reports.
Eligibility criteria
We included all reports describing a tool to assess the quality of a peer review report. Sanderson and colleagues defined a tool as ‘any structured instrument aimed at aiding the user to assess the quality [...]’ [ 18 ]. Building on this definition, we defined a quality tool as any structured or unstructured instrument assisting the user to assess the quality of a peer review report (for definitions see Table 1 ). We restricted inclusion to reports published in English.
Study selection
We exported the references retrieved from the search into the reference manager Endnote X7 (Clarivate Analytics, Philadelphia, United States), which was subsequently used to remove duplicates. We reviewed all records manually to verify and remove duplicates that had not been previously detected. A reviewer (CS) screened all titles and abstracts of the retrieved citations. A second reviewer (JAG) carried out quality control on a 25% random sample obtained using the statistical software R 3.3.3 [ 19 ]. We obtained and independently examined the full-text copies of potentially eligible reports for further assessment. In the case of disagreement, consensus was determined by a discussion or by involving a third reviewer (DH). We reported the result of this process through a PRISMA flowchart [ 16 ]. When several tools were reported in the same article, they were included as separate tools. When a tool was reported in more than one article, we extracted data from all related reports.
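The 25% quality-control sample was drawn with statistical software; purely as an illustration of the idea (not the authors' code, which the paper says was written in R 3.3.3), here is a minimal Python sketch of drawing a reproducible random sample of screened records. The record list, seed, and sample fraction are assumptions.

```python
import random

# Hypothetical list of screened records (titles/abstracts already assessed by the first reviewer).
records = [f"record_{i}" for i in range(1, 201)]

random.seed(2017)  # fixed seed so the quality-control sample can be reproduced
sample_size = round(0.25 * len(records))  # 25% random sample for the second reviewer
qc_sample = random.sample(records, k=sample_size)

print(sample_size, qc_sample[:5])
```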
Data extraction
General characteristics of tools
We designed a data extraction form using Google® Docs and extracted the general characteristics of the tools. We determined whether the tool was a scale or a checklist. We defined a tool as a scale when it included a numeric or nominal overall quality score, and as a checklist when an overall quality score was not present. We recorded the total number of items (for definitions see Table 1 ). For scales with more than 1 item, we extracted how items were weighted, how the overall score was calculated, and the scoring range. Moreover, we checked whether the scoring instructions were adequately defined, partially defined, or not defined according to the subjective judgement of two reviewers (CS and JAG) (an example of the definition for scoring instructions is shown in Table 2 ). Finally, we extracted all information related to the development, validation, and assessment of the tool’s reliability and whether the concept of quality was defined.
Two reviewers (CS and JAG) piloted and refined the data extraction form on a random 5% sample of extracted articles. Full data extraction was conducted by two reviewers (CS and JAG) working independently for all included articles. In the case of disagreement, consensus was obtained by discussion or by involving a third reviewer (DH). Authors of the reports were contacted in cases where we needed further clarification of the tool.
Quality components of the peer review report considered in the tools
We followed the systematic multi-step approach recently described by Gentles [ 20 ], which is based on a constant comparative method of analysis developed within the Grounded Theory approach [ 21 ]. Initially, a researcher (CS) extracted all items included in the tools and for each item identified a ‘key concept’ representing a quality component of peer review reports. Next, two researchers (CS and DH) organized the key concepts into a domain-specific matrix (analogous to the topic-specific matrices described by Gentles). Initially, the matrix consisted of domains for peer review report quality, followed by items representative of each domain and references to literature sources that items were extracted from. As the analysis progressed, subdomains were created and the final version of the matrix included domains, subdomains, items and references.
Furthermore, we calculated the proportions of domains based on the number of items included in each domain for each tool. According to the proportions obtained, we created a domain profile for each tool. Then, we calculated the matrix of Euclidean distances between the domain profiles. These distances were used to perform the hierarchical, complete-linkage clustering analysis, which provided us with a tree structure that we represent in a chart. Through this graphical summary, we were able to identify domain similarities among the different tools, which helped us draw our analytical conclusions. The calculations and graphical representations were obtained using the statistical software R 3.3.3 [ 19 ].
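The authors report performing these calculations in R 3.3.3. The following sketch, using assumed toy domain profiles, illustrates the same general idea in Python with SciPy: pairwise Euclidean distances between domain profiles, followed by complete-linkage hierarchical clustering and a cut of the resulting tree.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Toy domain profiles: one row per tool, one column per quality domain,
# each row giving the proportion of that tool's items falling in each domain.
profiles = np.array([
    [0.6, 0.2, 0.2, 0.0],
    [0.5, 0.3, 0.2, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.1, 0.1, 0.4, 0.4],
])

# Pairwise Euclidean distances between the domain profiles (condensed form).
distances = pdist(profiles, metric="euclidean")

# Hierarchical, complete-linkage clustering on those distances.
tree = linkage(distances, method="complete")

# Cut the tree into a chosen number of clusters (the paper reports five clusters for 18 tools).
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)
```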
Study selection and general characteristics of reports
The screening process is summarized in a flow diagram (Fig. 1 ). Of the 4312 records retrieved, we finally included 46 reports: 39 research articles; 3 editorials; 2 information guides; 1 letter to the editor; and 1 study available only as an abstract (excluded studies are listed in Additional file 2 ; included studies are listed in Additional file 3 ).

Study selection flow diagram
General characteristics of the tools
In the 46 reports, we identified 24 tools, including 23 scales and 1 checklist. The tools were developed from 1985 to 2017. Four tools had from 2 to 4 versions [ 22 , 23 , 24 , 25 ]. Five tools were used as an outcome in a randomized controlled trial [ 23 , 25 , 26 , 27 , 28 ]. Table 3 lists the general characteristics of the identified tools. Table 4 presents a more complete descriptive summary of the tools’ characteristics, including types and measures of validity and reliability.
Six scales consisted of a single item enquiring into the overall quality of the peer review report, all of them based on directly asking users to score the overall quality [ 22 , 25 , 29 , 30 , 31 , 32 ]. These tools assessed the quality of a peer review report by using: 1) a 4- or 5-point Likert scale ( n = 4); 2) the categories ‘good’, ‘fair’ and ‘poor’ ( n = 1); or 3) a restricted scale from 80 to 100 ( n = 1). Seventeen scales and one checklist had several items, ranging in number from 4 to 26. Of these, 10 used the same weight for each item [ 23 , 24 , 27 , 28 , 33 , 34 , 35 , 36 , 37 , 38 ]. The overall quality score was the sum of the item scores ( n = 3); the mean of the item scores ( n = 6); or a summary score ( n = 11) (for definitions see Table 1 ). Three scales reported more than one way to assess the overall quality [ 23 , 24 , 36 ]. The scoring system instructions were not defined in 67% of the tools.
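As an illustration only (not taken from any of the included tools), the sketch below shows how an overall quality score might be computed from item scores under the equal-weight sum and mean approaches described above. The 5-item, 1-to-5 scale and the scores themselves are assumptions.

```python
# Hypothetical item scores for one peer review report on a 5-item scale (1 = poor, 5 = excellent).
item_scores = [4, 3, 5, 4, 2]

# Equal-weight scoring: the overall score is either the sum of the item scores...
sum_score = sum(item_scores)

# ...or the mean of the item scores.
mean_score = sum(item_scores) / len(item_scores)

print(f"sum = {sum_score}, mean = {mean_score:.2f}")  # sum = 18, mean = 3.60
```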
None of the tools reported the definition of peer review report quality, and only one described the tool development [ 39 ]. The first version of this tool was designed by a development group composed of four researchers and three editors. It was based on a tool used in an earlier study and that had been developed by reviewing the literature and interviewing editors. Successively, the tool was modified by rewording some questions after some group discussions and a guideline for using the tool was drawn up.
Only 3 tools assessed and reported a validation process [ 39 , 40 , 41 ]. The assessed types of validity included face validity, content validity, construct validity, and preliminary criterion validity. Face and content validity could involve either a sole editor and author or a group of researchers and editors. Construct validity was assessed with multiple regression analysis using discriminant criteria (reviewer characteristics such as age, sex, and country of residence) and convergent criteria (training in epidemiology and/or statistics); or the overall assessment of the peer review report by authors and an assessment of ( n = 4–8) specific components of the peer review report by editors or authors. Preliminary criterion validity was assessed by comparing grades obtained by an editor to those obtained by an editor-in-chief using an earlier version of the tool. Reliability was assessed in 9 tools [ 24 , 25 , 26 , 27 , 31 , 36 , 39 , 41 , 42 ]; all reported inter-rater reliability and 2 also reported test-retest reliability. One tool reported the internal consistency measured with Cronbach’s alpha [ 39 ].
Quality components of the peer review reports considered in the tools with more than one item
We extracted 132 items included in the 18 tools. One item asking for the percentage of co-reviews the reviewer had graded was not included in the classification because it represented a method of measuring reviewer’s performance and not a component of peer review report quality.
We organized the key concepts from each item into ‘topic-specific matrices’ (Additional file 4 ), identifying nine main domains and 11 subdomains: 1) relevance of study ( n = 9); 2) originality of the study ( n = 5); 3) interpretation of study results ( n = 6); 4) strengths and weaknesses of the study ( n = 12) (general, methods and statistical methods); 5) presentation and organization of the manuscript ( n = 8); 6) structure of the reviewer’s comments ( n = 4); 7) characteristics of reviewer’s comments ( n = 14) (clarity, constructiveness, detail/thoroughness, fairness, knowledgeability, tone); 8) timeliness of the review report ( n = 7); and 9) usefulness of the review report ( n = 10) (decision making and manuscript improvement). The total number of tools corresponding to each domain and subdomain is shown in Fig. 2 . An explanation and example of all domains and subdomains is provided in Table 5 . Some domains and subdomains were considered in most tools, such as whether the reviewers’ comments were detailed/thorough ( n = 11) and constructive ( n = 9), whether the reviewers’ comments were on the relevance of the study ( n = 9) and if the peer review report was useful for manuscript improvement ( n = 9). However, other items were rarely considered, such as whether the reviewer made comments on the statistical methods ( n = 1).

Frequency of quality domains and subdomains
Clustering analysis among tools
We created a domain profile for each tool. For example, the tool developed by Justice et al. consisted of 5 items [ 35 ]. We classified three items under the domain ‘Characteristics of the reviewer’s comments’, one under ‘Timeliness of the review report’ and one under ‘Usefulness of the review report’. According to this classification, the domain profile (represented by proportions of domains) for this tool was 0.6:0.2:0.2 for the three domains it incorporated and 0 for the remaining ones. The hierarchical clustering used the matrix of Euclidean distances among domain profiles, which led to five main clusters (Fig. 3 ).
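As a minimal sketch of how such a domain profile can be derived, the snippet below mirrors the Justice et al. example of three, one, and one items; the domain identifiers and item-to-domain assignment are assumed for illustration, and the proportions come out as 0.6 and 0.2 for the occupied domains and 0 elsewhere.

```python
from collections import Counter

# The nine quality domains identified in this review.
domains = [
    "relevance", "originality", "interpretation", "strengths_weaknesses",
    "presentation", "structure", "characteristics_of_comments",
    "timeliness", "usefulness",
]

# Hypothetical item-to-domain assignment for a 5-item tool
# (three items on reviewer's comments, one on timeliness, one on usefulness).
item_domains = [
    "characteristics_of_comments",
    "characteristics_of_comments",
    "characteristics_of_comments",
    "timeliness",
    "usefulness",
]

counts = Counter(item_domains)
profile = [counts[d] / len(item_domains) for d in domains]
print(profile)  # 0.6 under characteristics_of_comments, 0.2 under timeliness and usefulness
```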

Hierarchical clustering of tools based on the nine quality domains. The figure shows which quality domains are present in each tool. A slice of the chart represents a tool, and each slice is divided into sectors, indicating quality domains (in different colours). The area of each sector corresponds to the proportion of each domain within the tool. For instance, the “Review Rating” tool consists of two domains: Timeliness , meaning that 25% of all its items are encompassed in this domain, and Characteristics of reviewer’s comments occupying the remaining 75%. The blue lines starting from the centre of the chart define how the tools are divided into the five clusters. Clusters #1, #2 and #3 are sub-nodes of a major node grouping all three, meaning that the tools in these clusters have a similar domain profile compared to the tools in clusters #4 and #5
The first cluster consisted of 5 tools developed from 1990 to 2016. All of these tools included at least one item in the characteristics of the reviewer’s comments domain, representing at least 50% of each domain profile. The second cluster contained 3 tools developed from 1994 to 2006, characterized by incorporating at least one item in the usefulness and timeliness domains. The third cluster included 6 tools developed from 1998 to 2010 and exhibited the most heterogeneous mix of domains. These tools were distinct from the rest because they encompassed items related to interpretation of the study results and originality of the study. Moreover, the third cluster included two tools with different versions and variations. The first, second, and third clusters were linked together in the hierarchical tree, grouping tools with at least one quality component in the characteristics of the reviewer’s comments domain. The fourth cluster contained 2 tools developed from 2011 to 2017 that consisted of at least one component in the strengths and weaknesses domain. Finally, the fifth cluster included 2 tools developed from 2009 to 2012 that consisted of the same 2 domains. The fourth and fifth clusters were separated from the rest in the hierarchical tree, grouping tools with only a few domains.
Discussion
To the best of our knowledge, this is the first comprehensive review to systematically identify tools used in biomedical research for assessing the quality of peer review reports. We identified 24 tools from both the medical literature and an internet search: 23 scales and 1 checklist. One out of four tools consisted of a single item that simply asked the evaluator for a direct assessment of the peer review report’s ‘overall quality’. The remaining tools had between 4 and 26 items, with the overall quality assessed as the sum of all items, their mean, or a summary score.
Since a definition of overall quality was not provided, these tools consisted exclusively of a subjective quality assessment by the evaluators. Moreover, we found that only one study reported a rigorous development process of the tool, although it included a very limited number of people. This is of concern because it means that the identified tools were, in fact, not suitable to assess the quality of a peer review report, particularly because they lack a focused theoretical basis. We found 10 tools that were evaluated for validity and reliability; in particular, criterion validity was not assessed for any tool.
Most of the scales with more than one item resulted in a summary score. These scales did not consider how items could be weighted differently. Although commonly used, scales are controversial tools in assessing quality primarily because using a score ‘in summarization weights’ would cause a biased estimation of the measured object [ 43 ]. It is not clear how weights should be assigned to each item of the scale [ 18 ]. Thus different weightings would produce different scales, which could provide varying quality assessments of an individual study [ 44 ].
In our methodological systematic review, we found only one checklist. However, it was neither rigorously developed nor validated, and therefore we could not consider it adequate for assessing peer review report quality. We believe that checklists may be a more appropriate means for assessing quality because they do not present an overall score, meaning they do not require a weight for the items.
It is necessary to clearly define what the tool measures. For example, the Risk of Bias (RoB) tool [ 45 ] has a clear aim (to assess trial conduct and not reporting), and it provides a detailed definition of each domain in the tool, including support for judgment. Furthermore, it was developed with transparent procedures, including wide consultation and review of the empirical evidence. Bias and uncertainty can arise when using tools that are not evidence-based, rigorously developed, validated and reliable; and this is particularly true for tools that are used for evaluating interventions aimed at improving the peer review process in RCTs, thus affecting how trial results are interpreted.
We found that most of the items included in the different tools did not cover the scientific aspects of a peer review report, nor were they specific to biomedical research. Surprisingly, few tools included an item related to the methods used in the study, and only one inquired about the statistical methods.
In line with a previous study published in 1990 [ 28 ], we believe that the quality components found across all tools could be further organized according to the perspective of either an editor or an author, specifically by taking into account the different yet complementary uses of a peer review report. For instance, reviewer’s comments on the relevance of the study and the interpretation of the study’s results could assist editors in making an editorial decision, while the clarity and detail/thoroughness of reviewer’s comments are important attributes that help authors improve manuscript quality. We plan to further investigate the perspectives of biomedical editors and authors on the quality of peer review reports by conducting an international online survey. We will also include patient editors as survey participants, as their involvement in the peer review process can further ensure that research manuscripts are relevant and appropriate to end-users [ 46 ].
The present study has strengths but also some limitations. Although we implemented a comprehensive search strategy for reports by following the guidance for conducting methodological reviews [ 20 ], we cannot exclude the possibility that some tools were not identified. Moreover, we limited the eligibility criteria to reports published only in English. Finally, although the number of eligible records we identified through Google® was very limited, it is possible that we introduced selection bias due to a (re)search bubble effect [ 47 ].
Due to the lack of a standard definition of quality, a variety of tools exist for assessing the quality of a peer review report. Overall, we were able to establish 9 quality domains, and each of the 18 multi-item tools covered between two and seven of them. The variety of items and item combinations amongst tools raises concern about variations in the quality of publications across biomedical journals. Low-quality biomedical research implies a tremendous waste of resources [ 48 ] and explicitly affects patients’ lives. We strongly believe that a validated tool is necessary for providing a clear definition of peer review report quality in order to evaluate interventions aimed at improving the peer review process in well-performed trials.
Conclusions
The findings from this methodological systematic review show that the tools for assessing the quality of a peer review report have various components, which have been grouped into 9 domains. We plan to survey a sample of editors and authors in order to refine our preliminary classifications. The results from further investigations will allow us to develop a new tool for assessing the quality of peer review reports. This in turn could be used to evaluate interventions aimed at improving the peer review process in RCTs. Furthermore, it would help editors: 1) evaluate the work of reviewers; 2) provide specific feedback to reviewers; and 3) identify reviewers who provide outstanding review reports. Finally, it might be further used to score the quality of peer review reports in developing programs to train new reviewers.
Abbreviations
PRISMA: Preferred Reporting Items for Systematic Review and Meta-Analysis
RCT: Randomized controlled trial
RoB: Risk of Bias
Kronick DA. Peer review in 18th-century scientific journalism. JAMA. 1990;263(10):1321–2.
Jefferson T, Alderson P, Wager E, Davidoff F. Effects of editorial peer review. JAMA. 2002;287(21):2784–6.
Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006;99:178–82.
Baxt WG, Waeckerle JF, Berlin JA, Callaham ML. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Ann Emerg Med. 1998;32(3):310–7.
Kravitz RL, Franks P, Feldman MD, Gerrity M, Byrne C, William M. Editorial peer reviewers’ recommendations at a general medical journal: are they reliable and do editors care? PLoS One. 2010;5(4):2–6.
Yaffe MB. Re-reviewing peer review. Sci Signal. 2009;2(85):1–3.
Stahel PF, Moore EE. Peer review for biomedical publications: we can improve the system. BMC Med. 2014;12(179):1–4.
Rennie D. Make peer review scientific. Nature. 2016;535:31–3.
Moher D. Custodians of high-quality science: are editors and peer reviewers good enough? https://www.youtube.com/watch?v=RV2tknDtyDs&t=454s . Accessed 16 Oct 2017.
Ghimire S, Kyung E, Kang W, Kim E. Assessment of adherence to the CONSORT statement for quality of reports on randomized controlled trial abstracts from four high-impact general medical journals. Trials. 2012;13:77.
Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results. JAMA. 2010;303(20):2058–64.
Hopewell S, Collins GS, Boutron I, Yu L-M, Cook J, Shanyinde M, et al. Impact of peer review on reports of randomised trials published in open peer review journals: retrospective before and after study. BMJ. 2014;349:g4145.
Lazarus C, Haneef R, Ravaud P, Boutron I. Classification and prevalence of spin in abstracts of non-randomized studies evaluating an intervention. BMC Med Res Methodol. 2015;15:85.
Jefferson T, Rudin M, Brodney Folse S, et al. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev. 2007;2:MR000016.
Bruce R, Chauvin A, Trinquart L, Ravaud P, Boutron I. Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis. BMC Med. 2016;14:85.
Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.
NHS. PROSPERO International prospective register of systematic reviews. https://www.crd.york.ac.uk/prospero/ . Accessed 6 Nov 2017.
Sanderson S, Tatt ID, Higgins JPT. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Intern J Epidemiol. 2007;36:666–76.
R Core Team. R: a language and environment for statistical computing. http://www.r-project.org/ . Accessed 4 Dec 2017.
Gentles SJ, Charles C, Nicholas DB, Ploeg J, McKibbon KA. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev. 2016;5:172.
Glaser B, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.
Friedman DP. Manuscript peer review at the AJR: facts, figures, and quality assessment. Am J Roentgenol. 1995;164(4):1007–9.
Black N, Van Rooyen S, Godlee F, Smith R, Evans S. What makes a good reviewer and a good review for a general medical journal? JAMA. 1998;280(3):231–3.
Henly SJ, Dougherty MC. Quality of manuscript reviews in nursing research. Nurs Outlook. 2009;57(1):18–26.
Callaham ML, Baxt WG, Waeckerle JF, Wears RL. Reliability of editors’ subjective quality ratings of peer reviews of manuscripts. JAMA. 1998;280(3):229–31.
Callaham ML, Knopp RK, Gallagher EJ. Effect of written feedback by editors on quality of reviews: two randomized trials. JAMA. 2002;287(21):2781–3.
Van Rooyen S, Godlee F, Evans S, Black N, Smith R. Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ. 1999;318(7175):23–7.
Mcnutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review. JAMA. 1990;263(10):1371–6.
Moore A, Jones R. Supporting and enhancing peer review in the BJGP. Br J Gen Pract. 2014;64(624):e459–61.
Stossel TP. Reviewer status and review quality. N Engl J Med. 1985;312(10):658–9.
Thompson SR, Agel J, Losina E. The JBJS peer-review scoring scale: a valid, reliable instrument for measuring the quality of peer review reports. Learn Publ. 2016;29:23–5.
Rajesh A, Cloud G, Harisinghani MG. Improving the quality of manuscript reviews: impact of introducing a structured electronic template to submit reviews. AJR. 2013;200:20–3.
Shattell MM, Chinn P, Thomas SP, Cowling WR. Authors’ and editors’ perspectives on peer review quality in three scholarly nursing journals. J Nurs Scholarsh. 2010;42(1):58–65.
Jawaid SA, Jawaid M, Jafary MH. Characteristics of reviewers and quality of reviews: a retrospective study of reviewers at Pakistan journal of medical sciences. Pakistan J Med Sci. 2006;22(2):101–6.
Justice AC, Cho MK, Winker MA, Berlin JA. Does masking author identity improve peer review quality? A randomized controlled trial. JAMA. 1998;280(3):240–3.
Henly SJ, Bennett JA, Dougherty MC. Scientific and statistical reviews of manuscripts submitted to nursing research: comparison of completeness, quality, and usefulness. Nurs Outlook. 2010;58(4):188–99.
Hettyey A, Griggio M, Mann M, Raveh S, Schaedelin FC, Thonhauser KE, et al. Peerage of science: will it work? Trends Ecol Evol. 2012;27(4):189–90.
Publons. Publons for editors: overview. https://static1.squarespace.com/static/576fcda2e4fcb5ab5152b4d8/t/58e21609d482e9ebf98163be/1491211787054/Publons_for_Editors_Overview.pdf . Accessed 20 Oct 2017.
Van Rooyen S, Black N, Godlee F. Development of the review quality instrument (RQI) for assessing peer reviews of manuscripts. J Clin Epidemiol. 1999;52(7):625–9.
Evans AT, McNutt RA, Fletcher SW, Fletcher RH. The characteristics of peer reviewers who produce good-quality reviews. J Gen Intern Med. 1993;8(8):422–8.
Feurer I, Becker G, Picus D, Ramirez E, Darcy M, Hicks M. Evaluating peer reviews: pilot testing of a grading instrument. JAMA. 1994;272(2):98–100.
Landkroon AP, Euser AM, Veeken H. Quality assessment of reviewers’ reports using a simple instrument. Obstet Gynecol. 2006;108(4):979–85.
Greenland S, O’Rourke K. On the bias produced by quality scores in meta-analysis, and a hierarchical view of proposed solutions. Biostatistics. 2001;2(4):463–71.
Jüni P, Witschi A, Bloch R. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA. 1999;282(11):1054–60.
Higgins JPT, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.
Schroter S, Price A, Flemyng E, et al. Perspectives on involvement in the peer-review process: surveys of patient and public reviewers at two journals. BMJ Open. 2018;8:e023357.
Ćurković M, Košec A. Bubble effect: including internet search engines in systematic reviews introduces selection bias and impedes scientific reproducibility. BMC Med Res Methodol. 2018;18(1):130.
Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383(9912):156–65.
Kliewer MA, Freed KS, DeLong DM, Pickhardt PJ, Provenzale JM. Reviewing the reviewers: comparison of review quality and reviewer characteristics at the American journal of roentgenology. AJR. 2005;184(6):1731–5.
Berquist T. Improving your reviewer score: it’s not that difficult. AJR. 2017;209:711–2.
Callaham ML, Mcculloch C. Longitudinal trends in the performance of scientific peer reviewers. Ann Emerg Med. 2011;57(2):141–8.
Yang Y. Effects of training reviewers on quality of peer review: a before-and-after study (Abstract). https://peerreviewcongress.org/abstracts_2009.html . Accessed 7 Nov 2017.
Prechelt L. Review quality collector. https://reviewqualitycollector.org/static/pdf/rqdef-example.pdf . Accessed 20 Oct 2017.
Das Sinha S, Sahni P, Nundy S. Does exchanging comments of Indian and non-Indian reviewers improve the quality of manuscript reviews? Natl Med J India. 1999;12(5):210–3.
Callaham ML, Schriger DL. Effect of structured workshop training on subsequent performance of journal peer reviewers. Ann Emerg Med. 2002;40(3):323–8.
Download references
Acknowledgments
The authors would like to thank the MiRoR consortium for their support, Elizabeth Moylan for helping to identify further relevant reports and Melissa Sharp for providing advice during the writing of this article.
Funding
This project was supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no 676207. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Availability of data and materials
The datasets supporting the conclusions of the present study will be available in the Zenodo repository in the Methods in Research on Research (MiRoR) community [https://zenodo.org/communities/miror/?page=1&size=20].
Author information
Authors and Affiliations
Department of Statistics and Operations Research, Barcelona-Tech, UPC, c/ Jordi Girona 1-3, 08034, Barcelona, Spain
Cecilia Superchi, José Antonio González & Erik Cobo
INSERM, U1153 Epidemiology and Biostatistics Sorbonne Paris Cité Research Center (CRESS), Methods of therapeutic evaluation of chronic diseases Team (METHODS), F-75014, Paris, France
Cecilia Superchi
Paris Descartes University, Sorbonne Paris Cité, Paris, France
Iberoamerican Cochrane Centre, Hospital de la Santa Creu i Sant Pau, C/ Sant Antoni Maria Claret 167, Pavelló 18 - planta 0, 08025, Barcelona, Spain
CIBER de Epidemiología y Salud Pública (CIBERESP), Madrid, Spain
Department of Psychology, Faculty of Humanities and Social Sciences, University of Split, Split, Croatia
Centre d’épidémiologie Clinique, Hôpital Hôtel-Dieu, 1 place du Parvis Notre-Dame, 75004, Paris, France
Isabelle Boutron
Contributions
All authors provided intellectual contributions to the development of this study. CS, EC and IB had the initial idea and with JAG and DH, designed the study. CS designed the search in collaboration with IS. CS conducted the screening and JAG carried out a quality control of a 25% random sample. CS and JAG conducted the data extraction. CS conducted the analysis and with JAG designed the figures. CS led the writing of the manuscript. IB led the supervision of the manuscript preparation. All authors provided detailed comments on earlier drafts and approved the final manuscript.
Corresponding author
Correspondence to Cecilia Superchi.
Ethics declarations
Ethics approval and consent to participate
Not required.
Consent for publication
Not applicable.
Competing interests
All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that (1) no authors have support from any company for the submitted work; (2) IB is the deputy director of French EQUATOR that might have an interest in the work submitted; (3) no author’s spouse, partner, or children have any financial relationships that could be relevant to the submitted work; and (4) none of the authors has any non-financial interests that could be relevant to the submitted work.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional files
Additional file 1:
Search strategies. (PDF 182 kb)
Additional file 2:
Excluded studies. (PDF 332 kb)
Additional file 3:
Included studies. (PDF 244 kb)
Additional file 4:
Classification of peer review report quality components. (PDF 2660 kb)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Superchi, C., González, J.A., Solà, I. et al. Tools used to assess the quality of peer review reports: a methodological systematic review. BMC Med Res Methodol 19, 48 (2019). https://doi.org/10.1186/s12874-019-0688-x
Received: 11 July 2018
Accepted: 20 February 2019
Published: 06 March 2019
DOI: https://doi.org/10.1186/s12874-019-0688-x
Keywords
- Peer review
- Quality control
- Systematic review
7 open source tools to make literature reviews easy

Opensource.com
A good literature review is critical for academic research in any field, whether it is for a research article, a critical review for coursework, or a dissertation. In a recent article, I presented detailed steps for doing a literature review using open source software.
The following is a brief summary of seven free and open source software tools described in that article that will make your next literature review much easier.
1. GNU Linux
Most literature reviews are accomplished by graduate students working in research labs in universities. For absurd reasons, graduate students often have the worst computers on campus. They are often old, slow, and clunky Windows machines that have been discarded and recycled from the undergraduate computer labs. Installing a flavor of GNU Linux will breathe new life into these outdated PCs. There are more than 100 distributions, all of which can be downloaded and installed for free. Most popular Linux distributions come with a "try-before-you-buy" feature. For example, with Ubuntu you can make a bootable USB stick that allows you to test-run the Ubuntu desktop experience without interfering in any way with your PC configuration. If you like the experience, you can use the stick to install Ubuntu on your machine permanently.
2. Firefox
Linux distributions generally come with a free web browser, and the most popular is Firefox. Two Firefox plugins that are particularly useful for literature reviews are Unpaywall and Zotero. Keep reading to learn why.
3. Unpaywall
Often one of the hardest parts of a literature review is gaining access to the papers you want to read for your review. The unintended consequence of copyright restrictions and paywalls is that access to the peer-reviewed literature has narrowed to the point that even Harvard University struggles to pay for it. Fortunately, there are a lot of open access articles—about a third of the literature is free (and the percentage is growing). Unpaywall is a Firefox plugin that enables researchers to click a green tab on the side of the browser and skip the paywall on millions of peer-reviewed journal articles. This makes finding accessible copies of articles much faster than searching each database individually. Unpaywall is fast, free, and legal, as it accesses many of the open access sites that I covered in my paper on using open source in lit reviews.
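Unpaywall also exposes a free REST API, which is handy if you want to check many DOIs from a script rather than one at a time in the browser. The sketch below (Python, using the requests library) is illustrative only; the v2 endpoint and the best_oa_location field reflect the public Unpaywall documentation, but verify the exact field names against the current docs before relying on them.

```python
# Minimal sketch: ask the public Unpaywall REST API whether an open access copy
# of a paper exists. Assumes the v2 endpoint and the "best_oa_location" response
# field described in Unpaywall's documentation; an email address is required by the API.
import requests

def find_open_access_copy(doi, email):
    """Return a URL to an open access copy of the article, or None if none is listed."""
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}", params={"email": email}, timeout=30)
    resp.raise_for_status()
    location = resp.json().get("best_oa_location") or {}
    return location.get("url_for_pdf") or location.get("url")

if __name__ == "__main__":
    # DOI of the systematic review cited earlier in this document; replace the email with your own.
    print(find_open_access_copy("10.1186/s12874-019-0688-x", "you@example.org"))
```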
4. Zotero
Formatting references is the most tedious of academic tasks, and Zotero can save you from ever doing it again. It operates as an Android app, desktop program, and a Firefox plugin (which I recommend). It is a free, easy-to-use tool to help you collect, organize, cite, and share research, and it replaces the functionality of proprietary packages such as RefWorks, EndNote, and Papers at zero cost. Zotero can auto-add bibliographic information directly from websites. In addition, it can scrape bibliographic data from PDF files. Notes can easily be added to each reference. Finally, and most importantly, it can import and export bibliography databases in all the formats publishers use. With this feature, you can export bibliographic information to paste into a document editor for a paper or thesis—or even to a wiki for dynamic collaborative literature reviews (see tool #7 for more on the value of wikis in lit reviews).
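Zotero also has a web API, which makes it possible to pull your library into scripts. The sketch below uses the third-party pyzotero client, which is not mentioned in the original article, so treat it as an assumption; the library ID and API key are placeholders you would create in your zotero.org account settings.

```python
# Sketch: list recent items from a Zotero library via the Zotero web API using the
# third-party pyzotero client (pip install pyzotero). Credentials below are placeholders.
from pyzotero import zotero

LIBRARY_ID = "1234567"    # placeholder: your numeric Zotero user or group ID
API_KEY = "your-api-key"  # placeholder: created in your zotero.org API key settings

zot = zotero.Zotero(LIBRARY_ID, "user", API_KEY)

# Fetch the most recently modified top-level items and print basic bibliographic data.
for item in zot.top(limit=10):
    data = item["data"]
    print(data.get("date", "n.d."), "-", data.get("title", "(untitled)"))
```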
5. LibreOffice
Your thesis or academic article can be written conventionally with the free office suite LibreOffice, which operates similarly to Microsoft's Office products but respects your freedom. Zotero has a word processor plugin to integrate directly with LibreOffice. LibreOffice is more than adequate for the vast majority of academic paper writing.
6. LaTeX
If LibreOffice is not enough for your layout needs, you can take your paper writing one step further with LaTeX, a high-quality typesetting system specifically designed for producing technical and scientific documentation. LaTeX is particularly useful if your writing has a lot of equations in it. Also, Zotero libraries can be directly exported to BibTeX files for use with LaTeX.
7. MediaWiki
If you want to leverage the open source way to get help with your literature review, you can facilitate a dynamic collaborative literature review. A wiki is a website that allows anyone to add, delete, or revise content directly using a web browser. MediaWiki is free software that enables you to set up your own wikis.
Researchers can (in decreasing order of complexity): 1) set up their own research group wiki with MediaWiki, 2) utilize wikis already established at their universities (e.g., Aalto University), or 3) use wikis dedicated to areas that they research. For example, several university research groups that focus on sustainability (including mine) use Appropedia, which is set up for collaborative solutions on sustainability, appropriate technology, poverty reduction, and permaculture.
Using a wiki makes it easy for anyone in the group to keep track of the status of literature reviews and to update them (both current reviews and older ones from other researchers). It also enables multiple members of the group to collaborate on a literature review asynchronously. Most importantly, it enables people outside the research group to help make a literature review more complete, accurate, and up-to-date.
Wrapping up
Free and open source software can cover the entire lit review toolchain, meaning there's no need for anyone to use proprietary solutions. Do you use other libre tools for making literature reviews or other academic work easier? Please let us know your favorites in the comments.

Reviewer Finder
Finding suitable peer reviewers for a manuscript can be a challenging, time-consuming task. We want to help make the process easier and faster for our editors. Reviewer Finder is an exciting tool that reduces the manual work involved in finding relevant reviewers.
To use the tool, simply enter information about the manuscript you are sending out for review into the Reviewer Finder and it will return a list of possible reviewers.
Get started: enter as much information as possible
Fields you can fill out include the manuscript title, abstract text, keywords, and author names.
Check the suggestions
The matching algorithm returns researchers who have a publishing profile similar to that of the manuscript author(s). Go through the comprehensive list of reviewer names that are sorted by relevance. You will be able to expand each name to learn more about the person’s research background.
Helpful Features
- Reviewer Finder will flag conflicts of interest related to authorship, co-authorship, and institutional affiliations
- Save searches so that you can refer back to the results for similar papers
Frequently Asked Questions
How does Reviewer Finder intend to help?
We help by searching a pool of data for suitable reviewers (hopefully including contacts that you do not already know) and providing information about their profiles, past publications, and possible conflicts of interest to help you decide whether or not to invite them to review.
What data source is the tool searching?
The tool searches data from Web of Science (provided by Clarivate Analytics) through the Target Author Data Graph.
Clarivate Analytics ISI Web of Science - Web of Science is highly comprehensive and multidisciplinary in its journal coverage. Sources include leading STM, humanities, and social science journals.
How does the search algorithm work?
The matching algorithm is built on two concepts.
- Data Graph: A data graph is constructed from a data source (Web of Science) to create a unique list of authors, their organizational details and publishing history.
- Suggestion Engine: Entering basic manuscript attributes into the Reviewer Finder tool, such as the title, abstract text, keywords, and author names, causes the suggestion engine to recommend researchers that it calculates will be a good match for peer review.
The software defines a good match to be someone who has a similar publishing profile to that of the manuscript author. Therefore the algorithm attempts to find individuals who have published material that most closely resembles that of the original author.
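Springer Nature does not publish the implementation details, but the idea of ranking candidates by how similar their publishing profile is to the manuscript can be illustrated with a toy text-similarity example. The sketch below (Python with scikit-learn, TF-IDF vectors and cosine similarity) is purely illustrative and is not the Reviewer Finder algorithm; the manuscript text and candidate profiles are invented.

```python
# Illustrative sketch only - not the Reviewer Finder algorithm. Candidate reviewers are
# ranked by how similar their past abstracts are to a submitted manuscript, using
# TF-IDF vectors and cosine similarity (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manuscript = "Deep learning models for predicting crop yield from satellite imagery ..."

# Hypothetical candidate pool: one concatenated string of past abstracts per researcher.
candidates = {
    "Researcher A": "Convolutional networks for remote sensing; yield estimation from imagery ...",
    "Researcher B": "Qualitative interviews on consumer attitudes to food labelling ...",
    "Researcher C": "Satellite image segmentation and agricultural monitoring ...",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([manuscript] + list(candidates.values()))

# First row is the manuscript; remaining rows are the candidates.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for name, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {name}")
```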
What should I do once I’ve finished searching?
We present the data as we find it and suggest that you confirm there aren’t any conflicts of interest using your usual methods until you’re confident our tool delivers results you’re happy with.
What should I do if I have a question?
Please send your questions to [email protected] and we will be sure to provide an answer!
Other services for editors
Transfer Desk
Get more submissions for your journal and help rejected authors find a better fit for their research.
Your journal articles can be shared freely by your authors and readers.
ORCID allows your authors to identify themselves at the time of submission with a personal, permanent digital code that distinguishes them from other researchers.
Web of Science Reviewer Recognition Service
Learn how our partnership with the Web of Science Reviewer Recognition Service makes it easy for your journal's reviewers to showcase their peer review activity via our submission systems.
Insight and discussion on publishing in the academic world, including resources, information, and advice to help grow your journal or book series.
5 software tools to support your systematic review processes
By Dr. Mina Kalantar on 19-Jan-2021 13:01:01

Systematic reviews are a structured reassessment of the scholarly literature to facilitate decision making. This methodical approach to re-evaluating evidence was initially applied in healthcare to set policies, create guidelines, and answer medical questions.
Systematic reviews are large, complex projects and, depending on the purpose, they can be quite expensive to conduct. A team of researchers, data analysts, and experts from various fields may collaborate to review and examine very large numbers of research articles for evidence synthesis. Depending on the scope, systematic reviews often take at least 6 months, and sometimes upwards of 18 months, to complete.
The main principles of transparency and reproducibility require a pragmatic approach in the organisation of the required research activities and detailed documentation of the outcomes. As a result, many software tools have been developed to help researchers with some of the tedious tasks required as part of the systematic review process.
The first generation of these software tools were produced to accommodate and manage collaborations, but gradually developed to help with screening literature and reporting outcomes. Some of these software packages were initially designed for medical and healthcare studies and have specific protocols and customised steps integrated for various types of systematic reviews. However, some are designed for general processing, and by extending the application of the systematic review approach to other fields, they are being increasingly adopted and used in software engineering, health-related nutrition, agriculture, environmental science, social sciences and education.
Software tools
There are various free and subscription-based tools to help with conducting a systematic review. Many of these tools are designed to assist with the key stages of the process, including title and abstract screening, data synthesis, and critical appraisal. Some are designed to facilitate the entire process of review, including protocol development, reporting of the outcomes and help with fast project completion.
As time goes on, more functions are being integrated into such software tools. Technological advancement has allowed for more sophisticated and user-friendly features, including visual graphics for pattern recognition and linking multiple concepts. The idea is to digitalise the cumbersome parts of the process to increase efficiency, thus allowing researchers to focus their time and efforts on assessing the rigorousness and robustness of the research articles.
This article introduces commonly used systematic review tools that are relevant to food research and related disciplines, which can be used in a similar context to the process in healthcare disciplines.
These reviews are based on IFIS' internal research; they are unbiased, and IFIS is not affiliated with any of the companies.

Covidence
This online platform is a core component of the Cochrane toolkit, supporting parts of the systematic review process, including title/abstract and full-text screening, documentation, and reporting.
The Covidence platform enables collaboration of the entire systematic reviews team and is suitable for researchers and students at all levels of experience.
From a user perspective, the interface is intuitive, and the citation screening is directed step-by-step through a well-defined workflow. Imports and exports are straightforward, with easy export options to Excel and CSV.
Access is free for Cochrane authors (a single reviewer), and Cochrane provides a free trial to other researchers in healthcare. Universities can also subscribe on an institutional basis.
Rayyan
Rayyan is a free and open access web-based platform funded by the Qatar Foundation, a non-profit organisation supporting education and community development initiatives. Rayyan is used to screen and code literature through a systematic review process.
Unlike Covidence, Rayyan does not follow a standard SR workflow and simply helps with citation screening. It is accessible through a mobile application with compatibility for offline screening. The web-based platform is known for its accessible user interface, with easy and clear export options.
Function comparison of 5 software tools to support the systematic review process
EPPI-Reviewer
EPPI-Reviewer is a web-based software programme developed by the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) at the UCL Institute of Education, London.
It provides comprehensive functionalities for coding and screening. Users can create different levels of coding in a code set tool for clustering, screening, and administration of documents. EPPI-Reviewer allows direct search and import from PubMed, and search results from other databases can be imported in different formats. It stores references and identifies and removes duplicates automatically. EPPI-Reviewer allows full-text screening, text mining, meta-analysis, and the export of data into different types of reports.
There is no limit on the number of concurrent users or on the number of articles being reviewed. Cochrane reviewers can access EPPI-Reviewer using their Cochrane subscription details.
EPPI-Centre has other tools for facilitating the systematic review process, including coding guidelines and data management tools.
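The automatic duplicate removal that tools like EPPI-Reviewer perform when merging search results from several databases can be approximated with a few lines of code. The sketch below is a rough illustration of that idea, not EPPI-Reviewer's actual method: records are collapsed by DOI when one is present, otherwise by a normalised title. The sample records are invented.

```python
# Rough sketch of reference deduplication (not EPPI-Reviewer's actual method):
# records exported from several databases are collapsed by DOI when present,
# otherwise by a normalised title string. The sample records below are invented.
import re

def dedupe(records):
    seen_dois, seen_titles, unique = set(), set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower().strip()
        title = re.sub(r"[^a-z0-9]+", "", (rec.get("title") or "").lower())
        if (doi and doi in seen_dois) or (title and title in seen_titles):
            continue  # already have this record under its DOI or normalised title
        if doi:
            seen_dois.add(doi)
        if title:
            seen_titles.add(title)
        unique.append(rec)
    return unique

refs = [
    {"title": "Peer review in biomedical journals", "doi": "10.1000/xyz123"},
    {"title": "Peer Review in Biomedical Journals.", "doi": ""},          # same paper, no DOI
    {"title": "Peer review in biomedical journals", "doi": "10.1000/XYZ123"},  # same DOI, different case
]
print(len(dedupe(refs)))  # -> 1
```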
CADIMA
CADIMA is a free, online, open access review management tool, developed to facilitate research synthesis and structure documentation of the outcomes.
The Julius Institute and the Collaboration for Environmental Evidence established the software programme to support and guide users through the entire systematic review process, including protocol development, literature searching, study selection, critical appraisal, and documentation of the outcomes. The flexibility in choosing the steps also makes CADIMA suitable for conducting systematic mapping and rapid reviews.
CADIMA was initially developed for research questions in agriculture and environment but it is not limited to these, and as such, can be used for managing review processes in other disciplines. It enables users to export files and work offline.
The software allows for statistical analysis of the collated data using the R statistical software. Unlike EPPI-Reviewer, CADIMA does not have a built-in search engine to allow for searching in literature databases like PubMed.
DistillerSR
DistillerSR is an online software platform maintained by Evidence Partners, a Canadian company that specialises in literature review automation. DistillerSR provides a collaborative platform for every stage of literature review management. The framework is flexible and can accommodate literature reviews of different sizes. It is configurable to different data curation procedures, workflows, and reporting standards. The platform integrates the necessary features for screening, quality assessment, data extraction, and reporting. The software uses artificial intelligence (AI)-enabled technologies in priority screening to shorten the screening process by reranking the most relevant references nearer to the top. It can also use AI, as a second reviewer, in quality control checks of studies already screened by human reviewers. DistillerSR is used to manage systematic reviews in various medical disciplines, surveillance, pharmacovigilance, and public health reviews, including food and nutrition topics. The software does not support statistical analyses, but it provides configurable forms in standard formats for data extraction.
DistillerSR allows direct search and import of references from PubMed. It also provides an add-on feature called LitConnect, which can be set to automatically import newly published references from data providers to keep reviews up to date while they are in progress.
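The priority-screening idea described above (train a model on the records humans have already screened, then rerank the remaining records so the likely includes surface first) can be illustrated with a toy example. The sketch below (Python with scikit-learn) is illustrative only and is not Evidence Partners' implementation; the titles and labels are invented.

```python
# Toy illustration of AI-assisted priority screening (not DistillerSR's implementation):
# a classifier trained on titles already screened by humans is used to push the records
# most likely to be relevant to the top of the screening queue. All data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

screened_texts = [
    "randomised trial of vitamin D supplementation in adults",      # included
    "case report of a rare dermatological condition",               # excluded
    "systematic review of calcium and vitamin D for bone health",   # included
    "editorial on hospital management during winter",               # excluded
]
labels = [1, 0, 1, 0]  # human screening decisions so far (1 = include, 0 = exclude)

unscreened = [
    "cohort study of dietary vitamin D and fracture risk",
    "opinion piece on medical conference travel",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(screened_texts)
model = LogisticRegression().fit(X, labels)

# Rerank the remaining records by predicted probability of inclusion.
probs = model.predict_proba(vectorizer.transform(unscreened))[:, 1]
for p, text in sorted(zip(probs, unscreened), reverse=True):
    print(f"{p:.2f}  {text}")
```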
The Systematic Review Toolbox is a web-based catalogue of various tools, including software packages which can assist with single or multiple tasks within the evidence synthesis process. Researchers can run a quick search or tailor a more sophisticated search by choosing their approach, budget, discipline, and preferred support features, to find the right tools for their research.
If you enjoyed this blog post, you may also be interested in our recently published blog post addressing the difference between a systematic review and a systematic literature review.

A Tool to Generate Reviews of Academic Papers
Writing reviews is important but sometimes repetitive and time-consuming. Hence, today I built a tool to help automate the process of review writing. You can try it at the website below:
The Review Generator (http://philippe-fournier-viger.com/reviewgenerator/index.html)

This tool lets you select some items, and it will add predefined sentences to your review. Of course, the tool is not meant to replace a human: generated reviews should be viewed as drafts and edited by hand to add more details.
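The idea behind the tool (each checkbox maps to a predefined sentence, and the selected sentences are stitched into a draft review) can be sketched in a few lines of Python. This is an illustration of the approach only, not the code behind the tool linked above; the template sentences and keys are invented.

```python
# Sketch of the idea behind a review generator: each checkbox the reviewer ticks maps
# to a canned sentence, and the selected sentences are stitched into a draft review.
# Illustrative only - not the code behind the tool linked above.
TEMPLATES = {
    "unclear_contribution": "The contribution of the paper is not clearly stated in the introduction.",
    "missing_related_work": "Several relevant prior works are not discussed or compared against.",
    "weak_evaluation": "The experimental evaluation is too limited to support the claims.",
    "typos": "The paper contains numerous typographical and grammatical errors.",
}

def generate_review(selected, recommendation):
    body = "\n".join(f"- {TEMPLATES[key]}" for key in selected if key in TEMPLATES)
    return f"Summary of concerns:\n{body}\n\nRecommendation: {recommendation}"

print(generate_review(["unclear_contribution", "weak_evaluation"], "major revision"))
```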
If you like the tool, you may bookmark it and share it. And if you would like more features, please let me know. For example, if you would like me to add more content to the tool, please leave a comment below or send me an email.
** Credit ** This project is a modification of AutoReject (https://autoreject.org/) by Andreas Zeller, which was designed as a joke to automatically reject papers. I have reused the template but modified the text content to turn it into a serious tool. #review #academia #reviewgenerator #reviewprocess #journal #conference
— Philippe Fournier-Viger is a distinguished professor working in China and founder of the SPMF open source data mining software.
2 responses to “A Tool to Generate Reviews of Academic Papers”
I don’t know how to use this software.
It is not hard to use: you just click the checkboxes on the left, and the generated review appears on the right.

‘What's the best journal for my paper?' New tool can help
Journal Finder has been developed by Elsevier and updated in response to feedback from authors

Getting a research paper published can be a challenge. It's even more challenging when considering the risk of rejection that comes from submitting a paper to a journal that's not the right fit. That's where Elsevier's Journal Finder tool comes in.
The Journal Finder tool
- Helps inexperienced authors to select the correct journals for their papers
- Helps authors working in multidisciplinary fields identify possible journals
- Highlights journals that offer open-access options
About the 2019 relaunch
Journal Finder received a significant overhaul in July 2019. Among the updates that were made to the new iteration of Journal Finder are:
- Additional keyword searching options, with “look ahead” autofill
- Subject area filter via All Science Journal Classification (ASJC) codes (as used by Scopus ) in a drop-down menu
- Introduction of new filters and histograms: CiteScore, Impact Factor, time to first decision, time to publication
- Easier sorting of results on all metrics (default order = best match)
- Higher output limits: maximum of 50 journals in search results list, instead of 10 in the previous version
- Three example searches across diverse science areas, for “playing with the tool” in case you have no abstract at hand but want to see how to use it
- Depiction of journal covers in the results list for easy, visual identification
- Expandable search results list including open access options, top three readership countries, journal scope details and direct submission link
- Trend visualization for most metrics, linking to the Journal Insights tool
- Publication status validation: Journal Finder will automatically check on Scopus if your title/abstract already exists (useful to see if your choice of publication made sense or if you want to check for a similar article for the future)
- Three recent articles from ScienceDirect for a quick indication what type of articles have recently appeared in the journal
You can find out more about the updated Journal Finder here
For inexperienced authors, this is a particular pain point, leading to rejections, adding months to publication and slowing career progress. Nearly a third of visitors to Elsevier's Authors' Home are trying to decide which journal they should submit their paper to.
Meanwhile, editors must sift through many out-of-scope papers when authors choose journals that are a poor match.
Our role is to support authors by getting them published in the best possible journal as fast as possible.
That's where the Journal Finder tool comes in. Since its launch, Journal Finder has been used by more than 60,000 users each month.
How Journal Finder works
You can find Journal Finder here.
The tool generates a list of Elsevier journals that match the topic of an author's abstract. Authors can then order the results based on their priorities, such as highest CiteScore or shortest editorial time, and filter the results to journals that offer open access options.
Since its launch in 2013, Journal Finder has been updated to share information on journal metrics that give users additional insight into various characteristics of a journal, such as impact, speed, and reach. With this information, authors can make decisions based on the criteria that matter most to them.
You might also be interested in the Journal Insights tool, which provides visualizations of journal metrics covering five calendar years. With this, you can investigate the data further, compare trends and derive considerable insight into journal performance.
What is the Elsevier Fingerprinting Engine?
The Elsevier Fingerprinting Engine is a software system that mines the text of scientific documents – publication abstracts, funding announcements and awards, project summaries, patents, proposals/applications, and other sources – to create an index of weighted terms that defines the text, known as a Fingerprint visualization. The transferability of fingerprinting to other tools is an example of how Elsevier is benefiting from the expertise of Collexis, a semantic technology software developer Elsevier acquired in 2010. Elsevier is using the "fingerprinting" technology in various products for academic and government institutional markets, including SciVal Experts, an expertise profiling system and research networking tool, and SciVal Funding, which helps researchers find funding sources and helps funding agencies find researchers to review grant applications.
The Journal Finder tool uses Scopus and the Elsevier Fingerprinting Engine to locate Elsevier journals that most closely match an author's list of keywords and/or abstracts. An Elsevier journal will be recommended if it has published articles which have a high similarity with the new article. A list of potential journals will be created for authors and the tool will allow filtering based on author's preferred criteria (such as Impact Factor, open-access options, review time, acceptance rate, publication times and frequency).
The final selected journal links directly to the journal's homepage and the Elsevier Editorial System (EES) page. The tool makes recommendations from the 2,500+ journals published by Elsevier.
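Elsevier has not published the internals of the matching step, but the high-level idea (score each journal by how similar its recently published articles are to the submitted abstract, then rank the journals) can be illustrated with a toy example. The sketch below (Python with scikit-learn) is illustrative only and is not the Fingerprinting Engine; the abstracts and per-journal article lists are invented.

```python
# Illustrative sketch of abstract-to-journal matching (not Elsevier's Fingerprinting
# Engine): each journal is scored by the average similarity between the submitted
# abstract and that journal's recently published abstracts. All text below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submitted = "Measurement of ionising radiation doses with thermoluminescent detectors ..."

journal_articles = {
    "Radiation Measurements": [
        "Thermoluminescence dosimetry of environmental gamma radiation ...",
        "Optically stimulated luminescence for retrospective dosimetry ...",
    ],
    "Food Chemistry": [
        "Phenolic composition of cold-pressed olive oils ...",
        "Effect of roasting temperature on coffee aroma compounds ...",
    ],
}

corpus = [submitted] + [a for arts in journal_articles.values() for a in arts]
matrix = TfidfVectorizer(stop_words="english").fit_transform(corpus)
sims = cosine_similarity(matrix[0], matrix[1:]).ravel()

# Average article-level similarity per journal, then rank the journals.
scores, i = {}, 0
for journal, articles in journal_articles.items():
    scores[journal] = sims[i:i + len(articles)].mean()
    i += len(articles)

for journal, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {journal}")
```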
After being one of the first to test the tool in 2013, Dr. Adrie J.J. Bos, Co-Editor-in-Chief of Radiation Measurements, wrote: "The results matched precisely with my own judgement."
How the idea came about
At Elsevier, we receive feedback from tens of thousands of authors each year. By listening to our authors, we are able to make continual improvements to our services, and design products from the vantage point of the people who will use them. In 2012, we launched the Author Mobile Apps competition, which asked early-career researchers to submit their ideas for journal-based mobile applications. The competition received an overwhelming response, with 3,775 ideas submitted.
By a happy coincidence, the winning idea – a "Scope-finder" that would find the best fitting journal for a paper – had already been identified as a priority for Elsevier and was incorporated into the development of the Journal Finder tool.
Hearing of this need directly from the customer confirmed that we were on the right track and should build such a tool as soon as possible.

Elsevier's Journal Finder tool is helpful for authors who are unsure which journal best fits their work. This can happen when the research falls between fields, or when authors are in the early stages of their research career and still trying to locate the right journal for their manuscript.
Sandra Yee, Dean of the University Library System, Wayne State University in Michigan, said the Journal Finder tool will help faculty members and librarians by providing "substantive data and more specific information."
Contributors

Hans Zijlstra
Hans Zijlstra works as a Research Metrics Product Manager in Elsevier’s Research Products department in Amsterdam. He is responsible for developing journal and article metrics with the aim of improving Elsevier’s service to researchers, librarians, publishers and funders.
