Using data for improvement


What you need to know

Both qualitative and quantitative data are critical for evaluating and guiding improvement

A family of measures, incorporating outcome, process, and balancing measures, should be used to track improvement work

Time series analysis, using small amounts of data collected and displayed frequently, is the gold standard for using data for improvement

We all need a way to understand the quality of care we are providing, or receiving, and how our service is performing. We use a range of data in order to fulfil this need, both quantitative and qualitative. Data are defined as “information, especially facts and numbers, collected to be examined and considered and used to help decision-making.” 1 Data are used to make judgements, to answer questions, and to monitor and support improvement in healthcare (box 1). The same data can be used in different ways, depending on what we want to know or learn.

Defining quality improvement 2

Quality improvement aims to make a difference to patients by improving safety, effectiveness, and experience of care by:

Using understanding of our complex healthcare environment

Applying a systematic approach

Designing, testing, and implementing changes using real-time measurement for improvement

Within healthcare, we use a range of data at different levels of the system:

Patient level—such as blood sugar, temperature, blood test results, or expressed wishes for care

Service level—such as waiting times, outcomes, complaint themes, or collated feedback of patient experience

Organisation level—such as staff experience or financial performance

Population level—such as mortality, quality of life, employment, and air quality.

This article outlines the data we need to understand the quality of care we are providing, what we need to capture to see if care is improving, how to interpret the data, and some tips for doing this more effectively.

Sources and selection criteria

This article is based on my experience of using data for improvement at East London NHS Foundation Trust, which is seen as one of the world leaders in healthcare quality improvement. Our use of data, from trust board to clinical team, has transformed over the past six years in line with the learning shared in this article. This article is also based on my experience of teaching with the Institute for Healthcare Improvement, which guides and supports quality improvement efforts across the globe.

What data do we need?

Healthcare is a complex system, with multiple interdependencies and an array of factors influencing outcomes. Complex systems are open, unpredictable, and continually adapting to their environment. 3 No single source of data can help us understand how a complex system behaves, so we need several data sources to see how a complex system in healthcare is performing.

Avedis Donabedian, a doctor born in Lebanon in 1919, studied quality in healthcare and contributed to our understanding of using outcomes. 4 He described the importance of focusing on structures and processes in order to improve outcomes. 5 When trying to understand quality within a complex system, we need to look at a mix of outcomes (what matters to patients), processes (the way we do our work), and structures (resources, equipment, governance, etc).

Therefore, when we are trying to improve something, we need a small number of measures (ideally 5-8) to help us monitor whether we are moving towards our goal. Any improvement effort should include one or two outcome measures linked explicitly to the aim of the work, a small number of process measures that show how we are doing with the things we are actually working on to help us achieve our aim, and one or two balancing measures (box 2). Balancing measures help us spot unintended consequences of the changes we are making. As complex systems are unpredictable, our new changes may result in an unexpected adverse effect. Balancing measures help us stay alert to these, and ought to be drawn from data that are already collected, so that no extra resource is spent on collection.

Different types of measures of quality of care

Outcome measures (linked explicitly to the aim of the project)

Aim— To reduce waiting times from referral to appointment in a clinic

Outcome measure— Length of time from referral being made to being seen in clinic

Data collection— Date when each referral was made, and date when each referral was seen in clinic, in order to calculate the time in days from referral to being seen

Process measures (linked to the things you are going to work on to achieve the aim)

Change idea— Use of a new referral form (to reduce numbers of inappropriate referrals and re-work in obtaining necessary information)

Process measure— Percentage of referrals received that are inappropriate or require further information

Data collection— Number of referrals received that are inappropriate or require further information each week divided by total number of referrals received each week

Change idea— Text messaging patients two days before the appointment (to reduce non-attendance and wasted appointment slots)

Process measure— Percentage of patients receiving a text message two days before appointment

Data collection— Number of patients each week receiving a text message two days before their appointment divided by the total number of patients seen each week

Process measure— Percentage of patients attending their appointment

Data collection— Number of patients attending their appointment each week divided by the total number of patients booked in each week

Balancing measures (to spot unintended consequences)

Measure— Percentage of referrers who are satisfied or very satisfied with the referral process (to spot whether all these changes are having a detrimental effect on the experience of those referring to us)

Data collection— A monthly survey to referrers to assess their satisfaction with the referral process

Measure— Percentage of staff who are satisfied or very satisfied at work (to spot whether the changes are increasing burden on staff and reducing their satisfaction at work)

Data collection— A monthly survey for staff to assess their satisfaction at work
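The weekly process measures in this box are simple numerator-over-denominator calculations. Here is a minimal sketch of that arithmetic in Python; the weekly counts below are invented for illustration and not taken from any real service:

```python
# Minimal sketch: turning weekly counts into the percentage process
# measures described above. All counts are invented for illustration.

def percentage(numerator: int, denominator: int) -> float:
    """Return numerator/denominator as a percentage, guarding against empty weeks."""
    if denominator == 0:
        return 0.0
    return 100.0 * numerator / denominator

# One entry per week: (inappropriate referrals, total referrals received)
weekly_referrals = [(12, 40), (9, 42), (6, 38), (4, 41)]

inappropriate_pct = [percentage(n, d) for n, d in weekly_referrals]
print([round(p, 1) for p in inappropriate_pct])
```

Plotted weekly in time order, a falling series here would suggest the new referral form is reducing inappropriate referrals.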

How should we look at the data?

This depends on the question we are trying to answer. If we ask whether an intervention was efficacious, as we might in a research study, we would need to be able to compare data before and after the intervention and remove all potential confounders and bias. For example, to understand whether a new treatment is better than the status quo, we might design a research study to compare the effect of the two interventions and ensure that all other characteristics are kept constant across both groups. This study might take several months, or possibly years, to complete, and would compare the average of both groups to identify whether there is a statistically significant difference.

This approach is unlikely to be possible in most contexts where we are trying to improve quality. Most of the time when we are improving a service, we are making multiple changes and assessing impact in real-time, without being able to remove all confounding factors and potential bias. When we ask whether an outcome has improved, as we do when trying to improve something, we need to be able to look at data over time to see how the system changes as we intervene, with multiple tests of change over a period. For example, if we were trying to improve the time from a patient presenting in the emergency department to being admitted to a ward, we would likely be testing several different changes at different places in the pathway. We would want to be able to look at the outcome measure of total time from presentation to admission on the ward, over time, on a daily basis, to be able to see whether the changes made lead to a reduction in the overall outcome. So, when looking at a quality issue from an improvement perspective, we view smaller amounts of data but more frequently to see if we are improving over time. 2

What is best practice in using data to support improvement?

Best practice would be for each team to have a small number of measures that are collectively agreed with patients and service users as being the most important ways of understanding the quality of the service being provided. These measures would be displayed transparently so that all staff, service users, and patients and families or carers can access them and understand how the service is performing. The data would be shown as time series analysis, to provide a visual display of whether the service is improving over time. The data should be available as close to real-time as possible, ideally on a daily or weekly basis. The data should prompt discussion and action, with the team reviewing the data regularly, identifying any signals that suggest something unusual in the data, and taking action as necessary.

The main tools used for this purpose are the run chart and the Shewhart (or control) chart. The run chart (fig 1) is a graphical display of data in time order, with a median value, and uses probability-based rules to help identify whether the variation seen is random or non-random. 2 The Shewhart (control) chart (fig 2) also displays data in time order, but with a mean as the centre line instead of a median, and upper and lower control limits (UCL and LCL) defining the boundaries within which you would predict the data to be. 6 Shewhart charts use the terms “common cause variation” and “special cause variation,” with a different set of rules to identify special causes.
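To make the two chart types concrete, the sketch below computes a run-chart median and Shewhart-style limits for an individuals (XmR) chart using only Python's standard library. The waiting-time series is invented, and the moving-range method shown is one common way to derive the limits, not the only one:

```python
import statistics

# Invented daily waiting times (minutes), in time order.
waits = [34, 29, 41, 38, 33, 36, 30, 44, 37, 32, 35, 39]

# Run chart: the centre line is the median of the series.
median = statistics.median(waits)

# Shewhart individuals (XmR) chart: the centre line is the mean, and the
# control limits come from the average moving range between consecutive points.
mean = statistics.fmean(waits)
moving_ranges = [abs(b - a) for a, b in zip(waits, waits[1:])]
mr_bar = statistics.fmean(moving_ranges)
ucl = mean + 2.66 * mr_bar  # 2.66 is the standard XmR chart constant
lcl = mean - 2.66 * mr_bar

print(f"median={median}, mean={mean:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}")
```

Points outside the limits, or non-random patterns against the centre line, are the “special cause” signals that should prompt the team to investigate.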

Fig 1

A typical run chart

Fig 2

A typical Shewhart (or control) chart

Is it just about numbers?

We need to incorporate both qualitative and quantitative data to help us learn about how the system is performing and to see if we improve over time. Quantitative data express quantity, amount, or range and can be measured numerically—such as waiting times, mortality, haemoglobin level, or cash flow. Quantitative data are often visualised over time as time series analyses (run charts or control charts) to see whether we are improving.

However, we should also be capturing, analysing, and learning from qualitative data throughout our improvement work. Qualitative data are virtually any type of information that can be observed and recorded that is not numerical in nature. Qualitative data are particularly useful in helping us to gain deeper insight into an issue, and to understand meaning, opinion, and feelings. This is vital in supporting us to develop theories about what to focus on and what might make a difference. 7 Examples of qualitative data include waiting room observations, feedback about experience of care, and free-text responses to a survey.

Using qualitative data for improvement

One key point in an improvement journey when qualitative data are critical is at the start, when trying to identify “What matters most?” and what the team’s biggest opportunity for improvement is. The other key time to use qualitative data is during “Plan, Do, Study, Act” (PDSA) cycles. Most PDSA cycles, when done well, rely on qualitative data as well as quantitative data to help learn about how the test fared compared with our original theory and prediction.

Table 1 shows four different ways to collect qualitative data, with advantages and disadvantages of each, and how we might use them within our improvement work.

Different ways to collect qualitative data for improvement

Tips to overcome common challenges in using data for improvement

One of the key challenges faced by healthcare teams across the globe is being able to access routinely collected data in order to use it for improvement. Large volumes of data are collected in healthcare, but often little is available to staff or service users in a timescale or form that makes it useful for improvement. One way to work around this is to have a simple form of measurement on the unit, clinic, or ward that the team own and update. This could be in the form of a safety cross 8 or tally chart. A safety cross (fig 3) is a simple visual monthly calendar on the wall which allows teams to identify when a safety event (such as a fall) occurred on the ward. The team colours each day green if no fall occurred and red if one did. It allows the team to own the data related to a safety event that they care about and easily see how many events are occurring over a month. Displaying such data transparently on a ward allows teams to update it in real time and respond to it effectively.

Fig 3

Example of a safety cross in use
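The tally behind a safety cross is simple enough to sketch in a few lines; the month and fall dates below are invented for illustration:

```python
import calendar

# Invented example: days in March 2024 on which at least one fall occurred.
year, month = 2024, 3
fall_days = {4, 11, 12, 25}

days_in_month = calendar.monthrange(year, month)[1]

# Colour each day as the ward team would on the wall calendar.
colours = {day: ("red" if day in fall_days else "green")
           for day in range(1, days_in_month + 1)}

print(f"{len(fall_days)} fall days out of {days_in_month} this month")
```

The point of the wall version is not the computation but the ownership: the team updates it daily and can see the month's pattern at a glance.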

A common challenge in using qualitative data is being able to analyse large quantities of written text. There are formal approaches to qualitative data analysis, but most healthcare staff are not trained in these methods. Key tips for avoiding this difficulty are (a) to be intentional with your search and sampling strategy so that you collect only the minimum amount of data that is likely to be useful for learning, and (b) to use simple ways to read and theme the data in order to extract useful information to guide your improvement work. 9 If you want to try this, see if you can find someone in your organisation with qualitative data analysis skills, such as clinical psychologists or the patient experience or informatics teams.

Education into practice

What are the key measures for the service that you work in?

Are these measures available, transparently displayed, and viewed over time?

What qualitative data do you use in helping guide your improvement efforts?

How patients were involved in the creation of this article

Service users are deeply involved in all quality improvement work at East London NHS Foundation Trust, including within the training programmes we deliver. Shared learning over many years has contributed to our understanding of how best to use all types of data to support improvement. No patients have had input specifically into this article.

This article is part of a series commissioned by The BMJ based on ideas generated by a joint editorial group with members from the Health Foundation and The BMJ, including a patient/carer. The BMJ retained full editorial control over external peer review, editing, and publication. Open access fees and The BMJ’s quality improvement editor post are funded by the Health Foundation.

Competing interests: I have read and understood the BMJ Group policy on declaration of interests and have no relevant interests to declare.

Provenance and peer review: Commissioned; externally peer reviewed.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ .



Asian Nursing Research

Review article: Synthesizing quantitative evidence for evidence-based nursing: systematic review.

As evidence-based practice has become an important issue in healthcare settings, the educational needs for knowledge and skills in the generation and utilization of healthcare evidence are increasing. A systematic review (SR), one way of generating evidence, is a synthesis of primary scientific evidence that summarizes the best evidence on a specific clinical question using a transparent, a priori, protocol-driven approach. SR methodology requires a critical appraisal of primary studies, data extraction in a reliable and repeatable way, and examination of the validity of the results. SRs are considered the highest form of evidence in the hierarchy because they systematically search, identify, and summarize the available evidence to answer a focused clinical question, with particular attention to the methodological quality of studies or the credibility of opinion and text. The purpose of this paper is to provide an overview of the fundamental knowledge, principles, and processes of SR. Its focus is on SRs that synthesize quantitative data from primary research studies examining the effectiveness of healthcare interventions. To support evidence-based nursing care in various healthcare settings, the best available scientific evidence is essential. This paper includes some examples to promote understanding.



What Is Peer Review? | Types & Examples

Published on December 17, 2021 by Tegan George . Revised on November 25, 2022.

Peer review, sometimes referred to as refereeing , is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

Single-blind review

Double-blind review

Triple-blind review

Collaborative review

Open review

Relatedly, peer assessment is a process where your peers provide you with feedback on something you’ve written, based on a set of criteria or benchmarks from an instructor. They then give constructive feedback, compliments, or guidance to help you improve your draft.

Table of contents

What is the purpose of peer review?
Types of peer review
The peer review process
Providing feedback to your peers
Peer review example
Advantages of peer review
Criticisms of peer review
Frequently asked questions about peer reviews

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymized) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymized comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymized) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymized) review—where the identities of the author, reviewers, and editors are all anonymized—does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimizes potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymize everyone involved in the process.

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimize back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.


In general, the peer review process includes the following steps:

The peer review process

In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarize the argument in your own words

Summarizing the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organized. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

Tip: Try not to focus too much on the minor issues. If the manuscript has a lot of typos, consider making a note that the author should address spelling and grammar issues, rather than going through and fixing each one.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticized, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the “compliment sandwich,” where you “sandwich” your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

Below is a brief annotated research example.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used to compare Group 1 with Group 2, and Group 1 with Group 3. The first t test showed no significant difference (p > .05) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
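For readers curious how such an independent-samples comparison is computed, here is a sketch of Welch's t statistic using only Python's standard library. The two samples are invented stand-ins, since the study above reports only group means and standard deviations:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    m1, m2 = statistics.fmean(a), statistics.fmean(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)
    return (m1 - m2) / math.sqrt(v1 / len(a) + v2 / len(b))

# Invented hours-of-sleep samples for two hypothetical groups.
group1 = [7.9, 7.6, 8.1, 7.7, 7.8, 8.0]
group2 = [6.3, 5.9, 6.5, 6.0, 5.8, 6.1]

print(round(welch_t(group1, group2), 2))
```

In practice the statistic would be compared against a t distribution with the appropriate degrees of freedom (for example via scipy.stats.ttest_ind with equal_var=False) to obtain the p value.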

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarized or duplicated research from being published.

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

Peer review acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

The more transparent double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also high risk of publication bias , where journals are more likely to publish studies with positive findings than studies with negative findings.

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

Peer review is a process of evaluating submissions to an academic journal. Using rigorous criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication. For this reason, academic journals are often considered among the most credible sources you can use in a research project, provided that the journal itself is trustworthy and well regarded.

In general, the peer review process follows a standard sequence of steps.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

A credible source should pass the CRAAP test and meet established credibility guidelines.



Quantitative Results of a National Intervention to Prevent Hospital-Acquired Catheter-Associated Urinary Tract Infection: A Pre-Post Observational Study


Background: Many hospitals struggle to prevent catheter-associated urinary tract infection (CAUTI).

Objective: To evaluate the effect of a multimodal initiative on CAUTI in hospitals with high burden of health care-associated infection (HAI).

Design: Prospective, national, nonrandomized, clustered, externally facilitated, pre-post observational quality improvement initiative, for 3 cohorts active between November 2016 and May 2018.

Setting: Acute care, long-term acute care, and critical access hospitals, including intensive care and non-intensive care wards.

Participants: Target hospitals had a high burden of Clostridioides difficile infection plus central line-associated bloodstream infection, CAUTI, or hospital-onset methicillin-resistant Staphylococcus aureus bloodstream infection, defined as cumulative attributable differences above the first tertile in the Targeted Assessment for Prevention (TAP) strategy. Some additional nonrecruited hospitals also joined.

Intervention: Multimodal intervention, including Practice Change Assessment tool to identify infection prevention and control (IPC) and HAI prevention gaps; Web-based, on-demand modules involving onboarding, foundational IPC practices, HAI-specific 2-tiered approach to prioritize and implement interventions, and TAP resources; monthly webinars; state partner-led in-person meetings; and feedback. State partners made site visits to at least 50% of their enrolled hospitals, to support self-assessments and coach.

Measurements: Rates of CAUTI and urinary catheter device utilization ratio.

Results: Of 387 participating hospitals from 23 states and the District of Columbia, 361 provided CAUTI data. Over the study period, the unadjusted CAUTI rate was low and relatively stable, decreasing slightly from 1.12 to 1.04 CAUTIs per 1000 catheter-days. Catheter utilization decreased from 21.46 to 19.83 catheter-days per 100 patient-days from the pre- to the postintervention period.

Limitations: The intervention period was brief, with no assessment of fidelity. Baseline CAUTI rates were low. Patient characteristics were not assessed.

Conclusion: This multimodal intervention yielded no substantial improvements in CAUTI or urinary catheter utilization.

Primary funding source: Centers for Disease Control and Prevention.



Some nurses feel that they lack the necessary skills to read a research paper and then decide whether to implement the findings in their practice. This is particularly the case with quantitative research, which often reports the results of statistical testing. However, nurses have a professional responsibility to critique research to improve their practice, care and patient safety. 1 This article provides a step-by-step guide on how to critically appraise a quantitative paper.

Title, keywords and the authors

The authors’ names may not mean much, but knowing the following will be helpful:

Their position, for example, academic, researcher or healthcare practitioner.

Their qualification, both professional, for example, a nurse or physiotherapist and academic (eg, degree, masters, doctorate).

This can indicate how the research has been conducted and the authors’ competence on the subject. Basically, do you want to read a paper on quantum physics written by a plumber?

The abstract is a summary of the article and should contain:


Research question/hypothesis.

Methods including sample design, tests used and the statistical analysis (of course! Remember we love numbers).

Main findings.


The subheadings in the abstract will vary depending on the journal. An abstract should not usually be more than 300 words but this varies depending on specific journal requirements. If the above information is contained in the abstract, it can give you an idea about whether the study is relevant to your area of practice. However, before deciding if the results of a research paper are relevant to your practice, it is important to review the overall quality of the article. This can only be done by reading and critically appraising the entire article.

The introduction

The introduction should state the research question and the hypothesis (and null hypothesis) being tested. Example: the effect of paracetamol on levels of pain.

My hypothesis is that A has an effect on B, for example, paracetamol has an effect on levels of pain.

My null hypothesis is that A has no effect on B, for example, paracetamol has no effect on pain.

My study will test the null hypothesis. If the data are consistent with the null hypothesis (it is not rejected), I cannot claim an effect: paracetamol has not been shown to affect the level of pain. If the null hypothesis is rejected, the data support the hypothesis (A has an effect on B). This means that paracetamol has an effect on the level of pain.
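The logic of testing a null hypothesis can be illustrated with a simple permutation test, a minimal sketch using invented pain scores rather than data from any real trial: if there really were no drug effect, the group labels would be interchangeable, so we shuffle them many times and see how often a difference as large as the observed one arises by chance.

```python
import random
from statistics import mean

# Hypothetical pain scores (0-10) for two small groups; invented for illustration.
paracetamol = [3, 4, 2, 3, 5, 3, 2, 4]
placebo = [6, 5, 7, 6, 4, 6, 5, 7]

observed = mean(placebo) - mean(paracetamol)

# Permutation test: under the null hypothesis the labels are interchangeable,
# so we shuffle the pooled scores and count how often a difference at least
# as large as the observed one occurs by chance.
random.seed(42)
pooled = paracetamol + placebo
count = 0
n_perm = 10_000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = mean(pooled[8:]) - mean(pooled[:8])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm
print(f"observed difference: {observed:.2f}, p = {p_value:.4f}")
# A small p value (below 0.05) leads us to reject the null hypothesis.
```

With these invented scores the groups are well separated, so the shuffled differences almost never reach the observed one and the p value is very small.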

Background/literature review

The literature review should include reference to recent and relevant research in the area. It should summarise what is already known about the topic and why the research study is needed and state what the study will contribute to new knowledge. 5 The literature review should be up to date, usually 5–8 years, but it will depend on the topic and sometimes it is acceptable to include older (seminal) studies.


In quantitative studies, the data analysis varies depending on the type of design used: descriptive, correlational or experimental. A descriptive study will describe the pattern of a topic related to one or more variables. 6 A correlational study examines the link (correlation) between two variables 7 and focuses on how one variable reacts to a change in another. In experimental studies, the researchers manipulate variables and look at outcomes, 8 and the sample is commonly assigned into different groups (known as randomisation) to determine the causal effect of a condition (independent variable) on a certain outcome. This is a common method used in clinical trials.

There should be sufficient detail provided in the methods section for you to replicate the study (should you want to). To enable you to do this, the following sections are normally included:

Overview and rationale for the methodology.

Participants or sample.

Data collection tools.

Methods of data analysis.

Ethical issues.

Data collection should be clearly explained and the article should discuss how this process was undertaken. Data collection should be systematic, objective, precise, repeatable, valid and reliable. Any tool (eg, a questionnaire) used for data collection should have been piloted (or pretested and/or adjusted) to ensure the quality, validity and reliability of the tool. 9 The participants (the sample) and any randomisation technique used should be identified. The sample size is central in quantitative research, as the findings should be generalisable to the wider population. 10 The data analysis can be done manually, or more complex analyses can be performed using computer software, sometimes with the advice of a statistician. From this analysis, results such as the mode, mean, median, p value and CI are presented in a numerical format.
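The descriptive statistics mentioned above can be computed directly, for example with Python's statistics module. A minimal sketch, using an invented sample of days to wound healing:

```python
from statistics import mean, median, mode, stdev

# Hypothetical sample: days to wound healing for 11 patients (invented values).
days = [5, 7, 7, 8, 9, 10, 10, 10, 12, 14, 21]

print("mean:  ", round(mean(days), 1))  # arithmetic average, pulled up by the outlier 21
print("median:", median(days))          # middle value, robust to the outlier
print("mode:  ", mode(days))            # most frequent value
print("sd:    ", round(stdev(days), 1)) # spread around the mean
```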

The author(s) should present the results clearly. These may be presented in graphs, charts or tables alongside some text. You should perform your own critique of the data analysis process; just because a paper has been published, it does not mean it is perfect. Your findings may be different from the author’s. Through critical analysis the reader may find an error in the study process that authors have not seen or highlighted. These errors can change the study result or change a study you thought was strong to weak. To help you critique a quantitative research paper, some guidance on understanding statistical terminology is provided in  table 1 .

Some basic guidance for understanding statistics

Quantitative studies examine the relationship between variables, and the p value illustrates this objectively. 11 If the p value is less than 0.05, the null hypothesis is rejected and the study will report a statistically significant difference. If the p value is 0.05 or more, the null hypothesis is not rejected (note that this is not proof that it is true) and the study will report no significant difference. As a general rule, then, a p value below 0.05 supports the hypothesis, and a p value of 0.05 or above does not.
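This decision rule can be written as a small helper function; the 0.05 threshold (the significance level, alpha) is a convention rather than a fixed law, so it is kept as a parameter:

```python
def interpret_p(p_value, alpha=0.05):
    """Return a plain-language reading of a p value at significance level alpha."""
    if p_value < alpha:
        return "reject the null hypothesis: statistically significant difference"
    return "fail to reject the null hypothesis: no significant difference shown"

print(interpret_p(0.03))  # below 0.05, so significant
print(interpret_p(0.20))  # above 0.05, so not significant
```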

A confidence interval (CI) gives a range of values within which the true effect is likely to lie, together with a confidence level, usually 95%, expressing how sure we can be of the method. 12 The confidence level is chosen in advance and corresponds to the significance level (a 95% CI matches a significance level of 0.05, since 1−0.05=0.95); it is not calculated from the p value. If the 95% CI for a difference between groups excludes the "no effect" value (for example, zero for a difference in means), the result is statistically significant at the 5% level; if it includes it, the result is not. Reported together, the p value and the CI indicate both whether an effect exists and how precisely it has been estimated.
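In practice a confidence interval is computed from the sample itself. A minimal sketch using the normal approximation (mean ± 1.96 × standard error), with invented values for pain reduction:

```python
from math import sqrt

def ci_95(values):
    """95% CI for a mean using the normal approximation: mean +/- 1.96 * SE."""
    n = len(values)
    m = sum(values) / n
    sd = sqrt(sum((x - m) ** 2 for x in values) / (n - 1))  # sample standard deviation
    se = sd / sqrt(n)                                        # standard error of the mean
    return m - 1.96 * se, m + 1.96 * se

# Hypothetical pain-score reductions for 10 patients (invented values).
pain_reduction = [1.2, 2.0, 0.8, 1.5, 2.4, 1.1, 1.8, 0.9, 1.6, 2.1]
low, high = ci_95(pain_reduction)
print(f"95% CI: ({low:.2f}, {high:.2f})")
# The interval excludes 0, so this difference is significant at the 5% level.
```

For small samples a t-distribution multiplier would be more accurate than 1.96, but the normal approximation keeps the sketch simple.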

Discussion, recommendations and conclusion

The final section of the paper is where the authors discuss their results and link them to other literature in the area (some of which may have been included in the literature review at the start of the paper). This reminds the reader of what is already known, what the study has found and what new information it adds. The discussion should demonstrate how the authors interpreted their results and how they contribute to new knowledge in the area. Implications for practice and future research should also be highlighted in this section of the paper.

A few other areas you may find helpful are:

Limitations of the study.

Conflicts of interest.

Table 2 provides a useful tool to help you apply the learning in this paper to the critiquing of quantitative research papers.

Quantitative paper appraisal checklist

Competing interests None declared.

Patient consent Not required.

Provenance and peer review Commissioned; internally peer reviewed.

Correction notice This article has been updated since its original publication to update p values from 0.5 to 0.05 throughout.


Concordia University, St. Paul

Education - Graduate Studies: How do I find qualitative and quantitative articles?

Qualitative & quantitative articles

Knowing how each approach is defined can help you distinguish qualitative from quantitative articles as you search library databases.


What is Qualitative research ? - "an approach to research that is primarily concerned with studying the nature, quality, and meaning of human experience. It asks questions about how people make sense of their experiences, how people talk about what has happened to them and others, and how people experience, manage, and negotiate situations they find themselves in. Qualitative research is interested both in individual experiences and in the ways in which people experience themselves as part of a group. Qualitative data take the form of accounts or observations, and the findings are presented in the form of a discussion of the themes that emerged from the analysis. Numbers are very rarely used in qualitative research."

Willig, C. (2016). Qualitative research. In L. H. Miller (Ed.), The Sage encyclopedia of theory in psychology. Thousand Oaks, CA: Sage Publications. Retrieved from https://go.openathens.net/redirector/csp.edu?url=https%3A%2F%2Fsearch.credoreference.com%2Fsearch%2Fall%3FinstitutionId%3D5380%26searchPhrase%3DQualitative%2520Research

What is Quantitative research ? - "Quantitative research relies primarily on the collection of quantitative data and has its own, unique set of assumptions and normative practices... Goals include to describe, to predict, and to explain human phenomena."

Quantitative research. (2009). In L. E. Sullivan (Ed.), The SAGE glossary of the social and behavioral sciences. Thousand Oaks, CA: Sage Publications. Retrieved from https://go.openathens.net/redirector/csp.edu?url=https%3A%2F%2Fsearch.credoreference.com%2Fsearch%2Fall%3FinstitutionId%3D5380%26searchPhrase%3Dquantitative%2520research

Chapter 3. Introduction to Quantitative Research and Data

The foundation of any e-book analysis framework rests on knowledge of the general e-book landscape and the existing information needs of a local user community. From this starting point, quantitative methods, such as cost analysis, can provide evidence for collection development initiatives and demonstrate how they align with patrons’ needs and the overarching goals of library administrators or funding agencies.

Essentially, “data stands in place of reality we wish to study. We cannot simply know a phenomenon, but we can attempt to capture it as data which represents the reality we have experienced . . . and are trying to explain.” 1 The data collected through quantitative investigations provides a baseline for future evaluation, evidence for when and how patrons make use of electronic collections, and promotes data-driven decisions throughout collection development departments. To get the most mileage out of the time and resources invested into quantitative investigations, it is essential to first understand what quantitative research is and what types of questions it can answer.

What Is Quantitative Research?

In the most basic terms, quantitative research methods are concerned with collecting and analyzing data that is structured and can be represented numerically. 2 One of the central goals is to build accurate and reliable measurements that allow for statistical analysis.

Because quantitative research focuses on data that can be measured, it is very effective at answering the “what” or “how” of a given situation. Questions are direct, quantifiable, and often contain phrases such as what percentage? what proportion? to what extent? how many? how much?

Quantitative research allows librarians to learn more about the demographics of a population, measure how many patrons use a service or product, examine attitudes and behaviors, document trends, or explain what is known anecdotally. Measurements like frequencies (i.e., counts), percentages, proportions, and relationships provide means to quantify and provide evidence for the variables listed above.

Findings generated from quantitative research uncover behaviors and trends. However, it is important to note that they do not provide insight into why people think, feel, or act in certain ways. In other words, quantitative research highlights trends across data sets or study groups, but not the motivation behind observed behaviors. To fill in these knowledge gaps, qualitative studies like focus groups, interviews, or open-ended survey questions are effective.

Whenever I sit down to a new quantitative research project and begin to think about my goals and objectives, I like to keep a small cheat sheet on my desk to remind me of the trends quantitative data can uncover and the stories that I can tell with study conclusions. This serves as one quick strategy that keeps my thoughts focused and prevents scope creep as I discuss project plans with various stakeholders.

Quantitative Research Cheat Sheet

Six key characteristics of quantitative research:

Quantitative findings can provide evidence or answers in the following areas:

Main advantages of quantitative research:

Main limitations of quantitative research:

Quantitative Research in Information Management Environments

In the current information landscape, a wealth of quantitative data sources is available to librarians. One of the challenges surrounding quantitative research in the information management profession is “how to make sense of all these data sources and use them in a way that supports effective decision-making.” 4

Most libraries pay for and receive materials through multiple routes. As a result, a quantitative research framework for e-book collections often consists of two central components: an examination of resource allocations and expenditures from funds, endowments, or gifts; and an examination of titles received through firm orders, subscriptions, packages, and large aggregated databases. 5 In many cases, examining funds and titles according to subject areas adds an extra layer of knowledge that can provide evidence for teaching, learning, or research activities in a specific field or justify requests for budget increases. 6

Many of the quantitative research projects that I have conducted over the past four years are in direct response to an inquiry from library administrators. In most cases, I have been asked to provide evidence for collection development activities that support expressed information needs, justify expenditures, or project annual increases in preparation for a new fiscal year. Study results are often expected to describe or weigh several courses of action in the short and long term. Essentially, my work is categorized into three basic concepts related to library management:

To assist in my prep work for a quantitative research project, I put together a file of background information about my library system and local user community to ensure that the project supports institutional goals and aligns with the general direction of programs and services on campus. Below are seven categories of information that I have on file at all times:

Typically, I take a day or two at the beginning of each fiscal year to update this information and ensure that it accurately reflects the landscape of collections and services available at CUL. From this starting point, it is simple to look at new project descriptions and think about the data required to support high-level decisions regarding the allocation of resources, to assess the effectiveness of collections and services, or to measure the value and impact of collections.

A wealth of local and external data sources is available to librarians, and each one can be used to tell a story about collection size, value, and impact. All that is required is an understanding of what the data measures and how different sources can be combined to tell a story about a user community.

Definitions of Local and External Data Sources

The remaining sections of this issue of Library Technology Reports discuss how I use quantitative data, what evidence I have uncovered to support e-book collection decisions, and how I apply quantitative findings in practical library settings. For the purposes of these discussions, I will use the following terminology:

Bibliographic record: A library catalog record that represents a specific title or resource.

Catalog clickthroughs: Counts of patron use of the catalog to access electronic full texts.

Citation analysis: Measurement of the impact of an article based on the number of times it has been cited.

Consortia reports: Consolidated usage reports for consortia. Often used to view usage linked to each individual consortia member.

COUNTER (Counting Online Usage of Networked Electronic Resources): An international initiative to improve the reliability of online usage statistics by providing a Code of Practice that standardizes the collection of usage data. It works to ensure vendor usage data is credible and comparable.

Cost data: Factual information concerning the cost of library materials, annual budget allocations, and general acquisitions budget.

FTE (full-time equivalent): The number of full-time faculty and students working or studying at a specific institution.

IP (Internet Protocol) address: A numerical label usually assigned to a library router or firewall that provides access to a private network (e.g., school or library network).

Link resolver statistics: Information regarding the pathways users take to access electronic resources.

Overlap data: Measurement of the degree of duplication across a collection.

Publication analysis: Measurement of impact by counting the research output of an author. Metrics include the number of peer-reviewed articles, coauthor collaborations, publication patterns, and extent of interdisciplinary research.

Title lists: Lists of e-book titles available in subscriptions, databases, or packages. These lists are generated and maintained by vendors and publishers.

Turnaway statistics: The number of patrons denied access to a specific title.

Vendor use data: Electronic use statistics provided by vendors.

Indicators and Performance Measures That Support Quantitative Research

I regularly use several indicators and performance measures to analyze e-book collections. Local and external data sources (listed in the section above) inform these investigations and provide the necessary “ingredients” to conduct cost analysis, examine return on investment, or measure the value of e-book collections to the community at CUL. Below is a breakdown of how I classify data and relate it to different indicators. 9

Input Cost Measures

Data source: Cost data pulled from Voyager reports (or your institution’s ILS).

In general, cost data demonstrates how funds are allocated across a budget. Analysis can identify areas where additional resources are required, monitor cost changes over time, and flag collection areas where funds can be pulled (e.g., overbudgeted funds, subject areas that no longer support the curriculum, etc.) and “reinvested” in the collection to support current information needs.

Each of the investigations described in the following chapter began with a review of cost data. I relied on a basic knowledge of how e-book acquisition budgets are distributed across subject areas or pooled to purchase interdisciplinary materials. Essentially, these investigations involved the identification of fund codes linked to subject areas, expenditures across set date ranges (e.g., calendar years, fiscal years, academic years), and bulk versus long-tail purchases.

Tip: When working with cost data and examining input cost measures, I have found it helpful to categorize data by fund type. E-book collections at CUL are often built with general income (GI) funds, endowments, and gifts. Policies and procedures regarding how funds can be transferred and what materials can be purchased impact how resources are allocated to build e-book collections. Before beginning a cost analysis project at your institution, it may be helpful to review the policies in place and determine how they relate to overarching institutional goals and collection priorities.

Collection Output Measures

Data sources: Cost data, title lists, overlap data, bibliographic records (particularly subject headings).

Collection output measures are related to the quantity and quality of output. Examples include the number of e-book titles included in a subscription or package deal acquired by a library, the number of e-book records acquired over a given period of time, the number of publishers and unique subject areas represented in an e-book collection, the currency of information (e.g., publication year), and the percentage of title overlap, or duplication, within a collection.

At this stage in my cost analysis projects, it is often necessary to combine data to create a snapshot of how funds flow in and out of subject areas to acquire research and teaching materials. For example, many of our large e-book packages are interdisciplinary. By pulling cost data, I can determine how the total cost was split across subject divisions based on fund code counts. Then, I break title lists apart by subject to determine what percentage of total content relates to each library division. By comparing the cost breakdown and title list breakdown, it is possible to determine what percentage of total content each library division receives and if it is on par with the division’s financial contribution.
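The comparison of cost shares against content shares can be sketched as follows; all divisions and figures here are hypothetical, not actual CUL data:

```python
# Each division's share of a package's cost vs. its share of the titles
# received. All figures are invented for illustration.
cost_by_division = {"Humanities": 12000, "Science": 30000, "Social Sciences": 18000}
titles_by_division = {"Humanities": 450, "Science": 820, "Social Sciences": 530}

total_cost = sum(cost_by_division.values())
total_titles = sum(titles_by_division.values())

for division in cost_by_division:
    cost_share = cost_by_division[division] / total_cost * 100
    title_share = titles_by_division[division] / total_titles * 100
    print(f"{division}: pays {cost_share:.1f}% of cost, "
          f"receives {title_share:.1f}% of titles")
```

A division whose title share sits well below its cost share is a candidate for renegotiating how the package cost is split.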

Effectiveness Measures and Indicators

Data sources: Cost data, title lists, COUNTER reports, vendor reports, consortia reports, resolver statistics, turnaway statistics, Google Analytics.

Examining input and output measures is an effective way of determining how budgets are allocated and the quantity and quality of materials available to patrons. To develop a quantitative baseline for the general value of e-book collections, measures like rate of use, cost per use, and turnaway rates can be very effective.

Again, this form of analysis relies on data from multiple sources. The ability to combine cost data, title lists, and COUNTER data (or vendor data) has yielded actionable results at my library. For instance, I combine data from these three sources to measure the value of databases. By pulling cost data covering three fiscal years and matching title lists against COUNTER reports, I have been able to examine trends in annual increase rates, examine overlap between subscriptions in the same subject area, and calculate cost per use to determine what percentage of the user community makes use of subscriptions.
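A minimal sketch of the cost-per-use calculation, assuming hypothetical subscription costs and COUNTER-style use counts:

```python
# Cost per use: annual cost divided by recorded uses. Figures are invented.
subscriptions = [
    {"title": "Database A", "cost": 9500.0, "uses": 4200},
    {"title": "Database B", "cost": 4800.0, "uses": 310},
    {"title": "Database C", "cost": 2200.0, "uses": 1900},
]

for sub in subscriptions:
    sub["cost_per_use"] = sub["cost"] / sub["uses"]

# Rank from best value (lowest cost per use) to worst.
for sub in sorted(subscriptions, key=lambda s: s["cost_per_use"]):
    print(f"{sub['title']}: ${sub['cost_per_use']:.2f} per use")
```

In this invented example Database B costs over ten times more per use than the others, the kind of outlier this analysis is designed to surface.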

Finally, by looking at turnaway statistics (also found in COUNTER data), it is possible to determine if sufficient access is provided to users. For instance, I look at turnaway statistics to evaluate if e-books listed on course reading lists provide sufficient access to a class of students over a semester. In cases where access is limited to a single user, I may look at the budget to find areas where funds can be shifted to purchase simultaneous usage instead.

Together, the data sets mentioned above provide evidence for how funds are invested, if they are invested in materials that are heavily used by patrons, and if access models are suited to the needs of the local user community.

In some cases, particularly when dealing with foreign language materials, I have encountered challenges because COUNTER data is not provided, and in some cases, it is difficult to obtain vendor reports as well. In the absence of usage data, I have experimented with link resolver statistics to determine what information they provide about user activities and the value of e-book materials.

Link resolver statistics provide information about the pathways users take to access electronic resources. 10 Resolver statistics show that a patron made a “request” via the link resolver and started the process of trying to view a full text. If the patron successfully accesses the full text, this is counted as a “clickthrough.”

It is important to note that link resolver statistics and usage statistics (like COUNTER) are not comparable because they measure different activities. Link resolvers measure attempts to connect while usage data measures usage activity. However, comparing sets of link resolver statistics against each other may provide insight into which resources patrons attempt to access most frequently. This can provide a ballpark idea of resource value in cases where usage statistics are not available.
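A small sketch of that comparison, using invented resolver counts: the request totals hint at relative demand, and the clickthrough rate hints at how often patrons actually get through to the full text.

```python
# Link resolver statistics: requests (attempts) vs. clickthroughs (successful
# full-text access). Figures are invented for illustration.
resolver_stats = {
    "Journal Package X": {"requests": 1200, "clickthroughs": 1080},
    "E-book Platform Y": {"requests": 640, "clickthroughs": 300},
}

for resource, s in resolver_stats.items():
    rate = s["clickthroughs"] / s["requests"] * 100
    print(f"{resource}: {s['requests']} requests, {rate:.0f}% reached full text")
```

A low clickthrough rate may point to broken links or access problems rather than low demand, which is why these figures are a ballpark signal, not a usage measure.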

Domain Measures

Data sources: FTE (full-time equivalent), IP address, demographic information.

Domain measures relate to the user community served by a library. They include total population, demographic information, attributes (e.g., undergraduate level, graduate level), and information needs.

In my work, domain measures impact subscription or package costs because campus-wide access is often priced according to FTE. Due to the size of CUL’s student body, access to essential collections can become extremely expensive and fall outside of the budget range. When this occurs, examining patron access by IP address has opened the door to negotiation, particularly when dealing with content that is discipline-specific. For instance, when negotiating subscription prices for science materials, IP data provided evidence that usage is concentrated at the library router located in the Science and Engineering Library. This allowed science selectors to negotiate pricing models based around the FTE of natural science programs as opposed to the campus community as a whole.

Cost-Effectiveness Indicators

Data sources: COUNTER reports, vendor reports, turnaway statistics, citation analysis, publication analysis.

Cost-effectiveness indicators are related to measures like cost per use and ultimately examine the general return on investment. They evaluate the financial resources invested in a product and determine if the investment brings added value to the existing collection.

In my work, I often combine cost data with usage data to calculate cost per use and also capture usage trends spanning at least three calendar years. The results provide a benchmark regarding whether the financial investment in the product is equivalent to its general “demand” within the user community. A recent project with colleagues at the science and medical science libraries has examined how to use citation and publication data to determine general impact of electronic resources.

Challenges Presented by Quantitative Research

One of the challenges surrounding quantitative research in library environments is a lack of standardization across data sets, particularly vendor reports. The general situation has improved in recent years due to widespread compliance with the COUNTER Code of Practice, but there is still work to be done. It is difficult to interpret the meaning of vendor usage data that is still not COUNTER-compliant because clear definitions of use do not exist. This can create significant roadblocks when running quantitative projects that examine multiple e-book collections to get a sense of comparative value.

Also, usage data is generated outside of libraries by publishers or aggregators and vendors. Factors like turnover, company mergers, or password changes result in significant time lags between when usage statistics are generated and when libraries receive them. Also, some vendors pull down usage statistics after a period of months. In most cases, librarians need statistics captured over two or three years to meet reporting requirements, and data dating back this far can be difficult to obtain. Finally, annual usage statistics are provided according to calendar year. However, librarians look at usage by fiscal year and academic year as well. In many cases, this means that multiple usage reports have to be stitched together in order to capture the appropriate timeframe for reporting purposes. This process is labor intensive and takes a considerable amount of time to complete.
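The stitching step itself is mechanical once monthly counts are in hand. A sketch assuming two calendar-year reports of monthly item requests (figures invented) and a July-June fiscal year:

```python
# Two calendar-year usage reports (month number -> item requests), stitched
# into a July 2022 - June 2023 fiscal-year total. Counts are invented.
usage_2022 = dict(zip(range(1, 13),
                      [310, 295, 340, 330, 315, 280, 120, 110, 390, 410, 385, 260]))
usage_2023 = dict(zip(range(1, 13),
                      [325, 300, 355, 345, 320, 290, 130, 115, 400, 420, 395, 270]))

# Fiscal year = Jul-Dec of the first report plus Jan-Jun of the second.
fiscal_year = (sum(usage_2022[m] for m in range(7, 13))
               + sum(usage_2023[m] for m in range(1, 7)))
print(f"FY2022/23 total item requests: {fiscal_year}")
```

The same pattern extends to academic years or any other reporting window; the labor-intensive part in practice is obtaining and normalizing the monthly reports, not summing them.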

These challenges emphasize an ongoing need to build positive working relationships with publishers, aggregators, and vendors to discuss challenges and develop solutions that benefit all stakeholders. It is important to note that libraries have valuable information that is not available to content providers, namely how e-books are discovered and used. Strong relationships allow for the transparent exchange of information between all parties, which ultimately benefits patrons by providing a seamless e-book experience.

Designing a Quantitative Research Framework

As mentioned earlier in this chapter, data stands in place of a reality we wish to study, quantify, and explain. In order to prevent scope creep and pull together bodies of data that add value to local work environments, it is essential to begin any quantitative research project with a set of clearly defined objectives, a strong understanding of the stakeholder group or audience, and knowledge of local information needs. These bits of information serve as markers to measure progress and ensure the project stays on track.

It is tempting to dive straight into a project and investigate if anecdotal information or assumptions are correct, but time spent developing a project outline is never wasted. The development of a successful plan requires “a clear idea of what it is to be achieved among the stakeholders. Clearly articulated objectives are the engine that drives the assessment process. This is one of the most difficult but most rewarding stages of the assessment process.” 11 Creating a roadmap for research projects can save countless hours down the line and ensures the correct quantitative method is selected. The plan also provides focus when the analysis phase of a project begins. Keep in mind that the data set you end up working with will be large; approaching it with stated goals and objectives saves significant amounts of time, which is especially important when working under a tight deadline!

Below is a checklist that I use at the beginning of any research project. It is based on recommendations made by Bakkalbasi, Sundre, and Fulcher. 12

While goals and objectives are closely related, they are not the same. Project goals should state exactly what you hope to learn or demonstrate through your research. Objectives state what you will assess or measure in order to achieve your overarching project goal.

Example of a project goal:

Example of project objectives:

The data sets collected through quantitative methods are large and can easily be examined from a variety of perspectives. As the project develops, mentally frame emerging trends into a story that can be shared with stakeholders. This process determines how results will ultimately be applied to collection development initiatives. Background knowledge of the local patron community and institutional goals serves as a compass; use it to shape results that bring value to your library or the greater professional community.

From my experience, each quantitative project that I work on allows me to expand my skill sets and understand how I can structure my daily activities to support overarching institutional goals. During many projects, I have encountered unexpected challenges or had to improvise when quantitative methods did not yield expected results (e.g., low survey response rates). However, each challenge equipped me to take on larger projects, better understand how our budget is structured, or build stronger relationships with patrons and colleagues.

One skill that has been invaluable to my work is the ability to develop a quantitative research plan. I hope that by sharing this structure, along with the performance measures and data sources that I use, readers will gain a behind-the-scenes view of my process and of all the moving parts that I work with to conduct e-book collection analysis. And now to the fun part! It is time to get down to the nitty-gritty and demonstrate how I conduct analysis to inform budget decisions and collection development activities at CUL.

Open Access


Research Article

Fragments of peer review: A quantitative analysis of the literature (1969-2015)

Roles Conceptualization, Data curation, Formal analysis, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

* E-mail: [email protected]

Affiliation Department of Computer Science, University of Valencia, Burjassot, Valencian Community, Spain

Roles Formal analysis, Investigation, Methodology, Supervision, Validation, Writing – original draft

Affiliation Department of Research in Biomedicine and Health, University of Split, Split, Split-Dalmatia, Croatia

Roles Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Writing – original draft, Writing – review & editing

Affiliation Department of Economics and Management, University of Brescia, Brescia, Lombardy, Italy


This paper examines research on peer review between 1969 and 2015 by looking at records indexed in the Scopus database. Although it is often argued that peer review has been poorly investigated, we found that the number of publications in this field doubled from 2005. Half of this work was indexed as research articles, a third as editorial notes and literature reviews, and the rest as book chapters or letters. We identified the most prolific and influential scholars, the most cited publications, and the most important journals in the field. Co-authorship network analysis showed that research on peer review is fragmented, with the largest group of co-authors including only 2.1% of the whole community. Co-citation network analysis indicated that the field is fragmented in terms of knowledge as well. This shows that despite its central role in research, peer review has been examined only through small-scale research projects. Our findings suggest a need to encourage collaboration and knowledge sharing across different research communities.

Citation: Grimaldo F, Marušić A, Squazzoni F (2018) Fragments of peer review: A quantitative analysis of the literature (1969-2015). PLoS ONE 13(2): e0193148. https://doi.org/10.1371/journal.pone.0193148

Editor: Lutz Bornmann, Max Planck Society, GERMANY

Received: June 9, 2017; Accepted: February 4, 2018; Published: February 21, 2018

Copyright: © 2018 Grimaldo et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its Supporting Information files.

Funding: This work has been supported by the TD1306 COST Action PEERE. The first author, Francisco Grimaldo, was also funded by the Spanish Ministry of Economy, Industry and Competitiveness project TIN2015-66972-C5-5-R. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.


Peer review is central to research. It is essential to ensure the quality of scientific publications, but also to help the scientific community self-regulate its reputation and resource allocation [ 1 ]. Whether directly or indirectly, it also influences funding and publication [ 2 ]. The transition of publishing and reading to the digital era has not changed the value of peer review, although it has stimulated the call for new models and more reliable standards [ 3 – 5 ].

Under the impact of recent scandals, where manipulated research passed the screening of peer review and was eventually published in influential journals, many analysts have suggested that more research is needed on this delicate subject [ 6 – 10 ]. The lack of data and robust evidence on the quality of the process has led many observers even to question the real value of peer review and to contemplate alternatives [ 11 – 13 ].

This study aims to provide a comprehensive analysis of the peer review literature from 1969 to 2015, by looking at articles indexed in Scopus. This analysis can help to reveal the structure of the field by identifying the most prolific and influential authors, the most authoritative journals, and the most active research institutions. By looking at co-authorship and co-citation networks, we measured structural characteristics of the scientific community, including collaboration and knowledge sharing. This was to understand whether, despite the growing number of publications on peer review in recent years, research is too fragmented to give rise to a coherent and connected field.

Finally, it is important to note that the period covered by our analysis is highly representative. Although many analysts have suggested that peer review is deeply rooted in the historical evolution of modern science since the 17th century [ 14 ], recent historical analysis suggests that peer review as an institutionalized system of evaluation in scholarly journals was established systematically only about 70 years ago, when the terms “peer review” and “referee” also became common currency [ 15 ].

Fig 1 (left panel) shows that the number of publications on peer review doubled from 2005. From 2004 to 2015, the annual growth of publications on peer review was 12% on average, reaching 28% and 38% in 2004–2005 and 2005–2006, respectively. The volume of published research on peer review grew faster than the total number of publications, which had an average growth of 5% from 2004 to 2015, reaching 15% in 2004–2005 ( Fig 1 , right panel). The observed peak-based dynamics of this growth can be related to the impact of the International Congresses on Peer Review and Biomedical Publication, which have been held every four years since 1989, with JAMA publishing abstracts and articles from the second, third, and fourth editions of the congress [ 10 ]. This was also confirmed by data from PubMed and Web of Science (see Figure A in S1 Appendix ).
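Year-over-year growth figures like the ones quoted above can be reproduced from annual publication counts with a few lines of Python; the counts below are illustrative placeholders, not the actual Scopus numbers.

```python
def yoy_growth(counts):
    """Year-over-year growth rates (as percentages) from an ordered
    list of (year, publication_count) pairs."""
    rates = {}
    for (y0, c0), (y1, c1) in zip(counts, counts[1:]):
        rates[(y0, y1)] = 100.0 * (c1 - c0) / c0
    return rates

# Illustrative counts only -- not the paper's data.
counts = [(2004, 100), (2005, 128), (2006, 176)]
print(yoy_growth(counts))  # {(2004, 2005): 28.0, (2005, 2006): 37.5}
```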


Fig 1. Number of records on peer review (left) and number of records published in English on any topic from 1969 to 2015 in Scopus (right).


About half of the records were journal articles, the rest being mostly editorial notes, commentaries, letters, and literature reviews (see Figure B in S1 Appendix ). However, the number of original research contributions, e.g., research articles, book chapters, or conference proceedings papers, increased from 2005 onward to exceed the number of editorial notes, reviews, and letters ( Fig 2 ). This indicates that empirical, data-driven research has been increasing in recent years.



Fig 3 shows the top 10 most productive countries by origin of research authors. Peer review is studied predominantly in the US, followed by the UK, Australia, Canada and Germany. While this may also be due to the size-effect of these communities, previous studies have suggested that peer review is intrinsic especially to the Anglo-Saxon institutional context [ 16 ]. However, if we look at the top 10 most productive institutions, in which it is probable that research has been cumulative and more systematic, we also found two prominent European institutions, the ETH Zurich and the University of Zurich ( Fig 4 ). This indicates that research on peer review is truly international.





Fig 5 shows the most prolific authors. While the top ones are European social scientists, i.e., Lutz Bornmann and Hans-Dieter Daniel, who published 45 and 34 papers, respectively, the pioneers of research on peer review were medical scholars and journal editors, such as Drummond Rennie, Richard Smith and Annette Flanagin.



Among publication outlets, the journals that published on peer review most frequently were: Science (n = 136 papers), Nature (n = 110), JAMA (n = 99), Scientometrics (n = 65), Behavioral and Brain Sciences (n = 48), Chemical and Engineering News (n = 34), Academic Medicine (n = 32), Australian Clinical Review (n = 32), Learned Publishing (n = 31) and Research Evaluation (n = 31). However, research articles on peer review were published mostly by JAMA (n = 62), Behavioral and Brain Sciences (n = 44) and Scientometrics (n = 42). This means that top journals such as Science and Nature typically published commentaries, editorial notes, or short surveys, while research contributions have mostly been published elsewhere. If we look at the impact of research on peer review on journal citations (see Table A in S1 Appendix ), the impact has been minimal with the exception of Scientometrics , where articles on peer review contributed significantly to the journal’s success (10.97% of all the journal’s citations were received by articles on peer review). However, it is worth noting that the contribution of research on peer review to journal citations has been increasing over time (see Fig 6 for a restricted sample of journals listed in Table B in S1 Appendix ).



Among the most important topics, looking at the keywords revealed that research has preferentially examined the connection between peer review and “quality assurance” (103 papers), “publishing” (93), “research” (76), “open access” (56), “quality improvement” (47), “evaluation” (46), “publication” (44), “assessment” (41), “ethics” (40) and “bibliometrics” (39). The primacy of the link between peer review and the concept of “quality” was confirmed by looking at nouns, verbs, and adjectives in the paper titles (“quality” appearing 527 times, against “journal” at 454 and “research” at 434) and in the abstracts (“quality” recurring 2208 times, against “research” at 2038 and “medical” at 1014). This confirms that peer review has been viewed mainly as a “quality control” process rather than as a collaborative process aimed at increasing the knowledge value of a manuscript [ 17 ].

Data showed that research on peer review is typically pursued in small collaborative networks (75% of the records had fewer than three co-authors), with the exception of one article published in 2012, which was co-authored by 198 authors and was therefore excluded from the subsequent analysis of co-authorship networks to avoid statistical bias (see Figure D in S1 Appendix ). Around 83% of the co-authorship networks included fewer than six authors (see Figure E in S1 Appendix ). The most prolific authors were also those with more co-authors, although not those with a higher average number of co-authors per paper (see Table E in S1 Appendix ).

The most prolific authors from our analysis were not always those instrumental in connecting the community studying peer review (e.g., compare Table 1 and Fig 5 ). Fragmentation and small-scale collaboration networks were dominant (e.g., see Table B and Figure E in S1 Appendix ). We found 1912 clusters with an average size of 4.1, which is very small. However, it is important to emphasize certain differences in the position of scientists in these three samples. When excluding records published in medicine journals, we found a more connected co-authorship network with scientists working in more cohesive and stable groups, indicated by the lower number of clusters, higher density and shorter diameter in sample 3 in Table 2 , which is not linearly related to decreasing numbers of nodes and edges.





To look at this in more detail, we plotted the co-authorship network linking all authors of the papers on peer review. Each node was a different author, and a link was established between two authors whenever they co-authored a paper. The greater the number of papers co-authored by two authors, the thicker the corresponding link.
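A minimal sketch of this construction, in Python rather than the authors' R script and with hypothetical author lists, counts each co-author pair so that the pair count can be rendered as link thickness:

```python
from collections import Counter
from itertools import combinations

def coauthorship_edges(papers):
    """Build weighted co-authorship edges from a list of author lists.
    Edge weight = number of papers the two authors wrote together,
    which the figure renders as link thickness."""
    weights = Counter()
    for authors in papers:
        # Deduplicate and sort so each pair has a canonical key.
        for a, b in combinations(sorted(set(authors)), 2):
            weights[(a, b)] += 1
    return weights

papers = [  # hypothetical author lists
    ["Alice", "Bob"],
    ["Alice", "Bob", "Carol"],
    ["Dave"],  # single-author paper: contributes no edges
]
print(coauthorship_edges(papers))
```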

The co-authorship network was highly disaggregated, with 7971 authors forming 1910 communities ( Table 1 ). With the exception of a large community of 167 researchers and a dozen communities of around 30 to 50 scientists, 98% of communities had fewer than 15 scientists. Note that the giant component (n = 167 scientists) represents only 2.1% of the total number of scientists in the sample. It included co-authorship relations between the top 10 most central authors and their collaborators ( Fig 7 ). The situation is different if we look at the largest communities and restrict our analysis to research articles and to articles published in non-medicine journals ( Fig 8 ). In this case, collaboration groups were more cohesive (see Fig 8 , right panel).
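A giant-component share like the 2.1% reported here can be computed from an edge list with a standard breadth-first traversal; the toy graph below is an assumption for illustration only.

```python
from collections import defaultdict, deque

def components(edges):
    """Connected components of an undirected graph given as edge pairs."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        queue, comp = deque([start]), set()
        seen.add(start)
        while queue:  # breadth-first traversal of one component
            node = queue.popleft()
            comp.add(node)
            for nxt in adj[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
        comps.append(comp)
    return comps

# Toy graph: one triangle plus one isolated pair.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("X", "Y")]
comps = components(edges)
giant = max(comps, key=len)
total = sum(len(c) for c in comps)
print(len(giant) / total)  # giant-component share: 3 / 5 = 0.6
```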


Note that the node size refers to the author’s betweenness centrality.



Sample 1 (left) and sample 3, i.e., records outside medicine (right). Note that the node size refers to the author’s betweenness centrality.


In order to look at the internal structure of the field, we built a co-citation network that measured relations between cited publications. It is important to note that here a co-citation meant that two records were cited in the same document. For the sake of clarity, we report data only on cases in which co-citation counts were higher than 1.
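A co-citation count of this kind can be sketched as follows (again in Python rather than the authors' R script; the reference lists are hypothetical). Pairs of references cited together in the same paper are tallied, and only pairs co-cited more than once are kept, as in the analysis above.

```python
from collections import Counter
from itertools import combinations

def cocitation_links(reference_lists, min_count=2):
    """Count how often each pair of references is cited together.
    Keeps only pairs co-cited at least `min_count` times."""
    pairs = Counter()
    for refs in reference_lists:
        for r1, r2 in combinations(sorted(set(refs)), 2):
            pairs[(r1, r2)] += 1
    return {pair: n for pair, n in pairs.items() if n >= min_count}

citing_papers = [  # hypothetical reference lists of citing papers
    ["Merton1973", "Zuckerman1971", "Cole1981"],
    ["Merton1973", "Zuckerman1971"],
    ["Cole1981"],
]
print(cocitation_links(citing_papers))  # {('Merton1973', 'Zuckerman1971'): 2}
```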

Fig 9 shows the co-citation network, which included 6402 articles and 71548 references. In the top-right corner is the community of 84 papers, while the two small clusters at the bottom-centre and middle-left are examples of isolated co-citation links generated by a small number of articles (e.g., the bottom-centre cluster was generated by four citing articles by the same authors with a high number of co-citation links). Table 3 presents the co-citation network metrics, including data on the giant component. Results suggest that the field is characterized by network fragmentation, with 192 clusters of limited size. While the giant component covered 33% of the nodes, it accounted for only 0.9% of the total number of cited sources in all records. Furthermore, data showed that 79.2% of co-citation links included no more than five cited references.





Table 4 shows a selection of the most important references that were instrumental in clustering the co-citation network as part of the giant component. Results demonstrated not only the importance of certain classical contributions to the sociology of science, e.g., Robert Merton’s work, which showed an interest in peer review as early as the 1970s, but also that more recent works, including literature reviews, were important in reducing the disconnection of scientists in the field [ 2 ]. They also show that, at least for the larger co-citation subnetwork, the field is potentially interdisciplinary, with important contributions from scholars in medicine as well as from sociology and the behavioural sciences.



Discussion and conclusions

Our analysis showed that research on peer review has been growing rapidly, especially since 2005. Not only did the number of publications increase; so did the number of citations, and therefore the impact of research on peer review [ 18 ]. We also found that research is international, with a longer tradition in the US but with important research groups also in Europe. However, when looking at co-authorship networks, findings indicate that research is fragmented. Scholars do not collaborate on a significant scale, with the largest group of co-authors including only 2.1% of the whole community. When looking at co-citation patterns, we found that knowledge sharing is also fragmented. The largest network covers only 33% of the nodes, which account for only 0.9% of the total number of cited sources in all records.

This calls for serious consideration of certain structural problems in studies of peer review. First, difficulties in accessing data from journals and funding agencies and in performing large-scale quantitative research have probably limited collaborative research [ 19 ]. While the lack of data may be due to the confidentiality and anonymity that characterize peer review, it is also possible that editorial boards of journals and administrative bodies of funding agencies have an interest in obstructing independent research as a means of protecting internal decisions [ 8 ]. However, the full digitalisation of editorial management processes and the increasing emphasis on open data and research integrity among science stakeholders are creating a favourable context in which researchers will soon be able to access peer review data more frequently and easily [ 20 ]. This is expected to stimulate collaboration and increase the scale of research on peer review. Second, the lack of specific funding schemes to support research on peer review has probably obstructed the possibility of systematic studies [ 10 ]. This has probably made it difficult for scholars to establish large-scale, cross-disciplinary collaboration.

In conclusion, although peer review may reflect context specificities and disciplinary traditions, the common challenge of understanding the complexity of this process, testing the efficacy of different models in reducing bias, and allocating credit and reputation fairly requires ensuring comparison and encouraging collaboration and knowledge sharing across communities [ 21 ]. Here, a recently released document on data sharing by a European project has indicated that data sharing on peer review is instrumental in promoting the quality of the process, with relevant collective benefits [ 22 ]. Not only are such initiatives important for improving the quality of research; they can also promote an evidence-based approach to peer review reorganization and innovation, which is currently not well developed.

Our sample included all records on peer review published from 1969 to 2015, which were extracted from Scopus on July 1 st 2016. We used the Advanced Search tab on the Scopus website to run our query strings (for detail, see below) and exported all available fields for each document retrieved as a CSV (comma separated values format) file. After several tests and checks on the dataset, we identified three samples of records that were hierarchically linked as follows:

With sample 1, we aimed to exclude records that did not explicitly address peer review as an object of study. With sample 2, we identified only articles that reported results, data, or cases. With sample 3, we aimed to understand specificities and differences between studies of peer review in medicine and other studies. Unless otherwise mentioned, we report results on sample 1. Note that, in order to check data consistency, we compared our Scopus dataset with other datasets and repositories, such as PubMed and WoS (see Figure A in S1 Appendix ).

The queries to Scopus proposed in this paper allowed us to retrieve the corpus at a sufficient level of generality to look at the big picture of this field of research. Querying titles and author keywords about “peer review” did not restrict the search only to specific aspects, contexts or cases in which peer review could have been studied (e.g., peer review of scientific manuscripts). Although these queries could filter out some relevant papers, we strongly believe these cases had only a marginal impact on our analysis. For instance, we tried to use other related search terms and found a few papers from Scopus (e.g. just 2 documents for “grant decision making” and 3 documents for “grant selection”) and a number of false positives (e.g. the first 20 of the 69 documents obtained for “panel review” did not really deal with peer review as a field of research).

In order to visualize the collaboration structure in the field, we calculated co-authorship networks [ 23 ] in all samples. Each node in the co-authorship network represented a researcher, while each edge connected two researchers who co-authored a paper. In order to map knowledge sharing, we extracted co-citation networks [ 24 – 26 ]. In this case, nodes represented bibliographic references, while edges connected two references cited in the same paper. These methods are key to understanding the emergence and evolution of research on peer review as a field driven by scientists’ connections and knowledge flows [ 27 ].

When constructing co-authorship and co-citation networks, we only used information about documents explicitly dealing with “peer review”. The rationale behind this decision was that we wanted to measure the kind of collaboration that can be attributed to these publications, regardless of the total productivity of the scientists involved. Minor data inconsistencies can also arise because the data exported from Scopus, WoS and PubMed are not complete, clean, and free of errors. If a paper is missing, all co-authorship links that could be derived from it will be missing too. If an author name is written in two ways, two different nodes will represent the same researcher, and links will be distributed between them. The continuous refinement, sophistication, and precision of the algorithms behind these databases ensure that the amount of mistakes and missing information is irrelevant for a large-scale analysis. In any case, we implemented automatic mechanisms that cleaned the data and removed duplicated records, reducing these inconsistencies to a marginal level given the scope of our study (see the R script used to perform the analysis in S1 File ).
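As a toy illustration of the name-merging problem (this is not the authors' actual cleaning procedure, and real author disambiguation is far harder), a crude normalizer might merge trivially different spellings of the same name so they map to one node:

```python
def normalize_author(name):
    """Crude normalization, for illustration only: lowercase, strip
    periods, and reduce to 'lastname + first initial'. This merges
    'Bornmann, L.' and 'Lutz Bornmann' but would wrongly merge two
    different people sharing a last name and initial."""
    name = name.strip().lower().replace(".", "")
    if "," in name:
        last, first = [p.strip() for p in name.split(",", 1)]
        return f"{last} {first[:1]}"
    parts = name.split()
    return f"{parts[-1]} {parts[0][:1]}" if len(parts) > 1 else name

# Two spellings of one (real) author collapse to the same key.
print(normalize_author("Bornmann, L.") == normalize_author("Lutz Bornmann"))
```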

Research has extensively used co-authorship and co-citation networks to study collaboration patterns by means of different network descriptors [ 28 ]. Here, we focussed on the following indicators, which were used to extract information from the samples presented above:

Supporting information

S1 Appendix.


S1 File. R code script used to perform the quantitative analysis.



We would like to thank Rocio Tortajada for her help in the early stages of this research and Emilia López-Iñesta for her help in formatting the references. This paper was based upon work from the COST Action TD1306 “New Frontiers of Peer Review”-PEERE ( www.peere.org ).


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2022 Jan-.


Qualitative Study

Steven Tenny; Janelle M. Brannan; Grace D. Brannan.


Last Update: September 18, 2022.

Qualitative research is a type of research that explores and provides deeper insights into real-world problems. [1] Instead of collecting numerical data points or intervening and introducing treatments as in quantitative research, qualitative research helps generate hypotheses and further investigate and understand quantitative data. Qualitative research gathers participants' experiences, perceptions, and behavior. It answers the hows and whys instead of how many or how much. It can be structured as a stand-alone study relying purely on qualitative data, or as part of mixed-methods research that combines qualitative and quantitative data. This review introduces readers to some basic concepts, definitions, terminology, and applications of qualitative research.

Qualitative research, at its core, asks open-ended questions, such as ‘how’ and ‘why’, whose answers are not easily put into numbers. [2] Because of the open-ended nature of the research questions at hand, qualitative research design is often not linear in the way quantitative design is. [2] One of the strengths of qualitative research is its ability to explain processes and patterns of human behavior that can be difficult to quantify. [3] Phenomena such as experiences, attitudes, and behaviors can be difficult to capture accurately in quantitative terms, whereas a qualitative approach allows participants themselves to explain how, why, or what they were thinking, feeling, and experiencing at a certain time or during an event of interest. Quantifying qualitative data is certainly possible, but at its core qualitative analysis looks for themes and patterns that can be difficult to quantify, and it is important to ensure that the context and narrative of qualitative work are not lost by trying to quantify something that is not meant to be quantified.

While qualitative research is sometimes placed in opposition to quantitative research, as if the two approaches, and the philosophical paradigms associated with each, necessarily ‘compete’ against each other, qualitative and quantitative work are neither opposites nor incompatible. [4] For instance, qualitative research can help expand and deepen understanding of data or results obtained from quantitative analysis. For example, say a quantitative analysis has determined that there is a correlation between length of stay and level of patient satisfaction; why does this correlation exist? A qualitative study could explore that question, and this dual-focus scenario shows one way in which qualitative and quantitative research can be integrated.

Examples of Qualitative Research Approaches

Ethnography
Ethnography as a research design has its origins in social and cultural anthropology, and involves the researcher being directly immersed in the participant’s environment. [2] Through this immersion, the ethnographer can use a variety of data collection techniques with the aim of being able to produce a comprehensive account of the social phenomena that occurred during the research period. [2] That is to say, the researcher’s aim with ethnography is to immerse themselves into the research population and come out of it with accounts of actions, behaviors, events, etc. through the eyes of someone involved in the population. Direct involvement of the researcher with the target population is one benefit of ethnographic research because it can then be possible to find data that is otherwise very difficult to extract and record.

Grounded Theory

Grounded Theory is the “generation of a theoretical model through the experience of observing a study population and developing a comparative analysis of their speech and behavior.” [5] As opposed to quantitative research, which is deductive and tests or verifies an existing theory, grounded theory research is inductive and therefore lends itself to research aiming to study social interactions or experiences. [3] [2] In essence, Grounded Theory’s goal is to explain, for example, how and why an event occurs or how and why people might behave a certain way. Through observing the population, a researcher using the Grounded Theory approach can then develop a theory to explain the phenomena of interest.

Phenomenology
Phenomenology is defined as the “study of the meaning of phenomena or the study of the particular”. [5] At first glance, it might seem that Grounded Theory and Phenomenology are quite similar, but upon careful examination, the differences can be seen. At its core, phenomenology looks to investigate experiences from the perspective of the individual. [2] Phenomenology is essentially looking into the ‘lived experiences’ of the participants and aims to examine how and why participants behaved a certain way, from their perspective . Herein lies one of the main differences between Grounded Theory and Phenomenology. Grounded Theory aims to develop a theory for social phenomena through an examination of various data sources whereas Phenomenology focuses on describing and explaining an event or phenomena from the perspective of those who have experienced it.

Narrative Research

One of qualitative research’s strengths lies in its ability to tell a story, often from the perspective of those directly involved in it. Reporting on qualitative research involves including details and descriptions of the setting involved and quotes from participants. This detail is called ‘thick’ or ‘rich’ description and is a strength of qualitative research. Narrative research is rife with the possibilities of ‘thick’ description as this approach weaves together a sequence of events, usually from just one or two individuals, in the hopes of creating a cohesive story, or narrative. [2] While it might seem like a waste of time to focus on such a specific, individual level, understanding one or two people’s narratives for an event or phenomenon can help to inform researchers about the influences that helped shape that narrative. The tension or conflict of differing narratives can be “opportunities for innovation”. [2]

Research Paradigm

Research paradigms are the assumptions, norms, and standards that underpin different approaches to research. Essentially, a research paradigm is the ‘worldview’ that informs research. [4] It is valuable for researchers, both qualitative and quantitative, to understand what paradigm they are working within, because understanding the theoretical basis of research paradigms allows researchers to understand the strengths and weaknesses of the approach being used and adjust accordingly. Different paradigms have different ontologies and epistemologies . Ontology is defined as the “assumptions about the nature of reality”, whereas epistemology is defined as the “assumptions about the nature of knowledge” that inform the work researchers do. [2] It is important to understand the ontological and epistemological foundations of the research paradigm researchers are working within to allow for a full understanding of the approach being used and the assumptions that underpin it as a whole. Further, it is crucial that researchers understand their own ontological and epistemological assumptions about the world in general, because those assumptions will necessarily affect how they interact with research. A discussion of research paradigms is not complete without describing positivist, postpositivist, and constructivist philosophies.

Positivist vs Postpositivist

To further understand qualitative research, we need to discuss positivist and postpositivist frameworks. Positivism is the philosophy that the scientific method can and should be applied to the social as well as the natural sciences. [4] Essentially, positivist thinking insists that the social sciences should use natural science methods in their research. This stems from the positivist ontology that an objective reality exists, fully independent of our individual perceptions of the world. Quantitative research is rooted in positivist philosophy, which can be seen in the value it places on concepts such as causality, generalizability, and replicability.

Conversely, postpositivists argue that social reality can never be one hundred percent explained, only approximated. [4] Indeed, qualitative researchers have long insisted that there are “fundamental limits to the extent to which the methods and procedures of the natural sciences could be applied to the social world,” and postpositivist philosophy is therefore often associated with qualitative research. [4] As an example of positivist versus postpositivist values in research, positivist philosophies value hypothesis testing, whereas postpositivist philosophies value the ability to formulate a substantive theory.


Constructivism is a subcategory of postpositivism. Most researchers invested in postpositivist research are constructivists as well, meaning they think there is no objective external reality; rather, reality is constructed. Constructivism is a theoretical lens that emphasizes the dynamic nature of our world. “Constructivism contends that individuals’ views are directly influenced by their experiences, and it is these individual experiences and views that shape their perspective of reality”. [6] Essentially, constructivist thought holds that ‘reality’ is not a fixed certainty, and that experiences, interactions, and backgrounds give people unique views of the world. Constructivism contends, unlike positivist views, that there is not necessarily an ‘objective’ reality we all experience. This is the ‘relativist’ ontological view that reality and the world we live in are dynamic and socially constructed. Therefore, qualitative scientific knowledge can be inductive as well as deductive. [4]

So why is it important to understand the differences in assumptions that different philosophies and approaches to research carry? Fundamentally, the assumptions underpinning the research tools a researcher selects set the base assumptions for the rest of the research and can even change the role of the researcher. [2] For example, is the researcher an ‘objective’ observer, as in positivist quantitative work? Or is the researcher an active participant in the research itself, as in postpositivist qualitative work? Understanding the philosophical base of the research undertaken allows researchers to fully understand the implications of their work and their role within the research, as well as to reflect on their own positionality and bias as they pertain to the research they are conducting.

Data Sampling 

The better the sample represents the intended study population, the more likely the researcher is to encompass the varying factors at play. A range of participant sampling and selection strategies exists to achieve this. [7]

Data Collection and Analysis

Qualitative research uses several techniques, including interviews, focus groups, and observation. [1] [2] [3] Interviews may be unstructured, with open-ended questions on a topic to which the interviewer adapts based on the responses, or structured, with a predetermined set of questions that every participant is asked. Interviews are usually conducted one on one and are appropriate for sensitive topics or topics needing in-depth exploration. Focus groups are often held with 8-12 target participants and are used when group dynamics and collective views on a topic are desired. Researchers can be participant-observers, sharing the experiences of the subject, or non-participant (detached) observers.

While quantitative research design prescribes a controlled environment for data collection, qualitative data collection may take place in a central location or in the participants’ own environment, depending on the study goals and design. Qualitative research can generate a large amount of data. The data are transcribed and may then be coded manually or with Computer-Assisted Qualitative Data Analysis Software (CAQDAS) such as ATLAS.ti or NVivo. [8] [9] [10]
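As a sketch of what the code-and-retrieve step a CAQDAS package automates looks like computationally, the following Python fragment tags transcript excerpts with researcher-assigned codes and tallies them. The excerpts and code labels are invented for illustration; they do not come from the source.

```python
from collections import defaultdict

# Hypothetical coded excerpts: each transcript segment is tagged with one
# or more researcher-assigned codes (excerpts and labels are illustrative).
coded_segments = [
    ("I started because my friends all smoked.", ["peer pressure"]),
    ("It made me feel grown up, you know?", ["image", "identity"]),
    ("My brother bought them for me at first.", ["access", "peer pressure"]),
]

# Build a simple code index: code -> supporting excerpts.
codebook = defaultdict(list)
for excerpt, codes in coded_segments:
    for code in codes:
        codebook[code].append(excerpt)

# Report code frequencies, most frequent first, as a CAQDAS
# package would summarise them.
for code, excerpts in sorted(codebook.items(), key=lambda kv: -len(kv[1])):
    print(f"{code}: {len(excerpts)} excerpt(s)")
```

In a real study, codes emerge and are refined iteratively; software mainly keeps this indexing manageable as the volume of transcripts grows.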

After the coding process, qualitative research results can take various forms. They may be a synthesis and interpretation presented with excerpts from the data, [11] or themes, a theory, or a model developed from the data.


To standardize and facilitate the dissemination of qualitative research outcomes, the healthcare team can use two reporting standards. The Consolidated Criteria for Reporting Qualitative Research (COREQ) is a 32-item checklist for interviews and focus groups. [12] The Standards for Reporting Qualitative Research (SRQR) is a checklist covering a wider range of qualitative research. [13]

Examples of Application

Many times a research question will start with qualitative research. The qualitative research helps generate the research hypothesis, which can then be tested with quantitative methods. After the data are collected and analyzed with quantitative methods, qualitative methods can be used to dive deeper into the data for a better understanding of what the numbers truly mean and what their implications are. The qualitative methods can then help clarify the quantitative data and refine the hypothesis for future research. Furthermore, with qualitative research, researchers can explore subjects that are poorly studied with quantitative methods, including opinions, individuals’ actions, and social science questions.

A good qualitative study design starts with a clearly stated goal or objective. The target population needs to be specified, and the method for obtaining information from the study population must be carefully detailed to ensure no part of the target population is omitted. A proper collection method should be selected that obtains the desired information without overly limiting the collected data, because the information sought is often not neatly compartmentalized. Finally, the design should ensure adequate methods for analyzing the data. An example may help clarify some of these aspects of qualitative research.

A researcher wants to decrease the number of teenagers who smoke in their community. The researcher could begin by asking current teen smokers why they started smoking through structured or unstructured interviews (qualitative research). The researcher can also get together a group of current teenage smokers and conduct a focus group to help brainstorm factors that may have prevented them from starting to smoke (qualitative research).

In this example, the researcher has used qualitative research methods (interviews and focus groups) to generate a list of ideas about both why teens start to smoke and what factors may have prevented them from starting. Next, the researcher compiles these data. The researcher found that, hypothetically, peer pressure, health issues, cost, being considered “cool,” and rebellious behavior all might increase or decrease the likelihood of teens starting to smoke.

The researcher creates a survey asking teen participants to rank how important each of the above factors is in either starting smoking (for current smokers) or not smoking (for current non-smokers). This survey provides specific numbers (ranked importance of each factor) and is thus a quantitative research tool.
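A minimal sketch of how such ranked-importance survey data might be aggregated follows. The responses, factor names, and numbers below are hypothetical, invented purely to illustrate the quantitative step; they are not from any real survey.

```python
# Hypothetical ranked-importance responses (1 = most important).
responses = [
    {"peer pressure": 1, "health": 2, "cost": 3, "image": 4, "rebellion": 5},
    {"peer pressure": 1, "health": 3, "cost": 2, "image": 5, "rebellion": 4},
    {"peer pressure": 2, "health": 1, "cost": 4, "image": 3, "rebellion": 5},
]

# Average the rank each factor received; a lower mean rank means the
# factor was considered more important by participants.
factors = responses[0].keys()
mean_rank = {f: sum(r[f] for r in responses) / len(responses) for f in factors}

# List factors from most to least important.
for factor, rank in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(f"{factor}: mean rank {rank:.2f}")
```

The highest-ranked factors identified this way are the ones the researcher would carry forward into the next qualitative round.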

The researcher can use the results of the survey to focus efforts on the one or two highest-ranked factors. Let us say the researcher found that health was the major factor that keeps teens from starting to smoke, and peer pressure was the major factor that contributed to teens starting to smoke. The researcher can go back to qualitative research methods to dive deeper into each of these for more information. The researcher wants to focus on how to keep teens from starting to smoke, so they focus on the peer pressure aspect.

The researcher can conduct interviews and/or focus groups (qualitative research) about what types and forms of peer pressure are commonly encountered, where the peer pressure comes from, and where smoking first starts. The researcher hypothetically finds that peer pressure often occurs after school at the local teen hangouts, mostly the local park. The researcher also hypothetically finds that peer pressure comes from older, current smokers who provide the cigarettes.

The researcher could further explore this observation at the local teen hangouts (qualitative research) and take notes regarding who is smoking, who is not, and what observable factors are at play in the peer pressure to smoke. The researcher finds a local park where many local teenagers hang out and sees that a shady, overgrown area of the park is where the smokers tend to gather. The researcher notes that the smoking teenagers buy their cigarettes from a local convenience store adjacent to the park, where the clerk does not check identification before selling cigarettes. These observations fall under qualitative research.

If the researcher returns to the park and counts how many individuals smoke in each region of the park, this numerical data would be quantitative research. Based on the researcher's efforts thus far, they conclude that local teen smoking and teenagers who start to smoke may decrease if there are fewer overgrown areas of the park and the local convenience store does not sell cigarettes to underage individuals.
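The counting step that turns these field observations into quantitative data can be tallied very simply. The region names and counts below are invented for illustration, not from the source.

```python
from collections import Counter

# Hypothetical field counts: each entry records the park region where
# a smoking individual was observed.
observations = [
    "overgrown area", "overgrown area", "playground",
    "overgrown area", "picnic tables", "overgrown area",
]

counts = Counter(observations)
total = sum(counts.values())

# Summarise where smoking is concentrated, most common region first.
for region, n in counts.most_common():
    print(f"{region}: {n} of {total} ({100 * n / total:.0f}%)")
```

Even a simple tally like this supports the conclusion in the text: if most observed smoking clusters in one region, that region becomes the target for intervention.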

The researcher could try to have the parks department reassess the shady areas to make them less conducive to smokers, or identify how to limit the convenience store’s sales of cigarettes to underage individuals. The researcher would then cycle back to qualitative methods, asking the at-risk population about their perceptions of the changes and what factors are still at play, as well as to quantitative research, such as teen smoking rates in the community and the incidence of new teen smokers. [14] [15]

Qualitative research functions as a standalone research design or in combination with quantitative research to enhance our understanding of the world. Qualitative research uses techniques including structured and unstructured interviews, focus groups, and participant observation to not only help generate hypotheses which can be more rigorously tested with quantitative research but also to help researchers delve deeper into the quantitative research numbers, understand what they mean, and understand what the implications are.  Qualitative research provides researchers with a way to understand what is going on, especially when things are not easily categorized. [16]

As discussed in the sections above, quantitative and qualitative work differ in many ways, including the criteria for evaluating them. There are four well-established criteria for evaluating quantitative data: internal validity, external validity, reliability, and objectivity. The correlating concepts in qualitative research are credibility, transferability, dependability, and confirmability. [4] [11] The concepts correspond as follows: internal validity corresponds to credibility; external validity to transferability; reliability to dependability; and objectivity to confirmability.

In conducting qualitative research, ensuring these concepts are satisfied and well thought out can prevent potential issues from arising. For example, just as a researcher will ensure that their quantitative study is internally valid, so should qualitative researchers ensure that their work has credibility.

Indicators such as triangulation and peer examination can help evaluate the credibility of qualitative work.

‘Thick’ or ‘rich’ description can be used to evaluate the transferability of qualitative research, whereas an indicator such as an audit trail can help with evaluating dependability and confirmability.

One issue of concern that qualitative researchers should take into consideration is observation bias, for example the Hawthorne effect, in which participants alter their behavior because they know they are being observed.

Qualitative research by itself or combined with quantitative research helps healthcare providers understand patients and the impact and challenges of the care they deliver. Qualitative research provides an opportunity to generate and refine hypotheses and delve deeper into the data generated by quantitative research. Qualitative research does not exist as an island apart from quantitative research, but as an integral part of research methods to be used for the understanding of the world around us. [17]

Qualitative research is important for all members of the healthcare team, as all are affected by it. Qualitative research may help develop a theory or a model for health research that can be further explored by quantitative research. Much of qualitative research data acquisition is completed by numerous team members, including social workers, scientists, and nurses. Within each area of the medical field, there is copious ongoing qualitative research, including work on physician-patient interactions, nursing-patient interactions, patient-environment interactions, healthcare team function, and patient information delivery.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.



University of Portland Clark Library


Nursing & Health Innovations: Peer-reviewed Quantitative Research


How to Find Peer-reviewed Quantitative Research Articles

In CINAHL and MEDLINE, to find peer-reviewed quantitative research articles, add several of the following subject terms to your search:

CINAHL terms: Quantitative Studies, Analysis of Variance, Chi Square Test, P-Value, T-Tests

MEDLINE terms include: Evaluation Studies, Analysis of Variance


Identifying Quantitative Research Articles

Here's an example of an article in the CINAHL database that has several quantitative research terms (Chi Square Test, T-Tests, Two-Way Analysis of Variance, P-Value) as Minor Subjects.



Quantitative Vs Qualitative Research

Published by Carmen Troy on August 13th, 2021; revised on January 11, 2023

What is Quantitative Research?

Quantitative research is associated with numerical data, or data that can be measured. It is used to study large populations, with information gathered through statistical, mathematical, or computational techniques.

Quantitative research isn’t simply the application of statistical analysis or quantitative techniques; rather, it uses a particular approach to theory to address research hypotheses or questions, establish an appropriate research methodology, and draw findings and conclusions.
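As a minimal illustration of the kind of computation quantitative analysis starts from (the groups and values below are invented, not from the source), descriptive statistics for two hypothetical groups can be computed with Python’s standard library:

```python
import statistics

# Hypothetical measurements for two groups, e.g. scores on some outcome
# under a control and an intervention condition (values are invented).
control = [12.1, 11.8, 12.5, 13.0, 12.2]
treated = [13.4, 13.9, 12.8, 14.1, 13.6]

# Descriptive statistics: the starting point of most quantitative analyses.
for name, data in (("control", control), ("treated", treated)):
    print(f"{name}: mean={statistics.mean(data):.2f}, "
          f"sd={statistics.stdev(data):.2f}, n={len(data)}")

# The difference in means is the kind of effect a formal test
# (e.g. a t-test) would then evaluate for statistical significance.
diff = statistics.mean(treated) - statistics.mean(control)
print(f"difference in means: {diff:.2f}")
```

Formal hypothesis testing would go on to ask whether such a difference is larger than chance variation would explain, which is where the statistical machinery of quantitative research comes in.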

Characteristics of Quantitative Research

Some of the most commonly employed quantitative research strategies include data-driven dissertations, theory-driven studies, and reflection-driven research. Regardless of the chosen approach, quantitative research shares some common features.

What is Qualitative Research?

Qualitative research is a type of scientific research where a researcher collects evidence to seek answers to a question. It is associated with studying human behavior from an informative perspective and aims at obtaining in-depth details of the problem.

As the term suggests,  qualitative research  is based on qualitative research methods, including participants’ observations, focus groups, and unstructured interviews.

Qualitative research is very different in nature from quantitative research. It takes its own path through the research process: how research questions are set up, how existing theories are built upon, what research methods are employed, and how the findings are unveiled to the readers.

You may adopt conventional methods, including phenomenological research, narrative-based research, grounded theory research, ethnographies, case studies, and auto-ethnographies.



Characteristics of Qualitative Research

Again, regardless of the chosen approach to qualitative research, your dissertation will have certain unique key features.


When to Use Qualitative and Quantitative Research Models

Now that you know the unique differences between quantitative and qualitative research methods, you may want to learn a bit about primary and secondary research methods.

Here is an article that will help you  distinguish between primary and secondary research  and decide whether you need to use quantitative and/or qualitative methods of primary research in your dissertation.

Alternatively, you can base your dissertation on secondary research, which is descriptive and explanatory.

Limitations of Quantitative and Qualitative Research

FAQs About Quantitative vs Qualitative Research

What is qualitative research?

Qualitative research is a type of scientific research where a researcher collects evidence to seek answers to a question. It is associated with studying human behavior from an informative perspective. It aims at obtaining in-depth details of the problem.

Qualitative or quantitative, which research type should I use?

The research title, research questions, hypothesis, objectives, and study area generally determine which research method best suits a dissertation.
