
When to trust (and not to trust) peer reviewed science


Deputy Vice-Chancellor Academic and Professor of Molecular Biology, UNSW Sydney

Disclosure statement

Merlin Crossley receives funding from the Australian Research Council and the National Health and Medical Research Council. He works at UNSW Sydney, and is on the Trust of the Australian Museum, the Boards of the Australian Science Media Centre, UNSW Press and UNSW Global.


The article is part of our occasional long read series Zoom Out, where authors explore key ideas in science and technology in the broader context of society.

The words “published in a peer reviewed journal” are sometimes considered the gold standard in science. But any professional scientist will tell you that the fact an article has undergone peer review is a long way from an ironclad guarantee of quality.

To know what science you should really trust you need to weigh the subtle indicators that scientists consider.


Journal reputation

The standing of the journal in which a paper is published is the first thing scientists consider.

Every scientific field is served by broad journals (like Nature, Science and the Proceedings of the National Academy of Sciences) and by many more specialist journals (like the Journal of Biological Chemistry). But it is important to recognise that hierarchies exist.

Some journals are considered more prestigious, or frankly, better than others. The “impact factor” (which reflects how many citations papers in the journal attract) is one simple, if controversial, measure of the importance of a journal.
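The arithmetic behind the impact factor is simple: citations received in a given year to items the journal published in the two preceding years, divided by the number of citable items it published in those two years. A minimal sketch, with figures invented purely for illustration:

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Impact factor for year Y: citations in year Y to papers the journal
    published in Y-1 and Y-2, divided by the citable items from Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: its 300 papers from 2021-22 were cited
# 2,400 times in 2023.
print(impact_factor(2400, 300))  # → 8.0
```

Because the numerator is often dominated by a journal's few most-cited papers, the figure says little about any individual article published in it, which is one source of the controversy.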

In practice every researcher carries a mental list of the top relevant journals in their field. When choosing where to publish, each scientist makes their own judgement on how interesting and how reliable their new results are.

If authors aim too high with their target journal, then the editor will probably reject the paper at once on the basis of “interest” (before even considering scientific quality).

If an author aims too low, then they could be selling themselves short – this could represent a missed opportunity for a trophy paper in a top journal that everyone would recognise as significant (if only because of where it was published).


Researchers sometimes talk their paper up in a cover letter to the editor, and aim for a journal one rank above where they expect the manuscript will eventually end up. If their paper is accepted they are happy. If not, they resubmit to a lower ranked or, in the standard euphemism, “more specialised” journal. This wastes time and effort, but is the reality of life in science.

Neither editors nor authors like to get things wrong. They are weighing up the pressure to break a story with a big headline against the fear of making a mistake. A mistake in this context means publishing a result that becomes quickly embroiled in controversy.

To safeguard against that, three or four peer reviewers (experienced experts in the field) are appointed by the editor to help.

The peer review process

At the time of submitting a paper, the authors may suggest reviewers they believe are appropriately qualified. But the editor will make the final choice, based on their understanding of the field and also on how well and how quickly reviewers respond to the task.

The identity of peer reviewers is usually kept secret so that they can comment freely (but sometimes this means they are quite harsh). The peer reviewers will repeat the job of the editor, and advise on whether the paper is of sufficient interest for the journal. Importantly, they will also evaluate the robustness of the science and whether the conclusions are supported by the evidence.

This is the critical “peer review” step. In practice, though, the level of scrutiny remains connected to the standing of the journal. If the work is being considered for a top journal, the scrutiny will be intense. The top journals seldom accept papers unless they consider them to be not only interesting but also watertight and bulletproof – that is, they believe the result is something that will stand the test of time.

If, on the other hand, the work is going into a little-read journal with a low impact factor, then sometimes reviewers will be more forgiving. They will still expect scientific rigour but are likely to accept some data as inconclusive, provided the researchers point out the limitations of their work.

Knowing this is how the process goes, whenever a researcher reads a paper they make a mental note of where the work was published.


Journal impact factor

Most journals are reliable. But at the bottom of the list in terms of impact lie two types of journals:

respectable journals that publish peer reviewed results that are solid but of limited interest – since they may represent dead ends or very specialist local topics

so-called “predatory” journals, which are more sinister – in these journals the peer review process is either superficial or non-existent, and editors essentially charge authors for the privilege of publishing.

Professional scientists will distinguish between the two partly based on the publishing house, and even the name of the journal.

The Public Library of Science (PLOS) is a reputable publisher, and offers PLOS ONE for solid science – even if it may only appeal to a limited audience.


Springer Nature has launched a similar journal called Scientific Reports . Other good quality journals with lower impact factors include journals of specialist academic societies in countries with smaller populations – they will never reach a large audience but the work may be rock solid.

Predatory journals on the other hand are often broad in scale, published by online publishers managing many titles, and sometimes have the word “international” in the title. They are seeking to harvest large numbers of papers to maximise profits. So names like “The International Journal of Science” should be treated with caution, whereas the “Journal of the Australian Bee Society” may well be reliable (note, I invented these names just to illustrate the point).

The value of a journal vs a single paper

Impact factors have become controversial because they have been overused as a proxy for the quality of single papers. However, strictly applied they reflect only the interest a journal attracts, and may depend on a few “jackpot” papers that “go viral” in terms of accumulating citations.

Additionally, while papers in higher impact journals may have undergone more scrutiny, there is more pressure on the editors and on the authors of these top journals. This means shortcuts may be taken more often, the last, crucial control experiment may never be done, and the journals end up being less reliable than their reputations imply. This disconnect sometimes generates sniping about how certain journals aren’t as good as they claim to be – which actually keeps everyone on their toes.

While all the controversies surrounding impact factors are real, every researcher knows and thinks about them or other journal ranking systems (SNIP – Source Normalised Impact per Paper, SJR – SCImago Journal Rank, and others) when they are choosing which journal to publish in, which papers to read, and which papers to trust.


Nothing is perfect

Even if everything is done properly, peer review is not infallible. If authors fake their data very cleverly, for example, then it may be difficult to detect.

Deliberately faking data is, however, relatively rare. Not because scientists are saints but because it is foolish to fake data. If the results are important, others will quickly try to reproduce and build upon them. If a fake result is published in a top journal it is almost certain to be discovered. This does happen from time to time, and it is always a scandal.

Errors and sloppiness are much more common. This may be related to the increasing urgency, pressure to publish and prevalence of large teams where no one may understand all the science. Again, however, only inconsequential mistakes will survive – most important errors will quickly be picked up.

Can you trust the edifice that is modern science?

Usually, one can get a feel for how likely it is that a piece of peer reviewed science is solid. This comes through relying on the combination of the pride and the reputation of the authors, and of the journal editors, and of the peer reviewers.

So I do trust the combination of peer review system and the inherent fact that science is built on previous foundations. If those are shaky, the cracks will appear quickly and things will be set straight.

I am also heartened by new opportunities for even better and faster systems that are arising as a result of advances in information technology. These include models for post-publication (rather than pre-publication) peer review. Perhaps this creates a way to formalise discussions that would otherwise happen on Twitter, and that can raise doubts about the validity of published results.


The journal eLife is turning peer review on its head. It’s offering to publish everything it deems to be of sufficient interest, and then letting authors choose to answer or not answer points that are raised in peer review after acceptance of the manuscript. Authors can even choose to refrain from going ahead if they think the peer reviewers’ points expose the work as flawed.

eLife also has a system where reviewers get together and provide a single moderated review, to which their names are appended and which is published. This prevents anonymity from enabling overly harsh treatment.

All in all, we should feel confident that important science is solid (and peripheral science unvalidated) due to peer review, transparency, scrutiny and reproduction of results in science publication. Nevertheless in some fields where reproduction is rare or impossible – long term studies depending on complex statistical data – it is likely that scientific debate will continue.

But even in these fields, the endless scrutiny by other researchers, together with the proudly guarded reputations of authors and journals, means that even if it will never be perfect, the scientific method remains more reliable than all the others.


Explore Information — Understanding & Recognizing Peer Review

What Do You Mean by Peer Reviewed Sources?

What's so great about peer review?

Peer reviewed articles are often considered the most reliable and reputable sources in that field of study. Peer reviewed articles have undergone review (hence the "peer-review") by fellow experts in that field, as well as an editorial review process. The purpose of this is to ensure that, as much as possible, the finished product meets the standards of the field. 

Peer reviewed publications are one of the main ways researchers communicate with each other. 

Most library databases have features to help you discover articles from scholarly journals. Most articles from scholarly journals have gone through the peer review process. Many scholarly journals will also publish book reviews or start off with an editorial, which are not peer reviewed - so don't be tricked!

So that means I can turn my brain off, right?

Nope! You still need to engage with what you find. Are there additional scholarly sources with research that supports the source you've found, or have you encountered an outlier in the research? Have others been able to replicate the results of the research? Is the information old and outdated? Was this study on toothpaste (for example) funded by Colgate? 

You're engaging with the research - ultimately, you decide what belongs in your project, and what doesn't. You get to decide if a source is relevant or not. It's a lot of responsibility - but it's a lot of authority, too.


Recognizing Scholarly Articles

Popular vs. Scholarly

(Video: Popular vs. scholarly articles – Peabody Library)

When looking for articles to use in your assignment, you should realize that there is a difference between "popular" and "scholarly" articles.

Popular  sources, such as newspapers and magazines, are written by journalists or others for general readers (for example, Time, Rolling Stone, and National Geographic).

Scholarly  sources are written for the academic community, including experts and students, on topics that are typically footnoted and based on research (for example, American Literature or New England Review). Scholarly journals are sometimes referred to as "peer-reviewed," "refereed" or "academic."

How do you find scholarly or "peer-reviewed" journal articles?

The option to select scholarly or peer-reviewed articles is typically available on the search page of each database. Just check the box or select the option. You can also search Ulrich's Periodical Directory to see if the journal is refereed/peer-reviewed.

Popular sources (magazines & newspapers): inform and entertain the general public.

Scholarly or academic sources (journals & scholarly books): disseminate research and academic discussion among professionals in a discipline.

Trade publications: neither scholarly nor popular sources, but may combine elements of both. They allow practitioners in specific industries to share market and production information that improves their businesses.

(Video: Peer Review in 3 minutes – NCSU Libraries)

Structure of a Scholarly Article

What might you find in a scholarly article?

(Image: anatomy of a scholarly article)


Peer review: a flawed process at the heart of science and journals

Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have.

When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new ‘disease’, female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). ‘But,’ the news editor wanted to know, ‘was this paper peer reviewed?’ The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)


My point is that peer review is impossible to define in operational terms (an operational definition is one whereby if 50 of us looked at the same process we could all agree most of the time whether or not it was peer review). Peer review is thus like poetry, love, or justice. But it is something to do with a grant application or a paper being scrutinized by a third party—who is neither the author nor the person making a judgement on whether a grant should be given or a paper published. But who is a peer? Somebody doing exactly the same kind of research (in which case he or she is probably a direct competitor)? Somebody in the same discipline? Somebody who is an expert on methodology? And what is review? Somebody saying ‘The paper looks all right to me’, which is sadly what peer review sometimes seems to be. Or somebody poring over the paper, asking for raw data, repeating analyses, checking all the references, and making detailed suggestions for improvement? Such a review is vanishingly rare.

What is clear is that the forms of peer review are protean. Probably the systems of every journal and every grant giving body are different in at least some detail; and some systems are very different. There may even be some journals using the following classic system. The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you'd expect by chance. 1
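The phrase “little better than you'd expect by chance” is usually made precise with an agreement statistic such as Cohen's kappa, which discounts the agreement two reviewers would reach by coincidence alone. A minimal sketch, with invented accept/reject verdicts (a kappa of 0 means agreement exactly at chance level; 1 means perfect agreement):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters giving 'accept'/'reject' verdicts:
    kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: both say accept, or both say reject, assuming
    # independent raters with their observed marginal accept rates.
    pa = rater_a.count("accept") / n
    pb = rater_b.count("accept") / n
    chance = pa * pb + (1 - pa) * (1 - pb)
    return (observed - chance) / (1 - chance)

# Invented verdicts from two reviewers on the same ten manuscripts.
a = ["accept", "accept", "reject", "accept", "reject",
     "accept", "reject", "reject", "accept", "accept"]
b = ["accept", "reject", "reject", "accept", "accept",
     "reject", "reject", "accept", "accept", "reject"]
print(cohens_kappa(a, b))  # → 0.0: agreement is exactly at chance level
```

Here the reviewers agree on half the manuscripts, but given their individual accept rates, half is exactly what coin-flipping raters would produce, so kappa is zero.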

That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked ‘publish’ and ‘reject’. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal comprised only of papers that had failed peer review and see if anybody noticed. I wrote back: ‘How do you know I haven't already done it?’


But does peer review ‘work’ at all? A systematic review of all the available evidence on peer review concluded that ‘the practice of peer review is based on faith in its effects, rather than on facts’. 2 But the answer to the question of whether peer review works depends on the question ‘What is peer review for?’

One answer is that it is a method to select the best grant applications for funding and the best papers to publish in a journal. It is hard to test this aim because there is no agreed definition of what constitutes a good paper or a good research proposal. Plus what is peer review to be tested against? Chance? Or a much simpler process? Stephen Lock when editor of the BMJ conducted a study in which he alone decided which of a consecutive series of papers submitted to the journal he would publish. He then let the papers go through the usual process. There was little difference between the papers he chose and those selected after the full process of peer review. 1 This small study suggests that perhaps you do not need an elaborate process. Maybe a lone editor, thoroughly familiar with what the journal wants and knowledgeable about research methods, would be enough. But it would be a bold journal that stepped aside from the sacred path of peer review.

Another answer to the question of what is peer review for is that it is to improve the quality of papers published or research proposals that are funded. The systematic review found little evidence to support this, but again such studies are hampered by the lack of an agreed definition of a good study or a good research proposal.

Peer review might also be useful for detecting errors or fraud. At the BMJ we did several studies where we inserted major errors into papers that we then sent to many reviewers. 3,4 Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter. Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust. A major question, which I will return to, is whether peer review and journals should cease to work on trust.


So we have little evidence on the effectiveness of peer review, but we have considerable evidence on its defects. In addition to being poor at detecting gross defects and almost useless for detecting fraud it is slow, expensive, profligate of academic time, highly subjective, something of a lottery, prone to bias, and easily abused.

Slow and expensive

Many journals, even in the age of the internet, take more than a year to review and publish a paper. It is hard to get good data on the cost of peer review, particularly because reviewers are often not paid (the same, come to that, is true of many editors). Yet there is a substantial ‘opportunity cost’, as economists call it, in that the time spent reviewing could be spent doing something more productive—like original research. I estimate that the average cost of peer review per paper for the BMJ (remembering that the journal rejected 60% without external review) was of the order of £100, whereas the cost of a paper that made it right through the system was closer to £1000.
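The gap between the £100 and £1000 figures is the rejection rate at work: review effort is spent on every submission, but the bill is effectively carried by the minority of papers that are published. A back-of-envelope sketch (the submission and publication counts below are invented to make the quoted averages consistent; they are not BMJ data):

```python
def cost_per_published_paper(submissions: int,
                             avg_review_cost: float,
                             published: int) -> float:
    """Spread the total spend on reviewing all submissions over the
    papers that are eventually published."""
    return submissions * avg_review_cost / published

# Illustrative figures: 5,000 submissions a year at ~£100 of review
# cost each (most rejected early and cheaply), of which 500 are
# published: each published paper then carries ~£1,000 of reviewing.
print(cost_per_published_paper(5000, 100, 500))  # → 1000.0
```

On these assumed numbers a 10% acceptance rate is exactly what turns a £100 per-submission average into a £1000 per-published-paper cost.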

The cost of peer review has become important because of the open access movement, which hopes to make research freely available to everybody. With the current publishing model peer review is usually ‘free’ to authors, and publishers make their money by charging institutions to access the material. One open access model is that authors will pay for peer review and the cost of posting their article on a website. So those offering or proposing this system have had to come up with a figure—which is currently between $500 and $2500 per article. Those promoting the open access system calculate that at the moment the academic community pays about $5000 for access to a peer reviewed paper. (The $5000 is obviously paying for much more than peer review: it includes other editorial costs, distribution costs—expensive with paper—and a big chunk of profit for the publisher.) So there may be substantial financial gains to be had by academics if the model for publishing science changes.

There is an obvious irony in people charging for a process that is not proved to be effective, but that is how much the scientific community values its faith in peer review.


People have a great many fantasies about peer review, and one of the most powerful is that it is a highly objective, reliable, and consistent process. I regularly received letters from authors who were upset that the BMJ rejected their paper and then published what they thought to be a much inferior paper on the same subject. Always they saw something underhand. They found it hard to accept that peer review is a subjective and, therefore, inconsistent process. But it is probably unreasonable to expect it to be objective and consistent. If I ask people to rank painters like Titian, Tintoretto, Bellini, Carpaccio, and Veronese, I would never expect them to come up with the same order. A scientific study submitted to a medical journal may not be as complex a work as a Tintoretto altarpiece, but it is complex. Inevitably people will take different views on its strengths, weaknesses, and importance.

So, the evidence is that if reviewers are asked to give an opinion on whether or not a paper should be published they agree only slightly more than they would be expected to agree by chance. (I am conscious that this evidence conflicts with the study of Stephen Lock showing that he alone and the whole BMJ peer review process tended to reach the same decision on which papers should be published. The explanation may be that being the editor who had designed the BMJ process and appointed the editors and reviewers it was not surprising that they were fashioned in his image and made similar decisions.)

Sometimes the inconsistency can be laughable. Here is an example of two reviewers commenting on the same paper.

Reviewer A: ‘I found this paper an extremely muddled paper with a large number of deficits.’

Reviewer B: ‘It is written in a clear style and would be understood by any reader.’

This—perhaps inevitable—inconsistency can make peer review something of a lottery. You submit a study to a journal. It enters a system that is effectively a black box, and then a more or less sensible answer comes out at the other end. The black box is like the roulette wheel, and the prizes and the losses can be big. For an academic, publication in a major journal like Nature or Cell is to win the jackpot.

The evidence on whether there is bias in peer review against certain sorts of authors is conflicting, but there is strong evidence of bias against women in the process of awarding grants. 5 The most famous piece of evidence on bias against authors comes from a study by DP Peters and SJ Ceci. 6 They took 12 studies that came from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions but changed the authors' names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realize that they had already published the paper, and eight of the remaining nine were rejected—not because of lack of originality but because of poor quality. Peters and Ceci concluded that this was evidence of bias against authors from less prestigious institutions.

This is known as the Matthew effect: ‘To those who have, shall be given; to those who have not shall be taken away even the little that they have’. I remember feeling the effect strongly when as a young editor I had to consider a paper submitted to the BMJ by Karl Popper. 7 I was unimpressed and thought we should reject the paper. But we could not. The power of the name was too strong. So we published, and time has shown we were right to do so. The paper argued that we should pay much more attention to error in medicine, about 20 years before many papers appeared arguing the same.

The editorial peer review process has been strongly biased against ‘negative studies’, i.e. studies that find an intervention does not work. It is also clear that authors often do not even bother to write up such studies. This matters because it biases the information base of medicine. It is easy to see why journals would be biased against negative studies. Journalistic values come into play. Who wants to read that a new treatment does not work? That's boring.

We became very conscious of this bias at the BMJ; we always tried to concentrate not on the results of a study we were considering but on the question it was asking. If the question is important and the answer valid, then it must not matter whether the answer is positive or negative. I fear, however, that bias is not so easily abolished and persists.

The Lancet has tried to get round the problem by agreeing to consider the protocols (plans) for studies yet to be done. 8 If it thinks the protocol sound and if the protocol is followed, the Lancet will publish the final results regardless of whether they are positive or negative. Such a system also has the advantage of stopping resources being spent on poor studies. The main disadvantage is that it increases the sum of peer reviewing—because most protocols will need to be reviewed in order to get funding to perform the study.

Abuse of peer review

There are several ways to abuse the process of peer review. You can steal ideas and present them as your own, or produce an unjustly harsh review to block or at least slow down the publication of the ideas of a competitor. These have all happened. Drummond Rennie tells the story of a paper he sent, when deputy editor of the New England Journal of Medicine , for review to Vijay Soman. 9 Having produced a critical review of the paper, Soman copied some of the paragraphs and submitted it to another journal, the American Journal of Medicine . This journal, by coincidence, sent it for review to the boss of the author of the plagiarized paper. She realized that she had been plagiarized and objected strongly. She threatened to denounce Soman but was advised against it. Eventually, however, Soman was discovered to have invented data and patients, and left the country. Rennie learnt a lesson that he never subsequently forgot but which medical authorities seem reluctant to accept: those who behave dishonestly in one way are likely to do so in other ways as well.


The most important question with peer review is not whether to abandon it, but how to improve it. Many ideas have been advanced to do so, and an increasing number have been tested experimentally. The options include: standardizing procedures; opening up the process; blinding reviewers to the identity of authors; reviewing protocols; training reviewers; being more rigorous in selecting and deselecting reviewers; using electronic review; rewarding reviewers; providing detailed feedback to reviewers; using more checklists; or creating professional review agencies. It might be, however, that the best response would be to adopt a very quick and light form of peer review—and then let the broader world critique the paper or even perhaps rank it in the way that Amazon asks users to rank books and CDs.

I hope that it will not seem too indulgent if I describe the far from finished journey of the BMJ to try and improve peer review. We tried as we went to conduct experiments rather than simply introduce changes.

The most important step on the journey was realizing that peer review could be studied just like anything else. This was the idea of Stephen Lock, my predecessor as editor, together with Drummond Rennie and John Bailar. At the time it was a radical idea, and still seems radical to some—rather like conducting experiments with God or love.

Blinding reviewers to the identity of authors

The next important step was hearing the results of a randomized trial that showed that blinding reviewers to the identity of authors improved the quality of reviews (as measured by a validated instrument). 10 This trial, which was conducted by Bob McNutt, A T Evans, and Bob and Suzanne Fletcher, was important not only for its results but because it provided an experimental design for investigating peer review. Studies where you intervene and experiment allow more confident conclusions than studies where you observe without intervening.

This trial was repeated on a larger scale by the BMJ and by a group in the USA who conducted the study in many different journals. 11,12 Neither study found that blinding reviewers improved the quality of reviews. These studies also showed that such blinding is difficult to achieve (because many studies include internal clues on authorship), and that reviewers could identify the authors in about a quarter to a third of cases. But even when the results were analysed by looking at only those cases where blinding was successful there was no evidence of improved quality of the review.

Opening up peer review

At this point we at the BMJ thought that we would change direction dramatically and begin to open up the process. We hoped that increasing the accountability would improve the quality of review. We began by conducting a randomized trial of open review (meaning that the authors but not readers knew the identity of the reviewers) against traditional review. 13 It had no effect on the quality of reviewers' opinions. They were neither better nor worse. We went ahead and introduced the system routinely on ethical grounds: such important judgements should be open and accountable unless there were compelling reasons why they could not be—and there were not.

Our next step was to conduct a trial of our current open system against a system whereby every document associated with peer review, together with the names of everybody involved, was posted on the BMJ 's website when the paper was published. Once again this intervention had no effect on the quality of the opinion. We thus planned to make posting peer review documents the next stage in opening up our peer review process, but that has not yet happened—partly because the results of the trial have not yet been published and partly because this step required various technical developments.

The final step was, in my mind, to open up the whole process and conduct it in real time on the web in front of the eyes of anybody interested. Peer review would then be transformed from a black box into an open scientific discourse. Often I found the discourse around a study was a lot more interesting than the study itself. Now that I have left I am not sure if this system will be introduced.

Training reviewers

The BMJ also experimented with another possible way to improve peer review—by training reviewers. 4 It is perhaps extraordinary that there has been no formal training for such an important job. Reviewers learnt either by trial and error (without, it has to be said, very good feedback), or by working with an experienced reviewer (who might unfortunately be experienced but not very good).

Our randomized trial of training reviewers had three arms: one group got nothing; one group had a day's face-to-face training plus a CD-rom of the training; and the third group got just the CD-rom. The overall result was that training made little difference. 4 The groups that had training did show some evidence of improvement relative to those who had no training, but we did not think that the difference was big enough to be meaningful. We cannot conclude from this that longer or better training would not be helpful. A problem with our study was that most of the reviewers had been reviewing for a long time. "Old dogs cannot be taught new tricks", but the possibility remains that younger ones could.


One difficult question is whether peer review should continue to operate on trust. Some have made small steps beyond into the world of audit. The Food and Drug Administration in the USA reserves the right to go and look at the records and raw data of those who produce studies that are used in applications for new drugs to receive licences. Sometimes it does so. Some journals, including the BMJ , make it a condition of submission that the editors can ask for the raw data behind a study. We did so once or twice, only to discover that reviewing raw data is difficult, expensive, and time consuming. I cannot see journals moving beyond trust in any major way unless the whole scientific enterprise moves in that direction.

So peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review. How odd that science should be rooted in belief.

Richard Smith was editor of the BMJ and chief executive of the BMJ Publishing Group for 13 years. In his last year at the journal he retreated to a 15th century palazzo in Venice to write a book. The book will be published by RSM Press, and this is the second in a series of extracts that will be published in the JRSM.


What are credible sources?

What is peer review?

Your lecturers will often require that in assignments you use information from academic journal articles that are peer reviewed (an alternative term is “refereed”).

Peer review is a formal quality control process whereby an article submitted to a journal is evaluated by several recognised experts in that discipline. These “referees” judge whether it makes a sufficient contribution to knowledge in the discipline and is of a sufficient standard to justify publication.

Academic book manuscripts, conference papers and trade journal articles can also be peer reviewed. Many library databases, including Discover, allow you to limit your search to peer reviewed articles only.

How can I tell if a journal is peer-reviewed?

In the diagram below, "Fuel Cell" is a non-peer reviewed magazine, whereas "Cell Proliferation" is a peer reviewed journal, as indicated by the Referee Jersey icon to the left of the title.




Source Evaluation and Credibility: Journals and Magazines

Identify Journal Types

Bias in magazines.

Sometimes it is difficult to determine whether a magazine's focus is conservative, liberal or radical. One way to determine this is by using Magazines for Libraries by William and Linda Sternberg Katz (R 050.25 M189, located near the Reference Desk). The magazines listed below are a sample of various points of view to get you started.

* Current issues of these magazines are also available for browsing at the library. Journals and magazines are arranged alphabetically by title.

Please keep in mind that online versions of magazines may not contain all of the articles in a particular issue, or allow you full access to the whole article.


Peer-review and publication does not guarantee reliable information

Posted on 16th January 2018 by Dennis Neuen


This is the twenty-second blog in a series of 36 blogs based on a list of ‘Key Concepts’ developed by an Informed Health Choices project team. Each blog will explain one Key Concept that we need to understand to be able to assess treatment claims.

What is peer-review?

A peer-reviewed journal article is an article which has been independently assessed by others working in the same field.

Peer-review can occur both before and after publication, but pre-publication review is considered standard practice in academia. The concept of peer-review dates back to 1731, when the Royal Society of Edinburgh distributed a notice stating [1]:

Memoirs sent by correspondence are distributed according to the subject matter to those members who are most versed in these matters. The report of their identity is not known to the author.

Fast forward to today and peer-review is still regarded as essential to quality control and the functioning of the scientific community. The Royal Society, Britain’s national academy of sciences, proudly declares [2]:

Peer review is to the running of the scientific enterprise what democracy is to the running of the country.

How does peer review work?

How peer review works: the author submits a manuscript to a journal; the journal editor assesses it and either rejects/transfers it or sends it to one or more reviewers, under single-blind, double-blind or open peer review; the author makes revisions; the editor then assesses the reviewers' comments, and the manuscript is either rejected/transferred or accepted.

Figure 1. Summary of the standard process of peer-review [4]

Peer review is highly variable between journals with no universal process. A summary of the standard process is shown in Figure 1. There are three main types of peer review, which The Royal Society of Edinburgh outlines [3]:

1. Single-blind review

2. Double-blind review

3. Open review

Is there bias associated with peer-review?

Peer-review is by no means perfect. It is itself subject to bias, as most things in research are. The fact that evidence comes from a peer-reviewed article does not, by itself, make it reliable.

For example, there is evidence suggesting poor interrater agreement among peer-reviewers, with a strong bias against manuscripts that report results contrary to reviewers’ theoretical perspectives [5]. Although a study reported in the Journal of General Internal Medicine showed that reviewers agreed barely beyond chance on recommendations to accept/revise vs. reject, editors nevertheless placed considerable weight on reviewer recommendations [6]. In addition, it has been shown that large numbers of public reviewers can be more thorough in assessing academic articles than a small group of experts [7].

There is also ongoing debate about reviewer bias in the single-blind peer review process. Some suggest that if reviewers know the identity of authors, there may be implicit bias against women [8] and those with foreign last names or from less prestigious institutions [9]. Therefore, some researchers argue that double-blind peer review is preferable [10].

In addition, some argue that, for multidisciplinary articles, it is difficult to recruit reviewers who are well-versed in all the relevant methodologies, since such articles tend to cover multiple different topics in a single study; this works against the authors of such papers [3].

Examples from the past

No matter what review system is used, or potential bias it could create, there is always the potential for major and minor errors to be missed.


1. Vaccination and Autism

This is arguably the most famous retracted journal article in history. Andrew Wakefield reported a small study in The Lancet which he claimed suggested that measles, mumps and rubella (MMR) vaccination might cause autism. Wakefield selected participants and changed and manipulated diagnoses and clinical histories to promote undisclosed financial interests [11]. The paper contributed to a fall in vaccination rates and a subsequent rise in measles and mumps, causing serious illness and some deaths.

2. Deliberate errors inserted

A BMJ study deliberately inserted eight errors into a 600-word report of a study about to be published and then sent it to 300 reviewers [12]. The median number of errors spotted was two. Twenty per cent of reviewers did not spot any errors. Major errors, such as methodological weaknesses, inaccurate reporting of data and unjustified conclusions, were overlooked, as were minor errors such as omissions [13].

3. COOPERATE study

The COOPERATE study investigated combination therapy with an angiotensin-converting-enzyme inhibitor and an angiotensin-II receptor blocker, finding that the combination was better than monotherapy in non-diabetic renal disease [14]. The study was published in The Lancet in 2003 and was retracted after the discovery of major flaws. Contrary to what had been reported, the trial was never approved by an ethics committee; the lead author had lied about obtaining informed consent; the involvement of a statistician could not be verified; the treatment was not double-blind, since the lead author was aware of the allocation schedule; and the investigating committee was unable to establish the authenticity of the sample of data produced by the lead author [15].

So what can we do?

While peer-review cannot exactly be blamed for missing some of these errors (e.g. Wakefield’s manipulation of data, or the COOPERATE lead author lying about ethics approval), these cases remind us that peer-review does not guarantee reliability. Some things are beyond our control; here, however, are some things we can do.

Critically appraise the article yourself, especially the Methods section

Don’t just read the abstract or the main results. Read the paper from start to finish, especially the Methods section. Critically appraise the paper yourself, with help from some of the other ‘Key Concept’ blogs in our series. Ask yourself: which features could lead to bias? And, just as importantly, what is not included but should be, which could cause bias?

Critical appraisal and assessing the risk of bias is not a skill that can be picked up overnight. One way of making critical appraisal easier and more structured is by using Critical Appraisal Tools (CATs) or checklists, such as those offered by the Critical Appraisal Skills Programme (CASP) UK, the Scottish Intercollegiate Guidelines Network (SIGN) or the Centre for Evidence-Based Medicine (CEBM). Other resources that might be helpful are the EQUATOR network guidelines, which have handy checklists for each study design to promote accurate and transparent reporting. Students 4 Best Evidence have collated a list of these CATs and others used all over the world, which you can find here. Be aware that these tools can also introduce bias, but they are a good starting point for those learning how to appraise evidence.

Maintain a healthy dose of scepticism

We don’t believe everything on the internet or everything shown on TV. Similarly, we need to critically appraise scientific information, regardless of whether it is published in high-impact journals such as the NEJM or The Lancet. It’s not the journal’s impact factor that matters; it is the quality of the article itself, which you can assess yourself. After all, a working second-hand Hyundai is better than a Lamborghini without wheels. Perhaps the phrase should be: don’t judge an academic article by its journal.


Editorial peer-review remains a cornerstone of academic medical scholarship [16], and it is widely regarded as promoting high-quality reports of research. However, surveys of the quality of reports of medical research make clear that peer review does not guarantee adequate reporting of research. Furthermore, Cochrane reviews of research assessing the effects of peer-review make clear that the process does not deliver what it is widely assumed to achieve. We must critically appraise articles ourselves to maximise the chance of catching mistakes missed during the peer-review process.



Dennis Neuen



Library Guides

How to recognize peer-reviewed (refereed) journals.

In many cases professors will require that students utilize articles from “peer-reviewed” journals. Sometimes the phrases “refereed journals” or “scholarly journals” are used to describe the same type of journals. But what are peer-reviewed (or refereed or scholarly) journal articles, and why do faculty require their use?

Three categories of information resources:

Helpful hint!

Not all information in a peer-reviewed journal is actually refereed, or reviewed. For example, editorials, letters to the editor, book reviews, and other types of information don’t count as articles, and may not be accepted by your professor.

How do you determine whether an article qualifies as being a peer-reviewed journal article?

First, you need to be able to identify which journals are peer-reviewed. There are generally four methods for doing this:


If you have used the previous four methods in trying to determine if an article is from a peer-reviewed journal and are still unsure, speak to your instructor.


What are Peer-Reviewed Journals?



Peer-reviewed journals (also called scholarly or refereed journals) are a key information source for your college papers and projects. They are written by scholars for scholars and are a reliable source of information on a topic or discipline. These journals can be found either in the library's online databases or in the library's local holdings. This guide will help you identify whether a journal is peer-reviewed and show you tips on finding them.


What is Peer-Review?

Peer-review is a process whereby an article is evaluated by a group of scholars before it is published.

When an author submits an article to a peer-reviewed journal, the editor sends the article to a group of scholars in the related field (the author's peers). They review the article, checking that its sources are reliable, that the information it presents is consistent with existing research, and so on. Only after they give the article their "okay" is it published.

The peer-review process makes sure that only quality research is published: research that will further the scholarly work in the field.

When you use articles from peer-reviewed journals, someone has already reviewed the article and said that it is reliable, so you don't have to take the steps to evaluate the author or his/her sources. The hard work is already done for you!

Identifying Peer-Review Journals

If you have the physical journal, you can look for the following features to identify if it is peer-reviewed.

Masthead (The first few pages) : includes information on the submission process, the editorial board, and maybe even a phrase stating that the journal is "peer-reviewed."

Publisher: Peer-reviewed journals are typically published by professional organizations or associations (like the American Chemical Society). They also may be affiliated with colleges/universities.

Graphics:  Typically there either won't be any images at all, or the few charts/graphs are only there to supplement the text information. They are usually in black and white.

Authors: The authors are listed at the beginning of the article, usually with information on their affiliated institutions, or contact information like email addresses.

Abstracts: At the beginning of the article the authors provide an extensive abstract detailing their research and any conclusions they were able to draw.

Terminology:  Since the articles are written by scholars for scholars, they use uncommon terminology specific to their field and typically do not define the words used.

Citations: At the end of each article is a list of citations/references. These are provided so that scholars can double-check the authors' work, or to help scholars researching in the same general area.

Advertisements: Peer-reviewed journals rarely have advertisements. If they do the ads are for professional organizations or conferences, not for national products.

Identifying Articles from Databases

When you are looking at an article in an online database, identifying that it comes from a peer-reviewed journal can be more difficult. You do not have access to the physical journal to check areas like the masthead or advertisements, but you can use some of the same basic principles.

Points you may want to keep in mind when you are evaluating an article from a database:

Science in the News

Opening the lines of communication between research scientists and the wider community.


Peer Review in Science: the pains and problems

News about exciting and novel science is being published every day, whether it’s about a new technique for rejuvenating skin cells , or a new breakthrough in fighting drug resistance for cancer patients . However, when you read these news articles, how can you assess if they are trustworthy? How do you know if a new technique actually works or if the data used to make a conclusion is analyzed correctly?

One way to know would be to read the actual scientific article to check the data, but a layperson, or even a fellow scientist in a different field, may not be able to properly evaluate the data presented and comment on the validity of the conclusions. That is where peer-reviewed journals come into play. 

Peer-reviewed journals contain articles that are not only conducted and written by experts, but reviewed by several other experts in the same field as well. This peer-review system is crucial for maintaining a level of rigor in scientific publications, as well as ensuring a level of trust in the scientific community by the general public. Unfortunately, however, the world of peer-reviewing has many issues that can potentially impact its credibility. To fully understand this potential, let’s first look at how the reviewing process works.

The peer-review process

Different peer-reviewed journals have different reviewing processes, but they usually follow a similar structure. In general, when a manuscript is submitted to a journal, the journal editor sends the manuscript to at least two reviewers. These reviewers are typically experts in the field, but should have no direct affiliation with the authors. These experts then read and assess the article, providing feedback that the editor sends to the authors. The authors can then make changes, conduct more experiments, and improve upon the manuscript based on the reviewers' suggestions. After the authors have answered all of the reviewers' comments and concerns, the manuscript can either be accepted or rejected for publication by the journal (Figure 1).

At the scientific journal Science, this entire process, from original submission of a manuscript to its final acceptance, takes about 123 days on average. If that seems like a long time, a quick search in the Review Speed Database will tell you that this is in fact a typical amount of time for a manuscript to be accepted, and this figure does not account for the countless manuscripts that are rejected! The journey to being published in a peer-reviewed journal can be a long and arduous process for researchers.


A complicated process for everyone involved

The reviewing process is difficult not only for the researchers, who must go through rounds of editing and responding to reviewers' comments, but also taxing for the reviewers. Most of the time, reviewers are not paid for their time spent reviewing manuscripts, and are usually fellow academics who are expected to take time away from their regular job to be a reviewer. In fact, a study shows that a typical academic who works on reviews completes about 4.73 reviews per year. While this number may seem small, each review takes about four to five hours to complete; globally, the total time spent on peer reviews was over 100 million hours in 2020—equivalent to over 15 thousand years!

Academics are expected to dedicate this amount of time for altruistic purposes, as well as for other non-monetary rewards, such as free journal access, being acknowledged in journals for their efforts, the possibility of receiving favor from journals when they need to publish papers themselves, and many others. Furthermore, the journals themselves are also overwhelmed. A high number of papers are submitted for review every day (an estimated 21 million articles were reviewed in 2020!); additionally, submission rates can fluctuate greatly over the course of a year, causing editors to be unusually swamped at certain times.

Potential problems of peer review

Because of how overwhelming the review process can be, the results are not always consistent between different articles and journals. In particular, the decisions of reviewers can be inconsistent. One study showed that recently published articles, when resubmitted a few months later, were often rejected by the same journal: most reviewers did not detect that it was a resubmission, and the articles were frequently rejected due to “methodological flaws,” showing the volatility of reviewer decisions. This may be due in part to disparities in opinion between reviewers, which make it very difficult to submit a paper that will satisfy all of the reviewers. In fact, another study used a probability analysis to show that reviewer agreement is so unlikely and unpredictable that getting a paper accepted by both reviewers has roughly the same probability as a dice throw.
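To see how quickly the odds of unanimous acceptance shrink, here is a toy probability sketch. It is purely illustrative and not taken from the studies cited above: the per-reviewer acceptance rate of 40% is an assumed figure, and the model treats reviewers as fully independent, which real reviewers are not.

```python
# A hedged, illustrative sketch -- not the analysis from the cited study.
# If two reviewers judge a paper independently, each recommending
# "accept" with some probability p, then both accepting has probability p * p.

def p_both_accept(p: float) -> float:
    """Probability that two independent reviewers both recommend acceptance."""
    return p * p

# With an assumed per-reviewer acceptance rate of 40%, the chance that
# both reviewers accept is about 0.16 -- close to the 1-in-6 odds of
# rolling a chosen number on a die, echoing the comparison above.
print(round(p_both_accept(0.4), 2))
```

The point of the sketch is only that two independent yes/no judgements multiply: even moderately favourable individual reviewers rarely agree to accept the same paper by chance alone.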

Additionally, reviewers are, of course, human too! They will sometimes miss critical information in a paper or bring personal biases to their reviews, causing dubious research to sometimes be published. Furthermore, another study shows that there may be a bias in favor of the institutions that the reviewers themselves are affiliated with. Even after all this work, published, peer-reviewed papers can still be retracted. One of the most notable examples dates from a few decades ago, when a paper linking autism to vaccines was published in The Lancet (Figure 2). The paper was later retracted for many reasons, including data manipulation, a low sample size, conflicts of interest, and countless other pieces of evidence contradicting its claims. As you can see, not every peer-reviewed paper is a mistake-free paper with good science.


There is also a gender bias in selecting reviewers – despite a significant portion of researchers being women, women make up a much smaller fraction of reviewers. This survey observed that authors, regardless of gender, suggest mostly male peers as reviewers to their editors (Figure 3). Along these same lines, another paper determined that female reviewers are less likely to be chosen by peers than if a reviewer was randomly selected. This widespread gender bias may then lead to further biases in the review process. This same study showed that there are fewer female authors publishing than what is expected based on the population of female researchers, possibly due to a gender bias similar to the one present in reviewer selection.


What can be done to improve?

Overall, there is a lot of work being done to improve upon the peer-review system. For example, journals are now putting in more effort to retract incorrect papers , so that papers that slip through the cracks are taken down in a more timely fashion, working to prevent the spread of false information. There are also ongoing debates over whether journals should start paying reviewers a compensation fee for reviewing – perhaps this would solve the inconsistency in peer-reviewing by giving reviewers incentives to pay more attention and be more mindful about what to reject or accept. However, this could make the cost of publishing and reading journals even higher, potentially reducing the number of publications a journal can publish in the future.

With the internet, it is also becoming much easier to facilitate discussion of a scientific article, allowing for additional review by the public beyond traditional peer review. For example, PubPeer is a site that allows scientists to review and discuss published research, while medRxiv and bioRxiv allow scientists to post their manuscripts before peer review—known as preprints—so that the public gets a chance to read and review the papers more quickly. However, this rise of preprint publication, while useful for allowing scientists to garner feedback from the larger scientific community more quickly, is affecting the way news outlets report scientific discoveries: many large media sites report on preprint articles without mentioning that they are not yet peer-reviewed. Moving forward, news outlets need to recognize the importance of reporting the preprint status of these articles, and consumers need to be mindful of whether the new research they are reading is indeed from a peer-reviewed journal.

Now, back to the original question: when reading an article about a new scientific discovery, how do we know if we can trust the data? While there are a lot of factors to consider, finding out whether the article is peer-reviewed can be a quick litmus test for credibility. However, just because a paper is published in a “peer-reviewed journal” does not mean that the paper is completely fact-checked, unbiased, or correct. The peer review system is not perfect, but for now, it is the closest thing we have to ensuring academic rigor. Like everything in life, we just have to take these articles with a grain of salt.

Wei Li is a fourth year graduate student in the Chemistry and Chemical Biology Ph.D. program at Harvard University. She studies the chemistry of gut bacteria and their effects on host physiology.

Cover image by Pexels from pixabay

For more information:

Read more about the difficulties of peer review here, here, and here.


