The Peer Review Process: How Scientific Articles Are Evaluated
Peer review is the mechanism by which the scientific community decides what counts as credible knowledge worth publishing. A manuscript submitted to a journal does not simply get accepted or rejected by an editor — it travels through a structured evaluation involving independent experts, written critiques, and often multiple rounds of revision before a decision is made. The process shapes what appears in the scientific record, which means its design, its failures, and its reform efforts are subjects of real consequence for researchers, institutions, and anyone who relies on published science.
Definition and scope
Peer review, in the context of scientific publishing, is an editorial quality-control procedure in which subject-matter experts external to a journal assess a manuscript's validity, methodology, and contribution before publication. The Committee on Publication Ethics (COPE), which provides guidelines used by thousands of journals worldwide, describes peer review as a critical component of the scholarly communication system — not a guarantee of truth, but a filter for plausibility and rigor.
The scope of peer review extends well beyond individual journals. Grant-funding bodies including the National Institutes of Health (NIH) and the National Science Foundation (NSF) use peer review to evaluate funding applications, applying essentially the same logic: independent expert judgment applied before resources or credibility are committed. On the publishing side, the process applies to original research articles, review articles, brief communications, and technical notes — though the rigor and depth of review vary across those formats.
The phrase "refereed journal" has become shorthand for scientific legitimacy, which creates pressure on the system and has fueled the rise of predatory journals that mimic peer review without actually conducting it.
Core mechanics or structure
A standard peer review cycle moves through five recognizable stages, though the sequencing and terminology differ across publishers and disciplines.
Stage 1 — Editorial screening. An editor-in-chief or handling editor reads the submission and decides whether it clears a basic threshold of scope and quality. Journals at the upper end of the selectivity spectrum — Nature, Science, Cell — decline the majority of submissions at this stage without external review, and ultimately reject over 90% of all submissions, according to the journals' own published statistics.
Stage 2 — Reviewer selection. The editor identifies 2–3 reviewers (occasionally more for complex or interdisciplinary work) with relevant expertise. Reviewers are typically active researchers in the field. Editors use databases, citation networks, and author-suggested reviewers — though many journals treat author-suggested names with skepticism after scandals involving fake reviewer pools, most notably a 2015 case that led to 43 retractions at BioMed Central.
Stage 3 — Review period. Invited reviewers accept or decline. Those who accept receive the manuscript and produce a written evaluation within a window typically set at 21–30 days, though actual turnaround times frequently exceed this. The STM Association's 2021 Research Report noted that average first-decision time across journals ranges from 2 to 6 months depending on field and journal tier.
Stage 4 — Decision. The editor synthesizes reviewer reports and issues one of four standard decisions: accept as-is (rare), minor revision, major revision, or reject. Reviewer recommendations are advisory — the editor holds final authority.
Stage 5 — Revision and re-review. If revision is requested, authors respond with a revised manuscript and a point-by-point rebuttal letter. The revised submission may return to the original reviewers or be assessed solely by the editor. This cycle can repeat; two rounds of major revision before acceptance is not unusual in medicine or social sciences. Understanding how to respond to peer reviewer comments is itself a distinct professional skill for researchers.
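The decision-and-revision loop in Stages 4–5 can be sketched as a toy state machine. Everything below — the `Manuscript` class, the `review_cycle` function, and the editor-policy callback — is an illustrative model invented for this sketch, not any journal management system's actual API; the `max_rounds` cutoff is a simplifying assumption, since real editors set no fixed limit.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Decision(Enum):
    ACCEPT = auto()          # accept as-is (rare)
    MINOR_REVISION = auto()  # non-terminal: authors revise and resubmit
    MAJOR_REVISION = auto()  # non-terminal: substantial rework requested
    REJECT = auto()

@dataclass
class Manuscript:
    title: str
    round: int = 0                       # completed review rounds
    history: list = field(default_factory=list)  # decisions received so far

def review_cycle(ms, editor_decides, max_rounds=3):
    """Run the revise-and-resubmit loop until a terminal decision.

    editor_decides is a callback standing in for the editor's synthesis
    of reviewer reports (reviewer recommendations are advisory; the
    editor holds final authority).
    """
    while True:
        ms.round += 1
        decision = editor_decides(ms)
        ms.history.append(decision)
        if decision in (Decision.ACCEPT, Decision.REJECT):
            return decision              # terminal: cycle ends
        if ms.round >= max_rounds:
            return Decision.REJECT       # toy cutoff, not a real-world rule

# Example: two rounds of revision before acceptance — a common pattern
# in medicine and the social sciences.
outcomes = iter([Decision.MAJOR_REVISION, Decision.MINOR_REVISION, Decision.ACCEPT])
ms = Manuscript(title="Example study")
final = review_cycle(ms, lambda m: next(outcomes))
```

In this run the manuscript passes through major revision, then minor revision, then acceptance, so `final` is `Decision.ACCEPT` and `ms.history` records all three decisions in order.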
Causal relationships or drivers
Peer review exists because scientific claims require independent corroboration before being treated as reliable. A single researcher's assessment of their own work is not a sufficient epistemic standard — not because researchers are dishonest, but because motivated reasoning and methodological blind spots are predictable features of human cognition.
The formalization of peer review accelerated in the 20th century alongside the expansion of government-funded research. When public money funds science, accountability mechanisms become politically and institutionally necessary. The NIH has operated a formal peer review system for grant applications since 1946, through what is now its Center for Scientific Review.
Editors drive reviewer selection based on citation networks, which creates a feedback loop: researchers who publish frequently are asked to review frequently, concentrating the burden on a subset of the scientific workforce. A 2020 analysis published in PLOS ONE estimated that roughly 20% of active researchers perform approximately 70% of all peer review work across journals — a distribution with obvious sustainability implications.
Classification boundaries
Not all peer review is structurally identical. Four main models are in active use across journals today:
Single-blind: Evaluators know the authors' identities; authors do not know the evaluators'. This is the historical default in most disciplines.
Double-blind: Neither reviewers nor authors know each other's identities during review. More common in social sciences and humanities; intended to reduce prestige bias.
Open peer review: Reviewer identities are disclosed to authors during the process, and in some implementations, the full review correspondence is published alongside the article. BMJ Open and journals published by PLOS operate variants of open review.
Post-publication peer review: Formal or informal evaluation that occurs after publication, through platforms such as PubPeer or structured commentary features. This model gained visibility during the COVID-19 pandemic as preprint servers circulated findings ahead of formal review.
Each model sits at a different point on the tradeoff curve between transparency and candor — which connects directly to the tensions explored in the next section.
Tradeoffs and tensions
The peer review system carries genuine structural tensions that have resisted clean resolution for decades.
Speed vs. rigor. Thorough review takes time. In fast-moving fields — infectious disease, climate science — a 4-month review cycle means results arrive after decisions have already been made. The pandemic made this tension visceral. The response (mass adoption of preprints) solved the speed problem while creating new quality-signal problems that the field is still sorting out.
Anonymity vs. accountability. Reviewer anonymity is designed to enable honest critique without professional risk. It also enables sloppy or biased reviews without consequence. In a 2017 survey by the Wellcome Trust, 57% of researchers reported receiving peer review they considered unconstructive or unhelpful.
Expertise vs. availability. The most qualified reviewer for a highly specialized paper may be a direct competitor, creating conflict-of-interest risks. Editors must balance the ideal reviewer against the available reviewer.
Gatekeeping vs. innovation. Peer review by definition asks established researchers to evaluate new work against existing standards. Genuinely paradigm-shifting work has a documented history of struggling through initial peer review — a pattern discussed in the context of journal impact factor and citation dynamics.
These tensions connect to ongoing reform discussions documented by COPE and debated across the broader scholarly publishing landscape.
Common misconceptions
"Peer review checks the data." Reviewers evaluate the logic, methodology, and plausibility of findings based on what the authors report — they do not access raw datasets in most cases, and they do not independently reproduce experiments. Data availability requirements, now mandated by a growing number of journals and funding agencies, address part of this gap.
"Refereed means correct." Peer review is a filter, not a proof. Flawed studies pass the review process regularly; errors are caught through post-publication scrutiny, replication attempts, and retractions. The process reduces the probability of egregious errors, but it does not eliminate them.
"Rejection means the science is wrong." A manuscript rejected at one journal frequently gets accepted elsewhere, sometimes with minimal changes. Editorial fit, scope, novelty threshold, and reviewer lottery all contribute to rejection decisions that have little to do with scientific validity.
"All refereed journals are equivalent." A paper published in a journal indexed in PubMed and one published in a pay-to-publish venue with nominal review do not carry equivalent epistemic weight. Journal quality varies enormously — which is precisely what journal indexing databases and metrics like the SCImago Journal Rank attempt to signal.
Checklist or steps (non-advisory)
The following sequence describes what a refereed manuscript typically passes through between submission and publication decision:
- [ ] Manuscript submitted through journal's online submission system
- [ ] Editorial desk check: scope fit, formatting compliance, completeness of required sections
- [ ] Editor-in-chief or section editor assigns to handling editor
- [ ] Handling editor performs preliminary scientific assessment
- [ ] 2–3 external reviewers identified and invited
- [ ] Reviewer acceptance received; manuscript and review form distributed
- [ ] Review period opens (journal-set deadline, typically 21–30 days)
- [ ] Completed reviews received and evaluated by handling editor
- [ ] Editor decision issued: accept / minor revision / major revision / reject
- [ ] If revision requested: authors submit revised manuscript and response letter
- [ ] Revised submission evaluated (by reviewers and/or editor)
- [ ] Final decision issued
- [ ] Accepted manuscript proceeds to production (copyediting, proofing, DOI assignment)
Reference table or matrix
| Review Model | Author Knows Reviewer? | Reviewer Knows Author? | Reviews Published? | Common In |
|---|---|---|---|---|
| Single-blind | No | Yes | No | Physical sciences, engineering |
| Double-blind | No | No | No | Social sciences, humanities, some medicine |
| Open (named) | Yes | Yes | Sometimes | BMJ Open, PLOS journals |
| Post-publication | N/A | N/A | Yes (public platform) | PubPeer, journal commentary |
| Registered Reports | Yes (partial) | Yes (partial) | No | Psychology, neuroscience |
Registered Reports is a format, pioneered by journals including Cortex and supported by the Center for Open Science, in which peer review occurs before data collection — reversing the conventional sequence to reduce publication bias.
References
- Committee on Publication Ethics (COPE)
- National Institutes of Health — Peer Review
- National Science Foundation — Peer Review
- STM Association — Research Report 2021
- Wellcome Trust — What Researchers Think About Peer Review (2017)
- Center for Open Science — Registered Reports
- PLOS ONE
- PubPeer