Ethics in Science: Research Integrity, Bias, and Responsible Practice
Research ethics isn't an abstract philosophical exercise — it's the operational framework that determines whether scientific findings can be trusted, replicated, and safely applied. This page examines the principles of research integrity, the structural sources of bias, and the institutional mechanisms designed to keep science honest. It covers definitions, mechanics, classifications, persistent tensions, and common misconceptions across the full scope of scientific practice.
- Definition and scope
- Core mechanics or structure
- Causal relationships or drivers
- Classification boundaries
- Tradeoffs and tensions
- Common misconceptions
- Checklist or steps (non-advisory)
- Reference table or matrix
Definition and scope
The bedrock of scientific ethics rests on a three-part definition that the Office of Research Integrity (ORI) at the U.S. Department of Health and Human Services treats as foundational: fabrication, falsification, and plagiarism, collectively called FFP, constitute the cardinal sins of research misconduct. Everything else in the ethics landscape, from undisclosed conflicts of interest to sloppy data management, falls under the broader category of "questionable research practices" (QRPs).
Scope matters here. Research integrity covers every stage of the scientific process: how studies are designed, how data are collected and stored, how results are analyzed, how findings are reported, and how credit is assigned. It applies in academic labs, federal agencies, clinical settings, and private industry. The National Science Foundation's regulations at 45 CFR Part 689 require that a finding of research misconduct involve a "significant departure from accepted practices of the relevant research community," a phrase that carries more legal weight than it might appear to, since it anchors enforcement to disciplinary norms rather than to a universal static standard.
Human subjects research adds a further layer, governed by the Common Rule (45 CFR Part 46), which requires informed consent, risk minimization, and independent review. Animal research operates under the Animal Welfare Act and Public Health Service Policy on Humane Care and Use of Laboratory Animals — a domain covered in detail at Animal Research Regulations.
Core mechanics or structure
Ethics in science operates through a set of interlocking institutional mechanisms rather than through any single rule or authority.
Institutional Review Boards (IRBs) are the front-line gatekeepers for human subjects research. Before a study involving human participants can begin at a federally funded institution, an IRB must review and approve the protocol. The Office for Human Research Protections (OHRP) oversees compliance with the Common Rule across more than 8,000 registered institutions in the United States. The mechanics of that review process are examined at Institutional Review Boards.
Peer review is the second pillar — the process by which submitted manuscripts are evaluated by domain experts before publication. It functions as a quality filter, though not an infallibility guarantee. The specific mechanics of how manuscripts move through editorial systems, what reviewers assess, and where the process breaks down are detailed at Peer Review Process.
Data management requirements constitute a third structural layer. The NIH Data Management and Sharing Policy, effective January 2023, requires that researchers funded by NIH submit a data management and sharing plan with every grant application — a shift from optional to mandatory that affects tens of thousands of active grants. Proper data stewardship is central to Research Data Management.
Conflict of interest disclosure rounds out the core structure. Journals, funding agencies, and institutions each have their own disclosure requirements. The International Committee of Medical Journal Editors (ICMJE) maintains a standardized disclosure form adopted by over 5,000 journals.
Causal relationships or drivers
Research misconduct and bias don't appear from nowhere. The structural pressures that produce them are well-documented.
Publication bias — the tendency for journals to favor statistically significant, positive results over null findings — distorts the scientific literature in measurable ways. A 2012 analysis by Daniele Fanelli, published in Scientometrics, found that the share of papers reporting positive support for the hypothesis tested grew by more than 22% between 1990 and 2007, a pattern inconsistent with improvements in research quality alone. This creates the incentive loop that drives selective reporting and p-hacking.
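The arithmetic of this distortion is easy to simulate. The sketch below uses only Python's standard library; every rate in it is an illustrative assumption, not an estimate from Fanelli or any other study. It models a field where a minority of tested hypotheses are true and where significant results are far likelier to be published than nulls; the published record ends up roughly three times as "positive" as the research actually conducted.

```python
import random

random.seed(1)

N_STUDIES = 100_000
P_TRUE_EFFECT = 0.2   # assumed share of tested hypotheses that are actually true
ALPHA = 0.05          # false-positive rate when the null is true
POWER = 0.5           # assumed probability of detecting a real effect
PUB_POSITIVE = 0.95   # assumed publication probability for significant results
PUB_NULL = 0.20       # assumed publication probability for null results

published = published_positive = conducted_positive = 0

for _ in range(N_STUDIES):
    real = random.random() < P_TRUE_EFFECT
    significant = random.random() < (POWER if real else ALPHA)
    conducted_positive += significant
    # Publication filter: significant results pass far more often than nulls.
    if random.random() < (PUB_POSITIVE if significant else PUB_NULL):
        published += 1
        published_positive += significant

print(f"positive share among conducted studies: {conducted_positive / N_STUDIES:.1%}")
print(f"positive share among published studies: {published_positive / published:.1%}")
```

Under these assumed rates, about 14% of conducted studies are positive, but roughly 44% of the published ones are; no individual misconduct is required to produce the skew.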
The replication crisis is, in significant part, a downstream consequence of these incentive structures. When journals reward novelty and significance over rigor, replication studies — unglamorous by definition — get systematically undervalued. The replication crisis covers the scope and documented scale of this problem across psychology, medicine, and adjacent fields.
Career incentives apply pressure at the individual level. Researcher advancement in academic institutions is heavily tied to publication count and grant acquisition. The National Academies of Sciences, Engineering, and Medicine's 2017 report Fostering Integrity in Research identified institutional reward structures as a primary driver of QRPs — not individual moral failure.
Funding source effects compound these pressures. Industry-sponsored research is not inherently compromised, but Conflict of Interest in Research documents the statistical associations between industry sponsorship and outcomes favorable to sponsors, particularly in pharmaceutical and food science literature.
Classification boundaries
Research ethics violations exist on a spectrum that institutional frameworks classify differently — and the classification matters, because consequences differ sharply.
Research misconduct (FFP) carries the most severe institutional and legal consequences. ORI findings can result in debarment from federal funding, retraction of published work, and institutional sanctions.
Questionable Research Practices (QRPs) include p-hacking, HARKing (Hypothesizing After Results are Known), selective reporting of outcomes, and undisclosed exclusion of data. These are harder to detect and rarely trigger formal misconduct proceedings, but they distort the literature at scale. The National Academies' 2017 report drew a sharp distinction between these two categories — a distinction that shapes how Research Misconduct and Fraud is investigated and adjudicated.
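One concrete illustration of why QRPs distort the literature at scale: "optional stopping," a common form of p-hacking in which the analyst peeks at accumulating data and stops as soon as p < .05, inflates the false-positive rate well above the nominal 5%. A minimal simulation (the sample sizes and peek schedule are illustrative assumptions; SciPy is assumed available):

```python
import random
from scipy.stats import ttest_1samp

random.seed(2)

N_SIMULATIONS = 2000
CHECKPOINTS = range(10, 101, 10)  # peek at the data every 10 observations
ALPHA = 0.05

def false_positive(optional_stopping: bool) -> bool:
    """Simulate one study in which the true effect is exactly zero."""
    data = []
    for n in range(1, 101):
        data.append(random.gauss(0.0, 1.0))  # null is true: mean really is 0
        if optional_stopping and n in CHECKPOINTS:
            if ttest_1samp(data, 0.0).pvalue < ALPHA:
                return True  # stop and "report" at the first significant peek
    return ttest_1samp(data, 0.0).pvalue < ALPHA  # single pre-planned test at n=100

for peeking in (False, True):
    rate = sum(false_positive(peeking) for _ in range(N_SIMULATIONS)) / N_SIMULATIONS
    print(f"optional stopping={peeking}: false-positive rate ~ {rate:.1%}")
```

With a single pre-planned test the false-positive rate stays near 5%; with ten peeks it roughly triples, which is why pre-registered analysis plans treat the number and timing of tests as part of the protocol.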
Honest error occupies a third category. Mistakes in data entry, statistical analysis errors, and flawed experimental design, when identified and corrected, are not misconduct — they are part of the self-correcting mechanism of science. The ethical obligation is prompt correction and transparency, not the absence of error.
Tradeoffs and tensions
Honest research ethics involves genuine tradeoffs, not just clear right-and-wrong choices.
Open science vs. participant privacy. Making datasets publicly available improves reproducibility and accelerates discovery. It also creates potential for re-identification of supposedly anonymized participants — a tension the GDPR in Europe and HIPAA in the United States resolve differently, with consequences for international collaborative research.
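The re-identification risk is concrete: even with names removed, combinations of quasi-identifiers such as ZIP code, birth year, and sex can single out individuals for linkage against outside records. A minimal k-anonymity check sketches the idea; the field names and records below are invented for illustration, not drawn from any dataset.

```python
from collections import Counter

# Hypothetical "anonymized" records: direct identifiers removed,
# but quasi-identifiers retained for analysis.
records = [
    {"zip": "02139", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1984, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "02139", "birth_year": 1991, "sex": "M", "diagnosis": "asthma"},
    {"zip": "02141", "birth_year": 1975, "sex": "M", "diagnosis": "flu"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def k_anonymity(rows, quasi_ids):
    """Smallest equivalence-class size over the quasi-identifier columns.

    A dataset is k-anonymous if every combination of quasi-identifier
    values is shared by at least k records; k=1 means some participant
    is unique and therefore potentially re-identifiable by linkage.
    """
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return min(groups.values())

print(f"k = {k_anonymity(records, QUASI_IDENTIFIERS)}")  # k=1 here: two records are unique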
Speed vs. rigor. The COVID-19 pandemic accelerated timelines across clinical and basic research in ways that produced both remarkable breakthroughs and high-profile retractions. Preprint servers like medRxiv made findings available months before peer review, improving speed while removing a quality filter. The tradeoffs of Preprints and Open Access Research are not hypothetical — they played out in real time.
Transparency vs. competitive advantage. Researchers at academic institutions face pressure to publish findings before competitors, sometimes creating incentives to delay data sharing or obscure methods. Intellectual property considerations — examined in depth at Intellectual Property in Research — directly conflict with full transparency norms.
Reproducibility requirements vs. research cost. Requiring full replication before translation into policy or practice would slow applied science to a crawl. The tradeoff between evidentiary standards and practical urgency is genuinely contested — especially in fields like nutrition science and environmental policy.
Common misconceptions
Misconception: Peer review certifies truth.
Peer review is a quality screen by domain experts, typically 2–3 reviewers who cannot verify raw data or reproduce experiments. It catches methodological red flags and improves clarity — it does not validate findings. Highly cited, peer-reviewed papers are retracted routinely; the Retraction Watch Database tracks over 45,000 retracted papers as of 2024.
Misconception: Misconduct is primarily an individual moral failure.
The National Academies' 2017 Fostering Integrity in Research report explicitly reframes this: institutional incentive structures, inadequate mentorship, and competitive funding environments are proximate causes. Individual bad actors exist, but systemic pressures produce systemic outcomes.
Misconception: Bias only enters research through deliberate fraud.
Cognitive bias — confirmation bias, experimenter expectancy effects, anchoring — operates below the threshold of intent. The scientific method includes controls specifically designed to counteract predictable cognitive distortions, such as blinding and randomization.
Misconception: Null results mean failed research.
A well-designed study that finds no effect is scientifically valuable. Null results constrain the hypothesis space, prevent resource waste on ineffective interventions, and counterbalance publication bias. The underreporting of null results is a documented structural problem — not a reflection of their scientific worth.
Checklist or steps (non-advisory)
Elements of an ethically structured research project:
- Protocol pre-registration — hypothesis, analysis plan, and primary outcomes registered with an independent registry (e.g., OSF Registries, ClinicalTrials.gov) before data collection begins.
- IRB or ethics committee review — applicable for any study involving human subjects; documentation of approval retained for the study record.
- Informed consent documentation — participants receive and sign consent forms meeting the basic elements of informed consent required under 45 CFR §46.116.
- Conflict of interest disclosure — funding sources, financial relationships, and potential conflicts disclosed to the institution, journal, and participants as applicable.
- Data management plan — specifies storage formats, retention periods, access controls, and sharing timelines per funder requirements.
- Blinding and randomization procedures — documented in the methods section; deviations recorded with justification (a minimal randomization sketch follows this list).
- Statistical analysis plan finalized — primary and secondary endpoints, planned covariates, and subgroup analyses specified before unblinding.
- Authorship criteria verified — all listed authors meet the ICMJE criteria: substantial contribution, drafting/revision, final approval, and accountability.
- Data availability statement — included in published manuscript specifying where data can be accessed and under what conditions.
- Corrections and retractions — if errors are identified post-publication, correction or retraction initiated through the journal within a documented timeframe.
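Referenced from the blinding and randomization item above: a minimal sketch of permuted-block randomization with coded arm labels, assuming a hypothetical two-arm study. The block size, arm codes, and participant count are illustrative; in a real trial the allocation sequence and the code-to-arm key would be generated and held by an unblinded statistician, not by anyone assessing outcomes.

```python
import random

random.seed(2025)  # in practice, held by an unblinded statistician, not the study team

N_PARTICIPANTS = 16
BLOCK_SIZE = 4        # illustrative permuted-block size
ARM_CODES = ["A", "B"]  # blinding: neutral codes; the key (e.g., "A" -> drug,
                        # "B" -> placebo) is stored separately until unblinding

def permuted_block_allocation(n, block_size, codes):
    """Permuted-block randomization: each block contains equal numbers of
    each arm in random order, keeping group sizes balanced over time."""
    assert block_size % len(codes) == 0, "block size must divide evenly across arms"
    allocation = []
    while len(allocation) < n:
        block = codes * (block_size // len(codes))
        random.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

for participant_id, arm in enumerate(
        permuted_block_allocation(N_PARTICIPANTS, BLOCK_SIZE, ARM_CODES), start=1):
    print(f"participant {participant_id:02d} -> arm {arm}")
```

The design choice worth noting is the separation of concerns: randomization balances unknown confounders across arms, while the coded labels keep experimenter expectancy effects (see the cognitive bias misconception above) from influencing measurement.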
Reference table or matrix
Classification of research ethics violations by severity and institutional response
| Category | Examples | Detection method | Institutional response | Federal consequence |
|---|---|---|---|---|
| Research misconduct (FFP) | Data fabrication, image manipulation, plagiarism | ORI investigation, institutional inquiry | Retraction, termination, debarment | Federal funding debarment (42 CFR Part 93) |
| Questionable Research Practices | P-hacking, HARKing, selective outcome reporting | Statistical audits, replication failure | Correction, post-publication scrutiny | Rarely triggered |
| Conflict of interest violations | Undisclosed industry funding, stock ownership | Disclosure review, whistleblower reports | Institutional sanctions, journal retraction | NIH/NSF enforcement possible |
| Human subjects violations | Missing IRB approval, inadequate consent | OHRP audit, participant complaint | Study suspension, institutional review | OHRP debarment, civil penalties |
| Animal welfare violations | Insufficient oversight, unapproved procedures | IACUC review, USDA inspection | Protocol suspension, funding removal | AWA penalties, funding loss |
| Authorship misconduct | Ghost authorship, gift authorship | Reviewer challenge, author dispute | Journal correction, reputational damage | No direct federal mechanism |
The broader ecosystem of scientific ethics — from the design of individual studies to the institutional frameworks that govern entire research programs — is part of what the National Science Authority examines across its reference content on scientific research. Foundational questions about how bias enters research design and methodology, how the peer review process functions under real-world constraints, and how diversity and inclusion in research shapes whose questions get asked and whose perspectives inform analysis are all interconnected with the integrity principles described here.