Research Misconduct and Fraud: Definition, Examples, and Consequences
When a scientist fabricates data, the consequences travel far beyond a single retracted paper. Federal funding dries up, graduate students find their dissertations built on sand, and in medical fields, real patients may receive treatments shaped by invented evidence. Research misconduct is one of the more quietly damaging forces in science — not always dramatic, not always caught quickly, but corrosive in ways that take years to repair.
Definition and scope
The federal definition of research misconduct in the United States comes from the Office of Research Integrity (ORI), housed within the Department of Health and Human Services. Under 42 CFR Part 93, research misconduct is defined as fabrication, falsification, or plagiarism — the FFP triad — in proposing, performing, or reviewing research, or in reporting research results.
Each element carries a specific meaning:
- Fabrication — making up data or results and recording or reporting them as real.
- Falsification — manipulating research materials, equipment, or processes, or changing or omitting data so that the research record does not accurately represent the actual work.
- Plagiarism — appropriating another person's ideas, processes, results, or words without giving appropriate credit.
ORI's jurisdiction covers research funded by the Public Health Service, which includes the National Institutes of Health; the National Science Foundation exercises parallel authority under 45 CFR Part 689. Critically, honest error and differences of scientific opinion are explicitly excluded from the definition: a failed hypothesis is not misconduct, and a flawed methodology pursued in good faith falls outside the FFP framework.
How it works
Misconduct rarely begins with a grand plan. More often it starts with a small manipulation — a slightly trimmed dataset, a gel image adjusted beyond what's acceptable, a borrowed paragraph with the citation quietly dropped. The mechanisms that allow it to persist are structural as much as individual.
Research misconduct typically follows one of two patterns. The first is opportunistic: a researcher under pressure to publish or secure funding shaves results toward significance, crosses a line, and then finds the line increasingly easy to cross. The second is systematic: a lab head constructs a body of work built on fabricated foundations, sometimes for years, with trainees unknowingly citing tainted data in their own work.
The peer-review process, despite its central role in quality control, is poorly suited to catching fabrication. Peer reviewers evaluate manuscripts, not raw data — they see what authors choose to show them. Post-publication scrutiny by other researchers, replication attempts, and tools like ImageTwin or Proofig (which scan figures for manipulation) have caught cases that reviewers missed entirely. The replication crisis and research misconduct are related but distinct phenomena: most unreplicated findings reflect poor statistical power or questionable research practices, not outright fraud.
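The duplicate-detection idea behind figure-screening tools can be illustrated in miniature. The sketch below is a toy, not how ImageTwin or Proofig actually work: it hashes fixed-size tiles of a pixel grid and flags tiles with byte-identical content, whereas real tools also handle rotation, scaling, and compression artifacts. All names and parameters here are illustrative.

```python
import hashlib

def tile_hashes(image, tile=4):
    """Hash each non-overlapping tile of a 2D grid of pixel values (0-255)."""
    hashes = {}
    for r in range(0, len(image) - tile + 1, tile):
        for c in range(0, len(image[0]) - tile + 1, tile):
            block = bytes(image[r + i][c + j]
                          for i in range(tile) for j in range(tile))
            hashes.setdefault(hashlib.sha256(block).hexdigest(), []).append((r, c))
    return hashes

def duplicated_regions(image, tile=4):
    """Return groups of tile positions whose pixel content is identical."""
    return [locs for locs in tile_hashes(image, tile).values() if len(locs) > 1]

# Synthetic "figure": an 8x8 grayscale grid with one region pasted twice,
# simulating a duplicated band in a western blot image.
img = [[(r * 13 + c * 7) % 251 for c in range(8)] for r in range(8)]
for i in range(4):
    for j in range(4):
        img[4 + i][4 + j] = img[i][j]

print(duplicated_regions(img))  # → [[(0, 0), (4, 4)]]
```

Exact-match hashing only catches copy-paste duplication; production tools compare perceptual features precisely because manipulated images are rarely byte-identical after re-export.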
Common scenarios
The ORI's published case summaries — available at ori.hhs.gov — offer a detailed look at what misconduct actually looks like in practice. Patterns that appear repeatedly include:
- Western blot manipulation: lanes spliced, bands duplicated or erased in image-editing software. This is among the most commonly identified forms of falsification in post-publication image analysis.
- Flow cytometry data fabrication: inventing cell-sorting results in immunology and cancer biology research.
- Clinical trial data alteration: changing patient outcome data to reach a statistically significant endpoint that the actual results did not support.
- Self-plagiarism in grant applications: submitting substantially identical research aims to multiple funders without disclosure, a practice that can constitute fraud when federal funds are involved.
- Authorship misconduct: listing co-authors who did not contribute, or omitting authors who did — often intertwined with plagiarism allegations.
Decision boundaries
Distinguishing misconduct from error, and misconduct from questionable research practices (QRPs), is genuinely difficult — and the distinction carries serious institutional consequences.
Misconduct vs. error: Intent matters under the federal definition. A miscalibrated instrument that produces bad data is an error. Knowingly reporting data from that instrument as valid after discovering the problem edges toward falsification. The distinction often hinges on what the researcher knew and when — which is why institutional investigations examine lab notebooks, email records, and instrument logs.
Misconduct vs. questionable research practices: QRPs — p-hacking, HARKing (hypothesizing after results are known), selective outcome reporting — are not covered by the FFP definition but undermine scientific integrity in measurable ways. The research ethics and integrity framework treats QRPs as a distinct category requiring different institutional responses than formal misconduct.
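The gap between QRPs and outright fabrication is easiest to see quantitatively. The simulation below is illustrative only (standard-library Python; the sample sizes and outcome counts are arbitrary choices). It shows how one common QRP, measuring several outcomes and reporting whichever reaches significance, inflates the false-positive rate well beyond the nominal 5% even though no data point is ever altered.

```python
import math
import random

random.seed(0)

def p_value(sample):
    """Two-sided z-test of mean == 0, assuming known unit variance."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

def simulate(n_sims=2000, n_obs=50, k_outcomes=5, alpha=0.05):
    """Null data (no real effect): compare one pre-registered outcome
    against cherry-picking the best of k_outcomes measures."""
    honest = hacked = 0
    for _ in range(n_sims):
        pvals = [p_value([random.gauss(0, 1) for _ in range(n_obs)])
                 for _ in range(k_outcomes)]
        honest += pvals[0] < alpha       # report the planned outcome
        hacked += min(pvals) < alpha     # report whichever "worked"
    return honest / n_sims, hacked / n_sims

honest_rate, hacked_rate = simulate()
print(f"false-positive rate, single outcome: {honest_rate:.3f}")
print(f"false-positive rate, best of five:   {hacked_rate:.3f}")
```

With five independent outcomes the expected "best of" rate is roughly 1 − 0.95⁵ ≈ 23%, which is why selective outcome reporting damages the literature without meeting the FFP definition: every individual number is real.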
The consequences for confirmed misconduct are concrete. ORI findings can result in debarment from federal funding, imposed on a case-by-case basis; typical debarment periods run 3 to 5 years, with longer terms for egregious cases. Institutional consequences — termination, loss of tenure — are determined separately by universities. Criminal prosecution is possible when federal grant funds are involved, and the civil False Claims Act (31 U.S.C. § 3729) provides for treble damages and civil penalties in cases involving fraudulent grant applications or reports.
Misconduct is only one piece of the broader picture of research integrity: an ecosystem of funding accountability, institutional oversight, and publication standards that federal science agencies document across research disciplines.