Citizen Science: How Americans Can Participate in Research

Citizen science is the practice of enlisting non-professional volunteers in genuine scientific data collection, analysis, and sometimes hypothesis testing — a model that has produced peer-reviewed results in fields from ornithology to astrophysics. The scope in the United States is broader than most people realize: the federal government runs coordinated programs, and platforms like SciStarter catalog more than 3,000 active projects. This page explains how the model functions, what kinds of contributions volunteers actually make, and where the meaningful boundaries lie between amateur participation and professional science.

Definition and scope

Citizen science sits at the intersection of science communication and public outreach on one side and formal data collection methods in research on the other. The Federal Crowdsourcing and Citizen Science Act of 2016 (Pub. L. 114-329, Title VII) gave federal agencies explicit authority to run citizen science programs and established the CitizenScience.gov catalog as a central clearinghouse for federal projects. The catalog lists more than 300 federal projects across agencies including NASA, NOAA, USGS, and the EPA.

The defining feature is that volunteers contribute real observational data — not just enthusiasm — to research questions posed by credentialed scientists. The scientific method still applies in full. The citizen role is typically in data gathering or classification; the experimental design and peer review process remain in professional hands.

How it works

Most citizen science projects follow one of three structural models:

  1. Contributory — Scientists design the study and volunteers collect or submit data (e.g., bird counts for the Cornell Lab of Ornithology's eBird program, which by the Lab's own count has accumulated more than 1 billion bird observations globally).
  2. Collaborative — Volunteers also help analyze data or refine protocols, sometimes suggesting modifications to data collection categories.
  3. Co-created — Scientists and community members jointly define the research question from the outset, a model more common in environmental justice contexts where local knowledge is central to the inquiry.

The contributory model dominates. A volunteer might photograph insects through iNaturalist, measure rainfall with a CoCoRaHS gauge, count galaxies through Galaxy Zoo, or monitor stream water quality through a state extension program. The data flow upward into aggregated datasets that, in some cases, would be impossible to assemble any other way — eBird's spatial coverage across North America, for instance, requires observers in locations no professional survey crew could staff year-round.

Research data management becomes a critical backstop here. Because citizen-collected data arrives with variable quality, most programs embed quality-control filters: automated range checks, expert validation layers, or consensus algorithms that compare multiple observers' submissions before flagging a data point as reliable.
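The filters described above can be sketched in a few lines. This is a minimal illustration, not any program's actual pipeline: the field names, bounds, and agreement threshold are all invented for the example, and real systems (eBird's review queues, Galaxy Zoo's weighted voting) are considerably more elaborate.

```python
from collections import Counter

def range_check(value, lo, hi):
    """Automated range check: reject observations outside plausible bounds.
    Bounds here are hypothetical examples, not any program's real limits."""
    return lo <= value <= hi

def consensus_label(labels, threshold=0.6):
    """Consensus filter: accept a classification only when a clear majority
    of independent observers agree. Returns None for ambiguous records so
    they can be routed to an expert-validation layer instead."""
    if not labels:
        return None
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) >= threshold else None

# A plausible daily rainfall report passes; a negative reading is flagged.
rain_ok = range_check(2.4, lo=0.0, hi=30.0)
rain_bad = range_check(-1.0, lo=0.0, hi=30.0)

# Three of four volunteer classifications agree, clearing the threshold.
galaxy = consensus_label(["spiral", "spiral", "elliptical", "spiral"])
```

The design choice worth noting is that the consensus filter does not force a decision: ambiguous records fall through to human review rather than being silently accepted or discarded, which mirrors how the expert-validation layers described above fit behind the automated checks.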

Common scenarios

The widest participation tends to cluster in a few domains:

  - Biodiversity observation — species counts and photo records through programs like eBird and iNaturalist.
  - Weather and precipitation — backyard gauge networks such as CoCoRaHS.
  - Astronomy — image classification through platforms like Galaxy Zoo.
  - Water quality — stream monitoring through state extension programs.

Each of these connects back to formal research design and methodology — the citizen contribution slots into a protocol, not free-form observation.

Decision boundaries

The most important distinction in citizen science is between contributory observation and independent research. A volunteer using a standardized protocol to count butterflies along a transect is contributing to science. A volunteer designing their own study, collecting their own data outside any protocol, and publishing findings without peer review is doing something else — possibly interesting, but not citizen science in the formal sense.

A secondary boundary separates high-stakes domains from low-stakes ones. Clinical trials and medical research operate under Institutional Review Board oversight and strict federal regulations; citizen participation there is narrow and carefully controlled. Ecological observation carries far fewer risks and correspondingly fewer constraints.

There is also a contrast worth holding in mind between citizen science and undergraduate research opportunities. Both involve non-expert contributors, but undergraduates working in supervised laboratory settings are receiving training toward professional credentials, with structured mentorship and institutional accountability. Citizen science volunteers typically have no such structure — which is both a feature (low barrier to entry) and a genuine limitation when the data needs to meet rigorous standards for statistical analysis in research.

The strongest citizen science programs address that limitation directly: they build validation pipelines, publish their protocols, and submit findings through conventional scientific publishing channels. When the data holds up, the authorship questions get interesting — and some programs now formally credit volunteer contributors in published papers, a practice that blurs the line between participant and researcher in ways the research ethics and integrity community is still working through.

The National Science Foundation's broader impacts criterion explicitly values public engagement in research, which means federally funded scientists have a structural incentive to design projects that let non-professionals contribute. That policy alignment is part of why the field has grown, and part of why mainstream scientific reference works increasingly treat citizen science as a legitimate methodological category rather than an outreach footnote.

