For decades, wildlife biologists have faced a quietly absurd problem: they collect data faster than humans can possibly read it.

A single camera-trap project — those motion-activated cameras strapped to trees and fence posts in forests, savannas, and reserves — can produce hundreds of thousands or even millions of photos. Each one has to be reviewed by someone with enough training to tell a coyote from a wolf or a margay from an ocelot. Even with a team of undergraduates and a graduate student, the work can take six to seven months. Sometimes a full year. Only then can the real analysis begin.

A new study published in the Journal of Applied Ecology this week says that bottleneck can now collapse into a matter of days — and the science still holds up.

The test: hand the whole pipeline to AI

Researchers at Washington State University and Google ran a head-to-head comparison between traditional, human-labeled camera-trap datasets and a fully automated AI pipeline with no human review at any step. They tested the system on hundreds of thousands of images from three very different places: Washington state, Glacier National Park in Montana, and the Maya Biosphere Reserve in Guatemala.

The AI they used is SpeciesNet, a general-purpose wildlife identification model developed by Google. Previous AI tools could only filter out blank or non-animal frames — typically 60 to 70% of a camera-trap dataset — leaving humans to slog through the rest. The new study asked whether that final human step could be cut entirely.
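The division of labor described above can be sketched as a two-stage pipeline: a first model drops blank frames, and a second assigns species labels to whatever remains. This is purely schematic — the stub `blank_filter` and `species_classifier` functions and the fake image records below are invented stand-ins, not SpeciesNet's actual interface.

```python
import random

def two_stage_pipeline(images, blank_filter, species_classifier):
    """Schematic camera-trap pipeline: stage 1 drops blank frames,
    stage 2 assigns a species label to each remaining frame."""
    animal_frames = [img for img in images if not blank_filter(img)]
    labels = {img["id"]: species_classifier(img) for img in animal_frames}
    return labels, len(images) - len(animal_frames)

# Hypothetical stand-ins for the real models (illustration only).
# Roughly 65% of simulated frames are blank, matching the 60-70%
# figure typical of camera-trap datasets.
random.seed(0)
images = [{"id": i, "blank": random.random() < 0.65} for i in range(10_000)]

blank_filter = lambda img: img["blank"]            # stage 1 stub
species_classifier = lambda img: "unknown_species" # stage 2 stub

labels, n_filtered = two_stage_pipeline(images, blank_filter, species_classifier)
print(f"{n_filtered / len(images):.0%} of frames filtered as blank")
```

Under the older workflow, only stage 1 ran automatically and humans reviewed everything in `labels`; the study's question is what happens when stage 2 runs without that review.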

Across the key questions ecologists actually care about — where species occur, how often, and which environmental factors shape their distributions — the AI's ecological conclusions matched the human-built models in roughly 85 to 90% of cases. Divergence was concentrated in rare or visually tricky species, exactly where you'd expect.

"We're not trying to replace people," said Daniel Thornton, the WSU wildlife ecologist who led the study. "The goal is to help researchers get to answers faster so they can make better decisions about managing and conserving wildlife."

Why being slightly wrong is okay

The eye-opening twist in the paper is that the AI doesn't have to be perfect. Even when SpeciesNet misidentified an animal or missed a detection, the broader ecological models — which depend on repeated observations across many cameras and many nights — stayed robust.
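A toy simulation (my own, not from the paper) illustrates why repeated observations buy this robustness: when each site is surveyed over many nights, a classifier that misses a fixed fraction of true detections still almost always registers a resident species at least once, so a naive site-occupancy estimate barely moves. All parameter values here are assumptions chosen for illustration.

```python
import random

random.seed(42)

N_SITES, N_NIGHTS = 200, 30
TRUE_OCCUPANCY = 0.5   # half the sites actually host the species
P_DETECT = 0.3         # nightly chance a present animal triggers the camera
AI_MISS_RATE = 0.2     # assumed fraction of true detections the AI misses

occupied = [i < N_SITES * TRUE_OCCUPANCY for i in range(N_SITES)]

def naive_occupancy(miss_rate):
    """Fraction of sites with at least one surviving detection."""
    detected_sites = 0
    for occ in occupied:
        hits = sum(
            occ and random.random() < P_DETECT and random.random() >= miss_rate
            for _ in range(N_NIGHTS)
        )
        detected_sites += hits > 0
    return detected_sites / N_SITES

human_estimate = naive_occupancy(miss_rate=0.0)  # perfect labels
ai_estimate = naive_occupancy(miss_rate=AI_MISS_RATE)

print(f"human-labeled occupancy: {human_estimate:.2f}")
print(f"AI-labeled occupancy:    {ai_estimate:.2f}")
```

With 30 nights per site, the chance that an occupied site goes completely undetected is tiny even after a 20% miss rate, so both estimates land at essentially the true 0.5. Rare species — few sites, few detections — are exactly where this cushion thins out, matching the study's finding that divergence concentrated there.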

"The key question wasn't whether the AI got every image right," said Dan Morris, a senior staff research scientist at Google and a co-author of the study. "It was whether the ecological conclusions you care about would end up being basically the same."

For most species in the test sets — including charismatic carnivores like jaguars, wolves, and grizzly bears — they were.

Real-time conservation, finally

The practical implications are large. Camera traps are one of the most cost-effective tools in conservation: cheap to install, non-invasive, and able to detect rare or secretive animals that no human could ever follow on foot. The problem has always been turning their flood of pixels into management decisions before the situation on the ground has changed.

Compressing months of labor into days opens the door to something biologists have wanted for a long time: near real-time monitoring of vulnerable species. If a population of jaguars starts shifting away from a corridor, or a wolf pack drifts toward a new ranch, managers could know within weeks rather than a year later, when it might already be too late to act.

The team is now scaling the pipeline up for use across larger reserves and longer monitoring programs. With SpeciesNet freely available through Google, smaller conservation groups that could never afford a research team are about to gain a tool that, until recently, only a well-funded university could wield.