Automating waterfowl census with UAS and deep learning

In 2020, I became project manager for a U.S. Fish and Wildlife Service cooperative agreement aimed at reducing the human effort required for the annual waterfowl census at wildlife refuges in the Southwest, using UAS for field surveys and deep learning for automated image processing. I developed and validated a workflow for these AI-assisted drone surveys, covering field sampling plans as well as training and optimizing a deep learning model to detect and classify waterfowl at refuges in central New Mexico.
UAS imaging was conducted at waterfowl management areas throughout central New Mexico from 2018 to 2023, yielding roughly 40,000 images in total. Fifteen U.S. Fish and Wildlife Service biologists labeled 13 representative images to the species level, creating a benchmark label set for model training and evaluation. A semi-random subsample of images was initially uploaded to the Labelbox platform for volunteer labeling to build a larger training dataset. I later migrated the volunteer labeling project to the Zooniverse participatory science platform and transitioned to labeling by morphology (duck/goose/crane). I also worked with web developers to create a gamification portal to encourage volunteer participation, and a data upload portal that lets wildlife managers have their waterfowl imagery automatically labeled by our beta deep learning pipeline.
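The "semi-random" subsampling step can be sketched as a stratified draw: sample a fixed number of images per stratum (e.g., per site or flight) so the annotation set spans survey conditions evenly rather than being dominated by the largest flights. This is an illustrative sketch only; the field names (`site`) and group size are assumptions, not the project's actual schema.

```python
import random
from collections import defaultdict

def semi_random_subsample(images, per_group=50, seed=42):
    """Stratified random subsample of image records.

    Groups images by an assumed 'site' key, then draws up to
    `per_group` images at random from each group, so every survey
    site is represented in the annotation set.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible draw
    groups = defaultdict(list)
    for img in images:
        groups[img["site"]].append(img)
    sample = []
    for site, imgs in sorted(groups.items()):
        k = min(per_group, len(imgs))  # small sites contribute all images
        sample.extend(rng.sample(imgs, k))
    return sample
```

With `per_group=5`, a site with 200 images and a site with 3 images contribute 5 and 3 images respectively, capping the influence of any single flight on the training set.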

Research Questions

  • How much variance exists among human labels of UAS imagery? Is the difference between experts and volunteers significant?
  • How can we structure field imaging and annotation strategies to optimize the process of deep learning model development?
  • Does animal movement bias counts derived from UAS imagery?
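One simple way to frame the first question is to compare per-image count dispersion across annotator groups, e.g., with the coefficient of variation (CV) of counts for the same image. A minimal sketch, assuming each annotator reports a total bird count per image (the example numbers are invented, not project data):

```python
from statistics import mean, stdev

def count_cv(counts):
    """Coefficient of variation of per-annotator counts for one image.

    CV = stdev / mean; higher values mean annotators disagree more,
    relative to the size of the flock being counted.
    """
    return stdev(counts) / mean(counts)

# Hypothetical counts for the same image from two annotator groups
expert_counts = [101, 98, 103]
volunteer_counts = [80, 120, 95]

print(f"expert CV:    {count_cv(expert_counts):.3f}")
print(f"volunteer CV: {count_cv(volunteer_counts):.3f}")
```

Comparing CV distributions across many benchmark images (e.g., with a paired test) would then indicate whether expert–volunteer differences are significant; CV is just one reasonable dispersion measure here, not necessarily the one used in the project.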