Crowdsourcing wolf howls: Do you see what I see? (#342)
Crowdsourcing has been introduced with the promise of 'solving' some of the methodological problems in a wide range of fields. In animal communication research, one potential application is to reduce the influence of observer bias when processing raw data: identifying calls, songs, and howls, or categorizing signals by species. Electronic crowdsourcing, the practice of soliciting and obtaining services or input from a large group of individuals, provides a framework for increasing perceptual diversity when categorizing these raw data. But is all diversity created equal? Here I present data that address when volunteers are more or less accurate at categorizing wolf howls using a simple online interface. The current study is part of the larger ongoing Canid Howl Project, which ultimately aims to understand the vocal behaviors of canids: primarily wolves, dogs, and coyotes. For this study, approximately 5,000 howls were made available online (http://howlcoder.appspot.com/science.html). Over the course of about 16 months, several hundred volunteers categorized thousands of howls by tracing or tracking each howl on a spectrogram. Volunteer performance was measured using standardized error values for each howl, and several factors predicted to affect performance were quantified. These data address three questions: (1) Does the complexity of a howl affect volunteer performance? (2) Does practice affect volunteer performance? (3) Does motivation affect volunteer performance? Results will be discussed in light of pilot data, which indicated that volunteers were reliable across all situations, and in terms of what these findings may mean for developing the interface for use at larger venues, given our new collaboration with the International Wolf Center.
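The abstract does not specify how the standardized error values are computed. Purely as an illustration of what such a measure could look like, the sketch below compares each volunteer's traced contour against a reference contour for the same howl and z-scores the resulting errors across volunteers. The mean-absolute-deviation metric, the `standardized_errors` helper, and the toy contours are all hypothetical assumptions, not the study's actual method.

```python
import numpy as np

def standardized_errors(traces, reference):
    """Per-volunteer error for one howl, standardized (z-scored)
    across volunteers.

    traces    : array of shape (n_volunteers, n_timepoints), each row a
                volunteer's traced frequency contour (Hz).
    reference : array of shape (n_timepoints,), a reference contour
                (e.g. an expert trace or volunteer consensus).

    Hypothetical metric: mean absolute deviation from the reference,
    then z-scored so volunteers are comparable across howls.
    """
    raw = np.mean(np.abs(traces - reference), axis=1)  # Hz error per volunteer
    return (raw - raw.mean()) / raw.std(ddof=0)

# Toy example: three volunteers tracing a five-point contour.
ref = np.array([400.0, 420.0, 440.0, 430.0, 410.0])
traces = np.array([
    ref + 5.0,    # consistently 5 Hz high
    ref,          # exact trace
    ref - 10.0,   # consistently 10 Hz low
])
z = standardized_errors(traces, ref)
```

Because the errors are z-scored per howl, a volunteer's score reflects accuracy relative to other volunteers on that same howl, which makes performance comparable across howls of differing difficulty.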