DO YOU KNOW WHAT I BARK? - CLASSIFICATION OF SEQUENCES OF ARTIFICIALLY SELECTED DOG BARKS BY HUMANS
In an earlier study, we showed that human listeners were able to categorize dog bark samples played back to them (Pongrácz et al., 2004). Listeners could classify the situations in which the barks were recorded regardless of their prior experience with dogs. They described the probable emotions of the barking dog quite accurately, and their emotionality ratings correlated well with several acoustic parameters of the barks (inter-bark interval, frequency, and hoarseness). In the present study, we sought to measure the effects of these acoustic parameters on the classification of situations and on the emotionality ratings in a more controlled experiment. We defined low, middle, and high value ranges for each of the three parameters. Barks were selected, by frequency and hoarseness, from a bark bank containing ca. 5000 barks of Mudi breed dogs. We sorted the selected barks into sequences in which every bark had the same frequency and hoarseness, and then inserted short, mid-length, or long intervals between the barks. This yielded 27 artificially assembled barking sequences, one for each combination of the three values of the three parameters. We played these samples back to human listeners, who were not told that the samples had been assembled artificially. In two independent sub-experiments, the listeners were asked to score the probable emotions of the "given barking dog" and to classify the "situations" of the barks. All three acoustic parameters had strong effects on the emotionality ratings as well as on the classification of situations, closely matching our findings in the earlier experiment. We also found interactions between the effects of the parameters, which suggests that listeners base their decisions on several acoustic parameters at once.
There was also good agreement between the emotionality scores and the situation chosen for a given sample by the same listener.
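The stimulus design above is a full 3 × 3 × 3 factorial: three parameters (frequency, hoarseness, inter-bark interval), each at three levels, giving the 27 sequences. As a minimal sketch of that combinatorics (the parameter and level names here are assumptions for illustration, not the study's actual labels):

```python
from itertools import product

# Hypothetical labels for the three acoustic parameters and their ranges.
PARAMS = ("frequency", "hoarseness", "inter_bark_interval")
LEVELS = ("low", "middle", "high")

# One stimulus specification per combination of levels: 3^3 = 27 sequences.
sequences = [dict(zip(PARAMS, combo)) for combo in product(LEVELS, repeat=len(PARAMS))]

print(len(sequences))          # 27
print(sequences[0])            # e.g. {'frequency': 'low', 'hoarseness': 'low', 'inter_bark_interval': 'low'}
```

Each dictionary would then correspond to one artificially assembled playback sequence: barks matched on frequency and hoarseness, separated by the specified interval length.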
