Presentation given at ICPRAM 2013, International Conference on Pattern Recognition Applications and Methods, Barcelona, Spain, February 2013
ABSTRACT: We propose a novel gesture spotting approach that offers a comprehensible representation of automatically inferred spatiotemporal constraints. These constraints are defined between a number of characteristic control points that are automatically inferred from a single gesture sample. In contrast to existing solutions, which reason only over a limited time window, our gesture spotting approach offers automated reasoning over the complete motion trajectory. Last but not least, we give gesture developers full control over the gesture spotting task and enable them to refine the spotting process without major programming effort.
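The core idea of spatiotemporal constraints between control points can be illustrated with a minimal sketch. All names, data structures, and tolerance values below are hypothetical illustrations, not the paper's actual implementation: a candidate trajectory is accepted only if, for every control point inferred from the sample gesture, some trajectory sample lies within a spatial tolerance of that point's position and within a temporal tolerance of its expected time offset.

```python
from dataclasses import dataclass

@dataclass
class ControlPoint:
    x: float  # expected position (normalized coordinates, assumed)
    y: float
    t: float  # expected time offset from gesture start, in seconds

def matches(trajectory, control_points, eps_space=0.1, eps_time=0.2):
    """Check whether a trajectory (a list of (x, y, t) samples) satisfies
    the spatiotemporal constraint of every control point: at least one
    sample must lie within eps_space of the point's position and within
    eps_time of its expected time offset."""
    for cp in control_points:
        satisfied = any(
            abs(x - cp.x) <= eps_space
            and abs(y - cp.y) <= eps_space
            and abs(t - cp.t) <= eps_time
            for (x, y, t) in trajectory
        )
        if not satisfied:
            return False  # one violated constraint rejects the candidate
    return True

# Hypothetical usage: two control points inferred from a sample gesture,
# checked against an observed candidate trajectory.
control_points = [ControlPoint(0.0, 0.0, 0.0), ControlPoint(1.0, 0.0, 0.5)]
candidate = [(0.02, 0.01, 0.05), (0.50, 0.00, 0.30), (0.98, 0.02, 0.55)]
print(matches(candidate, control_points))  # prints True
```

Because each constraint is an explicit, human-readable predicate over a control point, a developer could tighten or relax individual tolerances to refine the spotting behaviour without rewriting recognition code.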