The University of Sussex

Towards visually mediated interaction using appearance-based models

A.J. Howell, Hilary Buxton

This paper reports initial research on supporting visually mediated interaction (VMI) by developing generic expression models, together with person-specific and generic gesture models, for the control of active cameras. We investigate the recognition of both head pose and expression through simple generalisation of trained generic models using radial basis function (RBF) networks. We then describe a time-delay variant (TDRBF) of the network and evaluate its performance in recognising simple pointing and waving hand gestures in image sequences. Experimental results show that these techniques achieve high levels of gesture recognition performance, both for particular individuals and across a set of individuals. Depending on the task demands, characteristic visual evidence can be automatically selected and used even to recognise individuals from their gestures.
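To make the core technique concrete, the following is a minimal sketch of an RBF network of the kind described in the abstract: Gaussian basis units centred on training examples, with a linear output layer solved by least squares. All function names, parameter values, and the toy data are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian RBF activations: phi[i, j] = exp(-||x_i - c_j||^2 / (2*width^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(X, Y, centers, width):
    # Fit the linear output weights on the RBF activations by least squares.
    Phi = rbf_design(X, centers, width)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W

def predict_rbf(X, centers, width, W):
    # Class scores are a linear combination of basis-unit activations.
    return rbf_design(X, centers, width) @ W

# Toy example (hypothetical data): two well-separated 2-D clusters,
# standing in for pose/expression feature vectors, with one-hot targets.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(2.0, 0.3, (20, 2))])
Y = np.vstack([np.tile([1, 0], (20, 1)), np.tile([0, 1], (20, 1))])
centers = X[::5]  # a subset of training points as basis centres
W = train_rbf(X, Y, centers, width=1.0)
preds = predict_rbf(X, centers, 1.0, W).argmax(axis=1)
```

A time-delay variant in the spirit of TDRBF can be obtained by concatenating a sliding window of consecutive frame features into a single input vector before applying the same network, so that short-range temporal structure (e.g. the motion of a pointing or waving hand) is visible to the basis units.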
