Real-time estimation of hand gestures based on manifold learning from monocular videos

Indexed by:Journal paper

Date of Publication:2014-07-01

Journal:MULTIMEDIA TOOLS AND APPLICATIONS

Included Journals:SCIE, EI

Volume:71

Issue:2

Page Number:555-574

ISSN No.:1380-7501

Key Words:Manifold learning; Locality preserving projections; Gesture recognition

Abstract:Object pose estimation by manifold learning has recently become an active research area. In this paper, we propose an efficient method based on Locality Preserving Projections that recovers the pose and viewpoint of numerous hand gestures from monocular videos. We first select several dynamic hand gestures as primitive hand motions and build a 3D-to-2D mapping table relating the 3D joint angles of sampled static poses to their projected silhouettes from arbitrary viewpoints. The embedding space and an explicit mapping function are then learned for every primitive motion. To classify and predict among these embedding spaces, we also propose a Subspace Filtering Algorithm that recognizes and recovers numerous dynamic hand gestures as combinations of primitive gestures. Finally, using skin-color cues and oriented k-DOPs, multiple hands can be labeled and tracked separately and accurately. Extensive experimental results demonstrate, both qualitatively and quantitatively, that our method recovers 3D hand pose robustly and efficiently.
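The abstract names Locality Preserving Projections (LPP) as the embedding technique used to learn a low-dimensional space for each primitive motion. The sketch below is a minimal, generic LPP implementation for illustration only; it is not the paper's code, and the neighborhood size k, heat-kernel width t, and ridge term are assumed values.

```python
# Generic Locality Preserving Projections (LPP) sketch -- NOT the paper's
# implementation. Hyperparameters k, t, and the ridge term are assumptions.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist


def lpp(X, n_components=2, k=10, t=1.0):
    """Project rows of X (n_samples x n_features) onto an LPP subspace."""
    n = X.shape[0]
    dist = cdist(X, X, metric="sqeuclidean")

    # Symmetric k-nearest-neighbour adjacency with heat-kernel weights.
    W = np.zeros((n, n))
    neighbors = np.argsort(dist, axis=1)[:, 1:k + 1]
    for i in range(n):
        W[i, neighbors[i]] = np.exp(-dist[i, neighbors[i]] / t)
    W = np.maximum(W, W.T)

    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # graph Laplacian

    # Generalized eigenproblem  X^T L X a = lambda X^T D X a;
    # eigenvectors with the smallest eigenvalues preserve locality best.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # small ridge for stability
    _, eigvecs = eigh(A, B)
    projection = eigvecs[:, :n_components]
    return X @ projection, projection
```

In a setting like the one the abstract describes, the inputs would be silhouette feature vectors of sampled hand poses, and the learned projection would map new silhouettes into the per-motion embedding space.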
