
Unsupervised Learning of Human Pose Distance Metric via Sparsity Locality Preserving Projections


Indexed by:Journal Article

Date of Publication:2019-02-01

Journal:IEEE TRANSACTIONS ON MULTIMEDIA

Included Journals:SCIE, Scopus

Volume:21

Issue:2

Page Number:314-327

ISSN No.:1520-9210

Key Words:Pose similarity; distance metric; unsupervised learning; sparse representation; locality preserving projection

Abstract:Human poses admit complicated articulations and multigranular similarity. Previous works on learning a human pose metric use sparse models, which concentrate large weights on highly similar poses and fail to capture the overall structure of poses with multigranular similarity. Moreover, previous works require a large number of annotated similar/dissimilar pose pairs, which is a tedious task and remains inaccurate owing to the differing subjective judgments of experts. Motivated by graph-based neighbor-assignment techniques, we propose an unsupervised model, called sparsity locality preserving projection with adaptive neighbors (SLPPAN), for learning a human pose distance metric. Using a property of the graph Laplacian, SLPPAN introduces a fixed-rank constraint to enforce an adaptive graph structure over poses and learns the neighbor assignment, the similarity measurement, and the pose metric simultaneously. Experiments on pose retrieval on the CMU Mocap database demonstrate that SLPPAN outperforms traditional pose metric learning methods by capturing viewpoint variations of human poses. Experiments on keyframe extraction on the MSRAction3D database demonstrate that SLPPAN outperforms current methods by precisely detecting the important frames of action sequences.
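The abstract builds on the classic locality preserving projection (LPP), which maps data into a low-dimensional space while preserving a neighborhood graph, by solving the generalized eigenproblem X^T L X a = λ X^T D X a with L = D − W the graph Laplacian. As a rough illustration of that underlying step only (not the authors' SLPPAN algorithm; the Gaussian affinity construction and the small regularizer below are assumptions for the sketch), a minimal version:

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, W, n_components=2):
    """Basic locality preserving projection.

    X : (n_samples, n_features) data matrix
    W : (n_samples, n_samples) symmetric affinity (neighbor graph weights)
    Returns a (n_features, n_components) projection matrix.
    """
    D = np.diag(W.sum(axis=1))          # degree matrix
    L = D - W                           # graph Laplacian
    A = X.T @ L @ X
    # small ridge term (an assumption here) keeps B positive definite
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])
    # generalized eigenproblem A a = lambda B a; eigh returns ascending
    # eigenvalues, and LPP keeps the eigenvectors with the smallest ones
    _, vecs = eigh(A, B)
    return vecs[:, :n_components]

# toy usage: Gaussian-kernel affinity on random "pose" features
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)                         # dense heat-kernel affinity
P = lpp(X, W, n_components=2)
Y = X @ P                               # (30, 2) embedded poses
```

SLPPAN additionally learns the graph weights W themselves (the "adaptive neighbors"), with a fixed-rank constraint on L, rather than fixing W in advance as this sketch does.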
