
    Liang Sun (孙亮)

    • Associate Professor, Master's Supervisor
    • Gender: Male
    • Alma mater: Jilin University
    • Degree: Ph.D.
    • Affiliation: School of Computer Science and Technology
    • Discipline: Computer Application Technology
    • Office: Room B802, Innovation Park Building
    • Phone: 15998564404
    • E-mail: liangsun@dlut.edu.cn


    Cooperative Coupled Generative Networks for Generalized Zero-Shot Learning


    Paper type: Journal article

    Publication date: 2020-01-01

    Journal: IEEE ACCESS

    Indexed in: SCIE

    Volume: 8

    Pages: 119287-119299

    ISSN: 2169-3536

    Keywords: Visualization; Semantics; Generative adversarial networks; Neural networks; Correlation; Training; Task analysis; Zero-shot learning; generalized zero-shot learning; generative adversarial network; neural network; residual module

    Abstract: Compared with zero-shot learning (ZSL), generalized zero-shot learning (GZSL) is more challenging because its test samples are drawn from both seen and unseen classes. Most previous mapping-based methods perform well on ZSL, but their performance degrades on GZSL. To address this problem, inspired by ensemble learning, this paper proposes a model with cooperative coupled generative networks (CCGN). First, to alleviate the hubness problem, the reverse visual feature space is taken as the embedding space, with the mapping achieved by a visual feature center generation network. To learn a proper visual representation of each class, we propose a pair of coupled generative networks that cooperate with each other to synthesize a visual feature center template for the class. Second, to improve the generative ability of the coupled networks, we employ a deeper generative network; meanwhile, to alleviate the loss of semantic information caused by the additional network layers, a residual module is employed. Third, to mitigate overfitting and increase scalability, an adversarial network is introduced to discriminate the generated visual feature centers. Finally, a reconstruction network, which reverses the generation process, is employed to enforce structural correlation between the generated visual feature center and the original semantic representation of each class. Extensive experiments on five benchmark datasets (AWA1, AWA2, CUB, SUN, APY) demonstrate that the proposed algorithm yields satisfactory results compared with state-of-the-art methods.
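    The core pipeline the abstract describes (two cooperating semantic-to-visual generators with residual modules, whose outputs are fused into a class feature center) can be sketched as follows. This is an illustrative NumPy sketch under assumptions, not the paper's implementation: the layer sizes, plain ReLU activations, untrained random weights, and simple averaging as the "cooperation" step are all placeholders, and the adversarial and reconstruction networks are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    class ResidualBlock:
        """One residual module: out = relu(W2 @ relu(W1 @ x)) + x.
        The skip connection lets semantic information bypass the extra layers."""
        def __init__(self, dim):
            self.W1 = rng.normal(scale=0.1, size=(dim, dim))
            self.W2 = rng.normal(scale=0.1, size=(dim, dim))

        def __call__(self, x):
            return relu(self.W2 @ relu(self.W1 @ x)) + x

    class Generator:
        """Maps a class semantic vector to a visual feature center
        (deeper network built from stacked residual modules)."""
        def __init__(self, sem_dim, vis_dim, depth=3):
            self.W_in = rng.normal(scale=0.1, size=(vis_dim, sem_dim))
            self.blocks = [ResidualBlock(vis_dim) for _ in range(depth)]

        def __call__(self, s):
            h = relu(self.W_in @ s)
            for blk in self.blocks:
                h = blk(h)
            return h

    sem_dim, vis_dim = 85, 128       # placeholder sizes (real visual features are e.g. 2048-d)
    sem = rng.normal(size=sem_dim)   # one class's semantic/attribute vector

    gen_a = Generator(sem_dim, vis_dim)   # the two coupled generators
    gen_b = Generator(sem_dim, vis_dim)

    # "Cooperation" sketched as a simple average of the two generated centers.
    center = 0.5 * (gen_a(sem) + gen_b(sem))
    print(center.shape)  # (128,)
    ```

    In the full model, the fused center would additionally be scored by an adversarial discriminator and mapped back to the semantic space by a reconstruction network, with both losses shaping the generators during training.
    
    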