
Unsupervised Transformation Network Based on GANs for Target-Domain Oriented Multi-domain Image Translation

Indexed by:Conference Paper

Date of Publication:2019-01-01

Included Journals:CPCI-S, EI

Volume:11362

Page Number:398-413

Key Words:Image-to-image translation; Generative adversarial networks; Multi-domain

Abstract:Multi-domain image translation with unpaired data is a challenging problem. This paper proposes a generalized GAN-based unsupervised multi-domain transformation network (UMT-GAN) for image translation. The generation network of UMT-GAN consists of a universal encoder, a reconstructor, and a series of translators corresponding to different target domains. The encoder learns the universal information shared among the domains. The reconstructor extracts hierarchical representations of the images by minimizing a reconstruction loss. The translators perform the multi-domain translation. The reconstructor and each translator are connected to a discriminator for adversarial training. Importantly, the high-level representations are shared between the source and multiple target domains, and all network components are trained together with a joint loss function. In particular, instead of using a random vector z as the input for generating high-resolution images, UMT-GAN employs the source-domain images as the generator inputs, which helps the model avoid mode collapse to a certain extent. Experimental studies demonstrate the effectiveness and superiority of the proposed algorithm compared with several state-of-the-art algorithms.
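The abstract describes a generator built from a shared encoder, a reconstructor, and one translator per target domain, each output branch paired with a discriminator and trained under a joint loss. The sketch below illustrates that layout in plain PyTorch; the layer sizes, loss weights, and module names are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of the UMT-GAN layout described in the abstract (assumed
# architecture details; not the authors' implementation).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Universal encoder shared by the source and all target domains."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Used both as the reconstructor and as a per-domain translator."""
    def __init__(self, feat_ch=64, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_ch * 2, feat_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat_ch, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, h):
        return self.net(h)

class Discriminator(nn.Module):
    """One discriminator per output branch (reconstructor and each translator)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat_ch, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class UMTGenerator(nn.Module):
    """Shared encoder + reconstructor + one translator per target domain."""
    def __init__(self, num_domains):
        super().__init__()
        self.encoder = Encoder()
        self.reconstructor = Decoder()
        self.translators = nn.ModuleList(Decoder() for _ in range(num_domains))
    def forward(self, x):
        h = self.encoder(x)                          # high-level representation shared across domains
        recon = self.reconstructor(h)                # reconstruction of the source image
        outputs = [t(h) for t in self.translators]   # one translation per target domain
        return recon, outputs

if __name__ == "__main__":
    num_domains = 3
    gen = UMTGenerator(num_domains)
    discs = nn.ModuleList(Discriminator() for _ in range(num_domains + 1))  # +1 for the reconstructor branch
    x = torch.randn(2, 3, 64, 64)   # source-domain images are the generator inputs (no random vector z)
    recon, translations = gen(x)

    # Joint loss: a reconstruction term plus an adversarial term for every branch
    # (non-saturating GAN loss is shown here as an illustrative choice).
    bce = nn.BCEWithLogitsLoss()
    loss = nn.functional.l1_loss(recon, x)
    for branch_out, disc in zip([recon] + translations, discs):
        pred = disc(branch_out)
        loss = loss + bce(pred, torch.ones_like(pred))
    print(recon.shape, translations[0].shape, loss.item())
```

Because the encoder and its high-level representation are shared, adding another target domain in this layout only requires appending one more translator/discriminator pair rather than training a separate generator per domain pair.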
