Learning Multi-view Generator Network for Shared Representation

Tian Han, Xianglei Xing, Ying Nian Wu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Multi-view representation learning is challenging because different views contain both common structure and complex view-specific information. Traditional generative models may not be effective in such situations, since view-specific and common information cannot be well separated, which may cause problems for downstream vision tasks. In this paper, we introduce a multi-view generator model to solve the problems of multi-view generation and recognition in a unified framework. We propose a multi-view alternating back-propagation algorithm that learns multi-view generator networks by allowing them to share common latent factors. Our experiments show that the proposed method is effective for both image generation and recognition. Specifically, we first qualitatively demonstrate that our model can rotate and complete faces accurately. We then show through quantitative comparisons that our model achieves state-of-the-art or competitive recognition performance.
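The core idea of the abstract — per-view generators that share a common latent factor, trained by alternating between inferring latents and updating generator parameters — can be illustrated with a minimal sketch. This is not the paper's implementation: the generators here are linear, plain gradient descent on the latents stands in for the Langevin inference used in alternating back-propagation, and all shapes, learning rates, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all shapes and rates are illustrative assumptions):
# two views share a common latent z; each view also has its own factor s_v.
d_z, d_s, d_x, n = 3, 2, 8, 50
W_true = [rng.normal(size=(d_x, d_z + d_s)) for _ in range(2)]
z_true = rng.normal(size=(d_z, n))
X = [W_true[v] @ np.vstack([z_true, rng.normal(size=(d_s, n))])
     + 0.05 * rng.normal(size=(d_x, n)) for v in range(2)]

# Model: one linear generator per view; z is shared, s[v] is view-specific.
W = [0.1 * rng.normal(size=(d_x, d_z + d_s)) for _ in range(2)]
z = np.zeros((d_z, n))
s = [np.zeros((d_s, n)) for _ in range(2)]

def recon_err():
    return sum(np.mean((X[v] - W[v] @ np.vstack([z, s[v]])) ** 2)
               for v in range(2))

err0 = recon_err()
lr_latent, lr_w = 0.02, 0.02
for step in range(1000):
    # Inference step: gradient descent on the latents under a N(0, I) prior.
    # The shared z accumulates reconstruction gradients from BOTH views --
    # this coupling is what makes the common structure shared.
    grad_z = np.zeros_like(z)
    for v in range(2):
        r = W[v] @ np.vstack([z, s[v]]) - X[v]   # residual of view v
        g = W[v].T @ r                           # gradient wrt [z; s_v]
        grad_z += g[:d_z]
        s[v] -= lr_latent * (g[d_z:] + s[v])     # view-specific update
    z -= lr_latent * (grad_z + z)                # pooled shared update
    # Learning step: update each generator with the latents held fixed.
    for v in range(2):
        r = W[v] @ np.vstack([z, s[v]]) - X[v]
        W[v] -= lr_w * (r @ np.vstack([z, s[v]]).T) / n

err_final = recon_err()
```

After training, `err_final` should be well below the initial reconstruction error `err0`, with the shared `z` explaining the structure common to both views and each `s[v]` absorbing the remainder; in the paper, deep generator networks and Langevin sampling of the latents play these roles.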

Original language: English
Title of host publication: 2018 24th International Conference on Pattern Recognition, ICPR 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 7
ISBN (Electronic): 9781538637883
State: Published - Nov 26 2018
Event: 24th International Conference on Pattern Recognition, ICPR 2018 - Beijing, China
Duration: Aug 20 2018 - Aug 24 2018

Publication series

Name: Proceedings - International Conference on Pattern Recognition

Conference: 24th International Conference on Pattern Recognition, ICPR 2018

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

Keywords

  • Gait recognition
  • Generator networks
  • Multi-view learning
