Lin CAO Kaixuan LI Kangning DU Yanan GUO Peiran SONG Tao WANG Chong FU
Face sketch synthesis refers to transforming facial photos into sketches. Recent research on face sketch synthesis has achieved great success thanks to the development of Generative Adversarial Networks (GANs). However, these generative methods are prone to neglecting detailed information and thus lose individual-specific features such as glasses and headdresses. In this paper, we propose a novel method called the Feature Learning Generative Adversarial Network (FL-GAN) to synthesize detail-preserving, high-quality sketches. Specifically, the proposed FL-GAN consists of a Feature Learning (FL) module and an Adversarial Learning (AL) module. The FL module aims to learn the detailed information of the image in a latent space and guide the AL module to synthesize detail-preserving sketches. The AL module aims to learn the structure and texture of sketches and to improve the quality of the synthesized sketch through an adversarial learning strategy. Quantitative and qualitative comparisons with seven state-of-the-art methods (LLE, MRF, MWF, RSLCR, RL, FCN, and GAN) on four facial sketch datasets demonstrate the superiority of the proposed method.
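The following is a minimal PyTorch sketch of the FL-GAN idea as described in the abstract: an FL encoder maps a photo to a latent representation intended to retain fine details, and an AL generator/discriminator pair synthesizes the sketch from that representation under an adversarial loss. All module names, layer sizes, and loss weights here are illustrative assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

class FLEncoder(nn.Module):
    """Feature Learning module: photo -> detail-preserving latent map (assumed design)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat_ch, feat_ch * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, photo):
        return self.net(photo)  # latent feature map

class ALGenerator(nn.Module):
    """Adversarial Learning generator: latent features -> sketch."""
    def __init__(self, feat_ch=64, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_ch * 2, feat_ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat_ch, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, latent):
        return self.net(latent)

class Discriminator(nn.Module):
    """PatchGAN-style critic used for the adversarial learning strategy (assumed)."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat_ch, 1, 4, stride=1, padding=1),
        )

    def forward(self, sketch):
        return self.net(sketch)

# One illustrative generator step: a detail (reconstruction) loss from the FL
# branch plus an adversarial loss from the AL branch; the 10.0 weight is an assumption.
photo = torch.randn(4, 3, 64, 64)
real_sketch = torch.randn(4, 1, 64, 64)

fl, gen, disc = FLEncoder(), ALGenerator(), Discriminator()
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

fake_sketch = gen(fl(photo))
pred = disc(fake_sketch)
g_loss = bce(pred, torch.ones_like(pred)) + 10.0 * l1(fake_sketch, real_sketch)
print(g_loss.item())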
Lin CAO Xibao HUO Yanan GUO Kangning DU
Sketch face recognition refers to matching photos with sketches and has been used effectively in applications ranging from law enforcement to digital entertainment. However, due to the large modality gap between photos and sketches, sketch face recognition remains a challenging task. To reduce the domain gap between sketches and photos, this paper proposes a cascaded transformation generation network that performs cross-modality image generation and sketch face recognition simultaneously. The proposed network is composed of a generation module, a cascaded feature transformation module, and a classifier module. The generation module aims to generate a high-quality cross-modality image; the cascaded feature transformation module extracts high-level semantic features for both generation and recognition; and the classifier module performs sketch face recognition. The proposed transformation generation network is trained in an end-to-end manner and improves recognition accuracy through the generated images. The recognition performance is verified on the UoM-SGFSv2, e-PRIP, and CUFSF datasets; experimental results show that the proposed method outperforms other state-of-the-art methods.
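Below is a minimal PyTorch sketch of the three-module layout described in the abstract: a generation module that maps a sketch to a photo-like image, a cascaded feature transformation module that extracts high-level features shared by generation and recognition, and a classifier module for identity prediction, trained jointly. The layer sizes, number of cascade stages, and loss weighting are illustrative assumptions, not the published architecture.

import torch
import torch.nn as nn

class GenerationModule(nn.Module):
    """Sketch -> cross-modality (photo-like) image (assumed design)."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, sketch):
        return self.net(sketch)

class CascadedFeatureTransform(nn.Module):
    """Stacked transformation blocks shared by generation and recognition."""
    def __init__(self, ch=3, feat_dim=128, stages=2):
        super().__init__()
        blocks, in_ch = [], ch
        for _ in range(stages):
            blocks += [nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = 64
        self.blocks = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, image):
        h = self.pool(self.blocks(image)).flatten(1)
        return self.proj(h)  # high-level semantic feature vector

class ClassifierModule(nn.Module):
    """Identity classifier on top of the transformed features."""
    def __init__(self, feat_dim=128, num_ids=100):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_ids)

    def forward(self, feat):
        return self.fc(feat)

# End-to-end forward pass with a joint generation + recognition loss
# (equal weighting is an assumption).
sketch = torch.randn(4, 3, 64, 64)
photo = torch.randn(4, 3, 64, 64)
labels = torch.randint(0, 100, (4,))

gen, transform, clf = GenerationModule(), CascadedFeatureTransform(), ClassifierModule()
fake_photo = gen(sketch)
logits = clf(transform(fake_photo))
loss = nn.L1Loss()(fake_photo, photo) + nn.CrossEntropyLoss()(logits, labels)
print(loss.item())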