Deep graphical models (DGMs) based on Generative Adversarial Nets (GANs) have shown promise in image generation and latent variable inference. One typical model is the Iterative Adversarial Inference model (GibbsNet), which learns the joint distribution of the data and its latent variable. We present RGNet (Re-inference GibbsNet), which introduces a re-inference chain into GibbsNet to improve the quality of generated samples and inferred latent variables. RGNet consists of generative, inference, and discriminative networks. An adversarial game is played between the generative and inference networks on one side and the discriminative network on the other. The discriminative network is trained to distinguish (i) the joint inference-latent/data-space pairs and re-inference-latent/data-space pairs from (ii) the joint sampled-latent/generated-data-space pairs. We show empirically that RGNet surpasses GibbsNet in the quality of inferred latent variables and achieves comparable performance on image generation and inpainting tasks.
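The adversarial game described in the abstract can be sketched with toy linear stand-ins for the three networks. This is a minimal illustration, not the paper's implementation: the dimensions, the linear maps, and the single-example discriminator loss are all hypothetical placeholders chosen only to show which pair types the discriminator labels "real" versus "fake".

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions for illustration only.
DX, DZ = 4, 2  # data and latent dimensionality

# Linear maps stand in for the generative, inference, and
# discriminative networks (weights are arbitrary placeholders).
W_gen = rng.normal(size=(DZ, DX))    # generator: z -> x
W_inf = rng.normal(size=(DX, DZ))    # inference: x -> z
w_dis = rng.normal(size=(DX + DZ,))  # discriminator over (x, z) pairs

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def discriminate(x, z):
    """Score a joint data/latent pair: close to 1 means 'real'."""
    return sigmoid(np.concatenate([x, z]) @ w_dis)

# One step of the game on a single example.
x_real = rng.normal(size=DX)      # data sample
z_inf = x_real @ W_inf            # inference-latent pair (x, z)
z_re = (z_inf @ W_gen) @ W_inf    # re-inference latent: z -> x' -> z''
z_prior = rng.normal(size=DZ)     # latent sampled from the prior
x_gen = z_prior @ W_gen           # generated-data pair (x~, z)

# Discriminator loss: inference and re-inference pairs are labeled
# real; the sampled-latent/generated-data pair is labeled fake.
d_loss = -(np.log(discriminate(x_real, z_inf))
           + np.log(discriminate(x_real, z_re))
           + np.log(1.0 - discriminate(x_gen, z_prior))) / 3.0
```

In training, the generative and inference networks would be updated to make all three pair types indistinguishable to the discriminator.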
Zhihao LI Ruihu LI Chaofeng GUAN Liangdong LU Hao SONG Qiang FU
In this paper, we propose a class of 1-generator quasi-twisted codes with special structures and investigate their application to the construction of ternary quantum codes. We discuss the algebraic structure of these 1-generator quasi-twisted codes and their dual codes. Moreover, sufficient conditions for these quasi-twisted codes to be Hermitian self-orthogonal are given. Then, some ternary quantum codes exceeding the Gilbert-Varshamov bound are derived from such Hermitian self-orthogonal 1-generator quasi-twisted codes. In particular, sixteen of the resulting quantum codes are new or have better parameters than those in the literature, eight of which are obtained by the propagation rules.
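As background (standard material, not detail from this abstract): ternary quantum codes are obtained from Hermitian self-orthogonal codes over $\mathbb{F}_9$ via the Hermitian construction, which can be stated as follows.

```latex
% Hermitian inner product on \mathbb{F}_9^n (here q = 3):
\langle u, v \rangle_H = \sum_{i=1}^{n} u_i v_i^{3}.

% Hermitian construction: if C is an [n, k]_9 linear code with
% C \subseteq C^{\perp_H} (Hermitian self-orthogonal), then there
% exists a ternary quantum code
[[\, n,\; n - 2k,\; d \,]]_3 \quad \text{with } d \ge d\bigl(C^{\perp_H}\bigr).
```

The sufficient conditions mentioned in the abstract guarantee that the constructed 1-generator quasi-twisted codes satisfy $C \subseteq C^{\perp_H}$, so this construction applies.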