Abstract
We propose HyperVAE, a framework for encoding distributions of distributions. When a target distribution is modeled by a VAE, its neural network parameters are sampled from a distribution in model space that is itself modeled by a hyper-level VAE. We propose a variational inference framework that implicitly encodes these parameter distributions into a low-dimensional Gaussian distribution. Given a target distribution, we predict the posterior distribution of the latent code and then use a matrix-network decoder to generate a posterior distribution over the parameters. In contrast to common hypernetwork practice, which generates only scale and bias vectors to modify the target-network parameters, HyperVAE encodes the target parameters in full, and thus preserves information about the model for each task in the latent space. We derive the training objective for HyperVAE using the minimum description length (MDL) principle so as to reduce the complexity of HyperVAE. We evaluate HyperVAE on density estimation, outlier detection, and the discovery of novel design classes, demonstrating its efficacy.
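To ground the architecture described above, here is a minimal PyTorch sketch of the HyperVAE idea (module names, layer sizes, and the task-summary input are our own assumptions; the paper's matrix-network decoder is approximated by plain linear layers for brevity):

```python
# Hedged sketch of HyperVAE: a hyper-level VAE whose decoder emits the FULL
# parameter vector of a target VAE, rather than only scale/bias vectors.
import torch
import torch.nn as nn

class HyperVAE(nn.Module):
    def __init__(self, x_dim=32, z_dim=8, n_target_params=1000):
        super().__init__()
        # Hyper-encoder: maps a task's data summary to a Gaussian
        # posterior q(z | task) over a low-dimensional latent code z.
        self.encoder = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)
        # Decoder: generates the full target-VAE parameter vector theta
        # from z (a stand-in for the paper's matrix-network decoder).
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, n_target_params)
        )

    def forward(self, task_summary):
        h = self.encoder(task_summary)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        theta = self.decoder(z)  # one Monte Carlo draw of target parameters
        return theta, mu, logvar

model = HyperVAE()
theta, mu, logvar = model(torch.randn(1, 32))
print(theta.shape)  # torch.Size([1, 1000])
```

In a full implementation, theta would be reshaped into the weight matrices of the target VAE, and the training loss would combine the target VAE's ELBO with the KL term on q(z | task), per the MDL-derived objective.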
Notes
- 1.
This is not the same as zero-shot learning, where a label description is available.
- 2.
We use \(\theta =(\theta _{p},\theta _{q})\) to denote the set of parameters for p and q.
- 3.
We assume a Dirac delta distribution for \(\gamma \), i.e. a point estimate, in this study.
- 4.
We abuse notation and use p to denote both a density and a probability mass function. Bits-back coding is applicable to continuous distributions [10].
- 5.
We assume a matrix multiplication takes O(1) time on a GPU.
- 6.
Batched matrix multiplication can be parallelized on a GPU; see the sketch following these notes.
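To make notes 5 and 6 concrete, here is a short PyTorch illustration (our own example, not code from the paper) showing that a batch of independent matrix products is dispatched as a single call:

```python
# Illustration of notes 5 and 6: batched matrix multiplication executes as
# one parallel GPU kernel, so many matrix-shaped parameter blocks can be
# generated or applied in roughly the time of a single multiplication.
import torch

B, n, m, k = 64, 128, 128, 128     # 64 independent (n x m) @ (m x k) products
A = torch.randn(B, n, m)
X = torch.randn(B, m, k)
Y = torch.bmm(A, X)                # all 64 products in one call
print(Y.shape)                     # torch.Size([64, 128, 128])
```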
References
Chen, X., et al.: Variational lossy autoencoder. arXiv preprint arXiv:1611.02731 (2016)
Choi, K., Wu, M., Goodman, N., Ermon, S.: Meta-amortized variational inference and learning. arXiv preprint arXiv:1902.01950 (2019)
Do, K., Tran, T., Venkatesh, S.: Matrix-centric neural networks. arXiv preprint arXiv:1703.01454 (2017)
Do, K., Tran, T., Venkatesh, S.: Learning deep matrix representations. arXiv preprint arXiv:1703.01454 (2018)
Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 1126–1135. JMLR.org (2017)
Finn, C., Levine, S.: Meta-learning and universality: deep representations and gradient descent can approximate any learning algorithm. In: ICLR (2018)
Gómez-Bombarelli, R., et al.: Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Sci. 4(2), 268–276 (2018)
Grant, E., Finn, C., Levine, S., Darrell, T., Griffiths, T.: Recasting gradient-based meta-learning as hierarchical Bayes. arXiv preprint arXiv:1801.08930 (2018)
Ha, D., Dai, A., Le, Q.V.: Hypernetworks. arXiv preprint arXiv:1609.09106 (2016)
Hinton, G., Van Camp, D.: Keeping neural networks simple by minimizing the description length of the weights. In: Proceedings of the 6th Annual ACM Conference on Computational Learning Theory (1993)
Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013)
Krueger, D., Huang, C.-W., Islam, R., Turner, R., Lacoste, A., Courville, A.: Bayesian hypernetworks. arXiv preprint arXiv:1710.04759 (2017)
Le, H., Tran, T., Nguyen, T., Venkatesh, S.: Variational memory encoder-decoder. In: NeurIPS (2018)
Mishra, N., Rohaninejad, M., Chen, X., Abbeel, P.: A simple neural attentive meta-learner. In: ICLR 2018 (2018)
Nguyen, C., Li, Y., Bui, T.D., Turner, R.E.: Variational continual learning. In: ICLR (2018)
Nguyen, P., Tran, T., Gupta, S., Rana, S., Barnett, M., Venkatesh, S.: Incomplete conditional density estimation for fast materials discovery. In: Proceedings of the 2019 SIAM International Conference on Data Mining, pp. 549–557. SIAM (2019)
Rao, D., Visin, F., Rusu, A., Pascanu, R., Teh, Y.W., Hadsell, R.: Continual unsupervised representation learning. In: Advances in Neural Information Processing Systems, pp. 7645–7655 (2019)
Ratzlaff, N., Fuxin, L.: HyperGAN: a generative model for diverse, performant neural networks. In: ICML (2019)
Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082 (2014)
Shahriari, B., Swersky, K., Wang, Z., Adams, R.P., De Freitas, N.: Taking the human out of the loop: a review of Bayesian optimization. Proc. IEEE 104(1), 148–175 (2016)
Tomczak, J., Welling, M.: VAE with a VampPrior. arXiv preprint arXiv:1705.07120 (2017)
Townsend, J., Bird, T., Barber, D.: Practical lossless compression with latent variables using bits back coding. arXiv preprint arXiv:1901.04866 (2019)
Wang, K.-C., Vicol, P., Lucas, J., Gu, L., Grosse, R., Zemel, R.: Adversarial distillation of Bayesian neural network posteriors. In: International Conference on Machine Learning, pp. 5177–5186 (2018)
Yoon, J., Kim, T., Dia, O., Kim, S., Bengio, Y., Ahn, S.: Bayesian model-agnostic meta-learning. In: Advances in Neural Information Processing Systems, pp. 7332–7342 (2018)
Acknowledgements
This research was partially funded by the Australian Government through the Australian Research Council (ARC). Prof Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Nguyen, P., Tran, T., Gupta, S., Rana, S., Dam, H.C., Venkatesh, S. (2021). Variational Hyper-encoding Networks. In: Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2021. Lecture Notes in Computer Science, vol. 12976. Springer, Cham. https://doi.org/10.1007/978-3-030-86520-7_7