Variational Hyper-encoding Networks

  • Conference paper
  • First Online:
Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12976)

Abstract

We propose a framework called HyperVAE for encoding distributions of distributions. When a target distribution is modeled by a VAE, its neural network parameters are sampled from a distribution in the model space, which is in turn modeled by a hyper-level VAE. We propose a variational inference framework that implicitly encodes the parameter distributions into a low-dimensional Gaussian distribution. Given a target distribution, we predict the posterior distribution of the latent code, then use a matrix-network decoder to generate a posterior distribution over the parameters. In contrast to common hyper-network practice, which generates only the scale and bias vectors used to modulate the target-network parameters, HyperVAE encodes the target parameters in full and thus preserves information about the model for each task in the latent space. We derive the training objective for HyperVAE from the minimum description length (MDL) principle, which keeps the complexity of HyperVAE low. We evaluate HyperVAE on density estimation, outlier detection, and the discovery of novel design classes, demonstrating its efficacy.
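To make the two-level structure concrete, the following is a minimal, illustrative sketch rather than the authors' implementation. It assumes PyTorch, replaces the paper's matrix-network decoder with a plain MLP, and uses a batch mean as a stand-in for the data summary; all module names, layer sizes, and constants (HyperEncoder, HyperDecoder, HYPER_LAT, etc.) are hypothetical choices for this example.

```python
# Illustrative sketch only (assumed PyTorch; not the authors' code). The
# matrix-network decoder of the paper is replaced by a plain MLP, and all
# names and sizes below are hypothetical choices for this example.
import torch
import torch.nn as nn

TARGET_IN, TARGET_HID, TARGET_LAT = 784, 64, 8   # assumed target-VAE sizes
HYPER_LAT = 16                                   # assumed hyper-latent size

def target_param_count():
    # Number of parameters of a one-hidden-layer target VAE (encoder with
    # Gaussian heads + decoder), so the hyper-decoder knows its output size.
    enc = TARGET_IN * TARGET_HID + TARGET_HID + TARGET_HID * 2 * TARGET_LAT + 2 * TARGET_LAT
    dec = TARGET_LAT * TARGET_HID + TARGET_HID + TARGET_HID * TARGET_IN + TARGET_IN
    return enc + dec

class HyperEncoder(nn.Module):
    """Maps a summary of one target distribution (here simply the mean of a
    batch of its samples) to a Gaussian posterior over the hyper-latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(TARGET_IN, 128), nn.ReLU())
        self.mu = nn.Linear(128, HYPER_LAT)
        self.logvar = nn.Linear(128, HYPER_LAT)

    def forward(self, x_batch):
        h = self.net(x_batch.mean(dim=0, keepdim=True))
        return self.mu(h), self.logvar(h)

class HyperDecoder(nn.Module):
    """Generates the *full* target-VAE parameter vector from the hyper-latent
    code (an MLP stands in for the paper's matrix-network decoder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(HYPER_LAT, 256), nn.ReLU(),
                                 nn.Linear(256, target_param_count()))

    def forward(self, z):
        return self.net(z)

def reparameterize(mu, logvar):
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

# One hyper-level step: encode a task's data, sample a hyper-code, decode a
# complete set of target-VAE parameters, and regularise with a KL term (the
# MDL view in the paper motivates keeping this code length small).
enc, dec = HyperEncoder(), HyperDecoder()
x = torch.rand(32, TARGET_IN)   # toy batch drawn from one target distribution
mu, logvar = enc(x)
z = reparameterize(mu, logvar)
theta = dec(z)                  # flat vector holding every target parameter; in
                                # practice it would be reshaped into the target
                                # VAE's weight matrices and biases
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
print(theta.shape, kl.item())
```

Decoding the full parameter vector, rather than only scale and bias modulations, is what lets the hyper-latent code retain a complete description of each task's model.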

Notes

  1. This is not the same as zero-shot learning, where a label description is available.

  2. We use \(\theta = (\theta_p, \theta_q)\) to denote the set of parameters for p and q.

  3. We assume a Dirac delta distribution for \(\gamma\), i.e. a point estimate, in this study.

  4. We abuse notation and use p to denote both a density and a probability mass function. Bits-back coding is applicable to continuous distributions [10].

  5. We assume a matrix multiplication takes O(1) time on a GPU.

  6. Batched matrix multiplication can be parallelized on a GPU, as sketched below.
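A minimal sketch of this footnote, assuming PyTorch: B per-task weight matrices (e.g. generated by the hyper-decoder) are applied to B inputs in a single call to torch.bmm, so the multiplications run in parallel on the GPU rather than in a Python loop. The shapes are arbitrary illustrative values.

```python
import torch

B, d_in, d_out = 16, 64, 32
W = torch.randn(B, d_out, d_in)   # one generated weight matrix per task
x = torch.randn(B, d_in, 1)       # one input column vector per task
y = torch.bmm(W, x)               # shape (B, d_out, 1): all tasks multiplied at once
print(y.shape)
```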

References

  1. Chen, X., et al.: Variational lossy autoencoder. arXiv preprint arXiv:1611.02731 (2016)

  2. Choi, K., Wu, M., Goodman, N., Ermon, S.: Meta-amortized variational inference and learning. arXiv preprint arXiv:1902.01950 (2019)

  3. Do, K., Tran, T., Venkatesh, S.: Matrix-centric neural networks. arXiv preprint arXiv:1703.01454 (2017)

  4. Do, K., Tran, T., Venkatesh, S.: Learning deep matrix representations. arXiv preprint arXiv:1703.01454 (2018)

  5. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 1126–1135 (2017). JMLR.org

  6. Finn, C., Levine, S.: Meta-learning and universality: deep representations and gradient descent can approximate any learning algorithm. In: ICLR (2018)

  7. Gómez-Bombarelli, R., et al.: Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Sci. 4(2), 268–276 (2018)

  8. Grant, E., Finn, C., Levine, S., Darrell, T., Griffiths, T.: Recasting gradient-based meta-learning as hierarchical Bayes. arXiv preprint arXiv:1801.08930 (2018)

  9. Ha, D., Dai, A., Le, Q.V.: Hypernetworks. arXiv preprint arXiv:1609.09106 (2016)

  10. Hinton, G., Van Camp, D.: Keeping neural networks simple by minimizing the description length of the weights. In: Proceedings of the 6th Annual ACM Conference on Computational Learning Theory. Citeseer (1993)

  11. Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013)

  12. Krueger, D., Huang, C.-W., Islam, R., Turner, R., Lacoste, A., Courville, A.: Bayesian hypernetworks. arXiv preprint arXiv:1710.04759 (2017)

  13. Le, H., Tran, T., Nguyen, T., Venkatesh, S.: Variational memory encoder-decoder. In: NeurIPS (2018)

  14. Mishra, N., Rohaninejad, M., Chen, X., Abbeel, P.: A simple neural attentive meta-learner. In: ICLR 2018 (2018)

  15. Nguyen, C., Li, Y., Bui, T.D., Turner, R.E.: Variational continual learning. In: ICLR (2018)

  16. Nguyen, P., Tran, T., Gupta, S., Rana, S., Barnett, M., Venkatesh, S.: Incomplete conditional density estimation for fast materials discovery. In: Proceedings of the 2019 SIAM International Conference on Data Mining, pp. 549–557. SIAM (2019)

  17. Rao, D., Visin, F., Rusu, A., Pascanu, R., Teh, Y.W., Hadsell, R.: Continual unsupervised representation learning. In: Advances in Neural Information Processing Systems, pp. 7645–7655 (2019)

  18. Ratzlaff, N., Fuxin, L.: HyperGAN: a generative model for diverse, performant neural networks. In: International Conference on Machine Learning (2019)

  19. Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082 (2014)

  20. Shahriari, B., Swersky, K., Wang, Z., Adams, R.P., De Freitas, N.: Taking the human out of the loop: a review of Bayesian optimization. Proc. IEEE 104(1), 148–175 (2016)

  21. Tomczak, J., Welling, M.: VAE with a VampPrior. arXiv preprint arXiv:1705.07120 (2017)

  22. Townsend, J., Bird, T., Barber, D.: Practical lossless compression with latent variables using bits back coding. arXiv preprint arXiv:1901.04866 (2019)

  23. Wang, K.-C., Vicol, P., Lucas, J., Gu, L., Grosse, R., Zemel, R.: Adversarial distillation of Bayesian neural network posteriors. In: International Conference on Machine Learning, pp. 5177–5186 (2018)

  24. Yoon, J., Kim, T., Dia, O., Kim, S., Bengio, Y., Ahn, S.: Bayesian model-agnostic meta-learning. In: Advances in Neural Information Processing Systems, pp. 7332–7342 (2018)

Acknowledgements

This research was partially funded by the Australian Government through the Australian Research Council (ARC). Prof Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).

Author information

Corresponding author

Correspondence to Phuoc Nguyen.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Nguyen, P., Tran, T., Gupta, S., Rana, S., Dam, HC., Venkatesh, S. (2021). Variational Hyper-encoding Networks. In: Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2021. Lecture Notes in Computer Science, vol 12976. Springer, Cham. https://doi.org/10.1007/978-3-030-86520-7_7

  • DOI: https://doi.org/10.1007/978-3-030-86520-7_7

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86519-1

  • Online ISBN: 978-3-030-86520-7

  • eBook Packages: Computer Science, Computer Science (R0)
