
Toward gradient bandit-based selection of candidate architectures in AutoGAN

  • Methodologies and Application
  • Published in Soft Computing

Abstract

Neural architecture search (NAS) offers a new approach to designing the architecture of generative adversarial networks (GANs). The existing state-of-the-art algorithm, AutoGAN, discovers GAN architectures with reinforcement learning (RL)-based NAS, but it does not take the performance differences between candidate architectures into account. In this paper, we propose ImprovedAutoGAN, a new algorithm based on AutoGAN. We found that the choice of candidate architectures affects the performance of the final network, so whereas AutoGAN selects candidate architectures uniformly at random during the search, our method uses the gradient bandit algorithm to increase the probability of selecting better-performing networks. We also introduce a temperature coefficient into the algorithm to keep the search from becoming trapped in a local optimum. Searching the same space as AutoGAN, the discovered GAN achieves a Fréchet inception distance (FID) of 11.60 on CIFAR-10, the best result among current RL-based NAS methods. Experiments also show that the discovered GAN has good transferability.
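
The full text sits behind the paywall, so the exact update rule used by ImprovedAutoGAN is not reproduced on this page. As a point of reference, the sketch below implements a standard gradient bandit selector (Sutton and Barto 2011) with a temperature coefficient added to its softmax, which is the mechanism the abstract describes. It is a minimal sketch under those assumptions; the names `GradientBanditSelector`, `alpha`, and `tau` are illustrative, not the authors' code.

```python
import numpy as np

class GradientBanditSelector:
    """Gradient bandit over k candidate architectures (minimal sketch).

    Follows the standard formulation in Sutton and Barto (2011), with a
    temperature coefficient `tau` added to the softmax as the abstract
    describes. `alpha` is an assumed preference step-size hyperparameter.
    """

    def __init__(self, num_candidates, alpha=0.1, tau=1.0):
        self.h = np.zeros(num_candidates)  # preference per candidate
        self.alpha = alpha                 # preference step size
        self.tau = tau                     # temperature: larger = flatter softmax
        self.baseline = 0.0                # running mean reward
        self.t = 0                         # number of updates so far

    def probs(self):
        # Temperature-scaled softmax over preferences. A larger tau keeps the
        # distribution closer to uniform, preserving exploration so the search
        # does not lock onto a locally optimal candidate early.
        z = self.h / self.tau
        z -= z.max()                       # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def select(self, rng):
        # Biased draw, replacing a uniform random choice over candidates.
        return int(rng.choice(len(self.h), p=self.probs()))

    def update(self, chosen, reward):
        # Standard gradient bandit update: raise the preference of the chosen
        # candidate (and lower the others) in proportion to how far the reward
        # exceeds the running baseline.
        self.t += 1
        self.baseline += (reward - self.baseline) / self.t
        pi = self.probs()
        adv = reward - self.baseline
        for a in range(len(self.h)):
            if a == chosen:
                self.h[a] += self.alpha * adv * (1.0 - pi[a])
            else:
                self.h[a] -= self.alpha * adv * pi[a]

# Toy run: three hypothetical candidates whose (noisy) rewards differ.
rng = np.random.default_rng(0)
selector = GradientBanditSelector(num_candidates=3)
for _ in range(500):
    a = selector.select(rng)
    reward = (0.2, 0.5, 0.8)[a] + rng.normal(scale=0.1)  # stand-in for a validation score
    selector.update(a, reward)
print(selector.probs())  # probability mass concentrates on the best candidate
```

In an AutoGAN-style progressive search, the reward would instead be a validation score of the partially built generator, and this biased draw would replace the uniform random selection among the candidate architectures retained from the previous stage.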

Notes

  1. The structure we call a “cell” is called a “block” in Liu et al. (2018); we use “cell” here to be consistent with AutoGAN.

  2. In stage 1, the generator is composed of a single cell, so there are no candidate architectures to select from and no selection operation.

References

  • Abualigah LM, Khader AT, Hanandeh ES (2018a) Hybrid clustering analysis using improved krill herd algorithm. Appl Intell 48(11):4047–4071

  • Abualigah LMQ (2019) Feature selection and enhanced krill herd algorithm for text document clustering. Springer, Berlin

  • Abualigah LM, Khader AT, Hanandeh ES (2018b) A new feature selection method to improve the document clustering using particle swarm optimization algorithm. J Comput Sci 25:456–466

  • Brock A, Lim T, Ritchie JM, Weston N (2017) Smash: one-shot model architecture search through hypernetworks. In: International conference on learning representations (ICLR)

  • Brock A, Donahue J, Simonyan K (2019) Large scale gan training for high fidelity natural image synthesis. In: International conference on learning representations (ICLR)

  • Cai H, Zhu L, Han S (2019) Proxylessnas: direct neural architecture search on target task and hardware. In: International conference on learning representations (ICLR)

  • Elsken T, Metzen JH, Hutter F (2018) Efficient multi-objective neural architecture search via lamarckian evolution. In: International conference on learning representations (ICLR)

  • Gao C, Chen Y, Liu S, Tan Z, Yan S (2019) Adversarialnas: adversarial neural architecture search for gans. arXiv preprint arXiv:1912.02037

  • Gong X, Chang S, Jiang Y, Wang Z (2019) Autogan: neural architecture search for generative adversarial networks. In: IEEE International conference on computer vision (ICCV), pp 3224–3234

  • Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Advances in neural information processing systems (NIPS), pp 2672–2680

  • Grinblat GL, Uzal LC, Granitto PM (2017) Class-splitting generative adversarial networks. arXiv preprint arXiv:1709.07359

  • Gulrajani I, Ahmed F, Arjovsky M, Dumoulin V, Courville AC (2017) Improved training of wasserstein gans. In: Advances in neural information processing systems (NIPS), pp 5767–5777

  • He H, Wang H, Lee GH, Tian Y (2019) Probgan: towards probabilistic gan with theoretical guarantees. In: International conference on learning representations (ICLR)

  • Heusel M, Ramsauer H, Unterthiner T, Nessler B, Hochreiter S (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. In: Advances in neural information processing systems (NIPS), pp 6626–6637

  • Hoang Q, Nguyen TD, Le T, Phung D (2018) Mgan: training generative adversarial nets with multiple generators. In: International conference on learning representations (ICLR)

  • Karras T, Aila T, Laine S, Lehtinen J (2017) Progressive growing of gans for improved quality, stability, and variation. In: International conference on learning representations (ICLR)

  • Liu C, Zoph B, Neumann M, Shlens J, Hua W, Li LJ, Fei-Fei L, Yuille A, Huang J, Murphy K (2018) Progressive neural architecture search. In: Proceedings of the european conference on computer vision (ECCV), pp 19–34

  • Liu H, Simonyan K, Yang Y (2019) Darts: differentiable architecture search. In: International conference on learning representations (ICLR)

  • Luo R, Tian F, Qin T, Chen E, Liu TY (2018) Neural architecture optimization. In: Advances in neural information processing systems (NIPS), pp 7816–7827

  • Miyato T, Kataoka T, Koyama M, Yoshida Y (2018) Spectral normalization for generative adversarial networks. In: International conference on learning representations (ICLR)

  • Negrinho R, Gordon G (2017) Deeparchitect: automatically designing and training deep architectures. arXiv preprint arXiv:1704.08792

  • Perez-Rua JM, Baccouche M, Pateux S (2018) Efficient progressive neural architecture search. In: British machine vision conference (BMVC), p 150

  • Pham H, Guan MY, Zoph B, Le QV, Dean J (2018) Efficient neural architecture search via parameter sharing. In: International conference on machine learning (ICML)

  • Radford A, Metz L, Chintala S (2016) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434

  • Real E, Moore S, Selle A, Saxena S, Suematsu YL, Tan J, Le QV, Kurakin A (2017) Large-scale evolution of image classifiers. In: International conference on machine learning (ICML), vol 70, pp 2902–2911

  • Real E, Aggarwal A, Huang Y, Le QV (2019) Regularized evolution for image classifier architecture search. AAAI Conf Artif Intell 33:4780–4789

  • Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X (2016) Improved techniques for training gans. In: Advances in neural information processing systems (NIPS), pp 2234–2242

  • So DR, Liang C, Le QV (2019) The evolved transformer. In: International conference on machine learning (ICML), pp 5877–5886

  • Sutton RS, Barto AG (2011) Reinforcement learning: an introduction. MIT Press, Cambridge

  • Tran NT, Bui TA, Cheung NM (2018) Dist-gan: an improved gan using distance constraints. In: European conference on computer vision (ECCV), pp 370–385

  • Wang H, Huan J (2019) Agan: towards automated design of generative adversarial networks. arXiv preprint arXiv:1906.11080

  • Wang W, Sun Y, Halgamuge S (2018) Improving mmd-gan training with repulsive loss function. In: International conference on learning representations (ICLR)

  • Warde-Farley D, Bengio Y (2017) Improving generative adversarial networks with denoising feature matching. In: International conference on learning representations (ICLR)

  • Xie S, Zheng H, Liu C, Lin L (2019) Snas: stochastic neural architecture search. In: International conference on learning representations (ICLR)

  • Xu T, Zhang P, Huang Q, Zhang H, Gan Z, Huang X, He X (2018) Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In: IEEE conference on computer vision and pattern recognition (CVPR), pp 1316–1324

  • Yang J, Kannan A, Batra D, Parikh D (2017) Lr-gan: layered recursive generative adversarial networks for image generation. In: International conference on learning representations (ICLR)

  • Zhang X, Huang Z, Wang N (2018) You only search once: single shot neural architecture search via direct sparse optimization. arXiv preprint arXiv:1811.01567

  • Zhang X, Wang Z, Liu D, Ling Q (2019) Dada: deep adversarial data augmentation for extremely low data regime classification. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2807–2811

  • Zhong Z, Yan J, Wu W, Shao J, Liu CL (2018) Practical block-wise neural network architecture generation. In: IEEE conference on computer vision and pattern recognition (CVPR), pp 2423–2432

  • Zoph B, Le QV (2017) Neural architecture search with reinforcement learning. In: International conference on learning representations (ICLR)

  • Zoph B, Vasudevan V, Shlens J, Le QV (2018) Learning transferable architectures for scalable image recognition. In: IEEE conference on computer vision and pattern recognition (CVPR), pp 8697–8710

Acknowledgements

The authors are grateful to the College of Computer Science, Nanjing University of Posts and Telecommunications, for kindly providing the required computational resources. We also thank Baofeng Zhang for his help with this project. This study was funded by the National Natural Science Foundation of China (NSFC) 2019–2022 (Grants 61877051 and 61872079). Associate Professor Jun Shen was also supported by a research exchange program funded by the University Global Partnership Network.

Funding

This study was funded by the National Natural Science Foundation of China (NSFC) 2019–2022 (Grants 61877051 and 61872079).

Author information

Corresponding author

Correspondence to Guoqiang Zhou.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Communicated by V. Loia.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Fan, Y., Zhou, G., Shen, J. et al. Toward gradient bandit-based selection of candidate architectures in AutoGAN. Soft Comput 25, 4367–4378 (2021). https://doi.org/10.1007/s00500-020-05446-x

  • DOI: https://doi.org/10.1007/s00500-020-05446-x
