Abstract
Zero-cost proxies are nowadays frequently studied and employed to search for neural architectures. They show an impressive ability to predict architecture performance using only untrained weights, enabling immense search speed-ups. So far, the joint search for well-performing and robust architectures has received much less attention in the field of NAS. Consequently, zero-cost proxies focus mainly on the clean accuracy of architectures, whereas model robustness should play an equally important role. In this paper, we analyze the ability of common zero-cost proxies to serve as performance predictors for robustness in the popular NAS-Bench-201 search space. We consider both the single prediction task of robustness and the joint multi-objective of clean and robust accuracy. We further analyze the feature importance of the proxies and show that predicting robustness makes the prediction task with existing zero-cost proxies more challenging. As a result, the joint consideration of several proxies becomes necessary to predict a model's robustness, while clean accuracy can be regressed from a single such feature.
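Evaluating a zero-cost proxy as a performance predictor typically reduces to a rank-correlation computation: score every architecture with the proxy, then compare the proxy ranking against the ranking by (robust) accuracy, e.g. via Kendall's tau. A minimal sketch of this evaluation step follows; the proxy scores and robust accuracies are made-up illustrative numbers, not values from the paper:

```python
def kendall_tau(xs, ys):
    """Kendall rank correlation: (concordant - discordant) / total pairs."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical proxy scores and robust accuracies (%) for five architectures.
proxy_scores = [3.1, 1.2, 2.8, 0.9, 2.0]
robust_acc = [41.0, 22.5, 39.8, 30.1, 35.2]

print(kendall_tau(proxy_scores, robust_acc))  # → 0.8
```

A tau near 1 would mean the proxy ranks architectures almost exactly as the robust accuracy does; values near 0 indicate the proxy carries little ranking signal for that objective.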
Acknowledgment
The authors acknowledge support by the DFG research unit 5336 Learning to Sense.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Lukasik, J., Moeller, M., Keuper, M. (2024). An Evaluation of Zero-Cost Proxies - From Neural Architecture Performance Prediction to Model Robustness. In: Köthe, U., Rother, C. (eds) Pattern Recognition. DAGM GCPR 2023. Lecture Notes in Computer Science, vol 14264. Springer, Cham. https://doi.org/10.1007/978-3-031-54605-1_40
Print ISBN: 978-3-031-54604-4
Online ISBN: 978-3-031-54605-1