Neural network based image classifier resilient to destructive perturbation influences – architecture and training method
References
Jha, S., Jalaian, B., Roy, A., Verma, G. Trinity: Trust, Resilience and Interpretability of Machine Learning Models. Game Theory and Machine Learning for Cyber Security, Chapter 16. IEEE, 2021, pp. 317–333. DOI: 10.1002/9781119723950.ch16.
Kumar, A., Mehta, S. A Survey on Resilient Machine Learning, 2017, pp. 1–9. DOI: 10.48550/arXiv.1707.03184.
Mani, G., Bhargava, B., Shivakumar, B. Incremental Learning through Graceful Degradations in Autonomous Systems. IEEE International Conference on Cognitive Computing (ICCC), 2018, pp. 25–32. DOI: 10.1109/ICCC.2018.00011.
Wied, M., Oehmen, J., Welo, T. Conceptualizing resilience in engineering systems: An analysis of the literature. Systems Engineering, 2020, vol. 23, pp. 3–13. DOI: 10.1002/sys.21491.
Ponochovnyi, Yu. L., Kharchenko, V. S. Metodolohiya zabezpechennya harantozdatnosti informatsiyno-keruyuchykh system z vykorystannyam bahatotsil'ovykh stratehiy obsluhovuvannya [Dependability assurance methodology of information and control systems using multipurpose service strategies]. Radioelektronni i komp'uterni sistemi – Radioelectronic and computer systems, 2020, no. 3, pp. 43–58. DOI: 10.32620/reks.2020.3.05. (In Ukrainian).
Huisman, M., Rijn, J. N., Plaat, A. A survey of deep meta-learning. Artificial Intelligence Review, 2021, vol. 54, no. 6, pp. 4483–4541. DOI: 10.1007/s10462-021-10004-4.
Awasthi, A., Sarawagi, S. Continual Learning with Neural Networks: A Review. India Joint International Conference on Data Science and Management of Data, Kolkata, India, 2019, pp. 362–365. DOI: 10.1145/3297001.3297062.
Smith, L. N. A useful taxonomy for adversarial robustness of Neural Networks. Trends in Computer Science and Information Technology, 2020, pp. 037–041. DOI: 10.17352/tcsit.000017.
Pérez-Bravo, J. M., Rodríguez-Rodríguez, J. A., García-González, J., Molina-Cabello, M. A., Thurnhofer-Hemsi, K., López-Rubio, E. Encoding Generative Adversarial Networks for Defense Against Image Classification Attacks. Bio-inspired Systems and Applications: from Robotics to Ambient Intelligence. IWINAC 2022, Springer, Cham, 2022, vol. 13259, pp. 163–172. DOI: 10.1007/978-3-031-06527-9_16.
Liu, G., Khalil, I., Khreishah, A. GanDef: A GAN Based Adversarial Training Defense for Neural Network Classifier. SEC 2019. IFIP Advances in Information and Communication Technology, Springer, Cham, 2019, vol. 562, pp. 19–32. DOI: 10.1007/978-3-030-22312-0_2.
Xu, J., Li, Z., Du, B., Zhang, M., Liu, J. Reluplex made more practical: Leaky ReLU. IEEE Symposium on Computers and Communications (ISCC), 2020. 7 p. DOI: 10.1109/ISCC50000.2020.9219587.
Carlini, N., Wagner, D. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017, pp. 3–14. DOI: 10.1145/3128572.3140444.
Athalye, A., Carlini, N., Wagner, D. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. 35th International Conference on Machine Learning, 2018. 12 p. DOI: 10.48550/arXiv.1802.00420.
Silva, S., Najafirad, P. Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey. IEEE Transactions on Knowledge and Data Engineering, 2020. 20 p. DOI: 10.48550/arXiv.2007.00753.
Goldblum, M., Fowl, L., Feizi, S., Goldstein, T. Adversarially Robust Distillation. AAAI Technical Track: Machine Learning, 2020, vol. 34, no. 04, pp. 3996–4003. DOI: 10.1609/aaai.v34i04.5816.
Lee, Y., Kim, W., Park, W., Choi, S. Discrete Infomax Codes for Supervised Representation Learning. Entropy, 2022, vol. 24, iss. 4, article id: 501. 31 p. DOI: 10.3390/e24040501.
Chu, L.-C., Wah, B. W. Fault tolerant neural networks with hybrid redundancy. 1990 IJCNN International Joint Conference on Neural Networks, San Diego, CA, USA, 1990, vol. 2, pp. 639–649. DOI: 10.1109/IJCNN.1990.137773.
Hacene, G. B., Leduc-Primeau, F., Soussia, A. B., Gripon, V., Gagnon, F. Training modern deep neural networks for memory-fault robustness. IEEE International Symposium on Circuits and Systems (ISCAS 2019), 2019. 5 p. DOI: 10.1109/ISCAS.2019.8702382.
Li, W., Ning, X., Ge, G., Chen, X., Wang, Y., Yang, H. FTT-NAS: Discovering Fault-Tolerant Neural Architecture. 25th Asia and South Pacific Design Automation Conference (ASP-DAC), 2020, pp. 211–216. DOI: 10.1109/ASP-DAC47756.2020.9045324.
Valtchev, S., Wu, J. Domain randomization for neural network classification. Journal of Big Data, 2021, no. 8, article no. 94. 12 p. DOI: 10.1186/s40537-021-00455-5.
Konkle, T., Alvarez, G. A self-supervised domain-general learning framework for human ventral stream representation. Nature Communications, 2022, no. 13, article no. 491. 12 p. DOI: 10.1101/2020.06.15.153247.
Fanhe, X., Guo, J., Huang, Z., Qiu, W., Zhang, Y. Multi-Task Learning with Knowledge Transfer for Facial Attribute Classification. IEEE International Conference on Industrial Technology (ICIT), Melbourne, VIC, Australia, 2019, pp. 877–882. DOI: 10.1109/ICIT.2019.8755180.
Priya, S., Uthra, R. Deep learning framework for handling concept drift and class imbalanced complex decision-making on streaming data. Complex & Intelligent Systems, 2021. 17 p. DOI: 10.1007/s40747-021-00456-0.
Jiang, H., Kim, B., Guan, M. Y., Gupta, M. R. To Trust Or Not To Trust A Classifier. 32nd International Conference on Neural Information Processing Systems, 2018, pp. 5546–5557. DOI: 10.48550/arXiv.1805.11783.
Shu, Y., Shi, Y., Wang, Y., Huang, T., Tian, Y. P-ODN: Prototype-based Open Deep Network for Open Set Recognition. Scientific Reports, 2020, no. 10, article no. 7146. DOI: 10.1038/s41598-020-63649-6.
Wang, C., Zhao, P., Wang, S., Lin, X. Detection and recovery against deep neural network fault injection attacks based on contrastive learning. 3rd Workshop on Adversarial Learning Methods for Machine Learning and Data Mining at KDD, 2021. 5 p.
Cha, J., Kim, K. S., Lee, S. Hierarchical Auxiliary Learning. Machine Learning: Science and Technology, 2020, vol. 1, no. 4, pp. 1–12. DOI: 10.1088/2632-2153/aba7b3.
Margatina, K., Vernikos, G., Barrault, L., Aletras, N. Active Learning by Acquiring Contrastive Examples. Conference on Empirical Methods in Natural Language Processing, 2021, pp. 650–663. DOI: 10.48550/arXiv.2109.03764.
Park, J., Yun, S., Jeong, J., Shin, J. OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data. International Conference on Learning Representations ICLR 2021, 2022. 14 p. DOI: 10.48550/arXiv.2107.08943.
Moskalenko, V., Zaretskyi, M., Moskalenko, A., Korobov, A., Kovalsky, Y. Multi-stage deep learning method with self-supervised pretraining for sewer pipe defects classification. Radioelektronni i komp'uterni sistemi – Radioelectronic and computer systems, 2021, no. 4, pp. 71–81. DOI: 10.32620/reks.2021.4.06.
Zhao, K., Gao, S., Wang, W., Cheng, M.-M. Optimizing the F-Measure for Threshold-Free Salient Object Detection. IEEE International Conference on Computer Vision (ICCV), 2019, pp. 8848–8856. DOI: 10.1109/ICCV.2019.00894.
Qi, C., Su, F. Contrastive-center loss for deep neural networks. IEEE International Conference on Image Processing (ICIP), Beijing, China, 2017, pp. 2851–2855. DOI: 10.1109/ICIP.2017.8296803.
Kotyan, S., Vargas, D. Adversarial robustness assessment: Why in evaluation both L0 and L∞ attacks are necessary. PLOS ONE, 2022, vol. 17, no. 4, article no. e0265723. 22 p. DOI: 10.1371/journal.pone.0265723.
DOI: https://doi.org/10.32620/reks.2022.3.07