Optimization of Deep Neural Networks Using a Micro Genetic Algorithm
Figure 1. General scheme of the proposed method. Top: CNN model trained on a source domain. Center: transfer learning of the pre-trained model parameters and tuning of the weights of a DNN (FC layers) using the μGA-DNN algorithm. Bottom: schematic illustrating the operation of the proposed method.
Figure 2. Example of an FC layer architecture: the input layer corresponds to the d features automatically extracted by the convolutional layers of the CNN model. There are m hidden layers, where L1, L2, …, Lm indicate the number of neurons in each. The output layer provides a prediction z_i, with i = 1, …, c, for each of the c classes of the input dataset.
Figure 3. Example of an FC layer architecture. The input layer corresponds to the features of the dataset; this scheme assumes a dimensionality of d = 2048. It has m = 2 hidden layers, each with 512 neurons, and the output layer has c neurons, where c is the number of classes of the problem.
Figure 4. Example of a chromosome using the binary representation μGA-1, composed of three binary blocks representing the first hidden layer (L1), the second hidden layer (L2), and the learning rate (lr).
Figure 5. Example of a chromosome using the binary representation μGA-FS, composed of 2051 binary blocks. The first two blocks consist of 9 bits each, the third of 17 bits, and the last 2048 blocks of 1 bit each.
Figure 6. Example of a chromosome using the binary representation μGA-MRMR, composed of four binary blocks.
Figure 7. Boxplots of the ACC indicator obtained by each method. The best values are indicated in bold. The mean ACC and the p-value of the Wilcoxon rank sum test are shown at the top of each plot.
Figure 8. Mean FSR obtained by each method. The top part shows the p-value of the Wilcoxon rank sum test obtained by comparing each method with F^R (the best variant of the proposed method for this indicator). Values with p < 0.05 are shown in bold.
Figure 9. Mean HNR obtained by each method. The top part shows the p-value of the Wilcoxon rank sum test obtained by comparing each method with F^R (the best variant of the proposed method for this indicator). Values with p < 0.05 are shown in bold.
Figure 10. Average MC obtained by each method. The best values are indicated in bold.
Figure 11. Average number of objective function evaluations obtained by each method. The best values are indicated in bold.
Figure 12. Average runtime (in seconds) of each method.
Figure 13. Comparison of classification accuracy for the proposed models and the complete reference model.
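For concreteness, the sketch below shows one way the pipeline described in Figures 1–3 could be assembled in Keras: a frozen ResNet50 backbone supplying d = 2048 features and an FC head with two hidden layers of 512 neurons and a softmax output over c classes. This is an illustrative sketch under stated assumptions, not the authors' implementation; the 224 × 224 input size and the function name `build_transfer_model` are assumptions.

```python
# Illustrative sketch (not the authors' implementation): frozen ResNet50
# backbone supplying d = 2048 features, followed by the reference FC head of
# Figure 3 (two hidden layers of 512 neurons, softmax over c classes).
import tensorflow as tf

def build_transfer_model(num_classes: int, learning_rate: float = 1e-3) -> tf.keras.Model:
    # Pre-trained backbone; include_top=False with global average pooling
    # yields a 2048-dimensional feature vector per image.
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", pooling="avg",
        input_shape=(224, 224, 3),  # assumed input size
    )
    backbone.trainable = False  # transfer learning: only the FC head is trained

    inputs = tf.keras.Input(shape=(224, 224, 3))
    features = backbone(inputs, training=False)             # shape (None, 2048)
    x = tf.keras.layers.Dense(512, activation="relu")(features)
    x = tf.keras.layers.Dense(512, activation="relu")(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (assuming batched tf.data pipelines `train_ds` and `val_ds` exist):
# model = build_transfer_model(num_classes=5)   # e.g., the Flowers dataset
# model.fit(train_ds, validation_data=val_ds, epochs=100)
```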
Abstract
1. Introduction
1.1. Transfer Learning
1.2. Optimization Algorithms
2. State of the Art
2.1. Approaches with Transfer Learning That Optimize Fully Connected Layers
2.2. Limitations for Optimization Algorithms
3. A Deep Neural Network Based on a Micro Genetic Algorithm (μGA-DNN)
3.1. Objective Functions
3.1.1. Maximizing Classification Accuracy
3.1.2. Maximizing Classification Accuracy and Percentage of Hidden Neurons Removed
3.1.3. Maximizing Classification Accuracy and MRMR Feature Selection Criteria
3.1.4. Maximizing Classification Accuracy, Percentage of Hidden Neurons Removed, and MRMR Criterion
3.2. Representations of Solutions
3.2.1. First Representation: μGA-1
3.2.2. Second Representation: μGA-FS
3.2.3. Third Representation: μGA-MRMR
3.3. Naming Variants of μGA-DNN
3.4. Pseudocode of the Proposed Algorithm
Algorithm 1: μGA-DNN.
Algorithm 2: Evaluating a solution.
4. Experimental Framework
4.1. Datasets
4.2. Reference Methods
4.2.1. EvoPruneDeepTL Algorithm
4.2.2. DNN Architecture
4.3. Parameters of the Proposed Algorithm
4.4. Resampling Method
5. Results and Comparisons
- Results of the variants of the μGA-DNN method and the EvoPruneDeepTL algorithm. In this first part, the experimental results of all the variants of the proposed method are compared with those of EvoPruneDeepTL.
- Results of the best variant of each group of the proposed method. The variants are divided into two groups: algorithms that employ the selection rule and those that do not. These results are compared with those obtained by the model trained from the reference DNN based on the architecture in Figure 3.
5.1. Results of μGA-DNN Variants and EvoPruneDeepTL
5.1.1. Classification Accuracy
5.1.2. Feature Selected Ratio
5.1.3. Hidden Neurons Ratio
5.1.4. Model Complexity
5.1.5. Number of Objective Function Evaluations
5.1.6. Hamming Distance Results
5.1.7. Runtime Results
5.2. Comparative Results with the Full Reference Model
6. Conclusions
Limitations and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
ANN | Artificial neural network |
CNN | Convolutional neural network |
FC | Fully connected |
FS | Feature selection |
GA | Genetic algorithm |
TL | Transfer learning |
References
- Samuel, A.L. Machine learning. Technol. Rev. 1959, 62, 42–45. [Google Scholar]
- Bengio, Y. Deep Learning; Adaptive Computation and Machine Learning Series; MIT Press: London, UK, 2016. [Google Scholar]
- Pouyanfar, S.; Sadiq, S.; Yan, Y.; Tian, H.; Tao, Y.; Reyes, M.P.; Shyu, M.L.; Chen, S.C.; Iyengar, S.S. A survey on deep learning: Algorithms, techniques, and applications. ACM Comput. Surv. 2019, 51, 1–36. [Google Scholar] [CrossRef]
- Pathak, A.R.; Pandey, M.; Rautaray, S. Application of deep learning for object detection. Procedia Comput. Sci. 2018, 132, 1706–1717. [Google Scholar] [CrossRef]
- Liu, X.; Deng, Z.; Yang, Y. Recent progress in semantic image segmentation. Artif. Intell. Rev. 2019, 52, 1089–1106. [Google Scholar] [CrossRef]
- Yadav, S.S.; Jadhav, S.M. Deep convolutional neural network based medical image classification for disease diagnosis. J. Big Data 2019, 6, 113. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17), Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010. [Google Scholar]
- Wen, Y.W.; Peng, S.H.; Ting, C.K. Two-stage evolutionary neural architecture search for transfer learning. IEEE Trans. Evol. Comput. 2021, 25, 928–940. [Google Scholar] [CrossRef]
- Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE 2021, 109, 43–76. [Google Scholar] [CrossRef]
- Aggarwal, C.C. Neural Networks and Deep Learning; Springer: Cham, Switzerland, 2018. [Google Scholar]
- Liu, W.; Wang, Z.; Liu, X.; Zeng, N.; Liu, Y.; Alsaadi, F.E. A survey of deep neural network architectures and their applications. Neurocomputing 2017, 234, 11–26. [Google Scholar] [CrossRef]
- Barraza, J.F.; Droguett, E.L.; Martins, M.R. Towards Interpretable Deep Learning: A Feature Selection Framework for Prognostics and Health Management Using Deep Neural Networks. Sensors 2021, 21, 5888. [Google Scholar] [CrossRef]
- Poyatos, J.; Molina, D.; Martinez, A.D.; Del Ser, J.; Herrera, F. EvoPruneDeepTL: An evolutionary pruning model for transfer learning based deep neural networks. Neural Netw. 2022, 158, 59–82. [Google Scholar] [CrossRef]
- Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison Wesley: Boston, MA, USA, 1989. [Google Scholar]
- Ledesma, S.; Cerda, G.; Aviña, G.; Hernández, D.; Torres, M. Feature Selection Using Artificial Neural Networks. In MICAI 2008: Advances in Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2008; pp. 351–359. [Google Scholar] [CrossRef]
- Krishnakumar, K. Micro-Genetic Algorithms For Stationary And Non-Stationary Function Optimization. In Intelligent Control and Adaptive Systems; Rodriguez, G., Ed.; SPIE: Bellingham, WA, USA, 1990. [Google Scholar] [CrossRef]
- Goldberg, D.E. Sizing populations for serial and parallel genetic algorithms. In Proceedings of the Third International Conference on Genetic Algorithms, San Francisco, CA, USA, 1 June 1989; pp. 70–79. [Google Scholar]
- Dubey, A.; Saxena, A. An evolutionary feature selection technique using polynomial neural network. Int. J. Comput. Sci. Issues 2011, 8, 494. [Google Scholar]
- Mohammed, T.A.; Alhayali, S.; Bayat, O.; Uçan, O.N. Feature Reduction Based on Hybrid Efficient Weighted Gene Genetic Algorithms with Artificial Neural Network for Machine Learning Problems in the Big Data. Sci. Program. 2018, 2018, 2691759. [Google Scholar] [CrossRef]
- Üstün, O.; Bekiroğlu, E.; Önder, M. Design of highly effective multilayer feedforward neural network by using genetic algorithm. Expert Syst. 2020, 37, e12532. [Google Scholar] [CrossRef]
- Luo, X.; Oyedele, L.O.; Ajayi, A.O.; Akinade, O.O.; Delgado, J.M.D.; Owolabi, H.A.; Ahmed, A. Genetic algorithm-determined deep feedforward neural network architecture for predicting electricity consumption in real buildings. Energy AI 2020, 2, 100015. [Google Scholar] [CrossRef]
- Arroyo, J.C.T.; Delima, A.J.P. An Optimized Neural Network Using Genetic Algorithm for Cardiovascular Disease Prediction. J. Adv. Inf. Technol. 2022, 13, 95–99. [Google Scholar] [CrossRef]
- Souza, F.; Matias, T.; Araújo, R. Co-evolutionary genetic Multilayer Perceptron for feature selection and model design. In Proceedings of the International Conference on Emerging Technologies and Factory Automation (ETFA2011), Toulouse, France, 5–9 September 2011; pp. 1–7. [Google Scholar] [CrossRef]
- Pham, T.A.; Tran, V.Q.; Vu, H.L.T.; Ly, H.B. Design deep neural network architecture using a genetic algorithm for estimation of pile bearing capacity. PLoS ONE 2020, 15, e0243030. [Google Scholar] [CrossRef]
- Baldominos, A.; Saez, Y.; Isasi, P. Hybridizing Evolutionary Computation and Deep Neural Networks: An Approach to Handwriting Recognition Using Committees and Transfer Learning. Complexity 2019, 2019, 2952304. [Google Scholar] [CrossRef]
- Tian, H.; Chen, S.C.; Shyu, M.L. Genetic Algorithm Based Deep Learning Model Selection for Visual Data Classification. In Proceedings of the 2019 IEEE 20th International Conference on Information Reuse and Integration for Data Science (IRI), Los Angeles, CA, USA, 30 July–1 August 2019. [Google Scholar] [CrossRef]
- de Lima Mendes, R.; da Silva Alves, A.H.; de Souza Gomes, M.; Bertarini, P.L.L.; do Amaral, L.R. Many Layer Transfer Learning Genetic Algorithm (MLTLGA): A New Evolutionary Transfer Learning Approach Applied To Pneumonia Classification. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Krakow, Poland, 28 June–1 July 2021. [Google Scholar] [CrossRef]
- Li, C.; Jiang, J.; Zhao, Y.; Li, R.; Wang, E.; Zhang, X.; Zhao, K. Genetic Algorithm based hyper-parameters optimization for transfer convolutional neural network. In Proceedings of the International Conference on Advanced Algorithms and Neural Networks (AANN 2022), Zhuhai, China, 25–27 February 2022. [Google Scholar] [CrossRef]
- Bibi, R.; Mehmood, Z.; Munshi, A.; Yousaf, R.M.; Ahmed, S.S. Deep features optimization based on a transfer learning, genetic algorithm, and extreme learning machine for robust content-based image retrieval. PLoS ONE 2022, 17, e0274764. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Casillas, E.S.M.; Osuna-Enciso, V. Architecture Optimization of Convolutional Neural Networks by Micro Genetic Algorithms. In Metaheuristics in Machine Learning: Theory and Applications; Springer International Publishing: Cham, Switzerland, 2021; pp. 149–167. [Google Scholar] [CrossRef]
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar] [CrossRef]
- Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms, 2nd ed.; MIT Press: London, UK, 2001. [Google Scholar]
- Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
- Ding, C.; Peng, H. Minimum Redundancy Feature Selection from Microarray Gene Expression Data. J. Bioinform. Comput. Biol. 2005, 3, 185–205. [Google Scholar] [CrossRef]
- Deb, K. Optimization for Engineering Design: Algorithms and Examples; Prentice-Hall of India Private Limited: Delhi, India, 2012. [Google Scholar]
- github user: Yiweichen04. Cataract Dataset. 2016. Available online: https://github.com/yiweichen04/retina_dataset (accessed on 17 August 2023).
- kaggle user: Nitesh Yadav. Chessman Image Dataset. 2016. Available online: https://www.kaggle.com/datasets/niteshfre/chessman-image-dataset (accessed on 17 August 2023).
- kaggle user: Pranav Raikote. COVID-19 Image Dataset. 2020. Available online: https://www.kaggle.com/datasets/pranavraikokte/covid19-image-dataset (accessed on 17 August 2023).
- Team, T.T. Flowers. 2019. Available online: http://download.tensorflow.org/example_images/flower_photos.tgz (accessed on 17 August 2023).
- Rauf, H.T.; Saleem, B.A.; Lali, M.I.U.; Khan, M.A.; Sharif, M.; Bukhari, S.A.C. A Citrus Fruits and Leaves Dataset for Detection and Classification of Citrus Diseases through Machine Learning. 2019. Available online: https://data.mendeley.com/datasets/3f83gxmv57/2 (accessed on 17 August 2023).
- kaggle user: Muhammad Ahmad. MIT Indoor Scenes. 2019. Available online: https://www.kaggle.com/datasets/itsahmad/indoor-scenes-cvpr-2019 (accessed on 17 August 2023).
- Museum, V.R. Art Images: Drawing/Painting/Sculptures/Engravings. 2018. Available online: https://www.kaggle.com/datasets/thedownhill/art-images-drawings-painting-sculpture-engraving (accessed on 17 August 2023).
- Singh, D.; Jain, N.; Jain, P.; Kayal, P.; Kumawat, S.; Batra, N. PlantDoc: A Dataset for Visual Plant Disease Detection. 2020. Available online: https://github.com/pratikkayal/PlantDoc-Dataset (accessed on 17 August 2023).
- Moroney, L. Rock, Paper, Scissors Dataset. 2019. Available online: https://www.tensorflow.org/datasets/catalog/rock_paper_scissors?hl=es-419 (accessed on 17 August 2023).
- Collaboration, I.S.I. Skin Cancer: Malignant vs. Benign. 2019. Available online: https://www.kaggle.com/datasets/fanconic/skin-cancer-malignant-vs-benign (accessed on 17 August 2023).
- Gómez-Ríos, A.; Tabik, S.; Luengo, J.; Shihavuddin, A.; Herrera, F. Coral Species Identification with Texture or Structure Images Using a Two-Level Classifier Based on Convolutional Neural Networks. 2019. Available online: https://sci2s.ugr.es/CNN-coral-image-classification (accessed on 17 August 2023).
- Oluwafemi, A.G.; Zenghui, W. Multi-Class Weather Classification from Still Image Using Said Ensemble Method. 2019. Available online: https://www.kaggle.com/datasets/somesh24/multiclass-images-for-weather-classification (accessed on 17 August 2023).
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014. [Google Scholar] [CrossRef]
- Zhou, Z.H. Ensemble Methods: Foundations and Algorithms; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
Method | TL Model | B. S. | Epochs | Optimizer | GA Type | Gens. | Pop. | FS | HL
---|---|---|---|---|---|---|---|---|---
Ledesma et al. [15] | – | NS | 500 | NS | GA | 8 | 100 | ✔ |
Saxena et al. [18] | – | NS | NS | NS | GA | 25 | 60 | ✔ |
WGGA [19] | – | NS | NS | NS | GA | 80 | 10 | ✔ |
GNN [20] | – | NS | 1000 | SGD | GA | 1000 | 28 | ✔ |
GA-DFNN [21] | – | NS | 12,000 | SGD, ADAM, NADAM, ADAMAX | GA | 30, 40 | 20 | ✔ |
GA-ANN [22] | – | 1024 | 25 | RMSPROP, ADAM, SGD, ADADELTA, ADAMAX, NADAM | GA | 25 | 20 | ✔ |
CEV-MLP [23] | – | NS | NS | NS | GA | 120 | 20, 30, 200 | ✔ | ✔
Pham et al. [24] | – | NS | NS | Quasi-Newton, SGD, ADAM | GA | 200 | 25 | ✔ | ✔
Baldominos et al. [25] | NS | 25–200 | 5, 30 | SGD, ADAM | GA, GE | 100 | 50 | |
Tian and Shyu [26] | Inception V3, ResNet50, MobileNet, DenseNet201 | NS | NS | NS | GE | NS | 10 | ✔ |
MLTLGA [27] | Inception V3 | 16 | 5, 50 | SGD | GA | 5 | 20 | |
EvoNAS-TL [8] | VGG-16 | 256 | 3 | SGD | KGEA | 30, 500 ¹ | 30, 200 ² | |
Li et al. [28] | MobileNetV2 | NS | NS | NS | GA | 14 | 50 | |
Bibi et al. [29] | VGG-19 | NS | NS | SGD | GA | NS | NS | ✔ |
EvoPruneDeepTL [13] | ResNet50 | 32 | 600 | SGD | GA | 10 | 30 | ✔ ³ |
μGA-DNN (our proposed approach) | ResNet50 | 32 | 100 | ADAM | μGA | 50 | 4 | ✔ | ✔
Binary Block | Lower Bound | Upper Bound | Precision | Bits
---|---|---|---|---
Hidden layer 1 (L1) | 1 | 512 | 0 | 9
Hidden layer 2 (L2) | 1 | 512 | 0 | 9
Learning rate (lr) | | | 6 | 17
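As an illustration of how a μGA-1 chromosome might be decoded, the sketch below maps the two 9-bit blocks to integer neuron counts in [1, 512] and the 17-bit block to a learning rate. The learning-rate bounds (1e-6 to 0.1) are an assumption for illustration, since the original bounds are not visible in the extracted table; the decoding scheme shown is the standard binary-to-real mapping, not necessarily the authors' exact one.

```python
# Illustrative decoding of a μGA-1 chromosome (a sketch, not the authors' code).
# Blocks: 9 bits -> L1 neurons, 9 bits -> L2 neurons, 17 bits -> learning rate.
import random

def decode_block(bits, lo, hi):
    """Map a binary block to a value in [lo, hi] using standard binary coding."""
    value = int("".join(str(b) for b in bits), 2)
    return lo + value * (hi - lo) / (2 ** len(bits) - 1)

def decode_chromosome(chrom):
    assert len(chrom) == 9 + 9 + 17
    l1 = round(decode_block(chrom[0:9], 1, 512))    # neurons in hidden layer 1
    l2 = round(decode_block(chrom[9:18], 1, 512))   # neurons in hidden layer 2
    lr = decode_block(chrom[18:35], 1e-6, 1e-1)     # assumed learning-rate bounds
    return l1, l2, lr

# Example: decode a random 35-bit chromosome.
chrom = [random.randint(0, 1) for _ in range(35)]
print(decode_chromosome(chrom))
```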
Binary Block | Lower Bound | Upper Bound | Precision | Bits
---|---|---|---|---
Hidden layer 1 (L1) | 1 | 512 | 0 | 9
Hidden layer 2 (L2) | 1 | 512 | 0 | 9
Learning rate (lr) | | | 6 | 17
Feature 1 | 0 | 1 | 0 | 1
⋮ | ⋮ | ⋮ | ⋮ | ⋮
Feature 2048 | 0 | 1 | 0 | 1
Binary Block | Lower Bound | Upper Bound | Precision | Bits
---|---|---|---|---
Hidden layer 1 (L1) | 1 | 512 | 0 | 9
Hidden layer 2 (L2) | 1 | 512 | 0 | 9
Learning rate (lr) | | | 6 | 17
Number of selected features | 1 | 2048 | 0 | 11
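Since the μGA-MRMR representation ties feature selection to the minimum-redundancy maximum-relevance criterion of Ding and Peng [35], the sketch below shows one mutual-information-based way such a score could be computed for a candidate feature subset. The estimators (scikit-learn's mutual information functions) and the difference form (relevance minus redundancy) are illustrative assumptions, not the authors' exact formulation, and the pairwise redundancy term is only practical for small subsets.

```python
# Hedged sketch of a mutual-information-based MRMR score for a feature subset:
# relevance (mean feature-class MI) minus redundancy (mean pairwise feature MI).
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_score(X, y, subset):
    """X: (n_samples, n_features) array, y: class labels, subset: feature indices."""
    Xs = X[:, subset]
    relevance = mutual_info_classif(Xs, y).mean()
    if len(subset) < 2:
        return relevance
    redundancy, pairs = 0.0, 0
    for i in range(len(subset)):
        for j in range(i + 1, len(subset)):
            # MI between two continuous features, estimated with a regression MI.
            redundancy += mutual_info_regression(Xs[:, [i]], Xs[:, j])[0]
            pairs += 1
    return relevance - redundancy / pairs

# Example: score a small hypothetical subset of the 2048 extracted features.
# score = mrmr_score(features, labels, subset=[3, 17, 1024])
```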
Representation | | | |
---|---|---|---|---
μGA-1 | F | | |
μGA-FS | | | |
μGA-MRMR | | | |
Name | n | d | c | Source |
---|---|---|---|---|
Cataract | 601 | 2048 | 4 | [37] |
Chessman | 556 | 2048 | 6 | [38] |
COVID-19 | 317 | 2048 | 3 | [39] |
Flowers | 3670 | 2048 | 5 | [40] |
Leaves | 596 | 2048 | 4 | [41] |
MIT-IS | 15,620 | 2048 | 67 | [42] |
Painting | 8577 | 2048 | 5 | [43] |
Plants | 2576 | 2048 | 27 | [44] |
RPS | 2892 | 2048 | 3 | [45] |
Skincancer | 3297 | 2048 | 2 | [46] |
SRSMAS | 409 | 2048 | 14 | [47] |
Weather | 1125 | 2048 | 4 | [48] |
Parameter | Value |
---|---|
Steady-state GA | |
Population size | 30 |
Number of evaluations | 300 |
Crossover probability (uniform) | 0.5 |
Mutation probability | 0.07 |
SGD optimizer | |
Learning rate (lr) | 0.001
Nesterov momentum | 0.9
Batch size | 32 |
Number of epochs | 100 |
μGA | Value
---|---|
Population size () | 4 |
Number of generations () | 50 |
Crossover probability () | 0.9 |
Convergence threshold | 0.05 |
Adam optimizer | |
Batch size | 32 |
Number of epochs | 100 |
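To illustrate how the μGA parameters above typically interact, the sketch below outlines a generic micro genetic algorithm in the spirit of Krishnakumar [16]: a population of 4, elitism, crossover with probability 0.9, and a restart (reinitialization around the elite) whenever population diversity falls below the convergence threshold of 0.05. The fitness function, diversity measure, and parent-selection details are placeholders, not the authors' exact procedure.

```python
# Generic micro-GA sketch (not the authors' exact algorithm): tiny population,
# elitism, crossover-only variation, and a restart when the population converges.
import random

POP_SIZE, GENERATIONS, P_CROSS, CONV_THRESHOLD = 4, 50, 0.9, 0.05

def micro_ga(chrom_len, fitness):
    pop = [[random.randint(0, 1) for _ in range(chrom_len)] for _ in range(POP_SIZE)]
    best = max(pop, key=fitness)
    for _ in range(GENERATIONS):
        children = [best[:]]                        # elitism: carry over the best
        while len(children) < POP_SIZE:
            p1, p2 = random.sample(pop, 2)          # pick two parents at random
            if random.random() < P_CROSS:           # uniform crossover
                child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            children.append(child)
        pop = children
        best = max(pop, key=fitness)
        # Restart step: if the normalized Hamming distance to the elite drops
        # below the convergence threshold, reinitialize all but the elite.
        diversity = sum(sum(a != b for a, b in zip(ind, best)) for ind in pop) / (POP_SIZE * chrom_len)
        if diversity < CONV_THRESHOLD:
            pop = [best[:]] + [[random.randint(0, 1) for _ in range(chrom_len)]
                               for _ in range(POP_SIZE - 1)]
    return best

# Example with a toy fitness (number of ones). In μGA-DNN, fitness would come
# from decoding the chromosome, training the FC head, and measuring accuracy.
# best = micro_ga(chrom_len=35, fitness=lambda c: sum(c))
```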
Name | F | ||||||||
---|---|---|---|---|---|---|---|---|---|
Cataract | 0.616 | 0.576 | 0.570 | 0.600 | 0.611 | 0.542 | 0.567 | 0.586 | 0.607 |
Chessman | 0.798 | 0.775 | 0.769 | 0.800 | 0.778 | 0.764 | 0.741 | 0.801 | 0.756 |
COVID-19 | 0.948 | 0.972 | 0.868 | 0.970 | 0.976 | 0.967 | 0.951 | 0.975 | 0.973 |
Flowers | 0.881 | 0.866 | 0.862 | 0.878 | 0.874 | 0.793 | 0.848 | 0.869 | 0.879 |
Leaves | 0.899 | 0.891 | 0.865 | 0.894 | 0.888 | 0.894 | 0.857 | 0.886 | 0.901 |
MIT Indoor Scenes | 0.697 | 0.679 | 0.666 | 0.694 | 0.638 | 0.573 | 0.556 | 0.673 | 0.735 |
Painting | 0.937 | 0.933 | 0.931 | 0.938 | 0.934 | 0.918 | 0.922 | 0.930 | 0.935 |
Plants | 0.349 | 0.357 | 0.323 | 0.366 | 0.320 | 0.226 | 0.281 | 0.327 | 0.376 |
RPS | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.999 | 1.000 | 1.000 | 1.000 |
Skincancer | 0.819 | 0.861 | 0.854 | 0.859 | 0.855 | 0.810 | 0.847 | 0.859 | 0.869 |
SRSMAS | 0.803 | 0.801 | 0.800 | 0.804 | 0.785 | 0.751 | 0.780 | 0.782 | 0.777 |
Weather | 0.964 | 0.960 | 0.960 | 0.964 | 0.966 | 0.952 | 0.954 | 0.962 | 0.960 |
Statistic | |||||||||
Mean | 0.809 | 0.806 | 0.789 | 0.814 | 0.802 | 0.766 | 0.775 | 0.804 | 0.814 |
STD | 0.176 | 0.180 | 0.182 | 0.176 | 0.188 | 0.214 | 0.202 | 0.186 | 0.172 |
Median | 0.850 | 0.864 | 0.858 | 0.869 | 0.865 | 0.802 | 0.848 | 0.864 | 0.874 |
MAD | 0.093 | 0.092 | 0.081 | 0.082 | 0.094 | 0.133 | 0.105 | 0.090 | 0.098 |
Maximum | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.999 | 1.000 | 1.000 | 1.000 |
Minimum | 0.349 | 0.357 | 0.323 | 0.366 | 0.320 | 0.226 | 0.281 | 0.327 | 0.376 |
Count | 3 | 1 | 1 | 3 | 3 | 0 | 1 | 2 | 5 |
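The p-values reported alongside these accuracy results come from the Wilcoxon rank sum test at a 0.05 significance level. As a hedged example of how such a comparison could be reproduced with SciPy, assuming the per-run accuracies of two methods are available as arrays (the values below are placeholders, not results from the paper):

```python
# Hedged example: comparing two methods' per-run accuracies with the
# Wilcoxon rank sum test (significance level 0.05), using SciPy.
from scipy.stats import ranksums

acc_method_a = [0.61, 0.60, 0.62, 0.59, 0.61]   # placeholder per-run accuracies
acc_method_b = [0.57, 0.58, 0.56, 0.59, 0.57]

stat, p_value = ranksums(acc_method_a, acc_method_b)
print(f"p = {p_value:.4f}", "significant" if p_value < 0.05 else "not significant")
```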
Name | F | ||||||||
---|---|---|---|---|---|---|---|---|---|
Cataract | 1.000 | 0.508 | 0.504 | 0.522 | 1.000 | 0.501 | 0.495 | 0.468 | 0.536 |
Chessman | 1.000 | 0.492 | 0.501 | 0.223 | 1.000 | 0.502 | 0.502 | 0.333 | 0.667 |
COVID-19 | 1.000 | 0.503 | 0.498 | 0.333 | 1.000 | 0.498 | 0.497 | 0.233 | 0.496 |
Flowers | 1.000 | 0.506 | 0.494 | 0.669 | 1.000 | 0.499 | 0.497 | 0.568 | 0.746 |
Leaves | 1.000 | 0.501 | 0.507 | 0.349 | 1.000 | 0.504 | 0.498 | 0.357 | 0.614 |
MIT Indoor Scenes | 1.000 | 0.497 | 0.501 | 0.629 | 1.000 | 0.501 | 0.504 | 0.488 | 0.935 |
Painting | 1.000 | 0.499 | 0.503 | 0.693 | 1.000 | 0.497 | 0.505 | 0.586 | 0.728 |
Plants | 1.000 | 0.502 | 0.499 | 0.506 | 1.000 | 0.501 | 0.500 | 0.555 | 0.856 |
RPS | 1.000 | 0.500 | 0.497 | 0.298 | 1.000 | 0.498 | 0.501 | 0.474 | 0.240 |
Skincancer | 1.000 | 0.499 | 0.501 | 0.530 | 1.000 | 0.495 | 0.500 | 0.500 | 0.667 |
SRSMAS | 1.000 | 0.501 | 0.501 | 0.346 | 1.000 | 0.504 | 0.499 | 0.332 | 0.762 |
Weather | 1.000 | 0.501 | 0.494 | 0.576 | 1.000 | 0.503 | 0.499 | 0.420 | 0.656 |
Statistic | |||||||||
Mean | 1.000 | 0.501 | 0.500 | 0.473 | 1.000 | 0.500 | 0.500 | 0.443 | 0.659 |
STD | 0.000 | 0.004 | 0.004 | 0.151 | 0.000 | 0.003 | 0.003 | 0.105 | 0.173 |
Median | 1.000 | 0.501 | 0.501 | 0.514 | 1.000 | 0.501 | 0.500 | 0.471 | 0.667 |
MAD | 0.000 | 0.002 | 0.003 | 0.160 | 0.000 | 0.003 | 0.002 | 0.091 | 0.087 |
Maximum | 1.000 | 0.508 | 0.507 | 0.693 | 1.000 | 0.504 | 0.505 | 0.586 | 0.935 |
Minimum | 1.000 | 0.492 | 0.494 | 0.223 | 1.000 | 0.495 | 0.495 | 0.233 | 0.240 |
Count | 0 | 0 | 2 | 2 | 0 | 2 | 0 | 5 | 1 |
Name | F | ||||||||
---|---|---|---|---|---|---|---|---|---|
Cataract | 0.352 | 0.367 | 0.447 | 0.384 | 0.096 | 0.117 | 0.132 | 0.093 | 1.000 |
Chessman | 0.333 | 0.454 | 0.344 | 0.410 | 0.098 | 0.157 | 0.202 | 0.099 | 1.000 |
COVID-19 | 0.221 | 0.338 | 0.183 | 0.215 | 0.092 | 0.141 | 0.143 | 0.080 | 1.000 |
Flowers | 0.326 | 0.382 | 0.344 | 0.273 | 0.100 | 0.127 | 0.116 | 0.098 | 1.000 |
Leaves | 0.290 | 0.350 | 0.239 | 0.340 | 0.102 | 0.154 | 0.149 | 0.091 | 1.000 |
MIT Indoor Scenes | 0.491 | 0.593 | 0.465 | 0.454 | 0.129 | 0.247 | 0.195 | 0.117 | 1.000 |
Painting | 0.180 | 0.309 | 0.318 | 0.199 | 0.105 | 0.154 | 0.153 | 0.093 | 1.000 |
Plants | 0.422 | 0.634 | 0.453 | 0.461 | 0.092 | 0.158 | 0.189 | 0.114 | 1.000 |
RPS | 0.122 | 0.148 | 0.134 | 0.113 | 0.099 | 0.172 | 0.141 | 0.093 | 1.000 |
Skincancer | 0.233 | 0.360 | 0.322 | 0.217 | 0.086 | 0.139 | 0.116 | 0.081 | 1.000 |
SRSMAS | 0.454 | 0.499 | 0.492 | 0.345 | 0.130 | 0.161 | 0.216 | 0.111 | 1.000 |
Weather | 0.151 | 0.284 | 0.192 | 0.170 | 0.100 | 0.147 | 0.116 | 0.099 | 1.000 |
Statistic | |||||||||
Mean | 0.298 | 0.393 | 0.328 | 0.298 | 0.102 | 0.156 | 0.156 | 0.097 | 1.000 |
STD | 0.115 | 0.129 | 0.116 | 0.112 | 0.013 | 0.031 | 0.034 | 0.011 | 0.000 |
Median | 0.308 | 0.364 | 0.333 | 0.307 | 0.100 | 0.154 | 0.146 | 0.096 | 1.000 |
MAD | 0.101 | 0.067 | 0.117 | 0.098 | 0.004 | 0.010 | 0.030 | 0.004 | 0.000 |
Maximum | 0.491 | 0.634 | 0.492 | 0.461 | 0.130 | 0.247 | 0.216 | 0.117 | 1.000 |
Minimum | 0.122 | 0.148 | 0.134 | 0.113 | 0.086 | 0.117 | 0.116 | 0.080 | 1.000 |
Count | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 10 | 0 |
Name | F | ||||||||
---|---|---|---|---|---|---|---|---|---|
Cataract | 5.5 × 105 | 2.2 × 105 | 3.4 × 105 | 2.8 × 105 | 1.0 × 105 | 6.3 × 104 | 7.5 × 104 | 5.2 × 104 | 5.6 × 105 |
Chessman | 3.6 × 105 | 2.9 × 105 | 2.1 × 105 | 1.5 × 105 | 9.9 × 104 | 7.4 × 104 | 1.4 × 105 | 2.9 × 104 | 7.0 × 105 |
COVID-19 | 1.8 × 105 | 2.5 × 105 | 1.0 × 105 | 8.4 × 104 | 9.5 × 104 | 7.8 × 104 | 8.9 × 104 | 2.1 × 104 | 5.2 × 105 |
Flowers | 4.6 × 105 | 2.5 × 105 | 2.4 × 105 | 2.1 × 105 | 1.1 × 105 | 7.0 × 104 | 6.6 × 104 | 4.6 × 104 | 7.8 × 105 |
Leaves | 3.9 × 105 | 2.5 × 105 | 1.5 × 105 | 1.7 × 105 | 1.1 × 105 | 1.0 × 105 | 7.8 × 104 | 3.1 × 104 | 6.5 × 105 |
MIT Indoor Scenes | 5.6 × 105 | 4.1 × 105 | 3.6 × 105 | 4.3 × 105 | 1.5 × 105 | 1.4 × 105 | 1.3 × 105 | 7.0 × 104 | 1.0 × 106 |
Painting | 2.1 × 105 | 1.9 × 105 | 2.0 × 105 | 1.6 × 105 | 1.2 × 105 | 6.9 × 104 | 8.4 × 104 | 4.4 × 104 | 7.7 × 105 |
Plants | 5.8 × 105 | 5.1 × 105 | 3.0 × 105 | 3.4 × 105 | 1.1 × 105 | 8.4 × 104 | 1.5 × 105 | 7.0 × 104 | 9.1 × 105 |
RPS | 1.6 × 105 | 9.8 × 104 | 7.7 × 104 | 4.7 × 104 | 1.0 × 105 | 9.6 × 104 | 6.3 × 104 | 6.0 × 104 | 2.5 × 105 |
Skincancer | 2.6 × 105 | 2.5 × 105 | 2.3 × 105 | 1.8 × 105 | 6.9 × 104 | 7.6 × 104 | 6.3 × 104 | 3.9 × 104 | 7.0 × 105 |
SRSMAS | 5.7 × 105 | 3.5 × 105 | 3.2 × 105 | 1.7 × 105 | 1.7 × 105 | 9.1 × 104 | 1.2 × 105 | 3.7 × 104 | 8.1 × 105 |
Weather | 1.9 × 105 | 2.1 × 105 | 1.3 × 105 | 1.2 × 105 | 1.2 × 105 | 7.3 × 104 | 6.7 × 104 | 3.5 × 104 | 6.9 × 105 |
Statistic | |||||||||
Mean | 3.7 × 105 | 2.7 × 105 | 2.2 × 105 | 2.0 × 105 | 1.1 × 105 | 8.5 × 104 | 9.3 × 104 | 4.5 × 104 | 7.0 × 105 |
STD | 1.6 × 105 | 1.0 × 105 | 9.1 × 104 | 1.0 × 105 | 2.4 × 104 | 2.0 × 104 | 3.0 × 104 | 1.5 × 104 | 1.9 × 105 |
Median | 3.8 × 105 | 2.5 × 105 | 2.2 × 105 | 1.7 × 105 | 1.1 × 105 | 7.7 × 104 | 8.1 × 104 | 4.1 × 104 | 7.0 × 105 |
MAD | 1.8 × 105 | 4.0 × 104 | 8.6 × 104 | 4.4 × 104 | 1.1 × 104 | 7.9 × 103 | 1.7 × 104 | 1.1 × 104 | 9.4 × 104 |
Maximum | 5.8 × 105 | 5.1 × 105 | 3.6 × 105 | 4.3 × 105 | 1.7 × 105 | 1.4 × 105 | 1.5 × 105 | 7.0 × 104 | 1.0 × 106 |
Minimum | 1.6 × 105 | 9.8 × 104 | 7.7 × 104 | 4.7 × 104 | 6.9 × 104 | 6.3 × 104 | 6.3 × 104 | 2.1 × 104 | 2.5 × 105
Count | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 11 | 0 |
Name | F | ||||||||
---|---|---|---|---|---|---|---|---|---|
Cataract | 249.2 | 250.8 | 247.6 | 249.2 | 251.2 | 249.6 | 247.2 | 253.2 | 300.0 |
Chessman | 251.6 | 247.2 | 252.4 | 253.6 | 254.0 | 248.4 | 250.4 | 253.2 | 300.0 |
COVID-19 | 249.2 | 248.0 | 243.6 | 246.8 | 250.8 | 250.0 | 248.8 | 252.0 | 300.0 |
Flowers | 249.2 | 248.8 | 241.2 | 253.6 | 252.8 | 246.8 | 250.8 | 250.4 | 300.0 |
Leaves | 245.2 | 252.0 | 242.8 | 248.0 | 250.0 | 250.4 | 246.8 | 250.4 | 300.0 |
MIT Indoor Scenes | 251.2 | 250.0 | 244.4 | 252.0 | 249.6 | 246.4 | 250.0 | 255.2 | 300.0 |
Painting | 252.0 | 241.2 | 236.8 | 252.4 | 251.6 | 250.8 | 246.0 | 248.0 | 300.0 |
Plants | 250.4 | 248.0 | 245.6 | 248.8 | 253.6 | 248.0 | 247.6 | 254.0 | 300.0 |
RPS | 246.8 | 240.4 | 239.2 | 243.6 | 248.4 | 241.2 | 247.2 | 248.0 | 300.0 |
Skincancer | 250.8 | 246.8 | 244.0 | 250.4 | 252.4 | 249.6 | 248.0 | 248.8 | 300.0 |
SRSMAS | 249.6 | 255.2 | 254.8 | 249.2 | 253.2 | 253.2 | 247.6 | 258.0 | 300.0 |
Weather | 250.4 | 250.0 | 237.6 | 250.0 | 253.6 | 244.8 | 248.0 | 252.4 | 300.0 |
Statistic | |||||||||
Mean | 249.6 | 248.2 | 244.2 | 249.8 | 251.8 | 248.3 | 248.2 | 252.0 | 300.0 |
STD | 1.887 | 3.978 | 5.227 | 2.788 | 1.724 | 3.021 | 1.438 | 2.896 | 0.000 |
Median | 250.0 | 248.4 | 243.8 | 249.6 | 252.0 | 249.0 | 247.8 | 252.2 | 300.0 |
MAD | 0.800 | 1.600 | 3.200 | 2.000 | 1.400 | 1.600 | 0.800 | 1.800 | 0.000 |
Maximum | 252.0 | 255.2 | 254.8 | 253.6 | 254.0 | 253.2 | 250.8 | 258.0 | 300.0 |
Minimum | 245.2 | 240.4 | 236.8 | 243.6 | 248.4 | 241.2 | 246.0 | 248.0 | 300.0 |
Count | 0 | 1 | 9 | 0 | 0 | 0 | 2 | 0 | 0 |
Name | F | ||||||||
---|---|---|---|---|---|---|---|---|---|
Cataract | 0.000 | 0.498 | 0.499 | 0.333 | 0.000 | 0.499 | 0.497 | 0.369 | 0.500 |
Chessman | 0.000 | 0.501 | 0.504 | 0.164 | 0.000 | 0.498 | 0.501 | 0.257 | 0.445 |
COVID-19 | 0.000 | 0.500 | 0.503 | 0.283 | 0.000 | 0.501 | 0.501 | 0.158 | 0.501 |
Flowers | 0.000 | 0.499 | 0.500 | 0.276 | 0.000 | 0.498 | 0.497 | 0.219 | 0.379 |
Leaves | 0.000 | 0.500 | 0.502 | 0.313 | 0.000 | 0.501 | 0.500 | 0.301 | 0.478 |
MIT Indoor Scenes | 0.000 | 0.500 | 0.498 | 0.261 | 0.000 | 0.502 | 0.502 | 0.230 | 0.122 |
Painting | 0.000 | 0.499 | 0.500 | 0.246 | 0.000 | 0.501 | 0.500 | 0.298 | 0.396 |
Plants | 0.000 | 0.503 | 0.501 | 0.313 | 0.000 | 0.499 | 0.501 | 0.268 | 0.247 |
RPS | 0.000 | 0.498 | 0.500 | 0.304 | 0.000 | 0.499 | 0.500 | 0.421 | 0.365 |
Skincancer | 0.000 | 0.498 | 0.501 | 0.359 | 0.000 | 0.497 | 0.501 | 0.337 | 0.442 |
SRSMAS | 0.000 | 0.499 | 0.497 | 0.190 | 0.000 | 0.500 | 0.501 | 0.231 | 0.365 |
Weather | 0.000 | 0.500 | 0.500 | 0.343 | 0.000 | 0.499 | 0.499 | 0.362 | 0.454 |
Statistic | |||||||||
Mean | 0.000 | 0.500 | 0.500 | 0.282 | 0.000 | 0.499 | 0.500 | 0.288 | 0.391 |
STD | 0.000 | 0.001 | 0.002 | 0.057 | 0.000 | 0.001 | 0.001 | 0.072 | 0.106 |
Median | 0.000 | 0.499 | 0.500 | 0.294 | 0.000 | 0.499 | 0.500 | 0.283 | 0.419 |
MAD | 0.000 | 0.001 | 0.001 | 0.036 | 0.000 | 0.001 | 0.001 | 0.054 | 0.054 |
Maximum | 0.000 | 0.503 | 0.504 | 0.359 | 0.000 | 0.502 | 0.502 | 0.421 | 0.501 |
Minimum | 0.000 | 0.498 | 0.497 | 0.164 | 0.000 | 0.497 | 0.497 | 0.158 | 0.122 |
Count | 0 | 0 | 0 | 6 | 0 | 0 | 0 | 4 | 2 |
Name | F | ||||||||
---|---|---|---|---|---|---|---|---|---|
Cataract | 3979.2 | 3848.3 | 4147.0 | 8806.3 | 5995.5 | 4211.9 | 8245.6 | 3216.7 | 5143.2 |
Chessman | 4222.0 | 3843.2 | 4491.5 | 9413.0 | 6734.1 | 5138.0 | 7530.5 | 3443.4 | 6653.2 |
COVID-19 | 2642.1 | 3186.6 | 3186.6 | 3810.2 | 7141.2 | 3122.8 | 6294.3 | 2538.0 | 5521.4 |
Flowers | 14,724.3 | 12,703.0 | 23,039.7 | 19,934.2 | 22,521.7 | 14,444.0 | 25,522.2 | 15,989.5 | 23,206.4 |
Leaves | 4202.4 | 4107.8 | 4111.9 | 5991.3 | 6670.5 | 3240.6 | 7113.7 | 7219.4 | 7278.7 |
MIT Indoor Scenes | 46,470.8 | 40,690.1 | 39,701.5 | 51,172.4 | 46,451.4 | 46,016.7 | 43,993.5 | 54,461.5 | 59,243.7 |
Painting | 34,243.5 | 34,206.0 | 32,419.4 | 36,665.5 | 34,821.8 | 33,535.0 | 49,336.5 | 42,956.4 | 49,177.9 |
Plants | 12,191.3 | 10,965.9 | 14,134.0 | 12,613.0 | 16,206.6 | 9014.7 | 18,021.1 | 11,433.4 | 16,839.1 |
RPS | 11,306.1 | 9309.9 | 15,451.0 | 19,411.4 | 16,854.8 | 11,502.3 | 21,079.9 | 19,926.0 | 17,154.9 |
Skincancer | 14,809.9 | 12,778.0 | 20,445.7 | 14,653.2 | 21,197.2 | 14,366.5 | 22,900.5 | 23,124.5 | 19,775.6 |
SRSMAS | 3035.0 | 3376.3 | 3988.6 | 3536.4 | 5545.1 | 3679.4 | 6441.5 | 3114.4 | 4588.2 |
Weather | 5463.2 | 6263.2 | 5925.4 | 6276.9 | 13,010.8 | 6269.7 | 10,675.6 | 10,159.1 | 7741.5 |
Statistic | |||||||||
Mean | 13,107.5 | 12,106.5 | 14,253.5 | 16,023.7 | 16,929.2 | 12,878.5 | 18,929.6 | 16,465.2 | 18,527.0 |
STD | 13,167.0 | 11,922.4 | 11,874.6 | 13,839.4 | 12,285.5 | 12,902.5 | 14,097.6 | 15,994.6 | 17,204.4 |
Median | 8384.6 | 7786.5 | 10,029.7 | 11,013.0 | 14,608.7 | 7642.2 | 14,348.3 | 10,796.2 | 12,290.3 |
MAD | 4877.5 | 4176.8 | 5979.5 | 6112.2 | 7893.8 | 4182.2 | 7570.7 | 7630.7 | 6958.0 |
Maximum | 46,470.8 | 40,690.1 | 39,701.5 | 51,172.4 | 46,451.4 | 46,016.7 | 49,336.5 | 54,461.5 | 59,243.7 |
Minimum | 2642.1 | 3186.6 | 3186.6 | 3536.4 | 5545.1 | 3122.8 | 6294.3 | 2538.0 | 4588.2 |
Name | DNN | ||
---|---|---|---|
Cataract | 0.600 | 0.586 | 0.631 |
Chessman | 0.800 | 0.801 | 0.791 |
COVID-19 | 0.970 | 0.975 | 0.981 |
Flowers | 0.878 | 0.869 | 0.881 |
Leaves | 0.894 | 0.886 | 0.901 |
MIT Indoor Scenes | 0.694 | 0.673 | 0.700 |
Painting | 0.938 | 0.930 | 0.941 |
Plants | 0.366 | 0.327 | 0.374 |
RPS | 1.000 | 1.000 | 1.000 |
Skincancer | 0.859 | 0.859 | 0.868 |
SRSMAS | 0.804 | 0.782 | 0.815 |
Weather | 0.964 | 0.962 | 0.966 |
Statistic | |||
Mean | 0.814 | 0.804 | 0.821 |
STD | 0.176 | 0.186 | 0.172 |
Median | 0.869 | 0.864 | 0.875 |
MAD | 0.082 | 0.090 | 0.087 |
Maximum | 1.000 | 1.000 | 1.000 |
Minimum | 0.366 | 0.327 | 0.374 |
Count | 1 | 2 | 11 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).