Abstract
Learning in multi-layer neural networks (MLNNs) means finding appropriate weights and biases, a challenging and important task since network performance depends directly on these parameters. Conventional algorithms such as back-propagation suffer from shortcomings including a tendency to get stuck in local optima. Population-based metaheuristic algorithms can be used to address these issues. In this paper, we propose a novel learning approach, RDE-OP, based on differential evolution (DE) enhanced by a region-based scheme and an opposition-based learning strategy. DE is a population-based metaheuristic algorithm that has shown good performance in solving optimisation problems. Our approach integrates two effective concepts with DE. First, we identify regions in the search space using a clustering algorithm and select the cluster centres to represent them. An updating scheme then incorporates these cluster centres into the current population. Next, our algorithm employs a quasi-opposition-based learning strategy for improved exploration of the search space. Experimental results on various datasets, in comparison with both conventional and population-based approaches, convincingly demonstrate the excellent performance of RDE-OP.
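To make the two components concrete, the following is a minimal Python sketch, not the authors' implementation: it assumes scikit-learn's KMeans for the clustering step, a worst-individual replacement rule for the updating scheme, and the standard quasi-opposite-point definition from Rahnamayan et al.; in RDE-OP these pieces would sit inside a full DE loop (mutation, crossover, selection) that is omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed dependency, not named in the paper

def quasi_opposite(pop, lo, hi):
    """Quasi-opposition: for each individual x in [lo, hi], sample uniformly
    between the interval centre (lo + hi) / 2 and the opposite point lo + hi - x."""
    centre = (lo + hi) / 2.0
    opposite = lo + hi - pop
    low = np.minimum(centre, opposite)
    high = np.maximum(centre, opposite)
    return low + np.random.rand(*pop.shape) * (high - low)

def inject_cluster_centres(pop, fitness, objective, k=5):
    """Region-based step: k-means finds k regions of the population; each
    cluster centre replaces the current worst individual if it is fitter
    (this replacement rule is our assumption, not necessarily the paper's)."""
    centres = KMeans(n_clusters=k, n_init=5).fit(pop).cluster_centers_
    for c in centres:
        fc = objective(c)
        worst = np.argmax(fitness)  # minimisation: largest error is worst
        if fc < fitness[worst]:
            pop[worst], fitness[worst] = c, fc
    return pop, fitness

# Toy usage on a sphere objective (a stand-in for MLNN training error)
obj = lambda w: float(np.sum(w ** 2))
pop = np.random.uniform(-5, 5, size=(20, 10))
fit = np.array([obj(p) for p in pop])
pop, fit = inject_cluster_centres(pop, fit, obj)
qo_pop = quasi_opposite(pop, -5.0, 5.0)  # candidate quasi-opposite population
```

In a complete trainer, each individual would encode all network weights and biases as one flat vector, the objective would be the classification error of the decoded network, and the better of each individual and its quasi-opposite would be kept for the next generation.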
Acknowledgements
This work was supported financially by the RFBR under research project 18-29-03225.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Mousavirad, S.J., Schaefer, G., Korovin, I., Oliva, D. (2021). RDE-OP: A Region-Based Differential Evolution Algorithm Incorporating Opposition-Based Learning for Optimising the Learning Process of Multi-layer Neural Networks. In: Castillo, P.A., Jiménez Laredo, J.L. (eds) Applications of Evolutionary Computation. EvoApplications 2021. Lecture Notes in Computer Science, vol 12694. Springer, Cham. https://doi.org/10.1007/978-3-030-72699-7_26
DOI: https://doi.org/10.1007/978-3-030-72699-7_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-72698-0
Online ISBN: 978-3-030-72699-7