Abstract
The Bald Eagle Search (BES) algorithm is an innovative population-based method inspired by the intelligent hunting behavior of bald eagles. While BES shows promise, it faces challenges such as susceptibility to local optima and imbalances between exploration and exploitation phases. To address these limitations, this paper introduces the Multi-Strategy Boosted Bald Eagle Search (MBBES) algorithm. MBBES enhances the original BES by incorporating an adaptive parameter, two distinct mutation strategies, and replacing the swoop stage with a fall stage. We rigorously evaluate MBBES against classic and improved algorithms using the CEC2014 and CEC2017 test sets. The experimental results demonstrate that MBBES significantly improves the ability to escape local optima and achieves superior convergence accuracy. Moreover, MBBES ranks first according to the Friedman test, outperforming its counterparts in solving five practical engineering problems and three MLP classification problems, underscoring its effectiveness in real-world optimization scenarios. These findings indicate that MBBES not only surpasses BES but also sets a new benchmark in optimization performance.
1 Introduction
In recent years, with the continuous development of various industries, the complexity of mathematical problems and the volume of data have significantly increased. Consequently, optimization algorithms have gained increasing importance. Optimization aims to find the most favorable value of an objective function while adhering to a set of constraints (Zhao et al. 2022; Hamad et al. 2022). It involves seeking the optimal solution within specific constraints, which often include real-world laws and practical considerations such as physics laws, geometric constraints, and material properties. Constraints can be broadly classified into two types: equality constraints and inequality constraints. These constraints are expressed as mathematical equations in the form of equalities and inequalities, respectively. The decision variables involved in the problem must satisfy both types of constraints. In the past, gradient-based information of the involved functions was commonly used to search for the optimal solution (Sulaiman et al. 2020). While this approach is effective in certain situations, it has limitations when dealing with problems that involve complex mathematical functions. In such scenarios, satisfactory results may not be achieved using this approach, necessitating the exploration of alternative methods.
Metaheuristic algorithms (MHs) are a type of algorithm that simulates mechanisms found in nature or biological evolution. These algorithms have gained increasing popularity among researchers. One notable advantage of metaheuristic algorithms is their ability to address challenging problems without relying on gradients, making them suitable for finding solutions that are close to optimal (Hashim et al. 2024). Furthermore, metaheuristic methods possess attributes such as simplicity, adaptability, a derivation-free framework, and the capability to overcome local optima (Yıldız et al. 2023). These features enable them to tackle complex mathematical problems, providing valuable approximations even if they may not achieve optimality. Over the past years, there has been a proliferation of metaheuristic algorithms, which can be broadly categorized into four main groups based on their sources of inspiration: evolution-based, physics-inspired, human-mimetic, and swarm intelligence optimization algorithms. By categorizing metaheuristic algorithms into these distinct groups based on their sources of inspiration, researchers have been able to develop a diverse range of algorithms with unique characteristics and capabilities.
-
1.
Evolution-based heuristic algorithms simulate the evolutionary law of “survival of the fittest” observed in nature to achieve overall progress in a population. These algorithms utilize concepts such as genetic operators and selection mechanisms to guide the search for optimal solutions, making them effective in exploring diverse solution spaces and adapting to changing environments. Examples of such algorithms include genetic algorithms (GA) (Holland 1992), differential evolution (DE) (Storn and Price 1997), Biogeography-Based Optimizer (BBO) (Simon 2008), evolutionary programming (EP) (Yao et al. 1999), and evolutionary strategies (ES) (Amoretti 2014).
-
2.
Physics-based algorithms draw inspiration from the laws of physics in the real world. These algorithms employ physical concepts like energy, force, temperature, etc. to guide the search process, often leveraging analogies from thermodynamics or celestial mechanics. This enables them to effectively navigate complex solution landscapes and escape local optima. Examples of these algorithms include simulated annealing (Kirkpatrick et al. 1983), big-bang big-crunch (BBBC) (Erol and Eksin 2006), gravitational search algorithm (GSA) (Rashedi et al. 2009), charged system search (CSS) (Kaveh and Talatahari 2010), central force optimization (CFO) (Formato 2007), and artificial chemical reaction optimization algorithm (ACROA) (Alatas 2011).
-
3.
Human-mimetic algorithms are based on human behavior. These algorithms take inspiration from various aspects of human society, such as learning, cooperation, or competition, to guide the optimization process. They often incorporate social or cognitive mechanisms to balance exploration and exploitation, allowing them to effectively adapt to dynamic environments. Notable examples in this category include the volleyball premier league algorithm (VPL) (Moghdani and Salimifard 2018), meta-optimization inspired by the FBI’s methodologies (FBI) (Chou and Nguyen 2020), algorithm for optimization based on the teaching-learning paradigm (TLBO) (Rao et al. 2011), and harmony search (HS) (Geem et al. 2001).
-
4.
Swarm intelligence optimization algorithms mimic the social behavior observed in social groups. These algorithms imitate the collective intelligence and cooperation seen in natural systems, where multiple agents interact and share information to collectively optimize the search process. They often feature communication and coordination mechanisms among the individuals of the swarm, enabling efficient exploration and exploitation of the search space. The most prominent optimization algorithm in this class is particle swarm optimization (PSO). Other examples include artificial bee colony (ABC) (Karaboga and Akay 2009), dandelion optimizer (DO), ant-lion optimizer (ALO) (Mirjalili 2015), cuckoo search algorithm (CSA) (Rakhshani and Rahati 2017), African vultures optimization algorithm (AVOA) (Abdollahzadeh et al. 2021), nutcracker optimizer algorithm (NOA) (Abdel-Basset et al. 2023a), Hybrid Moth Flame Optimizer (HMFO) (Sahoo and Saha 2022), and snake optimizer (SO) (Hashim and Hussien 2022).
In addition to these categories, metaheuristic algorithms can also be classified based on the number of search agents involved and the number of objectives being optimized. Single-agent optimization algorithms, such as simulated annealing and tabu search, rely on a single agent to explore the search space and find optimal solutions. In contrast, population-based optimization algorithms, like genetic algorithms and particle swarm optimization, utilize a population of agents that collaborate and share information to effectively navigate the solution landscape (Sahoo et al. 2023a, b).
Furthermore, metaheuristic algorithms can be designed to solve either single-objective optimization problems, where the goal is to optimize a single performance measure, or multi-objective optimization problems, where multiple, often conflicting, objectives need to be considered simultaneously (Sahoo et al. 2024a, b).
Swarm intelligence optimization algorithms are gradient-free methods: a swarm of particles continuously explores the feasible space and is updated iteratively through certain strategies to achieve better fitness values. The process is somewhat analogous to continuous trial and error, and the task is to find the best feasible solution under a limited amount of computation. Metaheuristic algorithms therefore conduct a selective exhaustive search, which gives rise to the two stages of exploration and exploitation. Significant numerical disparities among individuals within a dimension indicate that they are widely distributed in the search environment; this widespread search is termed "exploration" in the realm of swarm intelligence optimization. Conversely, as the population converges, these disparities diminish and the individuals gather in a specific region; this is known as "exploitation". Exploration can be understood as a global search, whereas exploitation is akin to a local search. Both search paradigms play a pivotal role in determining the effectiveness of an algorithm, and different swarm intelligence algorithms are, in essence, all doing the same thing: balancing exploration and exploitation. Empirical experimentation has shown a profound correlation between the exploration-exploitation capacity of a search method and its rate of convergence (Sahoo et al. 2022, 2023c; Morales-Castañeda et al. 2020; Sahoo et al. 2024c).
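One way to make the exploration/exploitation distinction concrete is to measure the per-dimension spread of the population. The following minimal sketch is our own illustration (the function name and the choice of standard deviation as the spread measure are assumptions, not part of any specific algorithm):

```python
import numpy as np

def dimension_spread(population):
    """Average per-dimension standard deviation of a population.

    Large values indicate widely scattered individuals (exploration);
    small values indicate convergence toward one region (exploitation).
    `population` is an (N, D) array of candidate solutions.
    """
    return float(np.mean(np.std(population, axis=0)))

rng = np.random.default_rng(0)
# A population scattered over [-100, 100]^10 (exploration-like state).
scattered = rng.uniform(-100, 100, size=(30, 10))
# A population tightly clustered around one point (exploitation-like state).
converged = 0.01 * rng.standard_normal(size=(30, 10)) + 5.0
# The scattered population has a far larger spread than the converged one.
```
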
The execution of, and transition between, the exploration and exploitation phases differ across optimization algorithms, highlighting the diversity of their approaches. Each algorithm employs distinct strategies to conduct these phases, and the specific methods used significantly impact overall performance and the ability to reach optimal solutions. For example, the sine cosine algorithm (SCA) (Mirjalili 2016) uses a parameter r to control the two stages: when r is less than 1 the algorithm is in the exploration stage, and when r is greater than 1 it is in the exploitation stage. Similarly, the grey wolf optimizer (GWO) (Mirjalili et al. 2014) uses a parameter A to control the two stages. The arithmetic optimization algorithm (AOA) (Abualigah et al. 2021) mainly consists of four operators: addition, subtraction, multiplication, and division; with the multiplication and division operators AOA mainly performs exploration, while with the addition and subtraction operators it mainly performs exploitation. The dandelion optimizer (DO) (Zhao et al. 2022) consists of three stages: rising, descending, and landing; the rising and descending stages are devoted mainly to exploration, and the landing stage to exploitation. In the African vultures optimization algorithm (AVOA), when \(|F| \ge 1\) the vulture is in the exploration stage, and when \(|F| < 1\) it is in the exploitation stage. Problems suited to swarm intelligence optimization abound in our lives. Hussain et al. used the artificial bee colony (ABC) algorithm to optimize co-clustering (Hussain et al. 2020). Tharwat et al. used the chaotic antlion algorithm to tune the parameters of support vector machines (Tharwat and Hassanien 2018). Braik et al. used the cuckoo search algorithm (CSA) to model an industrial coiling process (Braik et al. 2023). Swain et al. used the African vultures optimization algorithm to stabilize the frequency response of a microgrid (Swain et al. 2023). Chen et al. used the grey wolf optimizer (GWO) for face recognition research (Chen et al. 2023). Wang et al. used the adaptive parallel arithmetic optimization algorithm for robot path planning (Wang et al. 2021).
Although there are many different optimization algorithms, by the "No Free Lunch" theorem (Wolpert and Macready 1997) it is hardly possible for one optimization algorithm to solve all optimization problems well. Different algorithms perform differently on different problems: an algorithm may be very efficient on one class of problems yet flawed on another. It is therefore imperative to propose novel optimization techniques, or modified versions of existing ones, to address subsets of tasks across diverse fields. Li et al. used Levy flight and a nonlinear sine function to enhance the solution quality of the northern goshawk optimization algorithm (Li et al. 2023). Jiang et al. (2023) proposed an adaptive tree seed algorithm (ATSA) using tree migration and intelligent seed generation mechanisms. Alsattar et al. (2019) proposed a novel swarm intelligence algorithm, bald eagle search (BES), for solving engineering optimization problems, inspired by the hunting strategy and intelligent social behaviours of bald eagles as they search for prey. The algorithm mainly consists of three stages: the select stage, the search stage, and the swoop stage. The select stage is the exploration stage of the algorithm, in which bald eagles search extensively for the location of prey. The search and swoop stages are the exploitation stages, in which bald eagles precisely locate the prey's position. The superiority and distinctiveness of BES are clearly demonstrated by the mean, standard deviation, best value, and Wilcoxon signed-rank test on the CEC2014 and CEC2017 test functions. From the convergence curves, it can be clearly seen that BES converges faster than compared algorithms such as DO (Zhao et al. 2022), GWO (Mirjalili et al. 2014), AOA (Abualigah et al. 2021), WOA (Mirjalili and Lewis 2016), and SCA (Mirjalili 2016), and that the optimal solutions it finds are better. BES also has the advantages of a simple structure and few control parameters.
Since its inception, the BES algorithm has garnered significant attention from researchers due to its superior performance in various applications. It has been extensively studied and enhanced to optimize diverse domains, such as fuel cell design (Alsaidan et al. 2022), pixel selection (Bahaddad et al. 2023), battery parameter extraction (Ferahtia et al. 2023), path planning for unmanned vessels (Chen et al. 2023), battery capacity enhancement (El Marghichi et al. 2023), feature selection (Chhabra et al. 2023), controller parameter tuning (Liu et al. 2023), speaker verification (Nirmal et al. 2024), energy scheduling optimization (Zhang et al. 2022), and dam safety monitoring (Yu et al. 2023). Notably, the BES exhibits faster convergence, higher accuracy, and outperforms many well-known algorithms. Its simplicity, comprehensibility, and ability to achieve desirable results with smaller populations make it an appealing choice.
Despite its favorable optimization effects, BES faces challenges in finding the global optimal solution. After a large number of iterations, the difference in performance between BES and its early iterations becomes negligible, leading to inefficient utilization of computational resources. Moreover, BES exhibits limitations in terms of low convergence accuracy and inadequate handling of high-dimensional and complex problems. Previous attempts at improving BES have either been overly complex or effective only for specific problem domains.
As a result, researchers have been motivated to further explore and extend the BES model, aiming to unlock even more remarkable capabilities and address its limitations. The ongoing efforts in developing modified versions of BES seek to enhance its overall performance, overcome convergence issues, and improve its applicability to high-dimensional and complex problem scenarios. For that reason, Ramadan et al. (2021) introduced IBES, a novel variant of the BES algorithm designed for precise estimation of photovoltaic (PV) model parameters. IBES improves upon the original BES by dynamically adjusting the learning parameter, which controls the position changes in each iteration; the value of this parameter is updated using a decay equation to enhance exploration and exploitation. IBES exhibits superior performance compared to the original BES and other algorithms across 23 classical benchmark tests. Real data from a commercial silicon solar cell are employed to highlight IBES's exceptional accuracy in parameter estimation for both original and modified PV models. In another work, Chen et al. (2023) proposed mBES, a modified version of the BES algorithm that addresses its limitations by incorporating opposition-based learning (OBL), chaotic local search (CLS), and transition and phasor operators. Evaluation on 29 CEC2017 and 10 CEC2020 benchmark functions, engineering design problems, and a feature selection problem demonstrates the superiority of mBES over classical metaheuristic algorithms, highlighting its effectiveness in solving diverse optimization problems.
Ferahtia et al. (2023) proposed a modified bald eagle search (mBES) algorithm. The mBES algorithm introduces adaptive parameters based on the current and maximum number of iterations to enhance diversity and address issues such as sluggish convergence and an imbalance between exploitation and exploration. The algorithm’s performance is evaluated using benchmark functions CEC2020 and CEC2022. Additionally, the efficacy of mBES is demonstrated in the complex problem of parameter extraction for lithium-ion batteries. The results of this evaluation are compared to other state-of-the-art algorithms, and statistical tests are conducted to validate the significance of the findings. Overall, the mBES algorithm exhibits promising results in real-world scenarios.
In Alsaidan et al. (2022), Alsaidan et al. introduced a novel optimization algorithm, the enhanced bald eagle search (EBES) optimization algorithm, to estimate the design variables of proton exchange membrane fuel cells. The main objective is to minimize the sum of squared errors (SSE) by improving the accuracy of the calculated data compared to the measured data. The EBES algorithm builds upon the original BES by incorporating enhancements for increased search efficiency and overcoming local optima via Levy function. To evaluate the proposed algorithm, three tested cases are utilized, including the BCS 500 W, 250 W, and Horizon H-12 stacks. Furthermore, the impact of pressure and temperature variations on the algorithm’s performance is investigated. The results are compared with existing optimization algorithms to highlight the effectiveness and superiority of the newly developed EBES algorithm in solving complex optimization problems.
To address the challenges encountered in the path planning of unmanned ships in complex waters, Chen et al. (2023) proposed a novel optimization algorithm called the self-adaptive hybrid bald eagle search (SAHBES) algorithm, which enhances the traditional BES algorithm by incorporating adaptive factors and pigeon-inspired optimization (PIO). The improved fitness function incorporates a distance metric between the path corners of the ships and factors in obstacle-based path-length calculations. Furthermore, a curve optimization module is employed to generate optimal, collision-free path planning results. The proposed SAHBES algorithm is evaluated on 4 multimodal surface functions selected from the 23 standard test functions (CEC2005) and on path planning under different obstacle scenarios. The results demonstrate that SAHBES can effectively generate the shortest and smoothest paths in complex water environments while respecting the limitations of ship maneuvering operations.
To address issues related to local optima stagnation and premature convergence in the BES algorithm, Sharma et al. (2023) introduced the self-adaptive bald eagle search (SABES) algorithm. The SABES algorithm incorporates the dynamic-opposite learning (DOL) method during the initialization process to improve population diversity and convergence speed. To enhance the exploitation capability and discover better global solutions, dynamic-opposite solutions are considered. Additionally, the algorithmic parameter values of the BES algorithm are determined using linear and non-linear time-varying adaptation strategies, striking a balance between search abilities and overall performance. The performance of the SABES algorithm is evaluated on the CEC2017 and CEC2020 test functions in comparison to previous algorithms, and SABES achieves optimal results on the highest number of functions compared to state-of-the-art algorithms.
A novel approach called the polar coordinate bald eagle search algorithm (PBES) is proposed for curve approximation tasks by Zhang et al. (2022). Inspired by the spiral predation mechanism of the bald eagle, the PBES algorithm utilizes a polar coordinate representation to optimize problems in polar coordinates more effectively. The initialization stage of the PBES algorithm is modified to ensure a more uniform distribution of initialized individuals, while additional parameters are introduced to enhance exploration and exploitation capabilities. The performance of the PBES algorithm is evaluated through experimentation on various benchmark functions, including polar coordinate transcendental equations, curve approximation, and robotic manipulator problems. The results demonstrate the superiority of the PBES algorithm over well-known metaheuristic algorithms in effectively addressing curve approximation problems.
A variant known as the cauchy adaptive BES algorithm (CABES) is proposed by Wang et al. (2023). CABES integrates Cauchy mutation and adaptive optimization techniques to enhance the performance of BES in mitigating local optima. In CABES, the Cauchy mutation strategy is introduced to adjust the step size during the selection stage, enabling the selection of a more favorable search range. Furthermore, in the search stage, the search position update formula is updated using an adaptive weight factor, further improving the local optimization capability of BES. To evaluate the performance of CABES, simulations are conducted using the benchmark functions from CEC2017. Comparative analysis is performed against three other algorithms, namely PSO, whale optimization algorithm (WOA), and Archimedes optimization algorithm (AOA). The experimental results demonstrate that CABES exhibits strong exploration and exploitation capabilities and competes favorably with other tested algorithms. Moreover, CABES is applied to four constrained engineering problems and a groundwater engineering model to validate its effectiveness and efficiency in practical engineering scenarios. These applications further confirm the robustness and efficacy of CABES.
In the domain of vehicular networks, the focus is on reducing computing delay and energy consumption. To address this challenge, a task offloading model that combines local vehicle computing, mobile edge computing server computing, and cloud computing is proposed by Shen et al. (2022). This model takes into account task prioritization as well as system delay and energy consumption. In their work, an improved variant of BES, named IBES, is introduced to make computational offloading decisions. The IBES algorithm incorporates several enhancements to the original BES, including tent chaotic mapping, a Levy flight mechanism, and adaptive weights. These modifications aim to increase the diversity of the initial population, enhance local search capabilities, and improve global convergence. To evaluate the proposed IBES algorithm, simulations are conducted against two other algorithms, PSO and the original BES. The results reveal that the total cost of IBES is 33.07% and 22.73% lower than that of PSO and BES, respectively, demonstrating the superior performance of IBES on this task.
Table 1 offers an overview of the referenced studies, encompassing five primary criteria: author names and references, the designation of the BES version, the method of enhancement employed, the utilization of the IEEE CEC benchmark functions, and the identification of real-world applications.
Despite the extensive development of BES variants by numerous researchers, the underlying limitations of the algorithm persist. These limitations motivate us to propose a novel variant that overcomes these challenges and enhances BES's efficacy on complex optimization problems. Our proposed variant, the multi-strategy boosted bald eagle search algorithm (MBBES), integrates multiple strategies to address the shortcomings of the original BES: two mutation operators (DE/current-to-best/1 and DE/best/1), an adaptive weight parameter \(\alpha\), and a fall stage. Firstly, the adaptive weight parameter \(\alpha\) is applied in the select space stage to increase the randomness of the algorithm and improve its global search capability. Secondly, the DE/current-to-best/1 mutation is used in the select space stage to increase the algorithm's chance of finding the global optimum. The DE/best/1 mutation is then added to the search space stage, which supports an in-depth search within the neighborhood of the current best individual. Finally, the fall stage replaces the swoop stage: as the number of iterations increases, the fall step size grows gradually, which helps the algorithm jump out of local optima. The main contributions of this study are as follows:
-
1.
MBBES introduces enhancements to the BES algorithm, including the incorporation of adaptive parameters, two DE mutation strategies, and the replacement of the swoop stage with a fall stage. These modifications aim to enhance the exploration and exploitation capabilities of the optimizer.
-
2.
The performance of MBBES is evaluated by comparing it to other advanced algorithms on the CEC2014 and CEC2017 test sets. Statistical indicators such as mean, standard deviation, convergence curve, Wilcoxon signed-rank test, and box plots are utilized to demonstrate the superior performance of MBBES.
-
3.
The practical evaluation conducted in this research demonstrates the effectiveness of MBBES in real-world scenarios. The practical applicability of MBBES is showcased in two ways:
-
(a)
By successfully solving a variety of engineering problems, including Welded Beam Design, Pressure Vessel Design, Reducer Design, Three-Bar Truss Design, and Spring Design, MBBES proves its capability to provide optimized solutions for these engineering challenges.
-
(b)
By effectively solving multi-layer perceptron (MLP) classification problems, MBBES demonstrates its ability to optimize the performance of MLP models, showcasing its applicability in this domain.
The remainder of the paper is organized as follows: Sect. 2 introduces the basic BES algorithm. Section 3 presents the improvements to the BES algorithm. Section 4 discusses the optimization performance of MBBES on two function sets, CEC2014 and CEC2017. In Sect. 5, five engineering design optimization problems are used to verify the superiority of the proposed MBBES. Section 6 covers the application of MBBES to three multi-layer perceptron (MLP) classification problems. Finally, conclusions and future research directions are given in Sect. 7.
2 The bald eagle search algorithm
In this section, the bald eagle search algorithm (BES) is described in detail. Proposed in 2019 (Alsattar et al. 2019), BES is a population-based, nature-inspired metaheuristic algorithm. It mimics the hunting strategies and intelligent social behaviours of bald eagles as they search for fish. The hunting process is divided into three stages: the select space stage, the search space stage, and the swoop stage. The mathematical models of BES are described below.
2.1 Initialization
The initialization operation determines the initial positions of the bald eagle group. These initial positions are an important factor for reaching the global optimum, as they may affect the convergence speed and accuracy of BES. In BES, the bald eagles are randomly distributed in the search space, with initial positions generated by Eq. (1):

$$X_{i,j} = lb_j + rand \times (ub_j - lb_j), \quad i = 1, \ldots, N, \; j = 1, \ldots, D \qquad (1)$$

where \(X_{i,j}\) is the j-th component of the position of the i-th individual within the defined area, \(ub_j\) and \(lb_j\) indicate the j-th upper and lower bounds of the feasible domain, N is the size of the bald eagle group, D is the dimension of the variables, and rand is a random number within [0, 1].
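Eq. (1) amounts to uniform random sampling inside the box constraints. A minimal Python sketch (the function name and the (N, D) array layout are our own choices):

```python
import numpy as np

def initialize_population(N, D, lb, ub, rng):
    """Scatter N bald eagles uniformly at random in the D-dimensional box
    [lb, ub], following Eq. (1): X[i, j] = lb[j] + rand * (ub[j] - lb[j])."""
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    # rng.random((N, D)) draws independent uniform samples in [0, 1).
    return lb + rng.random((N, D)) * (ub - lb)
```

Every generated position is guaranteed to respect the bounds, which is why no repair step is needed after initialization.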
2.2 Select space stage
During the hunting process of bald eagles, the first step is to identify the area where prey may appear. At this stage, all bald eagles randomly search the space based on the previously selected search area. In other words, the bald eagles use the location information obtained during the previous hunt to search nearby areas, where a better solution may exist. The position is calculated by Eq. (2):

$$P_{i,new} = P_{best} + \alpha \times r \times (P_{mean} - P_i) \qquad (2)$$

where \(P_{i,new}\) is the newly generated position of the i-th individual; \(P_{best}\), \(P_{mean}\), and \(P_{i}\) are the prey (best) position, the average position, and the original position of the i-th individual, respectively; \(\alpha\) is a constant in the interval [1.5, 2]; and r is a random number lying in [0, 1].
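Assuming the commonly published BES select-stage update \(P_{i,new} = P_{best} + \alpha \, r \, (P_{mean} - P_i)\), the stage can be sketched as follows (the vectorized layout and per-eagle random draw are implementation choices of ours):

```python
import numpy as np

def select_space(P, P_best, alpha, rng):
    """Select-stage update: each eagle moves around the best position,
    scaled by the gap between the population mean and itself.
    P is the (N, D) population; alpha is a constant in [1.5, 2]."""
    P_mean = P.mean(axis=0)                  # average position of the group
    r = rng.random((P.shape[0], 1))          # one random r in [0, 1] per eagle
    return P_best + alpha * r * (P_mean - P)
```

Note that when an eagle already sits at the mean position, the update collapses to the best position, which is what drives the group toward the promising area.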
2.3 Search space stage
After identifying the area where the prey may appear, the bald eagles accelerate toward their prey within the chosen search space along a spiral line. This spiral search not only improves the convergence speed of BES but also increases the likelihood of finding the global optimum. The trajectory of the spiral motion is mathematically modeled by Eqs. (3)–(6):

$$\theta(i) = a \times \pi \times rand, \quad r(i) = \theta(i) + R \times rand \qquad (3)$$

$$xr(i) = r(i) \times \sin(\theta(i)), \quad yr(i) = r(i) \times \cos(\theta(i)) \qquad (4)$$

$$x(i) = \frac{xr(i)}{\max(|xr|)}, \quad y(i) = \frac{yr(i)}{\max(|yr|)} \qquad (5)$$

$$P_{i,new} = P_i + y(i) \times (P_i - P_{i+1}) + x(i) \times (P_i - P_{mean}) \qquad (6)$$

where \(P_{i + 1}\) is the adjacent position to the i-th individual, a is a constant between [5, 10], and R takes a constant value within [0.5, 2].
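A sketch of the spiral search stage, following the commonly published BES formulation; treating the next row of the population array (via `np.roll`) as the adjacent individual \(P_{i+1}\) is an implementation assumption of ours:

```python
import numpy as np

def search_space(P, a=10, R=1.5, rng=None):
    """Spiral search-stage update (standard BES form).

    theta and rad define a spiral in polar coordinates; the normalized
    x and y components weight the moves relative to the neighbouring
    individual and the mean position."""
    rng = np.random.default_rng() if rng is None else rng
    N = P.shape[0]
    theta = a * np.pi * rng.random(N)        # polar angle per eagle
    rad = theta + R * rng.random(N)          # polar radius per eagle
    xr, yr = rad * np.sin(theta), rad * np.cos(theta)
    x = (xr / np.max(np.abs(xr)))[:, None]   # normalize to [-1, 1]
    y = (yr / np.max(np.abs(yr)))[:, None]
    P_mean = P.mean(axis=0)
    P_next = np.roll(P, -1, axis=0)          # adjacent individual P_{i+1}
    return P + y * (P - P_next) + x * (P - P_mean)
```

A fully converged population (all eagles at one point) is a fixed point of this update, since both difference terms vanish.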
2.4 Swoop stage
In the swoop stage, all bald eagles swoop down from the best position toward their target prey in the search space. A polar equation is employed to model this hunting behavior. The formulas are shown below:

$$\theta(i) = a \times \pi \times rand, \quad r(i) = \theta(i)$$

$$xr(i) = r(i) \times \sinh(\theta(i)), \quad yr(i) = r(i) \times \cosh(\theta(i))$$

$$x1(i) = \frac{xr(i)}{\max(|xr|)}, \quad y1(i) = \frac{yr(i)}{\max(|yr|)}$$

$$P_{i,new} = rand \times P_{best} + x1(i) \times (P_i - c_1 \times P_{mean}) + y1(i) \times (P_i - c_2 \times P_{best})$$

where \(c_1\) and \(c_2\) are in the range [1, 2].
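A sketch of the swoop stage, again following the commonly published BES formulation (the hyperbolic sinh/cosh trajectory and parameter defaults below are assumptions drawn from that standard form, not verified against this paper's exact equations):

```python
import numpy as np

def swoop(P, P_best, c1=1.5, c2=1.5, a=10, rng=None):
    """Swoop-stage update: eagles dive from the best position toward the
    prey along a polar trajectory built from sinh/cosh components."""
    rng = np.random.default_rng() if rng is None else rng
    N = P.shape[0]
    theta = a * np.pi * rng.random(N)        # polar angle per eagle
    rad = theta                              # radius equals the angle here
    xr, yr = rad * np.sinh(theta), rad * np.cosh(theta)
    x1 = (xr / np.max(np.abs(xr)))[:, None]  # normalized polar components
    y1 = (yr / np.max(np.abs(yr)))[:, None]
    P_mean = P.mean(axis=0)
    return (rng.random((N, 1)) * P_best
            + x1 * (P - c1 * P_mean)
            + y1 * (P - c2 * P_best))
```
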
Algorithm 1 gives the pseudo-code of BES.
3 The multi-strategy boosted BES (MBBES)
In this study, three different strategies are used to enhance the optimization ability of the basic BES: an adaptive weight parameter, two mutation operators (DE/current-to-best/1 and DE/best/1), and the fall stage. The fall stage replaces the swoop stage and is inspired by whale falls in nature (Changting et al. 2022).
3.1 Motivation
The original BES performs outstandingly in seeking capability and rapid convergence, and it has a wide range of applications in practical engineering. Our experiments show that the optimization ability of BES exceeds that of many algorithms, such as WOA (Mirjalili and Lewis 2016), SCA (Mirjalili 2016), GWO (Mirjalili et al. 2014), DO (Zhao et al. 2022), ALO (Mirjalili 2015), and MFO (Mirjalili 2015), on most test functions in CEC2014 and CEC2017. BES has the advantage of fast convergence, but it can hardly avoid falling into local optima. Its inability to efficiently escape local optima and converge accurately underlines the importance of improving BES (Gharehchopogh 2024); the key point is that BES has the potential to be promoted. Through extensive experiments we find that each of the three phases of BES plays its own role. The select stage helps the algorithm identify an area where an optimal solution is likely to occur, which corresponds to exploration: at this stage the bald eagle group searches extensively within the feasible domain. However, the randomness of the select stage is not strong enough, so the region containing the optimal solution may be missed. In the search phase, the bald eagle group hovers at a very high altitude to search for prey in the selected area, which means the exploitation phase has begun. However, hovering in the air may cause a location near the prey to be mistaken for the prey's actual location. After the group finds an area where prey may appear, the hunt enters the swoop phase, in which a bald eagle swoops down in a spiral from on high to catch its prey; obviously, the distance traveled along a spiral is longer than that along a straight line.
3.2 Adaptive parameter
A crucial aspect that significantly impacts the performance and precision of an optimization algorithm is the equilibrium between the exploration and exploitation phases, which directly affects the performance of the algorithm. As the most commonly used balancing parameter, the inertia weight has been widely applied in various optimization algorithms. In order to balance the exploration and exploitation phases, an adaptive parameter is applied as the new \(\alpha\), which is defined in the following formula.
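The exact expression appears as an equation in the original; one cosine form consistent with the behavior described next (a non-linear decrease from 2.5 at the first iteration to 1.5 at the last) would be

\(\alpha _{new} = 2 + 0.5 \cos \left( \pi \frac{t}{T} \right)\)

where t is the current iteration and T is the maximum number of iterations; this is a plausible reconstruction rather than the verbatim equation.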
The cos function is one of the most well-known mathematical functions used in MHs, such as the SCA algorithm (Mirjalili 2016). The cos function offers a high degree of flexibility and volatility, which enables the parameter \(\alpha _{new}\) to gradually decrease with the number of iterations, as shown in Fig. 1. The original \(\alpha\) is a constant value set to 2 in the basic BES, whereas \(\alpha _{new}\) decreases non-linearly from 2.5 to 1.5.
3.3 Mutation operator
Mutation operators are common improvement strategies with the advantage of exploiting the varied information obtained so far. Mutation strategies are equivalent to exchanging the current location information with that of other individuals, which can efficiently help individuals escape local optima. Many different mutation operators have been developed to meet different demands. Hu et al. (2023) used the DE/best/1/bin mutation to improve the dandelion optimizer and achieved significantly better optimization results. Zheng et al. (2022) adopted multiple mutation strategies in ESMA to increase the diversity of the basic SMA. In our proposed MBBES, two mutation operators are applied to help the original BES escape local optima.
3.3.1 DE/Current-to-best/1 mutation
DE/Current-to-best/1 mutation is an efficient mutation operator. Each individual updates its position using the distance between the best position and its current position, together with the distance between two random individuals. In the MBBES, the DE/Current-to-best/1 mutation is applied in the select space stage to help the bald eagle group select a better area. The equation is given as follows:
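In standard DE notation, and consistent with the symbol definitions that follow, the DE/Current-to-best/1 update reads

\(P_i^{new} = P_i + F_1 \left( P_{best} - P_i \right) + F_2 \left( P_a - P_b \right)\)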
\(P_i\) is the location of the current individual at this iteration. \(P_a\) and \(P_b\) are random bald eagle individuals selected from the group. The scaling parameters \(F_1\) and \(F_2\) are constants that control the relative weights of the difference terms, and generally take values between 0 and 1. In the MBBES, several common scaling-parameter values, such as 1.0, 0.9 and 0.8 (Zhang et al. 2021), were evaluated, and 0.8 was found to give the best performance; \(F_1\) and \(F_2\) are therefore both set to 0.8.
3.3.2 DE/BEST/1 mutation
DE/BEST/1 mutation is another popular mutation operator, in which individuals appear randomly around the best individual. The best position and the distance between two randomly selected individuals are considered in this method. In the MBBES, the DE/BEST/1 mutation is applied in the search stage to help locate prey precisely while covering more areas where prey may appear, and to increase the randomness of the algorithm. The formula is expressed as follows.
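In standard DE notation, the DE/BEST/1 update reads

\(P_i^{new} = P_{best} + F \left( P_a - P_b \right)\)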
where \(P_i\) is the location of the current individual at this iteration. \(P_{best}\) is the best position found so far. \(P_a\) and \(P_b\) are randomly selected bald eagle individuals. The scaling parameter F is a constant, also set to 0.8.
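As an illustration, the two mutation operators can be sketched in a few lines of NumPy (the function names and array layout are ours; the standard DE forms are assumed):

```python
import numpy as np

def de_current_to_best_1(P, best, i, F1=0.8, F2=0.8, rng=None):
    """DE/Current-to-best/1: move individual i toward the best position,
    perturbed by the difference of two random individuals."""
    if rng is None:
        rng = np.random.default_rng()
    a, b = rng.choice(len(P), size=2, replace=False)
    return P[i] + F1 * (best - P[i]) + F2 * (P[a] - P[b])

def de_best_1(P, best, F=0.8, rng=None):
    """DE/BEST/1: sample around the best position using the difference
    of two random individuals."""
    if rng is None:
        rng = np.random.default_rng()
    a, b = rng.choice(len(P), size=2, replace=False)
    return best + F * (P[a] - P[b])
```

With a population of identical individuals, the difference term vanishes, so both operators reduce to a pull toward (or a copy of) the best position, which makes their behavior easy to sanity-check.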
3.4 Fall stage
The fall stage is inspired by the whale fall in nature (Xinguang et al. 2023). Shortly after a whale's demise, the colossal creature of the ocean commences its descent, plummeting kilometers upon kilometers until it ultimately settles on the ocean floor. The fall stage has the advantages of high randomness and mobility, which is why we use it to replace the swoop stage. The mathematical model of the fall stage is shown below:
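Consistent with the whale-fall model in BWO (Changting et al. 2022) and with the variable definitions that follow, the position update takes the form

\(P_i(t+1) = r_5 P_i(t) - r_6 P_{rand}(t) + r_7 P_{step}\)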
where \(P_i(t + 1)\) represents the position of the \(i\)-th individual in the next iteration, \(P_i(t)\) represents the position of the \(i\)-th individual in the current iteration, and \(P_{rand}\) is a random individual in this iteration. \(r_5\), \(r_6\) and \(r_7\) are random numbers between 0 and 1. \(P_{step}\) is shown below:
where t is the current number of iterations, T is the maximum number of iterations, and \(C_2\) is a constant that controls the change in step size and is related to the size of the bald eagle group.
where \(W_f\) is calculated as below:
\(W_f\) decreases with the number of iterations, and \(C_2\) is proportional to \(W_f\) and the size of the bald eagle group; thus, within the upper and lower bounds, \(P_{step}\) increases with the number of iterations. The increasing step size helps the algorithm jump out of local optimal solutions as the iterations proceed, and the random numbers \(r_5\), \(r_6\) and \(r_7\) increase the search randomness.
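A minimal sketch of the fall-stage update follows (the helper name is ours; the step vector \(P_{step}\) is passed in rather than recomputed, since its schedule depends on \(C_2\) and \(W_f\) as defined above):

```python
import numpy as np

def fall_stage(P, i, p_step, rng=None):
    """Fall-stage position update: combine the current individual, a random
    individual and the step vector with random weights r5, r6, r7 in [0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    r5, r6, r7 = rng.random(3)
    j = rng.integers(len(P))   # index of the random individual P_rand
    return r5 * P[i] - r6 * P[j] + r7 * p_step
```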
The flowchart of MBBES is illustrated in Fig. 2, and Algorithm 2 gives the pseudo-code of MBBES.
3.5 Computational time complexity of proposed MBBES
Computational time complexity is crucial for optimization algorithms. There are three stages in the proposed MBBES: the select space stage, the search space stage and the fall stage. The main factors in the computational time complexity of an optimization algorithm are the population size N, the dimension of the optimization problem D, the maximum number of iterations T, and the objective function evaluation time f. In the proposed MBBES, the time complexity of the initialization process is \(O(N \times D)\). In the iterative process, the combined time complexity of the select space, search space and fall stages is \(O(N \times D \times T)\) + \(O(N \times f \times T)\). Therefore, the total time complexity of the proposed MBBES is \(O(N \times D + N \times D \times T + N \times f \times T)\), which is the same as that of the original BES.
4 Experimental results of the MBBES
In this section, the feasibility and effectiveness of the improvements applied to the basic BES are demonstrated. The optimization performance of MBBES was evaluated using two test sets, i.e., CEC2014 and CEC2017 (Liang et al. 2013; Wu et al. 2017). To illustrate the optimization ability of MBBES compared with other methods, eight well-known optimization techniques are employed for the performance comparison, including four basic algorithms, BES (Hassan et al. 2019), GWO (Mirjalili et al. 2014), BO (Das and Pratihar 2022) and DO (Zhao et al. 2022), and four modified algorithms, QRHHO (Fan et al. 2020), DETDO (Hu et al. 2023), MPSO (Bao and Mao 2009) and TACPSO (Ziyu and Dingxue 2009). The settings of each algorithm can be found in Table 2.
For a fair comparison, the population size and the number of iterations are set to 30 and 500, respectively. To obtain statistically valid results, each algorithm was run 30 times on each test function. In addition, all experiments were performed on the same computer, running Windows 11 with a 13th Gen Intel(R) Core(TM) i5-13500H @ 2.60 GHz and 16 GB RAM, and the code was run using MATLAB R2021b.
4.1 The detailed information of the CEC2014 and CEC2017 test sets
CEC2014 and CEC2017 are two commonly used sets of test functions for the performance evaluation of optimization algorithms; detailed information can also be found in the study (Abdel-Basset et al. 2023b). The primary objective of these functions is to comprehensively evaluate the convergence speed, solution accuracy, ability to escape local optima, and exploration capability of various optimization algorithms. By running optimization algorithms on these test functions, their strengths and weaknesses in solving optimization problems can be revealed.
The details of CEC2014 and CEC2017 are given in Tables 3 and 4. The CEC2014 test suite comprises 30 test functions of various types, including unimodal functions (F1–F3), simple multimodal functions (F4–F16), hybrid functions (F17–F22), and composite functions (F23–F30). The CEC2017 test suite contains 29 test functions, including unimodal functions (F1, F3), multimodal functions (F4–F10), hybrid functions (F11–F20) and composite functions (F21–F30).
4.2 Comparison between MBBES and other algorithms on CEC2014
In this section, the experimental results of MBBES and the eight other advanced algorithms on the CEC2014 test set are discussed. Numerical summaries have the advantages of accuracy, credibility, comparability and quantifiability, and can be used to describe the central tendency of the data and to compare different algorithms. The mean value, standard deviation and ranking on the 30 test functions are shown in Table 5, where the best average value among the nine compared algorithms on each test function is marked in bold for intuitive reading. From Table 5, it can be observed that the mean values of MBBES are smaller than those of BES on almost all test functions in CEC2014, especially F1, F3, F11, F17, F18, F21, F23, F28 and F29, on which the averages of MBBES are significantly lower than those of BES. Moreover, according to the Friedman rankings (Friedman 1937), the proposed MBBES obtains the first rank in CEC2014 with the smallest average rank of 1.97, which is significantly lower than those of BES and BO.
Wilcoxon signed-rank test (Corder and Foreman 2010) results are shown in Table 6. According to the 5% level of significance and the ranking results, MBBES is better than the other algorithms on most of the test functions. The comparison results for BES, BO, DO and GWO are 15/14/1, 16/8/6, 21/8/1 and 22/8/0, respectively; for QRHHO, DETDO, MPSO and TACPSO, the results are 29/1/0, 26/5/2, 20/10/0 and 12/16/2.
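For reference, both statistical tests are available in SciPy; the sketch below uses made-up mean errors for three algorithms on five functions (the paper's actual per-function data are in Tables 5 and 6):

```python
import numpy as np
from scipy import stats

# Hypothetical mean errors of three algorithms on five test functions.
mbbes = [1.2, 0.8, 3.1, 2.0, 0.9]
bes   = [1.9, 1.1, 3.5, 2.6, 1.4]
bo    = [2.2, 1.0, 4.0, 2.4, 1.8]

# Friedman test: ranks the algorithms across functions.
stat, p = stats.friedmanchisquare(mbbes, bes, bo)
print(f"Friedman: chi2={stat:.3f}, p={p:.3f}")

# Pairwise Wilcoxon signed-rank test at the 5% significance level.
w, p_w = stats.wilcoxon(mbbes, bes)
print(f"Wilcoxon MBBES vs BES: p={p_w:.3f}")
```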
Figure 3 shows the convergence curves of MBBES and the compared algorithms. These curves clearly show that MBBES descends faster than BES and the other prominent algorithms used for comparison. In certain iteration intervals, the convergence curves of MBBES even plummet, for example on F5, F11 and F16. On simple multimodal functions, MBBES gets closer to the ideal value than BES, with the most significant improvement observed on the F11 test function. Moreover, on most of the test functions, MBBES converges faster than the other algorithms in the first 50 iterations.
Boxplots can visually show the dispersion characteristics of data. The bottom and top of the box represent the lower and upper quartiles, respectively, and the median is located in the center of the box. The length of the box reflects the range of most of the data, and the two whiskers extend to the minimum and maximum values of the data, respectively. The boxplots of the tested algorithms on CEC2014 are presented in Fig. 4. It can be seen that MBBES has narrower distribution ranges or smaller minimum values than the other comparison algorithms on most functions. In addition, significant improvements can be observed on functions F16, F22, F25, F27 and F28, indicating the effectiveness of the improvement strategies applied to the basic BES.
4.3 Ablation test on some functions from CEC2014
To demonstrate the effectiveness of the MBBES approach, we conducted an ablation study by testing the BES algorithm with each of our proposed strategies independently. Table 7 presents the results of BES combined with each strategy separately. BES1 refers to BES with the adaptive weight parameter, BES2 represents BES enhanced with the two mutation operators (DE/Current-to-best/1 and DE/BEST/1), and BES3 is BES with the swoop stage replaced by the fall stage. We evaluated these variants on a selection of functions from different categories. Specifically, F1 and F3 were chosen from the unimodal functions; F4, F6 and F9 from the multimodal functions; F13, F15 and F18 from the hybrid functions; and F21, F22 and F23 from the composition functions. Table 7 provides a detailed comparison of the results, showcasing the contribution of each strategy to the overall performance of the BES algorithm.
4.4 Comparison between MBBES and other algorithms on CEC2017
To further validate the superiority and competitiveness of MBBES, the 29 challenging test functions of CEC2017 are selected for the optimization performance evaluation. Table 8 gives the mean value, standard deviation and ranking, with the best results marked in bold. It is observed that MBBES obtains the best optimal solutions on most of the test functions, including F1, F4, F6, F8, F10, F12, F13, F16, F18, F23, F25, F27 and F29, and also performs well on the other functions. Moreover, MBBES obtains better or equivalent solutions than the basic BES on CEC2017, with the sole exception of test function F24. In addition, the Friedman rankings show that the proposed MBBES obtains the first rank in CEC2017 with the smallest average rank of 1.86.
The Wilcoxon signed-rank test results on CEC2017 are given in Table 9. It can be observed that MBBES differs significantly from the other algorithms on most of the test functions. The comparison results for BES, BO, DO and GWO are 15/14/0, 16/12/1, 21/8/0 and 23/6/0, respectively; for QRHHO, DETDO, MPSO and TACPSO, the results are 29/0/0, 26/6/1, 21/8/0 and 14/15/0.
Figures 5 and 6 show the convergence curves and boxplots for CEC2017. In Fig. 5, MBBES finds the best solutions within fewer iterations than the other comparative algorithms. From Fig. 6, it can be seen that MBBES has the most compact boxes, indicating that it is the most stable among all these algorithms.
5 Application of MBBES on real-world engineering issues
Engineering design is a process that aims to fulfill the requirements involved in the construction of a product. The number of design variables can be very large, and their influence on the objective function can be extremely complex and nonlinear. In this section, five real-world engineering issues are employed to verify the ability of MBBES: the welded beam, pressure vessel, speed reducer, three-bar truss and tubular column design problems. The population size and maximum number of iterations are set to 30 and 500, respectively.
5.1 Welded beam design issue
The welded beam design problem seeks the optimal variables that minimize the fabrication cost of the welded beam. There are seven constraints and four variables in this problem, as shown in Fig. 7. The mathematical model is expressed in Eq. (18).
Table 10 gives the optimal solutions of MBBES and the other algorithms for the welded beam design problem. MBBES outperforms the other algorithms and achieves the lowest cost of 1.6952, showing its merits in dealing with this problem.
5.2 Pressure vessel design issue
The pressure vessel design issue is one of the most common engineering design problems, with the objective of minimizing the manufacturing cost of a cylindrical pressure vessel while meeting the required pressure standards. As shown in Fig. 8, there are four key structural parameters: shell thickness (\(T_s\)), head thickness (\(T_h\)), inner radius (R), and length of the headless cylindrical section (L). The mathematical formulas are shown in Eq. (19).
Table 11 lists the results of the pressure vessel design issue. MBBES obtains the lowest optimal value of 5734.9131, which reveals that MBBES is superior to the other competing algorithms.
5.3 Speed reducer design issue
The target of the speed reducer design issue is to obtain the lowest weight of the mechanical device under eleven constraints. As shown in Fig. 9, seven variables need to be optimized. The mathematical model is given in Eq. (20).
As shown in Table 12, MBBES obtains the minimum weight of 2995.4373. According to the results, it can be seen that MBBES has good ability to tackle this problem.
5.4 Three-bar truss design issue
The goal of the three-bar truss design issue is to minimize its weight (Sadollah et al. 2013), as shown in Fig. 10. Two variables with three constraints need to be optimized. The detailed formulas of this problem are given in Eq. (21).
Table 13 shows the optimal results obtained by MBBES and other algorithms. According to the results, MBBES obtains the minimum weight with 263.8523, indicating that MBBES is superior to other comparison algorithms.
5.5 Tubular column design issue
The tubular column design problem requires designing a uniform column that withstands a compressive load while minimizing both material and construction costs (Hu et al. 2023), as shown in Fig. 11. There are two variables in this problem: the average diameter of the column d and the thickness of the tube T. The model is given in Eq. (22).
As shown in Table 14, the cost obtained by MBBES is the lowest, which is 26.5313. Therefore, MBBES has better optimization performance for solving this problem.
6 MLP classification problems
Multi-layer perceptron (MLP) is a particular type of feedforward neural network that contains only one hidden layer (Bansal et al. 2019). The core idea of MLP is to learn different complex patterns by adjusting the number of neurons in the hidden layer and the way they are connected. Neurons in the hidden layer perform nonlinear transformations of the input data through activation functions, thereby increasing the representational power of the model. MLP is a supervised learning model that is typically trained using the backpropagation algorithm. Through continuous iterative training, MLP can automatically learn the complex relationships between input features and make predictions on new data.
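As a concrete illustration of the MLP just described (independent of MBBES; the layer size, seed and hyper-parameters are our choices), a tiny NumPy network trained by backpropagation can learn the XOR mapping:

```python
import numpy as np

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # one hidden layer of 8 neurons
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                     # hidden activations
    out = sigmoid(h @ W2 + b2)                   # network output
    # Backpropagate the squared-error gradient through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
```

With this seed and enough iterations the network separates the four XOR patterns, mirroring the 100% classification rate reported for MLP_XOR in Table 15 (where the weights are instead optimized by the metaheuristics).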
To examine the broad advantages of MBBES, in this section we present results for three MLP classification problems: the MLP_XOR, MLP_Iris and MLP_Cancer dataset problems.
6.1 MLP_XOR dataset
The MBBES and the comparison algorithms are used to solve the MLP_XOR classification problem. The results are shown in Table 15, which clearly shows that MBBES achieves the best classification rate, classifying 100% of the samples correctly. Figure 12 shows the convergence curves of these algorithms on the MLP_XOR classification problem; MBBES reaches the lowest fitness among all algorithms. From Fig. 13, we can also see that MBBES has very good robustness when solving this classification problem.
6.2 MLP_Iris dataset
As shown in Table 16, the proposed MBBES achieves a 100% classification rate. In terms of the best and average values, MBBES obtains the first rank with 1.16E−03 and 8.65E−03, respectively. As shown in Fig. 14, MBBES exhibits excellent optimization ability in finding the global solution. Figure 15 illustrates the good stability of MBBES when solving the MLP_Iris classification problem.
6.3 MLP_Cancer dataset
Table 17 presents the outcomes of MBBES and the compared algorithms on the MLP_Cancer classification problem. It can be seen that MBBES achieves the best classification rate of 99%, which is also attained by TACPSO. It is also noted that MBBES has the lowest mean value. Figure 16 shows the optimization process of these methods; MBBES performs the best, with the lowest final fitness value. Figure 17 shows the optimization stability of these algorithms, and it can be observed that the MBBES algorithm also has merits in robustness when solving this problem.
7 Conclusions and future outlook
BES has been utilized to solve complex optimization problems across a diverse range of fields. However, the basic BES still risks getting stuck in local optimal stagnation and failing to attain a satisfactory global optimal solution. To handle this problem, we proposed a novel multi-strategy boosted BES (MBBES), which includes a modified parameter \(\alpha\), two mutation operators and a fall stage strategy. The modified adaptive parameter \(\alpha\) strikes a good balance between the exploration and exploitation stages of the algorithm. The two mutation operators cooperate well, helping the algorithm escape from local optima as well as enhancing its convergence. Moreover, a fall stage is introduced in place of the swoop stage, which accelerates the convergence speed of the basic BES. The performance of the proposed MBBES is demonstrated using two sets of test functions, CEC2014 and CEC2017. The numerical results, convergence curves, boxplots, Friedman ranking test and Wilcoxon signed-rank test show that the proposed MBBES is better than the other compared algorithms, including BO, DO, GWO, QRHHO, DETDO, MPSO and TACPSO. Moreover, MBBES has also been employed to tackle five realistic constrained issues, namely the welded beam design, pressure vessel design, speed reducer design, three-bar truss design and tubular column design, as well as three MLP classification problems, namely MLP_XOR, MLP_Iris and MLP_Cancer. According to the optimization results, the proposed MBBES demonstrates superior performance in solving these optimization problems in practice.
However, it should also be noted that the proposed MBBES does not perform well on some test functions in CEC2014 and CEC2017, such as F29 in CEC2014 and F30 in CEC2017.
In the future, MBBES can be further strengthened by introducing more effective strategies to avoid falling into local optima and to get closer to the theoretical results. It can also be compared with recent algorithms such as GAEFA-HK (Chauhan et al. 2024), iAEFA (Chauhan and Yadav 2023) and others. Meanwhile, MBBES can be applied to other real-world complex optimization problems, and a parallel computing algorithm that leverages MBBES and parallel theory can be developed to tackle intricate optimization tasks.
Data availability
Data are available on request.
References
Abdel-Basset M, Mohamed R, Jameel M, Abouhawwash M (2023a) Nutcracker optimizer: a novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl-Based Syst 262:110248
Abdel-Basset M, Mohamed R, Azeem S, Jameel M, Abouhawwash M (2023b) Kepler optimization algorithm: a new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl-Based Syst 110454
Abdollahzadeh B, Gharehchopogh FS, Mirjalili S (2021) African vultures optimization algorithm: a new nature-inspired metaheuristic algorithm for global optimization problems. Comput Ind Eng 158:107408
Abualigah L, Diabat A, Mirjalili S, Elaziz MA, Gandomi AH (2021) The arithmetic optimization algorithm. Comput Methods Appl Mech Eng 376:113609
Alatas B (2011) ACROA: artificial chemical reaction optimization algorithm for global optimization. Expert Syst Appl 38(10):13170–13180
Alsaidan I, Shaheen MA, Hasanien HM, Alaraj M, Alnafisah AS (2022) A PEMFC model optimization using the enhanced bald eagle algorithm. Ain Shams Eng J 13(6):101749
Amoretti M (2014) Evolutionary strategies for ultra-large-scale autonomic systems. Inf Sci 274:1–16
Bahaddad AA, Almarhabi KA, Abdel-Khalek S (2023) Image steganography technique based on bald eagle search optimal pixel selection with chaotic encryption. Alex Eng J 75:41–54
Bansal P, Gupta S, Kumar S, Sharma S, Sharma S (2019) MLP-LOA: a metaheuristic approach to design an optimal multilayer perceptron. Soft Comput 23:12331–12345
Bao G, Mao K (2009) Particle swarm optimization algorithm with asymmetric time varying acceleration coefficients. In: IEEE international conference on robotics and biomimetics (ROBIO). IEEE, pp 2134–2139
Braik M, Sheta A, Al-Hiary H, Aljahdali S (2023) Enhanced cuckoo search algorithm for industrial winding process modeling. J Intell Manuf 34(4):1911–1940
Changting Z, Gang L, Zeng M (2022) Beluga whale optimization: a novel nature-inspired metaheuristic algorithm. Knowl-Based Syst 251(5):109215
Chauhan D, Yadav A (2023) An adaptive artificial electric field algorithm for continuous optimization problems. Expert Syst 40(9):e13380
Chauhan D, Yadav A, Neri F (2024) A multi-agent optimization algorithm and its application to training multilayer perceptron models. Evol Syst 15(3):849–879
Chen Y, Wu W, Jiang P, Wan C (2023a) An improved bald eagle search algorithm for global path planning of unmanned vessel in complicated waterways. J Mar Sci Eng 11(1):118
Chen Y, Wang P, Ling J, Wu Z, Ding L (2023b) Research on face recognition based on grey wolf algorithm optimization. In: 2023 4th International conference on computer vision, image and deep learning (CVIDL). IEEE, pp 329–333
Chhabra A, Hussien AG, Hashim FA (2023) Improved bald eagle search algorithm for global optimization and feature selection. Alex Eng J 68:141–180
Chou J-S, Nguyen N-M (2020) FBI inspired meta-optimization. Appl Soft Comput 93:106339
Corder GW, Foreman DI (2010) Nonparametric statistics for non-statisticians: a step-by-step approach. Int Stat Rev 78(3):451–452
Das AK, Pratihar DK (2022) Bonobo optimizer (BO): an intelligent heuristic with self-adjusting parameters over continuous spaces and its applications to engineering problems. Appl Intell 52(3):2942–2974
El Marghichi M, Loulijat A, Dangoury S, Chojaa H, Abdelaziz AY, Mossa MA, Hong J, Geem ZW (2023) Enhancing battery capacity estimation accuracy using the bald eagle search algorithm. Energy Rep 10:2710–2724
Erol OK, Eksin I (2006) A new optimization method: big bang-big crunch. Adv Eng Softw 37(2):106–111
Fan Q, Chen Z, Xia Z (2020) A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems. Soft Comput 24:14825–14843
Ferahtia S, Rezk H, Djerioui A, Houari A, Motahhir S, Zeghlache S (2023) Modified bald eagle search algorithm for lithium-ion battery model parameters extraction. ISA Trans 134:357–379
Formato R (2007) Central force optimization: a new metaheuristic with applications in applied electromagnetics. Progr Electromagn Res 77:425–491
Friedman M (1937) The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J Am Stat Assoc 32:675–701
Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Simulation 76(2):60–68
Gharehchopogh FS (2024) An improved boosting bald eagle search algorithm with improved African vultures optimization algorithm for data clustering. Ann Data Sci 1–33
Hamad QS, Samma H, Suandi SA, Mohamad-Saleh J (2022) Q-learning embedded sine cosine algorithm (QLESCA). Expert Syst Appl 193:116417
Hashim FA, Hussien AG (2022) Snake optimizer: a novel meta-heuristic optimization algorithm. Knowl-Based Syst 242:108320
Hashim FA, Mostafa RR, Khurma RA, Qaddoura R, Castillo PA (2024) A new approach for solving global optimization and engineering problems based on modified sea horse optimizer. J Comput Design Eng 11(1):73–98
Hassan AZ, Alsattar A, Zaidan B (2019) Novel meta-heuristic bald eagle search optimisation algorithm. Artif Intell Rev 53:2237–2264
Holland JH (1992) Genetic algorithms. Sci Am 267(1):66–73
Hu G, Zheng Y, Abualigah L, Hussien AG (2023a) DETDO: an adaptive hybrid dandelion optimizer for engineering optimization. Adv Eng Inform 57:102004
Hu G, Zhong JM, Wei G, Chang C-T (2023b) DTCSMO: an efficient hybrid starling murmuration optimizer for engineering applications. Comput Methods Appl Mech Eng 405:115878
Hussain SF, Pervez A, Hussain M (2020) Co-clustering optimization using artificial bee colony (ABC) algorithm. Appl Soft Comput 97:106725
Jiang J, Yang X, Li M, Chen T (2023) ATSA: an adaptive tree seed algorithm based on double-layer framework with tree migration and seed intelligent generation. Knowl-Based Syst 279:110940
Karaboga D, Akay B (2009) A comparative study of artificial bee colony algorithm. Appl Math Comput 214(1):108–132
Kaveh A, Talatahari S (2010) A novel heuristic optimization method: charged system search. Acta Mech 213(3):267–289
Kirkpatrick S, Gelatt Jr CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680
Li K, Huang H, Fu S, Ma C, Fan Q, Zhu Y (2023) A multi-strategy enhanced northern goshawk optimization algorithm for global optimization and engineering design problems. Comput Methods Appl Mech Eng 415:116199
Liang JJ, Qu BY, Suganthan PN (2013) Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization, vol 635, no 2. Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou China and Technical Report, Nanyang Technological University. Singapore
Liu Y, Li G, Jiang D, Yun J, Huang L, Xie Y, Jiang G, Kong J, Tao B, Zou C et al (2023) Dynamic ensemble multi-strategy based bald eagle search optimization algorithm: a controller parameters tuning approach. Appl Soft Comput 148:110881
Mirjalili S (2015a) The ant lion optimizer. Adv Eng Softw 83:80–98
Mirjalili S (2015b) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl-Based Syst 89:228–249
Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
Moghdani R, Salimifard K (2018) Volleyball premier league algorithm. Appl Soft Comput 64:161–185
Morales-Castañeda B, Zaldivar D, Cuevas E, Fausto F, Rodríguez A (2020) A better balance in metaheuristic algorithms: does it exist? Swarm Evol Comput 54:100671
Nirmal A, Jayaswal D, Kachare PH (2024) A hybrid bald eagle-crow search algorithm for gaussian mixture model optimisation in the speaker verification framework. Decis Analyt J 10:100385
Rakhshani H, Rahati A (2017) Snap-drift cuckoo search: a novel cuckoo search optimization algorithm. Appl Soft Comput 52:771–794
Ramadan A, Kamel S, Hassan MH, Khurshaid T, Rahmann C (2021) An improved bald eagle search algorithm for parameter estimation of different photovoltaic models. Processes 9(7):1127
Rao RV, Savsani VJ, Vakharia DP (2011) Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 43(3):303–315
Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248
Sadollah A, Bahreininejad A, Eskandar H, Hamdi M (2013) Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems. Appl Soft Comput 13(5):2592–2612. https://doi.org/10.1016/j.asoc.2012.11.026
Sahoo SK, Saha AK (2022) A hybrid moth flame optimization algorithm for global optimization. J Bionic Eng 19(5):1522–1543
Sahoo SK, Saha AK, Sharma S, Mirjalili S, Chakraborty S (2022) An enhanced moth flame optimization with mutualism scheme for function optimization. Soft Comput 1–28
Sahoo SK, Houssein EH, Premkumar M, Saha AK, Emam MM (2023a) Self-adaptive moth flame optimizer combined with crossover operator and Fibonacci search strategy for COVID-19 CT image segmentation. Expert Syst Appl 227:120367
Sahoo SK, Saha AK, Nama S, Masdari M (2023b) An improved moth flame optimization algorithm based on modified dynamic opposite learning strategy. Artif Intell Rev 56(4):2811–2869
Sahoo SK, Sharma S, Saha AK (2023c) A novel variant of moth flame optimizer for higher dimensional optimization problems. J Bionic Eng 20(5):2389–2415
Sahoo SK, Saha AK, Houssein EH, Premkumar, M Reang S, Emam MM (2024a) An arithmetic and geometric mean-based multi-objective moth-flame optimization algorithm. Clust Comput 1–35
Sahoo SK, Premkumar M, Saha AK, Houssein EH, Wanjari S, Emam MM (2024b) Multi-objective quasi-reflection learning and weight strategy-based moth flame optimization algorithm. Neural Comput Appl 36(8):4229–4261
Sahoo SK, Reang S, Saha AK, Chakraborty S (2024c) F-WOA: an improved whale optimization algorithm based on Fibonacci search principle for global optimization. Handbook of whale optimization algorithm. Elsevier, New York, pp 217–233
Sharma SR, Kaur M, Singh B (2023) A self-adaptive bald eagle search optimization algorithm with dynamic opposition-based learning for global optimization problems. Expert Syst 40(2):e13170
Shen X, Chang Z, Xie X, Niu S (2022) Task offloading strategy of vehicular networks based on improved bald eagle search optimization algorithm. Appl Sci 12(18):9308
Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713
Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11:341–359
Sulaiman MH, Mustaffa Z, Saari MM, Daniyal H (2020) Barnacles mating optimizer: a new bio-inspired algorithm for solving engineering optimization problems. Eng Appl Artif Intell 87:103330
Swain N, Sinha N, Behera S (2023) Stabilized frequency response of a microgrid using a two-degree-of-freedom controller with African vultures optimization algorithm. ISA Trans 140:412–425
Tharwat A, Hassanien AE (2018) Chaotic antlion algorithm for parameter optimization of support vector machine. Appl Intell 48:670–686
Wang R-B, Wang W-F, Xu L, Pan J-S, Chu S-C (2021) An adaptive parallel arithmetic optimization algorithm for robot path planning. J Adv Transp 2021:1–22
Wang W, Tian W, Chau K, Xue Y, Xu L, Zang H (2023) An improved bald eagle search algorithm with Cauchy mutation and adaptive weight factor for engineering optimization. Comput Model Eng Sci 136(2):1603–1642
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
Wu G, Mallipeddi R, Suganthan PN (2017) Problem definitions and evaluation criteria for the CEC 2017 competition on constrained real-parameter optimization. National University of Defense Technology, Changsha, Hunan, PR China and Kyungpook National University, Daegu, South Korea and Nanyang Technological University, Singapore, Technical Report
Xinguang Y, Gang H, Jingyu Z, Guo W (2023) HBWO-JS: jellyfish search boosted hybrid beluga whale optimization algorithm for engineering applications. J Comput Design Eng 10:1615–1656
Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102
Yıldız BS, Kumar S, Panagant N, Mehta P, Sait SM, Yildiz AR, Pholdee N, Bureerat S, Mirjalili S (2023) A novel hybrid arithmetic optimization algorithm for solving constrained optimization problems. Knowl-Based Syst 271:110554
Yu X, Li J, Kang F (2023) A hybrid model of bald eagle search and relevance vector machine for dam safety monitoring using long-term temperature. Adv Eng Inform 55:101863
Zhang H, Wang Z, Chen W, Heidari AA, Wang M, Zhao X, Liang G, Chen H, Zhang X (2021) Ensemble mutation-driven salp swarm algorithm with restart mechanism: framework and fundamental analysis. Expert Syst Appl 165:113897
Zhang Z-L, Zhang H-J, Xie B, Zhang X-T (2022a) Energy scheduling optimization of the integrated energy system with ground source heat pumps. J Clean Prod 365:132758
Zhang Y, Zhou Y, Zhou G, Luo Q, Zhu B (2022b) A curve approximation approach using bio-inspired polar coordinate bald eagle search algorithm. Int J Comput Intell Syst 15(1):30
Zhao S, Zhang T, Ma S, Chen M (2022) Dandelion optimizer: a nature-inspired metaheuristic algorithm for engineering applications. Eng Appl Artif Intell 114:105075
Zheng R, Jia H, Wang S, Liu Q (2022) Enhanced slime mould algorithm with multiple mutation strategy and restart mechanism for global optimization. J Intell Fuzzy Syst 42(6):5069–5083
Ziyu T, Dingxue Z (2009) A modified particle swarm optimization with an adaptive acceleration coefficients. In: 2009 Asia-Pacific conference on information processing, vol 2. IEEE, pp 330–332
Acknowledgements
The authors would like to thank the Engineering Research Center of Big Data Application in Private Health Medicine (Putian University), Fujian Province University, Putian, Fujian 351100, China, and the Putian Science and Technology Plan Project (Putian Electronic Information Industry Research Institute), 2023GJGZ003, for their support.
Funding
This work was supported by the Startup Fund for Advanced Talents of Putian University (2023013), the Fujian Provincial Natural Science Foundation of China (2022J011179, 2023J011015), the Sanming University Breeding Project for the National Natural Science Foundation of China (PYT2103), and the Sanming University Scientific Research Startup Funding Project for Introduced High-Level Talents (21YG01).
Ethics declarations
Conflict of interest
The authors have no conflict of interest.
Ethical approval
(1) This material is the authors’ own original work, which has not been previously published elsewhere. (2) The paper is not currently being considered for publication elsewhere. (3) The paper reflects the authors’ own research and analysis in a truthful and complete manner.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Zheng, R., Li, R., Hussien, A.G. et al. A multi-strategy boosted bald eagle search algorithm for global optimization and constrained engineering problems: case study on MLP classification problems. Artif Intell Rev 58, 18 (2025). https://doi.org/10.1007/s10462-024-10957-2