1 Introduction

The classification of meta-heuristic algorithms is typically based on their inspiration sources [1,2,3,4,5]. In this paper, we categorize meta-heuristic algorithms into two groups based on the number of offspring individuals [6]: single-solution-based optimization algorithms and population-based optimization algorithms. Single-solution-based optimization algorithms require only one individual to search for a solution, such as large neighborhood search, tabu search and variable neighborhood search. On the other hand, population-based optimization algorithms involve multiple individuals searching for the optimal solution in the global space through operators, such as crossover, mutation, and selection. Population-based algorithms can be further divided into swarm intelligence (SI) [7, 8], evolutionary algorithms (EA) [9, 10], and algorithms based on physics or chemistry [11]. Swarm intelligence includes both human-related algorithms and non-human algorithms.

Table 1 Algorithms inspired by water

We primarily focus on population-based optimization algorithms. To find the global optimal solution through cooperation among multiple individuals, exploration and exploitation of algorithms must be balanced [23, 24]. Exploration involves searching for multiple valuable solutions by distributing different individuals across various locations within the entire search space. On the other hand, exploitation entails individuals seeking better solutions in the vicinity of valuable solutions. Exploration increases the diversity of the population, avoiding the algorithm falling into local optima, while exploitation enables the algorithm to converge quickly towards the optimal solution. Achieving a balance between exploration and exploitation enables the algorithm to find the global optimal solution and accelerate convergence speed.

Water is abundant in nature, and many meta-heuristic algorithms are inspired by it, as presented in Table 1. The water flow optimizer (WFO) is a novel global optimization algorithm inspired by the shapes of water flow in nature. In [22], WFO was proposed for the first time and successfully applied to spacecraft trajectory optimization. In [25], a binary version of WFO was developed to solve the feature selection problem, and experimental results demonstrated the effectiveness of the binary water flow optimizer (BWFO) on this problem. In [26], strategies such as the fusion of a Halton sequence and Cauchy mutation were introduced to improve the convergence speed and convergence ability of WFO. WFO is a recently proposed algorithm, and few improved versions of it exist so far. Given its strong search performance, more WFO variants are likely to emerge in the future.

In recent years, fractional-order calculus has garnered significant attention from researchers. In [27], the fractional-order genetic algorithm (FOGA) achieved higher efficiency and accuracy than the genetic algorithm (GA), a random algorithm (RA), and particle swarm optimization (PSO) in parameter optimization of ecological systems. In [28], a fractional controller was added to augmented Lagrangian particle swarm optimization (ALPSO) to improve convergence speed in the optimization of fixed-structure controllers; simulation results showed that augmented Lagrangian particle swarm optimization with fractional-order velocity (ALPSOFV) achieved good results. In [29], fractional-order Darwinian particle swarm optimization (FO-DPSO) was applied to the estimation of plane wave parameters, and the experimental results closely matched the expected values, verifying the accuracy of this scheme. In [30], FO-DPSO was used to optimize line loss minimization and voltage deviation problems, and the results outperformed state-of-the-art algorithms. In [31], fractional calculus was added to the bidirectional least mean square algorithm to form the fractional-order bidirectional least mean square algorithm (FOBLMS), which was applied to global positioning system (GPS) receivers, and the performance of FOBLMS was verified against existing beamforming algorithms. In [32], fractional-order particle swarm optimization (FOPSO) was used to optimize the multi-objective core reload pattern and was found to be robust and efficient. In [33], combining the autoregressive fractionally integrated moving average (ARFIMA) model with long short-term memory (LSTM) networks yielded better prediction accuracy in stock market forecasting. In [34], fractional chaotic maps were added to the enhanced whale optimization algorithm (EWOA), which improved its accuracy in the parameter identification of isolated wind-diesel power systems.
In [35], fractional calculus (FC) was introduced into particle swarm optimization (PSO) to improve convergence speed and enhance the memory effect; the resulting fractional particle swarm optimization gravitational search algorithm (FPSOGSA) was applied to the optimal reactive power dispatch (ORPD) problem and obtained the best results compared with state-of-the-art counterparts. In [36], complex-order particle swarm optimization (CoPSO) was proposed by introducing conjugate-order derivatives into PSO; experimental results showed that CoPSO outperforms fractional-order particle swarm optimization (FOPSO). In [37], the velocity of the bat algorithm (BA) was updated through fractional calculus to improve the algorithm's ability to jump out of local solutions. In [38], FO-DPSO based on an artificial neural network (ANN) was used to compute solutions of the corneal shape model, achieving more accurate solutions. In [39], introducing Shannon entropy into FOPSO solved the ORPD problem better. In [40], the memory of fractional calculus was used to capture the cuckoos' movement history, enabling cuckoo search (CS) to escape local minima and converge quickly to optimal solutions; the fractional-order cuckoo search (FO-CS) was applied to the parameter identification of a financial system, yielding more accurate and consistent results. In [41], the memory feature of fractional order (FO) was used to enhance the local search ability of the flower pollination algorithm (FPA), and experiments showed that the fractional-order flower pollination algorithm (FO-FPA) improves solution quality and accelerates convergence. In [42], fractional-order chaos maps were introduced into FPA, and their memory advantage and new dynamical distribution were used to adaptively adjust parameters.
After several rounds of validation, the fractional chaotic flower pollination algorithm showed the highest accuracy and convergence speed. In [43], fractional-order chaos maps were used to generate the initial population of the Harris hawks optimizer (HHO), so that the optimization converges to the global optimal solution. In [44], fractional long-term memory was used to recalculate the transition probability of the ant colony algorithm; experiments proved that the fractional-order ant colony algorithm (FACA) has better search ability. In [45], the history dependency of fractional calculus (FC) was used to improve the exploitation ability of the manta ray foraging optimizer (MRFO) and avoid falling into local optima. The superior performance of the fractional-order Caputo manta ray foraging optimizer (FCMRFO) was demonstrated on global optimization problems, constrained engineering problems, and image segmentation. In [46], the application of FO-DPSO to the identification of electrical parameters of solar photovoltaic cells achieved good results. In [47], a Caputo–Fabrizio fractional-order model was used to explore the dynamics of COVID-19 variations, and the fractional Adams–Bashforth method was used to compute the iterative solution of the model. In [48], the stability and performance of three FOPSO variants were analyzed, and they were applied to the insulin injection optimization problem. In [49], the fractional-order dragonfly algorithm (FO-DA) was used for parameter identification of solid oxide fuel cells (SOFC), obtaining better results than state-of-the-art approaches. In summary, FO possesses inherent advantages in long-term memory, non-locality, and weak singularity. It aids algorithms in improving convergence properties, enhancing memory effects, and increasing stability, reliability, and consistency.

According to the No-Free-Lunch theorem [50], no algorithm can successfully optimize all problems, and the same holds true for WFO, which motivates us to improve it. In this paper, we introduce fractional order (FO) with memory properties to enhance the performance of WFO. We also replace the inherent laminar probability with a linearly increasing probability to balance exploration and exploitation, and propose the fractional-order water flow optimizer (FOWFO). In the experimental section, the superior performance of FOWFO is verified through experimental and statistical comparisons with nine other algorithms on the IEEE CEC2017 functions. The practicality of FOWFO is demonstrated on four real-world optimization problems. Finally, the parameters, the balance between exploration and exploitation, and the algorithm complexity of FOWFO are discussed and analyzed, providing directions for further improvement and application.

The main contributions of the present study are summarized as follows: (1) Fractional-order technology can significantly improve the ability of the algorithm to optimize real-world problems with large dimensions. (2) Fractional order has little effect on algorithm complexity.

The rest of this paper is organized as follows: Sect. 2 introduces the original WFO and fractional-order calculus. Section 3 proposes FOWFO. Section 4 presents comparative experiments and statistical analysis. Section 5 discusses parameters, the balance between exploration and exploitation, and algorithm complexity. Section 6 draws conclusions and outlines future work.

2 Preliminaries

2.1 Water Flow Optimizer

The WFO algorithm consists of two operators: the laminar operator and the turbulent operator, which simulate two hydraulic phenomena of water particles flowing from highlands to lowlands [51]: regular laminar flow and irregular turbulent flow.

In WFO, there are N water particles in total, and each water particle \(X_{i}\) is expressed as: \(X_{i}=\left( x_{i}^{1}, x_{i}^{2}, \ldots , x_{i}^{d}\right)\), \(i \in \{1,2, \ldots , N\}\), where \(x_{i}^d\) represents the position of ith water particle in the dth dimension.

Fig. 1
figure 1

Flowchart of WFO

2.1.1 Laminar Operator

When the water velocity is small, the water particles move regularly in parallel, straight lines in their respective layers; this regular flow is called laminar flow. By simulating this phenomenon, the laminar operator is designed in the mathematical model of WFO.

In laminar flow, the velocity of water particles is different in different layers, and particles in layers away from walls or obstacles are faster than particles in layers close to walls or obstacles. The laminar operator is modeled by the following equations:

$$\begin{aligned} \begin{aligned} X_i(t+1)=X_i(t)+s * \vec {d}, \quad \forall i \in \{1,2, \ldots , N\} \end{aligned} \end{aligned}$$
(1)
$$\begin{aligned} \begin{aligned} \vec {d}=X_{\text {best}}(t)-X_k(t), \quad (k \ne {\text {best}}), \end{aligned} \end{aligned}$$
(2)

where \(X_i(t)\) is the position of the ith particle at the tth iteration, and \(X_i(t+1)\) is its position at the (\(t+1\))th iteration. s represents the shifting coefficient of the ith particle, which is a random number between 0 and 1. The vector \(\vec {d}\) represents the common movement direction of all particles, determined by the position of the current best particle \(X_{\text {best}}\) and the position of a randomly selected particle \(X_{k}\).

During the same iteration, \(\vec {d}\) is constant, ensuring that each particle moves in the same direction, while the shifting coefficient of each particle is generated randomly, ensuring that each particle has a different shift.
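As an illustration, the laminar operator of Eqs. (1) and (2) can be sketched in Python as follows (the function name and array layout are our own assumptions for illustration, not the reference implementation):

```python
import numpy as np

def laminar_operator(X, best_idx, rng):
    """One laminar step (Eqs. 1-2): all N particles move along a shared
    direction d, each with its own random shifting coefficient s."""
    N, _ = X.shape
    # pick a random particle k different from the current best (Eq. 2)
    k = rng.choice([i for i in range(N) if i != best_idx])
    d = X[best_idx] - X[k]      # common movement direction for all particles
    s = rng.random((N, 1))      # per-particle shifting coefficients in (0, 1)
    return X + s * d            # Eq. 1
```

Because `d` is fixed within the iteration, every row of the displacement `X_new - X` is a scalar multiple of the same direction vector, matching the description above.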

2.1.2 Turbulent Operator

When the rapid water flow hits obstacles, local oscillations and even eddy will occur. In mathematical modeling, one dimension of the problem to be solved is regarded as a layer of water flow, then the transformation between dimensions can simulate the irregular motion of particles in turbulent flow. Therefore, the moving position \(X_i(t+1)\) of the water particle in the turbulent operator is generated by oscillation in a randomly selected dimension, as:

$$\begin{aligned} \begin{aligned} X_i(t+1)=\left( \ldots , x_i^{q_{1}-1}(t), m, x_i^{q_{1}+1}(t), \ldots \right) \end{aligned} \end{aligned}$$
(3)
$$\begin{aligned} \begin{aligned} m= {\left\{ \begin{array}{ll}\psi \left( x_i^{q_{1}}(t), x_k^{q_{1}}(t)\right) , &{} \text{ if } r<p_{e} \\ \varphi \left( x_i^{q_{1}}(t), x_k^{q_{2}}(t)\right) , &{} \text{ otherwise } \end{array}\right. }, \end{aligned} \end{aligned}$$
(4)

where \(q_{1}\) and \(q_{2}\) are distinct dimensions randomly selected from the d dimensions, i.e., \(q_{1} \in \{1,2, \ldots , d\}\), \(q_{2} \in \{1,2, \ldots , d\}\) and \(q_{1} \ne q_{2}\). m is the mutation value. k is an individual randomly selected from the N particles, i.e., \(k \in \{1,2, \ldots , N\}\) and \(k \ne i\). r is a random number in [0, 1]. \(p_{e}\) represents the eddying probability in the range (0, 1). \(\psi\) is the eddying transformation, and \(\varphi\) is the over-layer moving transformation.

The eddying transformation equations are as follows:

$$\begin{aligned} \begin{aligned} \psi \left( x_i^{q_{1}}(t), x_k^{q_{1}}(t)\right) =x_i^{q_{1}}(t)+\beta * \theta * \cos (\theta ) \end{aligned} \end{aligned}$$
(5)
$$\begin{aligned} \begin{aligned} \beta =\left| x_i^{q_{1}}(t)-x_k^{q_{1}}(t)\right| , \end{aligned} \end{aligned}$$
(6)

where \(\theta\) is a random number in \([-\pi , \pi ]\), and \(\beta\) is the shear force exerted by the kth particle on the ith particle.

The over-layer moving transformation equation is

$$\begin{aligned} \begin{aligned} \varphi \left( x_i^{q_{1}}(t), x_k^{q_{2}}(t)\right) =\left( u b^{q_{1}}-l b^{q_{1}}\right) * \frac{x_k^{q_{2}}(t)-l b^{q_{2}}}{u b^{q_{2}}-l b^{q_{2}}}+l b^{q_{1}}, \end{aligned} \end{aligned}$$
(7)

where lb and ub represent the lower and upper bounds of the search space, respectively.
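Putting Eqs. (3)–(7) together, the turbulent operator can be sketched as follows (a minimal Python illustration under our own naming assumptions; only one randomly chosen dimension of particle i is mutated):

```python
import numpy as np

def turbulent_operator(X, i, lb, ub, p_e, rng):
    """One turbulent step (Eqs. 3-7) for particle i: mutate one randomly
    selected dimension via the eddying or over-layer moving transformation."""
    N, D = X.shape
    k = rng.choice([j for j in range(N) if j != i])        # k != i
    q1, q2 = rng.choice(D, size=2, replace=False)          # q1 != q2
    if rng.random() < p_e:
        # eddying transformation (Eqs. 5-6)
        theta = rng.uniform(-np.pi, np.pi)
        beta = abs(X[i, q1] - X[k, q1])                    # shear force
        m = X[i, q1] + beta * theta * np.cos(theta)
    else:
        # over-layer moving transformation (Eq. 7): rescale X[k, q2]
        # from the range of dimension q2 into the range of dimension q1
        m = (ub[q1] - lb[q1]) * (X[k, q2] - lb[q2]) / (ub[q2] - lb[q2]) + lb[q1]
    X_new = X[i].copy()
    X_new[q1] = m                                          # Eq. 3
    return X_new
```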

The flowchart of WFO is shown in Fig. 1. \(T_{max}\) is the maximum number of iterations. The laminar probability \(p_{l} \in (0, 1)\) controls whether the algorithm implements laminar operator or turbulent operator. The eddying probability \(p_{e}\) controls whether the turbulent operator performs eddying transformation or over-layer moving transformation. Finally, the global optimal solution \(X_{\text {best}}\) is output.

2.2 Fractional-order Calculus

There are several definitions of fractional-order (FO) calculus in mathematics. In this paper, we introduce the definition of Grunwald–Letnikov (GL). Its mathematical calculations are as follows [52]:

$$\begin{aligned} \begin{aligned} D^\epsilon (x(t))=\lim _{h \rightarrow 0} \frac{1}{h^\epsilon } \sum _{g=0}^{\infty }(-1)^g\left( \begin{array}{l} \epsilon \\ g \end{array}\right) x(t-g h), \end{aligned} \end{aligned}$$
(8)
$$\begin{aligned} \begin{aligned} \left( \begin{array}{l} \epsilon \\ g \end{array}\right) =\frac{\Gamma (\epsilon +1)}{\Gamma (g+1) \Gamma (\epsilon -g+1)}=\frac{\epsilon (\epsilon -1)(\epsilon -2) \ldots (\epsilon -g+1)}{g !}, \end{aligned} \end{aligned}$$
(9)

where \(D^\epsilon (x(t))\) is the GL fractional derivative of order \(\epsilon\). \(\Gamma\) represents gamma function.

In discrete-time implementation, Eq. (8) can be formulated as [53]:

$$\begin{aligned} \begin{aligned} D^\epsilon [x(t)]=\frac{1}{T^\epsilon } \sum _{g=0}^e \frac{(-1)^g \Gamma (\epsilon +1) x(t-g T)}{\Gamma (g+1) \Gamma (\epsilon -g+1)}, \end{aligned} \end{aligned}$$
(10)

where T is the sampling period and e is the number of memory terms (previous events) retained.

When the derivative order coefficient \(\epsilon\) equals 1, Eq. (10) becomes

$$\begin{aligned} \begin{aligned} D^1[x(t)]=x(t+1)-x(t), \end{aligned} \end{aligned}$$
(11)

where \(D^1[x(t)]\) is the difference between two successive events.
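The signed GL weights \((-1)^g\left(\begin{smallmatrix}\epsilon \\ g\end{smallmatrix}\right)\) appearing in Eq. (10) can be computed with the running-product form of Eq. (9), which avoids the poles of the gamma function at non-positive integers. A minimal sketch (the helper name is our own, for illustration only):

```python
def gl_coefficients(eps, e):
    """Signed Grunwald-Letnikov weights (-1)^g * C(eps, g) for g = 0..e,
    using the product form of Eq. (9): C(eps, g) = eps(eps-1)...(eps-g+1)/g!."""
    weights = [1.0]        # g = 0 term is always 1
    c = 1.0
    for g in range(1, e + 1):
        c *= (eps - (g - 1)) / g     # extend the running product / factorial
        weights.append((-1) ** g * c)
    return weights
```

For \(\epsilon = 1\) the weights reduce to \(1, -1, 0, 0, \ldots\), recovering the first-order difference of Eq. (11); for \(g \ge 1\), the negated weight \(-w_g\) is exactly the coefficient of \(X_i^{t+1-g}\) in Eqs. (16)–(18).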

3 Proposed FOWFO

WFO uses eddying transformation and over-layer moving transformation in the turbulent operator to increase population diversity and improve global exploration ability. To balance exploration and exploitation, we enhance the laminar operator of WFO based on fractional-order (FO), which improves the local exploitation ability of the algorithm. At the same time, we replace the laminar probability \(p_{l}\) in WFO with the control parameter \(\text {Coef}=t/T_{max}\) to balance the exploration and exploitation of the algorithm.

Fig. 2
figure 2

Flowchart of FOWFO

3.1 Enhancing the Laminar Operator of the Water Flow Optimizer Based on FO

Using the memory property of FO over previous events, FO is added to the laminar operator to improve solution accuracy and convergence speed by sharing information among solutions in the exploitation stage.

According to Eq. (11) in the fractional order definition, when the derivative order coefficient \(\epsilon =1\), the position update Eq. (1) of the laminar operator in WFO can be rewritten as:

$$\begin{aligned} \begin{aligned} D^1[X_i(t+1)]=X_i(t+1)-X_i(t)=s * \vec {d} \end{aligned} \end{aligned}$$
(12)

By the general GL definition, for any \(\epsilon\), Eq. (12) becomes

$$\begin{aligned} \begin{aligned} D^\epsilon [X_i(t+1)]=s * \vec {d} \end{aligned} \end{aligned}$$
(13)

Substituting Eq. (13) into Eq. (10) with \(T=1\) yields

$$\begin{aligned} \begin{aligned} D^\epsilon [X_i(t+1)]=X_i^{t+1}+\sum _{g=1}^e \frac{(-1)^g \Gamma (\epsilon +1) X_i^{t+1-g}}{\Gamma (g+1) \Gamma (\epsilon -g+1)}=s * \vec {d}. \end{aligned} \end{aligned}$$
(14)

Rearranging terms, Eq. (14) becomes

$$\begin{aligned} \begin{aligned} X_i^{t+1}=s * \vec {d}-\sum _{g=1}^e \frac{(-1)^g \Gamma (\epsilon +1) X_i^{t+1-g}}{\Gamma (g+1) \Gamma (\epsilon -g+1)}. \end{aligned} \end{aligned}$$
(15)

When we retain the first two memory terms (\(e=2\)), the position of FOWFO is updated as follows:

$$\begin{aligned} \begin{aligned} X_i^{t+1}=\frac{1}{1 !} \epsilon X_i^t+\frac{1}{2 !} \epsilon (1-\epsilon ) X_i^{t-1}+s * \vec {d}. \end{aligned} \end{aligned}$$
(16)

When we retain the first four memory terms (\(e=4\)),

$$\begin{aligned} \begin{aligned}&X_i^{t+1}=\frac{1}{1 !} \epsilon X_i^t+\frac{1}{2 !} \epsilon (1-\epsilon ) X_i^{t-1}+\frac{1}{3 !} \epsilon (1-\epsilon )(2-\epsilon ) X_i^{t-2} \\&\quad +\frac{1}{4 !} \epsilon (1-\epsilon )(2-\epsilon )(3-\epsilon ) X_i^{t-3}+s * \vec {d}. \end{aligned} \end{aligned}$$
(17)

When \(e=8\),

$$\begin{aligned} \begin{aligned}&X_i^{t+1}=\frac{1}{1 !} \epsilon X_i^t+\frac{1}{2 !} \epsilon (1-\epsilon ) X_i^{t-1}+\frac{1}{3 !} \epsilon (1-\epsilon )(2-\epsilon ) X_i^{t-2} \\&\quad +\frac{1}{4 !} \epsilon (1-\epsilon )(2-\epsilon )(3-\epsilon ) X_i^{t-3}+\\&\frac{1}{5 !} \epsilon (1-\epsilon )(2-\epsilon )(3-\epsilon )(4-\epsilon ) X_i^{t-4}+ \\&\frac{1}{6 !} \epsilon (1-\epsilon )(2-\epsilon )(3-\epsilon )(4-\epsilon )(5-\epsilon ) X_i^{t-5}+ \\&\frac{1}{7 !} \epsilon (1-\epsilon )(2-\epsilon )(3-\epsilon )(4-\epsilon )(5-\epsilon )(6-\epsilon ) X_i^{t-6}+ \\&\frac{1}{8 !} \epsilon (1-\epsilon )(2-\epsilon )(3-\epsilon )(4-\epsilon )(5-\epsilon )(6-\epsilon )(7-\epsilon ) X_i^{t-7}\\&\quad +s * \vec {d}. \end{aligned} \end{aligned}$$
(18)
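The general update of Eq. (15) can be sketched as follows, assuming `memory[0]` holds \(X_i^t\), `memory[1]` holds \(X_i^{t-1}\), and so on (the names are our own; with \(e=2\) this reproduces Eq. (16)):

```python
import numpy as np

def fractional_laminar_update(memory, s, d, eps):
    """Fractional-order position update (Eq. 15).
    memory: list of the last e positions, newest first (X^t, X^{t-1}, ...).
    s: shifting coefficient, d: common direction, eps: derivative order."""
    e = len(memory)
    X_new = s * d
    c = 1.0
    for g in range(1, e + 1):
        c *= (eps - (g - 1)) / g              # C(eps, g) running product (Eq. 9)
        X_new = X_new - (-1) ** g * c * memory[g - 1]   # subtract signed GL term
    return X_new
```

For \(\epsilon = 1\) only the \(g=1\) term survives and the update collapses to the original laminar step \(X_i^{t+1} = X_i^t + s\,\vec{d}\) of Eq. (1).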

3.2 Linear Increase of Laminar Probability (\(p_{l}\))

In WFO, the laminar probability \(p_{l}\) is a constant; we replace it with the control parameter \(\text {Coef}=t/T_{max}\). When \(rand < \text {Coef}\), FOWFO executes the laminar operator; otherwise, it executes the turbulent operator. As the number of iterations increases, \(t/T_{max}\) grows linearly from \(1/T_{max}\) to 1. At the early stage of iteration, \(\text {Coef}\) is small and the algorithm favors global exploration; at the later stage, \(\text {Coef}\) is large and the algorithm favors local exploitation. This schedule balances exploration and exploitation over the course of the run.
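The operator selection reduces to a one-line rule; a minimal sketch (the function name is our own):

```python
def choose_operator(t, T_max, rand):
    """Linearly increasing laminar probability Coef = t / T_max:
    early iterations favor the turbulent (exploratory) operator,
    later iterations favor the laminar (exploitative) operator."""
    coef = t / T_max
    return "laminar" if rand < coef else "turbulent"
```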

The flowchart of FOWFO is shown in Fig. 2, and the pseudocode is given in Algorithm 1. Due to the memory property of the fractional order, the populations of the last e iterations are recorded in memory according to the first-in-first-out rule.
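The first-in-first-out memory of the last e populations can be realized, for example, with a bounded deque (an illustrative choice, not necessarily the original implementation; the snapshot strings stand in for population matrices):

```python
from collections import deque

e = 4  # number of memory terms retained
# FIFO memory of the last e populations: appendleft() puts the newest
# population at index 0, and the oldest is discarded automatically
# once the deque is full.
memory = deque(maxlen=e)
for t in range(1, 7):
    population_snapshot = f"population_at_iter_{t}"  # placeholder for X(t)
    memory.appendleft(population_snapshot)
# memory now holds the populations of iterations 6, 5, 4, 3 (newest first),
# matching the memory[g-1] = X^{t+1-g} indexing of Eq. (15)
```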

3.3 Advantages of FOWFO

In this paper, FOWFO exhibits the following advantages:

  1. (1)

    The fractional-order water flow optimizer (FOWFO) is derived through rigorous mathematical reasoning. FOWFO possesses the fractional-order advantages of long-term memory, non-locality, and weak singularity.

  2. (2)

    In the experimental section, FOWFO demonstrates excellent performance on large-dimensional real-world problems.

Algorithm 1
figure a

FOWFO

4 Experiments

4.1 Experimental Settings

To verify the performance of the algorithm, we compared and analyzed the results of the proposed algorithm and state-of-the-art algorithms on the IEEE Congress on Evolutionary Computation 2017 (CEC2017) benchmark functions [54]. IEEE CEC2017 has 30 functions: F1–F3 are unimodal, F4–F10 are multimodal, F11–F20 are hybrid, and F21–F30 are composition functions. Because optimization results on F2 are unstable, F2 is not used as a test function.

The basic parameters of all algorithms on the IEEE CEC2017 functions are set as follows: the population size (N) is 100, the upper and lower boundaries of the search space are 100 and -100, respectively, the maximum number of function evaluations is \(10000*D\), where D is the dimension, and each algorithm runs 51 times independently on each function. All experiments are implemented in MATLAB R2021b on a PC with a 2.60 GHz Intel(R) Core(TM) i7-9750H CPU and 16 GB RAM.

Table 2 Parameter settings of FOWFO and other algorithms
Table 3 Friedman ranks of FOWFO and nine competitive algorithms on IEEE CEC2017

4.2 Performance Evaluation Criteria

To evaluate the performance of the algorithm, in this paper the experimental data and statistical data are processed according to the following criteria:

  1. (1)

    The mean and standard deviation (std) in the IEEE CEC2017 experimental data tables are calculated from optimization errors between obtained optimal values and known global optimal values. The best mean values are highlighted in \(\textbf{boldface}\).

  2. (2)

    Non-parametric statistical tests include the Wilcoxon rank-sum test [55] and the Friedman test [56]. The Wilcoxon rank-sum test uses optimization errors to detect whether there is a significant difference (\(\alpha =0.05\)) between the proposed algorithm and a compared algorithm. The symbol “\(+\)” indicates that the proposed algorithm is significantly better than its competitor, the symbol “−” denotes that it is significantly worse, and “\(\approx\)” denotes no significant difference. Additionally, “W/T/L” denotes the number of functions on which the proposed algorithm wins, ties, and loses against its competitor, respectively.

    In the Friedman test, the mean values of optimization errors are employed as test data. A smaller Friedman rank for the algorithm indicates better performance. The minimum value is highlighted in \(\textbf{boldface}\).

  3. (3)

    Box-and-whisker diagrams show the robustness and accuracy of the solutions. The lower edge, red line and upper edge of the blue box denote the first quartile, the median and the third quartile, respectively. The height of the box indicates the fluctuation of solution, and the median indicates the average level of solution. The lines above and below the blue box represent the maximum and minimum non-outliers, respectively. The red symbol “\(+\)” displays outlier.

  4. (4)

    Convergence graphs intuitively display the convergence speed and accuracy of the algorithm optimization process.

  5. (5)

    The mean, std, best and worst values obtained by the algorithms on real-world optimization problems with large dimensions are reported; smaller values are better, and the best values are highlighted in \(\textbf{boldface}\).
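As an illustration of the Friedman ranking used in criterion (2), average ranks can be computed from a (functions × algorithms) matrix of mean errors as follows (a simplified sketch that ignores ties; in the actual test, tied values would receive averaged ranks):

```python
import numpy as np

def friedman_ranks(mean_errors):
    """Average Friedman rank per algorithm from a (functions x algorithms)
    matrix of mean optimization errors; a smaller rank means better
    performance. Ties are not averaged in this simplified sketch."""
    # argsort of argsort yields, per row, the rank of each algorithm
    # (0 = smallest error); +1 shifts to 1-based ranks
    ranks = np.argsort(np.argsort(mean_errors, axis=1), axis=1) + 1
    return ranks.mean(axis=0)        # average rank over all functions
```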

4.3 Comparison for Competitive Algorithms

To verify the optimization performance of FOWFO, it was compared with WFO [22], FCMRFO [45], FOFPA [41], the spherical search algorithm (SS) [57], spherical evolution (SE) [58], the chameleon swarm algorithm (CSA) [59], an expanded particle swarm optimization (XPSO) [60], teaching-learning-based artificial bee colony (TLABC) [61] and the artificial hummingbird algorithm (AHA) [62] on the 29 IEEE CEC2017 benchmark functions with 10, 30, 50 and 100 dimensions, respectively. XPSO is an expanded variant of particle swarm optimization (PSO), and TLABC is a hybrid algorithm combining teaching-learning-based optimization (TLBO) and artificial bee colony (ABC). Their parameter settings are in Table 2. The experimental and statistical results are in Tables 3, 4, 5, 6 and 7. Note that the best results among all compared methods are shown in bold in the following tables.

The experimental and statistical results of FOWFO and the other nine algorithms on the IEEE CEC2017 benchmark functions with 10 dimensions are shown in Table 4. From the table, FOWFO has the best mean values on 10 functions and SS on 11 functions, so FOWFO ranks second by this count. However, the statistical results (W/T/L) show that FOWFO beats WFO, FCMRFO, FOFPA, SS, SE, CSA, XPSO, TLABC and AHA on 15, 22, 29, 14, 24, 24, 21, 16 and 18 functions, respectively. Therefore, FOWFO performs best on the IEEE CEC2017 functions with 10 dimensions. The results with 30 dimensions are in Table 5: FOWFO, WFO, FCMRFO, FOFPA, SS, SE, CSA, XPSO, TLABC and AHA obtain the best mean values on 15, 4, 0, 0, 4, 1, 0, 0, 5 and 0 functions, respectively, and from the W/T/L, FOWFO is better than the other algorithms on 14, 27, 29, 22, 27, 23, 22, 17 and 26 functions, respectively. These results indicate that FOWFO performs best with 30 dimensions. The results with 50 dimensions are presented in Table 6: FOWFO obtains the best mean values on 9 functions, ranking first by this count, and significantly outperforms the competitive algorithms on 8, 28, 29, 22, 25, 21, 20, 18 and 28 functions, respectively, showing that FOWFO keeps its superior performance with 50 dimensions. The results with 100 dimensions are displayed in Table 7: FOWFO, WFO, FCMRFO, FOFPA, SS, SE, CSA, XPSO, TLABC and AHA find the best mean values on 8, 6, 0, 2, 5, 1, 1, 3, 3 and 0 functions, respectively, and the W/T/L indicates that FOWFO beats the other nine algorithms on 11, 29, 28, 20, 26, 22, 22, 24 and 28 functions, respectively. This demonstrates that FOWFO maintains high performance even with 100 dimensions.

The Friedman test in Table 3 can more intuitively show that the performance of FOWFO ranks first in every dimension, indicating that FOWFO performs best on IEEE CEC2017 functions.

Box-and-whisker diagrams and convergence graphs of the optimization data obtained by FOWFO and the other nine algorithms on the IEEE CEC2017 functions with 10, 30, 50 and 100 dimensions are shown in Figs. 3, 4, 5 and 6, where the vertical axis of the convergence graphs denotes the log value of the average optimization error. In the box-and-whisker diagrams, the red median line of FOWFO is the lowest and its solution distribution is more stable, indicating that FOWFO has superior and stable performance. In the convergence graphs, the convergence curves of FOWFO are at the lowest positions in the late stage of iteration compared with those of the other algorithms, indicating that the average error of FOWFO is the smallest and that FOWFO retains strong exploration ability in the late stage of iteration, which prevents it from falling into local optima.

Table 4 Experimental and statistical results of FOWFO and nine competitive algorithms on IEEE CEC2017 benchmark functions with 10 dimensions, where FOWFO is the main algorithm in statistical results (W/T/L)
Table 5 Experimental and statistical results of FOWFO and nine competitive algorithms on IEEE CEC2017 benchmark functions with 30 dimensions, where FOWFO is the main algorithm in statistical results (W/T/L)
Table 6 Experimental and statistical results of FOWFO and nine competitive algorithms on IEEE CEC2017 benchmark functions with 50 dimensions, where FOWFO is the main algorithm in statistical results (W/T/L)
Table 7 Experimental and statistical results of FOWFO and nine competitive algorithms on IEEE CEC2017 benchmark functions with 100 dimensions, where FOWFO is the main algorithm in statistical results (W/T/L)
Fig. 3
figure 3

Box-and-whisker diagrams and convergence graphs of ten algorithms on F25 and F26 with 10 dimensions

Table 8 Experimental results of ten algorithms on HSP
Fig. 4
figure 4

Box-and-whisker diagrams and convergence graphs of ten algorithms on F21 and F23 with 30 dimensions

Table 9 Experimental results of ten algorithms on DED
Fig. 5
figure 5

Box-and-whisker diagrams and convergence graphs of ten algorithms on F7 and F12 with 50 dimensions

Table 10 Experimental results of ten algorithms on LSTPP
Fig. 6
figure 6

Box-and-whisker diagrams and convergence graphs of ten algorithms on F7 and F18 with 100 dimensions

Table 11 Experimental results of ten algorithms on ELD
Fig. 7
figure 7

Search history of individuals in FOWFO with respect to iteration

Table 12 Wilcoxon rank-sum test of FOWFO with e and \(\epsilon\) on IEEE CEC2017 functions with 30 dimensions, where \(e=12\), \(\epsilon =0.9999999\) is the main algorithm in W/T/L
Table 13 Friedman ranks of FOWFO with e and \(\epsilon\) on IEEE CEC2017 functions with 30 dimensions

4.4 Real-World Optimization Problems with Large Dimensions

Our research found that FOWFO performs well on real-world optimization problems with large dimensions. To demonstrate its practicality on such problems, FOWFO, WFO, FCMRFO, FOFPA, SS, SE, CSA, XPSO, TLABC and AHA are used to optimize the following four real-world optimization problems with large dimensions: the hydrothermal scheduling problem (HSP), the dynamic economic dispatch (DED) problem, the large-scale transmission pricing problem (LSTPP), and the static economic load dispatch (ELD) problem. The dimensions of HSP, DED, LSTPP and ELD are 96, 120, 126 and 140, respectively, and their detailed descriptions can be found in [63]. The population size (N) of all algorithms is set to 100, and each algorithm runs 51 times independently on each problem. The optimization results are shown in Tables 8, 9, 10 and 11, where Mean, Std, Best and Worst denote the mean, standard deviation, minimum and maximum values, respectively.

From Tables 8, 9, 10 and 11, the Mean, Best and Worst values obtained by FOWFO on HSP, DED, LSTPP and ELD are the smallest among the ten algorithms, and the optimization values obtained by FOWFO are significantly better than those of the other algorithms. This indicates that FOWFO has superior performance on real-world optimization problems with large dimensions and can be applied to large-dimensional practical problems in the future.

5 Discussion

5.1 Analysis for Parameters of FOWFO

Table 14 CPU running time consumed by all tested algorithms on IEEE CEC2017 functions with 10, 30, 50 and 100 dimensions
Table 15 CPU running time consumed by all tested algorithms on real-world optimization problems with large dimensions
Fig. 8 Bar graph illustrating the CPU running time consumed by all tested algorithms on IEEE CEC2017 functions with 10, 30, 50 and 100 dimensions

Fig. 9 Bar graph illustrating the CPU running time consumed by all tested algorithms on real-world optimization problems with large dimensions

The two most important parameters of the fractional order in this paper are the memory term e and the derivative order coefficient \(\epsilon\); the performance of FOWFO therefore depends on both. To analyze the sensitivity to e and \(\epsilon\), FOWFO with different values of e and \(\epsilon\) is tested on the IEEE CEC2017 functions with 30 dimensions. The experimental results of the eighteen combinations are presented in Tables 12 and 13, from which it can be seen that FOWFO with \(e=12\) and \(\epsilon =0.9999999\) performs best.
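The exact FOWFO update equations are not reproduced here, but the roles of the two parameters can be illustrated with the truncated Grünwald-Letnikov memory weights commonly used in fractional-order meta-heuristics: e sets how many past terms are retained, and the order \(\epsilon\) shapes their weights. A minimal sketch, assuming this standard weighting scheme (function names are illustrative, not from the paper):

```python
def gl_weights(epsilon: float, e: int) -> list[float]:
    """Truncated Grunwald-Letnikov memory weights for derivative order
    `epsilon`, keeping `e` memory terms:
    w_1 = epsilon, w_k = w_{k-1} * (k - 1 - epsilon) / k."""
    weights = [epsilon]
    for k in range(2, e + 1):
        weights.append(weights[-1] * (k - 1 - epsilon) / k)
    return weights


def fractional_memory_term(deltas: list[float], epsilon: float) -> float:
    """Memory-weighted sum of past displacements, newest first:
    sum_k w_k * deltas[k-1]. With epsilon close to 1 (e.g. 0.9999999),
    w_1 is close to 1 and the higher-order weights vanish, so the update
    nearly reduces to the integer-order (memoryless) case."""
    w = gl_weights(epsilon, len(deltas))
    return sum(wk * dk for wk, dk in zip(w, deltas))
```

For example, `gl_weights(0.5, 3)` yields `[0.5, 0.125, 0.0625]`, while for \(\epsilon = 1\) every weight beyond the first is zero, which is consistent with the best-performing setting \(\epsilon = 0.9999999\) keeping only a faint memory of older terms.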

5.2 Balance Between Exploration and Exploitation

To gain a more intuitive understanding of the exploration and exploitation of FOWFO, we display the distribution of the population in the solution space for the two-dimensional unimodal function (F3), multimodal function (F9), and composition function (F25) in Fig. 7. In the figure, the lines represent contour lines of fitness values, with redder lines indicating higher fitness values. The search space range for each dimension is \([-100, 100]\). The red dot signifies the current position of the individual, while the blue triangle denotes the current best individual position.

The population size is 100, and the maximum number of iterations is set to 200. In Fig. 7, at iteration \(t=1\), individuals are uniformly distributed within the solution space. For F3 and F9, as the number of iterations increases, the population gradually converges toward the optimal solution and eventually narrows down to a minimal range, demonstrating the strong exploitation ability of FOWFO. In the case of the more complex function F25, the population explores multiple valuable regions and progressively consolidates within each region. Even at the end of the iterations, the population still maintains a certain search range, indicating that FOWFO retains high exploration ability in later iterations. The search history of FOWFO individuals on different functions demonstrates the algorithm's ability to switch between exploration and exploitation on various problems, effectively achieving a balance between the two.
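The spread-then-contract behavior described above can also be quantified rather than only visualized. One simple proxy, offered here as an illustrative assumption rather than a measure used in the paper, is the mean distance of individuals to the population centroid: large values correspond to exploration, values near zero to exploitation.

```python
def population_diversity(pop: list[list[float]]) -> float:
    """Mean Euclidean distance from each individual to the population
    centroid. Tracking this value per iteration shows exploration
    (large, spread-out values) giving way to exploitation (values
    shrinking toward zero as the population converges)."""
    n, dim = len(pop), len(pop[0])
    centroid = [sum(ind[d] for ind in pop) / n for d in range(dim)]
    dists = [
        sum((ind[d] - centroid[d]) ** 2 for d in range(dim)) ** 0.5
        for ind in pop
    ]
    return sum(dists) / n
```

Logging this quantity at every iteration alongside the best fitness gives a one-dimensional summary of the behavior shown in the search-history figure.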

5.3 Algorithm Complexity

In this subsection, the central processing unit (CPU) running times consumed by all tested algorithms on the IEEE CEC2017 functions with 10, 30, 50 and 100 dimensions and on the real-world optimization problems with large dimensions are reported, where the maximum number of function evaluations is set to the same value for all algorithms on each function and problem. The CPU running times on the IEEE CEC2017 functions are shown in Table 14, from which it can be observed that WFO has the shortest CPU running time on the 10-, 30-, 50- and 100-dimensional functions. The computational time of FOWFO is higher than that of WFO, but the increase is modest. The CPU running times on the real-world optimization problems are displayed in Table 15: FCMRFO ranks first on HSP and DED, WFO has the shortest CPU running times on LSTPP and ELD, and FOWFO also has reasonable computation times on these problems. These findings indicate that FOWFO and WFO have similar, modest computational costs. The bar graphs of the CPU running times consumed by all tested algorithms are plotted in Figs. 8 and 9.
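The timing comparison above depends on granting every algorithm the same budget of function evaluations. A minimal sketch of such a harness, where `optimize(f, budget)` is a hypothetical optimizer interface standing in for any of the tested algorithms (not the paper's code):

```python
import time


def time_under_budget(optimize, objective, budget: int):
    """Time an optimizer under a fixed budget of objective evaluations.
    The wrapper counts calls and refuses any beyond `budget`, so
    wall-clock comparisons across algorithms are fair: every algorithm
    pays for exactly the same amount of objective-function work."""
    calls = 0

    def counted(x):
        nonlocal calls
        if calls >= budget:
            raise RuntimeError("evaluation budget exhausted")
        calls += 1
        return objective(x)

    start = time.perf_counter()
    best = optimize(counted, budget)
    elapsed = time.perf_counter() - start
    return best, elapsed
```

For instance, a trivial grid "optimizer" `lambda f, b: min(f(i) for i in range(b))` can be timed this way on `lambda x: (x - 3) ** 2`; any evaluation beyond the budget raises immediately, which catches algorithms that would otherwise overspend.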

The computational complexities of FOWFO and WFO are closely aligned, indicating that the fractional order adds little to the computational complexity of WFO. The low computational complexity and small computational cost of FOWFO affirm its usability and suggest its applicability to high-dimensional and engineering problems.

6 Conclusion and Future Work

In this paper, the fractional-order water flow optimizer (FOWFO) is proposed, which incorporates fractional order into WFO to enhance the performance of the algorithm and uses a linearly increasing laminar probability to balance exploration and exploitation. A comparative analysis of the experimental results of FOWFO and nine state-of-the-art algorithms on the IEEE CEC2017 functions demonstrates that the fractional order is effective in improving the performance of the algorithm. Experiments also show that FOWFO achieves good results on real-world optimization problems with large dimensions, and its low computational complexity and small computational cost enable FOWFO to be applied to high-dimensional practical problems.

For future work, there are the following suggestions: (1) The performance of FOWFO could be further enhanced, for example by introducing hydraulic drop and jump. (2) FOWFO could be applied to protein structure prediction [64, 65], solar photovoltaic parameter estimation [66, 67], dendritic neural models [68,69,70], biology [71, 72] and physics [73,74,75,76,77,78,79]. (3) Fractional order (FO) could be used to improve other meta-heuristic algorithms.