Article

Multi-Objective ABC-NM Algorithm for Multi-Dimensional Combinatorial Optimization Problem

1 Department of Computer Science and Engineering, Sri Manakula Vinayagar Engineering College, Pondicherry 605107, India
2 Department of Computer Science and Technology, Madanapalle Institute of Technology and Science, Madanapalle 517325, India
3 Department of Information Systems, College of Computer and Information Science, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
4 Department of Computer Science and Information Technology, KL Deemed to be University, Guntur District, Vaddeswaram 522302, India
5 Department of Computer Engineering, Faculty of Science and Technology, Vishwakarma University, Pune 411048, India
6 Department of Computer Science and Engineering, Manonmaniam Sundaranar University, Tirunelveli 627012, India
* Author to whom correspondence should be addressed.
Axioms 2023, 12(4), 395; https://doi.org/10.3390/axioms12040395
Submission received: 23 January 2023 / Revised: 12 April 2023 / Accepted: 15 April 2023 / Published: 19 April 2023

Abstract

This article addresses the problem of converting a single-objective combinatorial problem into a multi-objective one using the Pareto front approach. Although existing algorithms can identify the optimal solution in a multi-objective space, they fail to satisfy constraints while achieving optimal performance. To address this issue, we propose a multi-objective artificial bee colony optimization algorithm combined with a classical multi-objective technique, fitness sharing. This approach drives the convergence of the Pareto solution set towards a single optimal solution that satisfies multiple objectives. The article introduces multi-objective optimization with an example of a non-dominated sorting technique and the fitness-sharing approach. The experimentation is carried out in MATLAB 2018a. In addition, we applied the proposed algorithm to two different real-time datasets, namely the knapsack problem and the nurse scheduling problem (NSP). The outcome of the proposed MBABC-NM (multi-objective binary artificial bee colony with Nelder-Mead) algorithm is evaluated using standard performance indicators such as average distance, number of reference solutions (NRS), total number of solutions (TNS), and overall non-dominated generation volume (ONGV). The results show that MBABC-NM outperforms the compared algorithms.

1. Introduction

Multi-objective optimization is the method of finding a single optimal result that has the potential to satisfy more than one objective for a given problem. There are three possible situations in multi-objective optimization problems [1,2]:
  • Minimize all the objective functions;
  • Maximize all the objective functions;
  • Minimize some objectives and maximize the others.
A maximization objective can always be rewritten as a minimization one through the identity
max f(x) = −min(−f(x))
A multi-objective optimization problem (MOP) aims to determine a better compromise solution than any solitary individual. A decision vector x* ∈ F is Pareto optimal when there is no other decision vector x ∈ F such that f_i(x) ≤ f_i(x*) for all i = 1, 2, …, n and f_j(x) < f_j(x*) for at least one j. The vector x* is termed Pareto optimal, or globally non-dominated, because no other solution in the set can dominate it [3]. The set of solutions thus produced is said to be the Pareto-optimal set, and its image in objective space is the Pareto-optimal front. From the multi-objective set, the user can select an ideal solution [4].
The dominance relation is used to compare two individuals. A solution u is said to dominate another solution v if and only if f_i(u) ≤ f_i(v) for all i = 1, 2, …, n and f_i(u) < f_i(v) for at least one i. A solution that no other solution in the set can dominate is called globally non-dominated. Under Pareto dominance, a dominating solution must be no worse than the other in every objective and strictly better in at least one [5]. The Pareto dominance between two solutions u and v falls into exactly one of these cases:
  • The solution u dominates solution v, denoted as u ≻ v;
  • The solution u is dominated by solution v, denoted as v ≻ u;
  • Neither solution dominates the other; they are said to be non-dominated, denoted as ¬(u ≻ v) ∧ ¬(v ≻ u).
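The three dominance cases above can be checked with a small predicate. The sketch below is our own illustrative helper (the function name and the minimization convention are assumptions, not part of the paper's algorithms):

```python
def dominates(u, v):
    """Return True if objective vector u Pareto-dominates v (minimization).

    u dominates v when u is no worse than v in every objective and
    strictly better in at least one.
    """
    no_worse = all(ui <= vi for ui, vi in zip(u, v))
    strictly_better = any(ui < vi for ui, vi in zip(u, v))
    return no_worse and strictly_better
```

Two vectors are mutually non-dominated exactly when `dominates(u, v)` and `dominates(v, u)` are both false.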
Recently, several meta-heuristic algorithms have been introduced to address the multi-dimensional combinatorial optimization problem [6]. Some well-known techniques, namely the genetic algorithm [7], differential evolution [8], particle swarm optimization [9], grey wolf optimization [10], and the firefly algorithm [11], have been applied in various real-time applications, including the optimum design of a centrifugal pump [12,13], optimizing the magnification ratio of a flexible-hinge displacement amplifier [14], clustering [15], economic load dispatch [16], and job scheduling [17], to determine optimal solutions. However, these algorithms must be reinforced when applied to multi-objective problems [18]. In this work, we utilize the ABC algorithm, which is robust in mathematical analysis and provides more adequate solutions than the other algorithms. However, the ABC algorithm consumes considerable computation time due to its inefficient search direction when handling multi-objective problems. To mitigate these issues, we introduce the Nelder-Mead technique with non-dominated sorting and fitness-allotment methods to address multi-objective concerns.
The main contributions of this work are summarized below:
  • A novel algorithm, MBABC-NM, is proposed to improve the exploitation of the artificial bee colony (ABC) technique. The algorithm incorporates a modified non-dominated sorting and fitness-sharing approach to handle multi-dimensional problems efficiently.
  • The proposed MBABC-NM algorithm is tested on two different real-time datasets: the knapsack problem and the nurse scheduling problem.
  • The algorithm’s performance is compared with other state-of-the-art algorithms, like genetic algorithm, cyber swarm optimization, and particle swarm optimization.
  • The results of the experiments demonstrate that MBABC-NM outperforms the compared algorithms significantly. This result suggests that the proposed algorithm can effectively solve real-world optimization problems.
The rest of the paper is structured such that Section 2 discusses modified non-dominated sorting and fitness-sharing techniques over the multi-dimensional problem; Section 3 illustrates the detailed working process of the proposed MBABC-NM algorithm; and Section 4 presents the experimental setup for NSP and the 0-1 knapsack problem. The empirical study and the discussion of the results are shown in Section 5, while Section 6 summarizes the work and its future directions.

2. Methodology

2.1. Modified Non-Dominated Sorting

In modified non-dominated sorting, the algorithm divides the population into L fronts, 1 ≤ L ≤ N, in decreasing order of their dominance: F = {F_1, F_2, …, F_L}. The solutions within a front are mutually non-dominated, and each individual in F_l is dominated by at least one individual in the preceding front F_{l−1}. The non-dominated arrangement helps order the solutions sequentially based on dominance, as described in the relation above [19]. It improves the search capability of the multi-objective approach by introducing modified non-dominated solutions into the search space. The detailed procedure of the modified non-dominated arrangement is given in Algorithm 1, which is invoked as a function by Algorithm 3.
Algorithm 1: Non-Dominated Sort (Z)
Input: Z
For each individual a ∈ Z do
    P_a ← ∅ //*set of individuals dominated by a*//
    C_a ← 0 //*number of solutions which dominate a*//
    For each solution b ∈ Z do
        If a ≻ b then
            Add the individual b to the set of solutions dominated by a
            P_a ← P_a ∪ {b}
        Else if b ≻ a then
            Increment the domination counter of a
            C_a ← C_a + 1
        End if
    End for
    If C_a = 0 then
        Assign non-dominance rank 1 to individual a
        a_rank ← 1
        L_1 ← L_1 ∪ {a}
    End if
End for
Initialize the front counter
u ← 1
While L_u ≠ ∅ do
    K ← ∅ //*members of the next front*//
    For each solution a ∈ L_u do
        For each solution b ∈ P_a do
            Decrement the domination counter of b
            C_b ← C_b − 1
            If C_b = 0 then
                Assign rank to the individual b
                b_rank ← u + 1
                K ← K ∪ {b}
            End if
        End for
    End for
    u ← u + 1
    L_u ← K
End while
The non-dominated solutions of each front L_u are stored in L′_u
This section discusses a modified non-domination sorting process that helps improve the multi-objective algorithm’s search capability. In addition, the fitness-sharing function that aids the population in exploring diverse groups based on individual similarity is discussed in Section 2.2.
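The bookkeeping of Algorithm 1 can be sketched compactly; the following is an illustrative Python version (the function name, data layout, and minimization convention are our own assumptions, not taken from the paper):

```python
def non_dominated_sort(population, objectives):
    """Partition a population into fronts F1, F2, ... by Pareto dominance.

    `objectives` maps each individual to its objective vector (minimized).
    Mirrors Algorithm 1: P[a] holds the individuals a dominates,
    C[a] counts the individuals dominating a.
    """
    def dominates(u, v):
        return all(x <= y for x, y in zip(u, v)) and any(x < y for x, y in zip(u, v))

    P = {a: [] for a in population}   # individuals dominated by a
    C = {a: 0 for a in population}    # domination counter of a
    fronts = [[]]
    for a in population:
        for b in population:
            if dominates(objectives[a], objectives[b]):
                P[a].append(b)
            elif dominates(objectives[b], objectives[a]):
                C[a] += 1
        if C[a] == 0:
            fronts[0].append(a)       # rank 1: nothing dominates a
    u = 0
    while fronts[u]:
        K = []                        # members of the next front
        for a in fronts[u]:
            for b in P[a]:
                C[b] -= 1
                if C[b] == 0:         # b is only dominated by earlier fronts
                    K.append(b)
        u += 1
        fronts.append(K)
    return fronts[:-1]                # drop the trailing empty front
```

The run time is O(M·N²) for N individuals and M objectives, dominated by the pairwise comparison loop.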

2.2. Fitness Sharing

Fitness sharing in evolutionary computing is used for isolating the population into diverse groups based on individual similarity [1]. It transforms an individual’s fitness into the shared fitness value; usually, it is a lower value than the original. Only a limited amount of fitness value is available in each niche, and individuals in the same niche will share fitness value. The shared fitness f s h a r e d ( i ) of food particle i with fitness f i t i can be measured by
f_shared(i) = fit_i / n_i
where n_i is the niche count, i.e., the number of food particles that share fitness with particle i. The niche count can be calculated by summing the sharing function over the swarm.
n_i = Σ_{j=1}^{FP} φ(d_ij)
where FP denotes the number of food particles and d_ij is the distance between the food particles i and j. The sharing function φ quantifies the relationship between two food particles: it returns 1 if the food particles are identical and 0 if the distance d_ij is greater than the dissimilarity threshold. The sharing function can be represented as
φ(d_ij) = 1 − (d_ij / θ_r)^ρ if d_ij < θ_r; 0 otherwise
where θ_r represents the sharing radius, which defines the size of the niche and the threshold of dissimilarity. The food particles within this sharing radius are considered similar to each other and share their fitness. ρ is a constant that normalizes the shape of the sharing function. d_ij is the distance between two food particles, measured based on genotypic or phenotypic resemblance. Genotypic similarity is based on bit strings and is usually measured using the Hamming distance. Phenotypic resemblance measures the actual parameters in the search space using the Euclidean distance.
d(a, b) = [(p_a − p_b)^2 + (q_a − q_b)^2]^{1/2}
The Euclidean distance d(a, b) is the distance between the nodes a and b, where (p_a, q_a) and (p_b, q_b) are the coordinates of nodes a and b, respectively. Fitness distribution based on phenotypic resemblance provides an improved outcome compared to distribution based on genotypic similarity [20,21,22].
In our algorithm, every individual generates a new solution. If the new solution dominates the original individual, it is entered into the external archive. If neither dominates the other, one of them is chosen at random. When the number of non-dominated solutions exceeds the archive size, our proposed algorithm uses a niching technique to truncate crowded members and maintain a uniform distribution among the archive members. Maintaining diversity among archive members is a complex task; thus, our proposed algorithm incorporates a fitness-sharing technique based on the niche count to ensure the diversity of the population.
The niching method maintains diversity and permits the algorithm to examine multiple peaks in parallel. It also prevents the algorithm from becoming stuck in local optima; a niche can be viewed as a subspace of the population. For each niche in our proposed algorithm, the fitness is finite and shared among the population, which amounts to optimizing over the entire domain set. Fitness sharing transforms the raw fitness of a solution into a shared one. It helps sustain diversity among the population, and thus our algorithm explores a better search space. The proposed fitness-sharing technique is shown in Algorithm 2, which Algorithm 3 invokes as a function during execution.
Algorithm 2: Fitness Sharing (L_u)
g ← |L_u| //*number of solutions in the front counter L*//
For k ← 1 to g do
    L_u(Share_k) ← 0
    For each objective m do
        Sort the population with respect to objective m
        L_u ← sort(L_u, m)
        L_u[1] and L_u[g] are the boundary solutions of the sorted front
        For k ← 2 to g − 1 do
            Calculate the shared fitness of the k-th solution with fit_k
            L_u(Share_k) ← fit_k / n_k
            The niche count is measured by
            n_k ← Σ_{j=1}^{|L|} φ(d_kj)
            The sharing function between two population members is measured using
            φ(d_kj) ← 1 − (d_kj / θ_r)^ρ if d_kj < θ_r; 0 otherwise
        End for
    End for
End for
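The shared-fitness and niche-count equations of Section 2.2 can be sketched as a small routine. This is an illustrative version only, assuming phenotypic (Euclidean) distance and treating `theta_r` and the normalizing exponent `rho` as free parameters; the function name and data layout are our own:

```python
import math

def shared_fitness(fitness, positions, theta_r, rho=1.0):
    """Compute f_shared(i) = fit_i / n_i for each individual.

    n_i sums the sharing function phi(d_ij) over the swarm, where
    phi(d) = 1 - (d / theta_r) ** rho inside the sharing radius theta_r
    and 0 outside, following the equations in Section 2.2.
    """
    def phi(d):
        return 1.0 - (d / theta_r) ** rho if d < theta_r else 0.0

    n = len(fitness)
    shared = []
    for i in range(n):
        niche_count = sum(
            phi(math.dist(positions[i], positions[j])) for j in range(n)
        )
        # niche_count >= 1 always, since d_ii = 0 contributes phi(0) = 1
        shared.append(fitness[i] / niche_count)
    return shared
```

Crowded individuals end up with a lower shared fitness than isolated ones, which is exactly the pressure the archive-truncation step relies on.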

3. Multi-Objective BABC-NM for a Multi-Dimensional Combinatorial Problem

Multi-objective BABC-NM consists of Algorithms 3-6, explained in this section. Algorithm 3 is modified from a multi-objective perspective, and the pseudocode of the proposed MBABC-NM is described there in detail. The working process of the formulated MBABC-NM is portrayed in Figure 1. The mapping process of this algorithm involves the following steps:
Initialization: The algorithm randomly generates a population of candidate solutions to the MDCOP. Each candidate solution is represented as a vector of decision variables.
Fitness Evaluation: The fitness of each candidate solution is evaluated by computing its objective function values. In multi-objective optimization, multiple objective functions usually need to be optimized simultaneously; thus, the fitness of a candidate solution is represented as a vector of objective function values.
Employed Bees: In this step, some bees are selected to perform the exploration process. The selected bees modify the solutions in the population by adding or subtracting a random value from the decision variables. It generates a new solution for each bee.
Onlooker Bees: Some other bees are selected to perform the exploitation process in this step. The selected bees choose solutions from the population based on their fitness and then modify them similarly to the employed bees. It generates a new solution for each onlooker bee.
Algorithm 3: MBABC-NM
Input
      FS: Number of Food Sources
      MI: Maximum iteration
      Limit: number of predefined trials
iter ← 0
Prepare the population
For i = 1 to FS do
    For j = 1 to S do
        Produce solution x_{i,j}
        x_{i,j} ← x_{min,j} + rand(0,1) (x_{max,j} − x_{min,j})
        where x_{min,j} and x_{max,j} are the lower and upper bounds of dimension j
        x̂_{i,j} ← BinaryConv(x_{i,j}) using Algorithm 5
        For h = 1 to M do
            Evaluate the fitness of the population for the M objectives
            f_h ← f_h(x̂_{i,j})
        End for
        trial(i) ← 0
    End for
End for
iter ← 1
Repeat
{
//*Employed Bee Phase*//
For each food source i do
    Create a new individual v_i using
    v_{i,j} ← x_{i,j} + φ_{i,j} (x_{i,j} − x_{k,j})
    v̂_{i,j} ← BinaryConv(v_{i,j}) using Algorithm 5
    Evaluate f(v̂_i)
    Select between f(v̂_i) and f(x̂_i) using the greedy method
    If f(v̂_i) < f(x̂_i) then
        x_i ← v_i
        f(x̂_i) ← f(v̂_i)
        trial(i) ← 0
    Else
        trial(i) ← trial(i) + 1
    End if
End for
//*Onlooker Bee Phase*//
If iter = 1 then
    Set r ← 0, i ← 1
    While (r ≤ FS) do
        Calculate probabilities for onlooker bees using Algorithm 4
        If rand(0,1) < Pro_i then
            r ← r + 1
            For each food source i do
                Generate a new individual v_i using Algorithm 6
                NM method (v_i)
                v̂_{i,j} ← BinaryConv(v_{i,j}) using Algorithm 5
                Evaluate f(v̂_i)
                Select between f(v̂_i) and f(x̂_i) using the greedy method
                If f(v̂_i) < f(x̂_i) then
                    x_i ← v_i
                    f(x̂_i) ← f(v̂_i)
                    trial(i) ← 0
                Else
                    trial(i) ← trial(i) + 1
                End if
            End for
        End if
        i ← (i + 1) mod FS
    End while
Else
    For each food source i do
        Generate a new individual v_i using Algorithm 6
        NM method (L_u)
        u ← |L_u|
        Divide {L_u} into u equal chunks S_u^1, S_u^2, …, S_u^u
        T_{x_i} ← Rank(L_u^i, S_u^i), i = 1, 2, …, u
        T_{x_i} ← Delete the least-rank individual from T_{x_i}
        v_i ← celltomat(T_{x_i})
    End for
End if
//*Scout Bee Phase*//
q ← i such that trial(i) = max(trial)
If trial(q) > limit then
    Abandon the food source x_q
    x_{q,j} ← x_{min,j} + rand(0,1) (x_{max,j} − x_{min,j})
    x̂_{q,j} ← BinaryConv(x_{q,j}) using Algorithm 5
    For h = 1 to M do
        Evaluate the fitness of the population for the M objectives
        f_h ← f_h(x̂_q)
    End for
    trial(q) ← 0
End if
Add the new solution obtained to Z_i
Non-Dominated Sort (Z_i) using Algorithm 1
L ← Z_i
Fitness Sharing (L) using Algorithm 2 //*density estimation, where L denotes the dense population around individual i*//
Memorize the best solution obtained so far
iter ← iter + 1
}
Until iter = MI
Output: Optimal value of the objective function
Algorithm 4: Probability Computation
For i = 1 to FS do
    Compute the probability Pro_i for the individual v_{i,j}
    Pro_i ← fit_i / Σ_{j=1}^{FS} fit_j
    fit_i ← 1 / (1 + f_i) if f_i ≥ 0; 1 + abs(f_i) if f_i < 0
End for
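Algorithm 4's roulette-wheel probabilities can be sketched in a few lines; this illustrative version (function name ours) maps raw objective values to non-negative fitness exactly as in the two formulas above, then normalizes:

```python
def onlooker_probabilities(objective_values):
    """Selection probabilities for onlooker bees, as in Algorithm 4.

    Raw objective values f_i are mapped to non-negative fitness:
    fit_i = 1 / (1 + f_i) if f_i >= 0 else 1 + abs(f_i),
    then normalized so the probabilities sum to one.
    """
    fits = [1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f) for f in objective_values]
    total = sum(fits)
    return [fit / total for fit in fits]
```

Lower objective values (for minimization) receive higher fitness and therefore a larger share of onlooker visits.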
Algorithm 5: BinaryConv(x_{i,j})
For i = 1 to FS do
    For j = 1 to S do
        bit(x_{i,j}) = sin(2π x_{i,j} cos(2π x_{i,j}))
        x̂_{i,j} = 1 if bit(x_{i,j}) > 0; 0 otherwise
    End for
End for
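Algorithm 5's conversion of a continuous food-source vector into a binary one can be sketched as below. The nested transfer function is kept exactly as printed in the algorithm; the function name is our own:

```python
import math

def binary_conv(x):
    """Map a continuous food-source vector to a binary vector (Algorithm 5).

    Each component passes through the oscillating transfer function
    bit(x) = sin(2*pi*x*cos(2*pi*x)) and is thresholded at zero.
    """
    return [1 if math.sin(2 * math.pi * xi * math.cos(2 * math.pi * xi)) > 0 else 0
            for xi in x]
```

Because the transfer function oscillates, nearby continuous values can flip individual bits, which keeps the binary search space well explored.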
Algorithm 6: NM method (v_i)
Generate a new food source v_i using the modified NM technique
Let v_i denote the list of vertices
ρ, μ, λ, and ζ are the coefficients of reflection, expansion, contraction, and shrinkage
ƒ is the objective function to be minimized
For i = 1, 2, …, n + 1 vertices do
    Arrange the vertices from lowest fitness value ƒ(v_1) to highest fitness value ƒ(v_{n+1})
    ƒ(v_1) ≤ ƒ(v_2) ≤ … ≤ ƒ(v_{n+1})
    Compute the centroid of the n best vertices
    v_m ← (Σ v_i) / n, where i = 1, 2, …, n
    //*Reflection point v_r*//
    v_r ← v_m + ρ (v_m − v_{n+1})
    If ƒ(v_1) ≤ ƒ(v_r) < ƒ(v_n) then
        v_{n+1} ← v_r and go to the end condition
    End if
    //*Expansion point v_e*//
    If ƒ(v_r) < ƒ(v_1) then
        v_e ← v_r + μ (v_r − v_m)
        If ƒ(v_e) < ƒ(v_r) then
            v_{n+1} ← v_e and go to the end condition
        Else
            v_{n+1} ← v_r and go to the end condition
        End if
    End if
    //*Contraction point v_c*//
    If ƒ(v_n) ≤ ƒ(v_r) < ƒ(v_{n+1}) then
        Compute the outside contraction
        v_c ← λ v_r + (1 − λ) v_m
    End if
    If ƒ(v_r) ≥ ƒ(v_{n+1}) then
        Compute the inside contraction
        v_c ← λ v_{n+1} + (1 − λ) v_m
    End if
    If ƒ(v_r) ≥ ƒ(v_n) then
        Contraction is performed between v_m and the better of v_r and v_{n+1}
    End if
    If ƒ(v_c) < ƒ(v_r) then
        v_{n+1} ← v_c and go to the end condition
    Else go to the shrinkage step
    End if
    If ƒ(v_c) < ƒ(v_{n+1}) then
        v_{n+1} ← v_c and go to the end condition
    Else go to the shrinkage step
    End if
//*Shrinkage step*//
    Shrink towards the best solution with new vertices
    v_i ← ζ v_i + v_1 (1 − ζ), where i = 2, …, n + 1
End condition
    Arrange and rename the newly constructed simplex's vertices according to their fitness values, then continue with the reflection phase.
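One iteration of the Nelder-Mead update in Algorithm 6 can be sketched as follows. This is an illustrative version with textbook coefficient defaults (reflection ρ = 1, expansion μ = 2, contraction λ = 0.5, shrinkage ζ = 0.5), which are assumptions and not necessarily the values used in the paper:

```python
def nelder_mead_step(simplex, f, rho=1.0, mu=2.0, lam=0.5, zeta=0.5):
    """One Nelder-Mead iteration over a simplex of n+1 vertices (tuples)."""
    simplex = sorted(simplex, key=f)                    # best first, worst last
    dim = len(simplex[0])
    n = len(simplex) - 1
    worst = simplex[-1]
    # centroid v_m of the n best vertices
    vm = tuple(sum(v[d] for v in simplex[:n]) / n for d in range(dim))
    # reflection point v_r = v_m + rho (v_m - v_{n+1})
    vr = tuple(vm[d] + rho * (vm[d] - worst[d]) for d in range(dim))
    if f(simplex[0]) <= f(vr) < f(simplex[n - 1]):
        simplex[-1] = vr                                # accept reflection
    elif f(vr) < f(simplex[0]):
        # expansion point v_e = v_r + mu (v_r - v_m)
        ve = tuple(vr[d] + mu * (vr[d] - vm[d]) for d in range(dim))
        simplex[-1] = ve if f(ve) < f(vr) else vr
    else:
        if f(vr) < f(worst):                            # outside contraction
            vc = tuple(lam * vr[d] + (1 - lam) * vm[d] for d in range(dim))
            accepted = f(vc) <= f(vr)
        else:                                           # inside contraction
            vc = tuple(lam * worst[d] + (1 - lam) * vm[d] for d in range(dim))
            accepted = f(vc) < f(worst)
        if accepted:
            simplex[-1] = vc
        else:
            # shrink towards the best vertex: v_i = zeta v_i + (1 - zeta) v_1
            best = simplex[0]
            simplex = [best] + [
                tuple(zeta * v[d] + (1 - zeta) * best[d] for d in range(dim))
                for v in simplex[1:]
            ]
    return sorted(simplex, key=f)
```

Iterating this step on a convex function shrinks the simplex around the minimizer, which is the local-refinement role the NM method plays inside the onlooker phase.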
Neighbourhood Mutation: In this step, the solutions generated by the employed and onlooker bees are subjected to a neighbourhood mutation process. It involves selecting a neighbourhood around each solution and generating a new solution within that neighbourhood.
Scout Bees: In this step, if a solution has not been improved after a certain number of iterations, it is considered a non-promising solution and is replaced by a new random solution generated by a scout bee.
Pareto Optimization: After generating the new solutions, the algorithm performs a Pareto optimization process to determine the best solutions. The Pareto optimization process identifies solutions not dominated by any other solution in the population.
Termination: The algorithm continues to iterate through steps 3 to 7 until a termination criterion is met, such as a maximum number of iterations or a satisfactory level of solution quality.
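Steps 1-7 above can be summarized in a control-flow skeleton. The toy version below is a deliberately simplified, single-objective sketch of how the employed, onlooker, and scout phases interleave; it omits the binary conversion, NM refinement, probability-based onlooker selection, and Pareto archive, and all names are our own:

```python
import random

def mbabc_nm_skeleton(f, dim, n_food=10, limit=5, max_iter=50, bounds=(0.0, 1.0)):
    """Toy ABC control flow illustrating steps 1-7 (minimization of f)."""
    lo, hi = bounds
    new = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    foods = [new() for _ in range(n_food)]          # 1. initialization
    cost = [f(x) for x in foods]                    # 2. fitness evaluation
    trial = [0] * n_food
    best = min(foods, key=f)
    for _ in range(max_iter):
        for _phase in ("employed", "onlooker"):     # 3-4. search phases
            for i in range(n_food):
                k = random.randrange(n_food)        # random partner source
                j = random.randrange(dim)           # random dimension
                v = foods[i][:]
                v[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
                if f(v) < cost[i]:                  # greedy replacement
                    foods[i], cost[i], trial[i] = v, f(v), 0
                else:
                    trial[i] += 1
        for i in range(n_food):                     # 6. scout phase
            if trial[i] > limit:                    # abandon stagnant source
                foods[i] = new()
                cost[i], trial[i] = f(foods[i]), 0
        best = min([best] + foods, key=f)           # 7. memorize the best
    return best
```

In the full MBABC-NM, the greedy replacement operates on binary-converted solutions, the onlooker phase routes effort through Algorithm 4's probabilities and Algorithm 6's NM refinement, and step 7 becomes the non-dominated sort plus fitness-sharing archive update.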

4. Experimental and Environment Setup

This section specifies the experimental setup of the proposed approach and the other techniques. In addition, the projected outcomes are compared with those of the other methods to confirm the model's efficacy.

4.1. Experimental Setup

The application of the proposed MBABC-NM algorithm to the NSP and 0-1 knapsack problems is demonstrated concisely in this section. The simulation is conducted on various optimization algorithms under similar environmental constraints, and the outcomes are analyzed. The proposed technique for handling the NSP and 0-1 knapsack problems is implemented in MATLAB 2018a on a Windows machine with an Intel i7 processor and 8 GB of RAM. The experimental analysis sets the bounds of the formulated work, and the parameters were chosen by trial and error. We used standard datasets for both the NSP and the 0-1 knapsack problem. The algorithms compared against the formulated technique for the NSP are listed in Table 1. The heuristic parameters and their corresponding values are given in Table 2.

4.2. Standard 0-1 Knapsack Problem Dataset

This work performs experiments on standard instances of the 0-1 knapsack problem from the OR-Library to evaluate the performance of the proposed MBABC-NM algorithm. We used nine different instance classes to illustrate the outcomes of the proposed approach. Each problem suite is classified by the number of knapsack constraints and the number of object items. A detailed description of the problem suites is given in Table 3: column 2 lists the number of knapsack constraints in the corresponding problem suite, column 3 the number of object items, and column 4 the known optimum solution provided by the OR-Library. The algorithms compared against the formulated model on the 0-1 knapsack problem are shown in Table 4.
This section discussed the experimental setup and the datasets used for implementing the proposed algorithm and the compared techniques. The performance of the proposed algorithm relative to the compared techniques is discussed in Section 5.

5. Experimental Result Analysis and Discussion

5.1. Standard NSP Dataset

The experimental outcomes achieved by the MBABC-NM algorithm in solving the standard NSP dataset are presented in Tables 5 and 6. The performance of the proposed algorithm is compared with the existing multi-objective algorithms listed in Table 1 (M1, M2, M3, and M4). The values in the tables specify the ONGV attained by the corresponding method. The objectives of the NSP are minimizing cost, maximizing nurse preferences, and minimizing the deviation between the number of nurses required and the minimum number of nurses for a shift; a day shift followed by a night shift is not permissible. To validate the proposed algorithm, we used 15 test cases of various sizes with multiple issues. The proposed MBABC-NM accomplished the maximum ONGV values for most instances. The experimentation was carried out on four different algorithms with the same simulation parameters.
Table 5 summarizes the comparison of the ONGV performance indicator attained by our proposed MBABC-NM and the other methods listed in Table 1.
On comparing the mean values of ONGV for the NSP dataset, our proposed MBABC-NM outperforms the existing algorithms: for smaller datasets, by 14.99% over genetic NSGA, 59.67% over cyber swarm, 28.70% over PSO, and 63.24% over the MABC algorithm; for medium-sized datasets, by 84.75% over genetic NSGA, 23.12% over cyber swarm, 60.43% over PSO, and 54.21% over the MABC algorithm; and for larger datasets, by 43.15% over genetic NSGA, 44.27% over cyber swarm, 25.14% over PSO, and 16.50% over the MABC algorithm.
Table 6 compares the SP performance indicator for the proposed MBABC-NM and its competitors listed in Table 1. Our proposed algorithm achieved the minimum Euclidean distance among the Pareto solutions.
Table 6. Experimental results of NSP dataset in terms of SP.

Case	Nurse	Instance	MBABC-NM	M1	M2	M3	M4
C-1	N25	1	1.1 × 10−4	3.9 × 10−4	2.0 × 10−4	2.1 × 10−4	4.2 × 10−5
C-1	N25	7	1.1 × 10−5	8.8 × 10−4	2.4 × 10−4	9.2 × 10−5	3.1 × 10−4
C-1	N25	12	1.1 × 10−4	9.8 × 10−4	2.2 × 10−4	9.4 × 10−5	8.4 × 10−5
C-1	N25	19	1.9 × 10−4	7.0 × 10−4	1.1 × 10−4	8.1 × 10−6	1.3 × 10−5
C-1	N25	25	1.1 × 10−4	2.6 × 10−4	1.0 × 10−4	5.6 × 10−4	3.3 × 10−4
C-2	N25	2	1.8 × 10−4	3.1 × 10−4	4.7 × 10−4	5.5 × 10−4	2.3 × 10−4
C-2	N25	5	9.7 × 10−5	6.1 × 10−4	3.4 × 10−4	1.1 × 10−4	2.9 × 10−5
C-2	N25	9	7.4 × 10−5	8.2 × 10−4	4.0 × 10−4	3.4 × 10−4	9.5 × 10−6
C-2	N25	15	6.6 × 10−5	9.0 × 10−5	3.5 × 10−4	2.5 × 10−4	3.3 × 10−4
C-2	N25	27	3.7 × 10−5	4.3 × 10−5	2.7 × 10−4	1.6 × 10−4	1.0 × 10−4
C-3	N25	1	7.2 × 10−5	7.6 × 10−4	4.4 × 10−4	7.4 × 10−5	1.0 × 10−4
C-3	N25	3	1.4 × 10−4	9.0 × 10−4	1.0 × 10−4	4.7 × 10−4	3.5 × 10−4
C-3	N25	16	1.5 × 10−4	3.2 × 10−4	2.0 × 10−4	1.7 × 10−4	1.4 × 10−4
C-3	N25	27	6.6 × 10−5	2.3 × 10−4	3.5 × 10−4	2.3 × 10−4	2.7 × 10−4
C-3	N25	35	4.5 × 10−5	5.6 × 10−4	3.8 × 10−4	1.9 × 10−4	2.7 × 10−4
C-4	N25	5	1.9 × 10−4	5.1 × 10−4	7.0 × 10−5	5.6 × 10−4	1.9 × 10−5
C-4	N25	10	9.4 × 10−5	9.1 × 10−4	8.4 × 10−5	5.0 × 10−4	1.0 × 10−4
C-4	N25	25	1.6 × 10−4	6.9 × 10−4	6.3 × 10−5	5.4 × 10−4	2.0 × 10−4
C-4	N25	38	3.2 × 10−5	4.6 × 10−4	1.0 × 10−4	7.8 × 10−5	8.1 × 10−5
C-4	N25	41	1.4 × 10−4	5.3 × 10−4	1.9 × 10−4	7.2 × 10−5	1.3 × 10−4
C-5	N25	7	1.8 × 10−4	3.8 × 10−5	2.4 × 10−4	9.9 × 10−5	1.2 × 10−4
C-5	N25	11	3.5 × 10−5	4.3 × 10−4	4.2 × 10−4	2.4 × 10−4	1.2 × 10−4
C-5	N25	30	1.2 × 10−4	2.3 × 10−4	4.1 × 10−4	4.9 × 10−4	2.0 × 10−4
C-5	N25	42	2.1 × 10−5	2.5 × 10−4	4.6 × 10−4	1.0 × 10−4	2.2 × 10−5
C-5	N25	47	1.8 × 10−4	2.9 × 10−4	6.2 × 10−5	1.4 × 10−4	1.0 × 10−4
C-6	N50	1	1.2 × 10−4	4.4 × 10−4	1.7 × 10−4	5.3 × 10−4	2.3 × 10−4
C-6	N50	4	1.9 × 10−4	6.8 × 10−4	6.3 × 10−5	2.5 × 10−5	3.2 × 10−4
C-6	N50	12	1.4 × 10−4	3.2 × 10−5	2.5 × 10−5	2.5 × 10−4	1.5 × 10−4
C-6	N50	26	9.4 × 10−5	8.4 × 10−4	1.1 × 10−4	2.6 × 10−4	3.3 × 10−4
C-6	N50	29	3.4 × 10−5	2.2 × 10−4	1.8 × 10−4	2.9 × 10−4	3.4 × 10−4
C-7	N50	3	9.2 × 10−5	3.5 × 10−4	1.9 × 10−4	2.4 × 10−4	3.5 × 10−4
C-7	N50	6	1.3 × 10−4	4.5 × 10−4	8.8 × 10−5	2.5 × 10−4	3.5 × 10−4
C-7	N50	12	3.1 × 10−5	4.0 × 10−4	2.2 × 10−4	8.0 × 10−5	9.3 × 10−5
C-7	N50	26	3.4 × 10−5	1.8 × 10−4	1.3 × 10−4	4.3 × 10−4	4.1 × 10−5
C-7	N50	36	1.2 × 10−4	2.1 × 10−4	2.7 × 10−4	3.7 × 10−4	9.1 × 10−5
C-8	N50	4	9.2 × 10−5	1.3 × 10−4	4.9 × 10−4	3.5 × 10−4	2.7 × 10−4
C-8	N50	9	1.9 × 10−4	1.2 × 10−5	4.0 × 10−5	8.6 × 10−5	3.7 × 10−5
C-8	N50	15	1.8 × 10−4	1.2 × 10−4	9.3 × 10−5	4.3 × 10−4	1.6 × 10−4
C-8	N50	40	9.1 × 10−5	5.7 × 10−4	3.1 × 10−4	2.1 × 10−4	1.2 × 10−4
C-8	N50	47	2.0 × 10−4	8.6 × 10−4	2.1 × 10−4	2.3 × 10−5	3.1 × 10−4
C-9	N60	5	1.0 × 10−4	9.6 × 10−4	2.0 × 10−4	3.5 × 10−5	2.8 × 10−4
C-9	N60	10	4.4 × 10−5	6.2 × 10−4	1.3 × 10−4	4.5 × 10−4	1.3 × 10−4
C-9	N60	23	1.6 × 10−4	5.5 × 10−4	3.2 × 10−4	3.6 × 10−4	1.3 × 10−5
C-9	N60	29	9.8 × 10−5	6.6 × 10−4	6.9 × 10−5	2.3 × 10−4	6.7 × 10−5
C-9	N60	40	1.2 × 10−4	2.0 × 10−4	7.8 × 10−5	2.6 × 10−4	3.0 × 10−4
C-10	N60	6	3.9 × 10−5	2.0 × 10−5	4.1 × 10−4	5.4 × 10−5	3.3 × 10−4
C-10	N60	14	7.2 × 10−5	2.3 × 10−4	1.9 × 10−4	1.4 × 10−4	1.4 × 10−4
C-10	N60	20	1.5 × 10−4	8.8 × 10−4	4.2 × 10−5	2.1 × 10−4	2.4 × 10−4
C-10	N60	32	1.5 × 10−4	8.4 × 10−4	3.0 × 10−4	7.0 × 10−5	1.4 × 10−4
C-10	N60	41	3.6 × 10−5	7.0 × 10−4	9.8 × 10−5	2.4 × 10−4	2.8 × 10−4
C-11	N60	2	4.0 × 10−5	9.4 × 10−4	4.9 × 10−4	4.1 × 10−4	7.3 × 10−5
C-11	N60	8	1.4 × 10−5	4.5 × 10−4	1.8 × 10−4	3.8 × 10−4	1.7 × 10−4
C-11	N60	14	8.7 × 10−5	2.1 × 10−4	3.5 × 10−4	2.7 × 10−4	2.0 × 10−4
C-11	N60	20	1.9 × 10−4	4.3 × 10−4	4.8 × 10−4	2.3 × 10−4	4.7 × 10−5
C-11	N60	32	5.2 × 10−5	2.3 × 10−4	8.0 × 10−5	5.4 × 10−4	3.1 × 10−4
C-12	N60	3	5.8 × 10−5	1.1 × 10−4	3.7 × 10−4	4.5 × 10−5	2.9 × 10−4
C-12	N60	12	1.9 × 10−4	9.1 × 10−4	2.8 × 10−4	2.0 × 10−4	1.5 × 10−4
C-12	N60	19	3.4 × 10−5	7.5 × 10−4	1.0 × 10−4	3.3 × 10−4	2.6 × 10−4
C-12	N60	23	1.8 × 10−4	3.4 × 10−4	2.9 × 10−4	9.3 × 10−5	7.2 × 10−5
C-12	N60	34	1.2 × 10−4	5.2 × 10−4	4.0 × 10−4	5.3 × 10−5	1.7 × 10−4
C-13	N60	1	5.4 × 10−5	9.8 × 10−4	3.3 × 10−4	3.6 × 10−4	3.5 × 10−4
C-13	N60	4	1.3 × 10−4	6.7 × 10−4	2.3 × 10−4	3.5 × 10−4	2.0 × 10−4
C-13	N60	19	6.6 × 10−6	7.5 × 10−4	1.9 × 10−4	1.4 × 10−5	2.0 × 10−4
C-13	N60	29	2.0 × 10−4	4.9 × 10−4	1.3 × 10−4	4.3 × 10−4	1.5 × 10−4
C-13	N60	40	1.1 × 10−4	5.8 × 10−4	1.9 × 10−4	1.3 × 10−4	1.3 × 10−5
C-14	N60	5	1.1 × 10−4	7.5 × 10−4	2.6 × 10−4	5.0 × 10−4	3.3 × 10−4
C-14	N60	9	2.0 × 10−4	3.9 × 10−4	1.7 × 10−4	3.0 × 10−4	3.0 × 10−4
C-14	N60	15	1.1 × 10−4	1.9 × 10−4	8.2 × 10−5	1.3 × 10−4	3.2 × 10−4
C-14	N60	30	2.3 × 10−4	6.5 × 10−4	2.1 × 10−4	1.1 × 10−4	3.3 × 10−4
C-14	N60	43	1.5 × 10−4	8.1 × 10−4	4.4 × 10−4	5.4 × 10−4	1.9 × 10−4
C-15	N60	6	2.1 × 10−6	5.7 × 10−5	2.7 × 10−4	5.5 × 10−4	3.2 × 10−4
C-15	N60	15	1.5 × 10−4	3.5 × 10−4	3.4 × 10−4	1.4 × 10−5	1.2 × 10−4
C-15	N60	26	1.2 × 10−4	7.7 × 10−4	3.8 × 10−4	1.6 × 10−5	2.6 × 10−4
C-15	N60	35	2.2 × 10−5	1.3 × 10−4	2.2 × 10−4	8.1 × 10−5	1.0 × 10−4
C-15	N60	44	9.6 × 10−5	8.4 × 10−5	1.5 × 10−4	1.6 × 10−4	9.7 × 10−7
Comparing the mean values of SP for the NSP dataset, our proposed MBABC-NM outperforms the existing algorithms: for smaller datasets, by 80.90% over genetic NSGA, 68.48% over cyber swarm, 60.46% over PSO, and 43.86% over the MABC algorithm; for medium-sized datasets, by 79.55% over genetic NSGA, 59.14% over cyber swarm, 68.37% over PSO, and 57.84% over the MABC algorithm; and for larger datasets, by 74.30% over genetic NSGA, 53.72% over cyber swarm, 63.22% over PSO, and 32.98% over the MABC algorithm.

5.2. Standard 0-1 Knapsack Problem

The experimental outcomes achieved by the MBABC-NM algorithm in solving the standard 0-1 knapsack problem are presented in Tables 7 and 8. The performance of the proposed algorithm was compared with the existing multi-objective meta-heuristic methods listed in Table 4. The values in the tables specify the mean number of reference solutions and the total number of solutions attained by the corresponding algorithm. Our proposed MBABC-NM obtained better results for most instances, as shown in [31].
The experimentation has been carried out on four different algorithms with the same simulation parameters. The outcomes attained by the proposed MBABC-NM and the competitor algorithms are presented in Table 7; the values represent the number of reference solutions obtained by each algorithm. TNS denotes the total number of solutions attained, and NRS the number of reference solutions for the instances. Table 8 describes the experimental results of our proposed MBABC-NM and the competitor algorithms, where |R| represents the number of Pareto-optimal (reference) solutions obtained and Davg denotes the average distance between the non-dominated individuals and the reference set.
Compared with the existing algorithms, our proposed MBABC-NM generated the maximum number of reference solutions out of the total solutions, as illustrated in Figure 2 and Figure 3. For the smaller dataset with 250 objects, the mean NRS of our proposed algorithm was 38% higher than that of the competitor algorithms, and the mean TNS was 40% higher. For the medium dataset with 500 objects, the NRS was 29% and the TNS 21% higher than the competitors'. For the larger dataset with 750 objects, our proposed MBABC-NM achieved 43% higher NRS and 38% higher TNS than the competitor algorithms.
On comparing the mean values of Davg for the 0-1 knapsack problem dataset, as shown in Figure 4, our proposed MBABC-NM outperforms the existing algorithms. For smaller datasets with 250 objects, it achieved 95.07% against Local search, 93.94% against GRASP, 98.87% against Genetic Tabu search, and 97.30% against the ACO algorithm. For the medium-sized dataset with 500 objects, our proposed algorithm achieved 90.90% against Local search, 62.91% against GRASP, 96.34% against Genetic Tabu search, and 90.30% against the ACO algorithm. For the larger dataset with 750 objects, it achieved 72.85% against Local search, 16.90% against GRASP, 74.75% against Genetic Tabu search, and 82.17% against the ACO algorithm.
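The percentages quoted throughout this section appear to be relative reductions in the mean metric value; that reading is an assumption, and the numbers below are illustrative rather than taken from Table 8.

```python
def improvement(ours, competitor):
    """Relative improvement (%) of our mean metric value over a competitor's,
    assuming lower is better (as for the Davg distance metric)."""
    return (competitor - ours) / competitor * 100.0

# Illustrative only: a mean Davg of 2.1e-4 versus a competitor's 9.7e-3
# corresponds to roughly a 97.8% improvement.
gain = improvement(2.1e-4, 9.7e-3)
```

For metrics where higher is better, such as NRS, the roles of the two arguments would be reversed.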

6. Conclusions

This paper proposed a multi-objective BABC-NM for multi-dimensional combinatorial problems, incorporating fitness sharing and a modified non-dominated sorting algorithm. The experimentation was carried out in MATLAB 2018a under the experimental setup described above to assess the outcome of the proposed MBABC-NM. The experimental results and the accompanying discussion demonstrate the significance of the proposed work: in all three stages of the experimental methodology, MBABC-NM attained precise and satisfactory outcomes, and the proposed multi-objective binary ABC with Nelder–Mead (MBABC-NM) outperformed the other standard classical algorithms on all test cases. These studies confirm the competence of the proposed algorithm from every perspective. The proposed algorithm could be extended to handle more complex real-time optimization problems, including scheduling and resource allocation. In addition, the algorithm could be further optimized to reduce its computational complexity and improve its scalability.

Author Contributions

Conceptualization, M.R. (Muniyan Rajeswari); methodology, M.R. (Muniyan Rajeswari) and R.R.; validation, M.R. (Mamoon Rashid) and S.B.; formal analysis, K.S.B. and R.S.; writing—original draft preparation, M.R. (Muniyan Rajeswari); writing—review and editing, M.R. (Mamoon Rashid) and S.B.; supervision, R.R.; funding acquisition, S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R195), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data in this research paper will be shared by the corresponding author upon reasonable request.

Acknowledgments

This research is supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R195), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goldberg, D.E.; Korb, B.; Deb, K. Messy genetic algorithms: Motivation, analysis, and first results. Complex Syst. 1989, 3, 493–530. [Google Scholar]
  2. Tharwat, A.; Houssein, E.H.; Ahmed, M.M.; Hassanien, A.E.; Gabel, T. MOGOA algorithm for constrained and unconstrained multi-objective optimization problems. Appl. Intell. 2018, 48, 2268–2283. [Google Scholar] [CrossRef]
  3. Zitzler, E.; Deb, K.; Thiele, L. Comparison of multi-objective evolutionary algorithms: Empirical results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [PubMed]
  4. Von Lücken, C.; Barán, B.; Brizuela, C. A survey on multi-objective evolutionary algorithms for many-objective problems. Comput. Optim. Appl. 2014, 58, 707–756. [Google Scholar] [CrossRef]
  5. Manzoor, A.; Javaid, N.; Ullah, I.; Abdul, W.; Almogren, A.; Alamri, A. An intelligent hybrid heuristic scheme for smart metering-based demand side management in smart homes. Energies 2017, 10, 1258. [Google Scholar] [CrossRef]
  6. Mirjalili, S.Z.; Mirjalili, S.; Saremi, S.; Faris, H.; Aljarah, I. Grasshopper optimization algorithm for multi-objective optimization problems. Appl. Intell. 2018, 48, 805–820. [Google Scholar] [CrossRef]
  7. Tamaki, H.; Kita, H.; Kobayashi, S. Multi-objective optimization by genetic algorithms: A review. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; IEEE: Piscataway, NJ, USA; pp. 517–522. [Google Scholar]
  8. Zhang, Y.; Gong, D.-W.; Gao, X.-Z.; Tian, T.; Sun, X.-Y. Binary differential evolution with self-learning for multi-objective feature selection. Inf. Sci. 2020, 507, 67–85. [Google Scholar] [CrossRef]
  9. Wang, Y.; Yang, Y. Particle swarm optimization with preference order ranking for multi-objective optimization. Inf. Sci. 2009, 179, 1944–1959. [Google Scholar] [CrossRef]
  10. Mirjalili, S.; Saremi, S.; Mirjalili, S.M.; Coelho, L.D.S. Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Syst. Appl. 2016, 47, 106–119. [Google Scholar] [CrossRef]
  11. Lv, L.; Zhao, J.; Wang, J.; Fan, T. Multi-objective firefly algorithm based on compensation factor and elite learning. Future Gener. Comput. Syst. 2019, 91, 37–47. [Google Scholar] [CrossRef]
  12. Wang, C.-N.; Yang, F.-C.; Nguyen, V.T.T.; Vo Nhut, T.M. CFD analysis and optimum design for a centrifugal pump using an effectively artificial intelligent algorithm. Micromachines 2022, 13, 1208. [Google Scholar] [CrossRef] [PubMed]
  13. Huynh, N.T.; Nguyen, T.V.T.; Nguyen, Q.M. Optimum Design for the Magnification Mechanisms Employing Fuzzy Logic-ANFIS. CMC-Comput. Mater. Contin. 2022, 73, 5961–5983. [Google Scholar]
  14. Huynh, N.-T.; Nguyen, T.V.T.; Tam, N.T.; Nguyen, Q.-M. Optimizing Magnification Ratio for the Flexible Hinge Displacement Amplifier Mechanism Design. In Proceedings of the 2nd Annual International Conference on Material, Machines and Methods for Sustainable Development (MMMS2020); Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 769–778. [Google Scholar]
  15. Ramalingam, R.; Saleena, B.; Basheer, S.; Balasubramanian, P.; Rashid, M.; Jayaraman, G. EECHS-ARO: Energy-efficient cluster head selection mechanism for livestock industry using artificial rabbits optimization and wireless sensor networks. Electron. Res. Arch. 2023, 31, 3123–3144. [Google Scholar] [CrossRef]
  16. Ramalingam, R.; Karunanidy, D.; Alshamrani, S.S.; Rashid, M.; Mathumohan, S.; Dumka, A. Oppositional Pigeon-Inspired Optimizer for Solving the Non-Convex Economic Load Dispatch Problem in Power Systems. Mathematics 2022, 10, 3315. [Google Scholar] [CrossRef]
  17. Kuppusamy, P.; Kumari, N.M.J.; Alghamdi, W.Y.; Alyami, H.; Ramalingam, R.; Javed, A.R.; Rashid, M. Job scheduling problem in fog-cloud-based environment using reinforced social spider optimization. J. Cloud Comput. 2022, 11, 99. [Google Scholar] [CrossRef]
  18. Thirugnanasambandam, K.; Ramalingam, R.; Mohan, D.; Rashid, M.; Juneja, K.; Alshamrani, S.S. Patron–Prophet Artificial Bee Colony Approach for Solving Numerical Continuous Optimization Problems. Axioms 2022, 11, 523. [Google Scholar] [CrossRef]
  19. Bao, C.; Xu, L.; Goodman, E.D.; Cao, L. A novel non-dominated sorting algorithm for evolutionary multi-objective optimization. J. Comput. Sci. 2017, 23, 31–43. [Google Scholar] [CrossRef]
  20. Ye, T.; Si, L.; Zhang, X.; Cheng, R.; He, C.; Tan, K.C.; Jin, Y. Evolutionary large-scale multi-objective optimization: A survey. ACM Comput. Surv. 2021, 54, 1–34. [Google Scholar]
  21. Luo, J.; Liu, Q.; Yang, Y.; Li, X.; Chen, M.-R.; Cao, W. An artificial bee colony algorithm for multi-objective optimization. Appl. Soft Comput. 2017, 50, 235–251. [Google Scholar] [CrossRef]
  22. Salazar-Lechuga, M.; Rowe, J.E. Particle swarm optimization and fitness sharing to solve multi-objective optimization problems. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; IEEE: Piscataway, NJ, USA; Volume 2, pp. 1204–1211. [Google Scholar]
  23. Zhang, P.; Qian, Y.; Qian, Q. Multi-objective optimization for materials design with improved NSGA-II. Mater. Today Commun. 2021, 28, 102709. [Google Scholar] [CrossRef]
  24. Yin, P.-Y.; Chiang, Y.-T. Cyber swarm algorithms for multi-objective nurse rostering problem. Int. J. Innov. Comput. Inf. Control 2013, 9, 2043–2063. [Google Scholar]
  25. Han, F.; Chen, W.-T.; Ling, Q.-H.; Han, H. Multi-objective particle swarm optimization with adaptive strategies for feature selection. Swarm Evol. Comput. 2021, 62, 100847. [Google Scholar] [CrossRef]
  26. Li, Y.; Huang, W.; Wu, R.; Guo, K. An improved artificial bee colony algorithm for solving multi-objective low-carbon flexible job shop scheduling problem. Appl. Soft Comput. 2020, 95, 106544. [Google Scholar] [CrossRef]
  27. Luo, R.-J.; Ji, S.-F.; Zhu, B.-L. A Pareto evolutionary algorithm based on incremental learning for a kind of multi-objective multi-dimensional knapsack problem. Comput. Ind. Eng. 2019, 135, 537–559. [Google Scholar] [CrossRef]
  28. Yuan, J.; Li, Y. Solving binary multi-objective knapsack problems with novel greedy strategy. Memetic Comput. 2021, 13, 447–458. [Google Scholar] [CrossRef]
  29. Alharbi, S.T. A hybrid genetic algorithm with tabu search for optimization of the traveling thief problem. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 276–287. [Google Scholar] [CrossRef]
  30. Fidanova, S. Hybrid Ant Colony Optimization Algorithm for Multiple Knapsack Problem. In Proceedings of the 2020 5th IEEE International Conference on Recent Advances and Innovations in Engineering (ICRAIE), Jaipur, India, 1–3 December 2020; IEEE: Piscataway, NJ, USA; pp. 1–5. [Google Scholar]
  31. Beasley, J.E. OR-Library Collection of Test Data Sets for a Variety of OR Problems. World Wide Web. 2005. Available online: http://people.brunel.ac.uk/mastjjb/jeb/orlib/scpinfo.html (accessed on 20 December 2022).
Figure 1. Workflow of MBABC-NM.
Figure 2. Performance of MBABC-NM with respect to NRS.
Figure 3. Performance of MBABC-NM with respect to TNS.
Figure 4. Performance of MBABC-NM with respect to average distance.
Table 1. List of competitor methods compared with MBABC-NM on the NSP dataset.
Type | Method | Reference
M1 | Multi-objective genetic algorithm: NSGA-II | Zhang et al., 2021 [23]
M2 | Multi-objective cyber swarm optimization algorithm | Yin et al., 2013 [24]
M3 | Multi-objective particle swarm optimization | Han et al., 2021 [25]
M4 | Multi-objective ABC | Li et al., 2020 [26]
Table 2. Configuration parameters of MBABC-NM for experimental evaluation.
Parameter | Value
number of bees | 100
maximum iterations | 1000
initialization technique | binary
stop criteria | maximum iterations
runs | 20
heuristic | Nelder–Mead method
likeness factor | α > 0
enlargement factor | γ > 1
reduction factor | 0 < β < 1
shrinkage factor | 0 < δ < 1
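The four factors in Table 2 correspond to the standard Nelder–Mead reflection, expansion, contraction, and shrink coefficients. The sketch below shows one iteration with the conventional default values (α = 1, γ = 2, β = 0.5, δ = 0.5); the concrete values used in the experiments are only constrained by the ranges in Table 2, so these defaults are an assumption.

```python
def nelder_mead_step(simplex, f, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5):
    """One Nelder-Mead iteration on a list of points (minimization).
    alpha: reflection, gamma: expansion, beta: contraction, delta: shrink
    (the likeness, enlargement, reduction, and shrinkage factors of Table 2)."""
    simplex = sorted(simplex, key=f)
    best, worst = simplex[0], simplex[-1]
    n = len(simplex) - 1
    centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
    refl = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
    if f(refl) < f(best):                      # try expanding further
        exp = [c + gamma * (r - c) for c, r in zip(centroid, refl)]
        simplex[-1] = exp if f(exp) < f(refl) else refl
    elif f(refl) < f(simplex[-2]):             # accept the reflection
        simplex[-1] = refl
    else:                                      # contract toward the centroid
        con = [c + beta * (w - c) for c, w in zip(centroid, worst)]
        if f(con) < f(worst):
            simplex[-1] = con
        else:                                  # shrink all points toward best
            simplex = [best] + [[b + delta * (p - b)
                                 for b, p in zip(best, q)]
                                for q in simplex[1:]]
    return simplex

# Minimize a simple quadratic with minimum at (1, 2).
f = lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2
simplex = [[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]]
for _ in range(60):
    simplex = nelder_mead_step(simplex, f)
```

In MBABC-NM this local step refines candidate solutions produced by the bee colony rather than running standalone as here.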
Table 3. The features of the MKP datasets for MBABC-NM.
Instance | No. of Objectives | No. of Items
kn250_2 | 2 | 250
kn250_3 | 3 | 250
kn250_4 | 4 | 250
kn500_2 | 2 | 500
kn500_3 | 3 | 500
kn500_4 | 4 | 500
kn750_2 | 2 | 750
kn750_3 | 3 | 750
kn750_4 | 4 | 750
Table 4. List of competitor techniques compared with MBABC-NM on the MKP dataset.
Type | Method | Reference
M1 | Pareto evolutionary algorithm | Luo et al., 2019 [27]
M2 | GRASP | Yuan et al., 2021 [28]
M3 | Genetic Tabu search for MKP | Alharbi, 2018 [29]
M4 | ACO for MKP | Fidanova, 2020 [30]
Table 5. Experimental result of the NSP dataset in terms of ONGV.
Case | Type | Instance | MBABC-NM | M1 | M2 | M3 | M4
C-1 | N25 | 1 | 135 | 121 | 92 | 104 | 53
C-1 | N25 | 7 | 132 | 125 | 86 | 106 | 63
C-1 | N25 | 12 | 131 | 119 | 91 | 115 | 51
C-1 | N25 | 19 | 128 | 119 | 88 | 101 | 50
C-1 | N25 | 25 | 128 | 122 | 83 | 104 | 56
C-2 | N25 | 2 | 143 | 118 | 86 | 99 | 67
C-2 | N25 | 5 | 142 | 121 | 80 | 113 | 69
C-2 | N25 | 9 | 136 | 124 | 85 | 116 | 69
C-2 | N25 | 15 | 149 | 124 | 78 | 115 | 60
C-2 | N25 | 27 | 146 | 124 | 79 | 99 | 63
C-3 | N25 | 1 | 145 | 123 | 77 | 97 | 61
C-3 | N25 | 3 | 150 | 125 | 82 | 97 | 65
C-3 | N25 | 16 | 151 | 125 | 77 | 99 | 71
C-3 | N25 | 27 | 146 | 121 | 91 | 113 | 73
C-3 | N25 | 35 | 151 | 117 | 93 | 107 | 70
C-4 | N25 | 5 | 139 | 122 | 92 | 98 | 63
C-4 | N25 | 10 | 136 | 117 | 88 | 112 | 71
C-4 | N25 | 25 | 150 | 120 | 78 | 111 | 65
C-4 | N25 | 38 | 151 | 121 | 97 | 110 | 59
C-4 | N25 | 41 | 135 | 122 | 79 | 99 | 72
C-5 | N25 | 7 | 150 | 122 | 78 | 97 | 59
C-5 | N25 | 11 | 127 | 118 | 92 | 107 | 70
C-5 | N25 | 30 | 135 | 120 | 80 | 114 | 61
C-5 | N25 | 42 | 135 | 121 | 91 | 104 | 71
C-5 | N25 | 47 | 148 | 118 | 83 | 100 | 64
C-6 | N50 | 1 | 192 | 40 | 90 | 109 | 73
C-6 | N50 | 4 | 229 | 47 | 91 | 107 | 60
C-6 | N50 | 12 | 222 | 35 | 87 | 125 | 73
C-6 | N50 | 26 | 244 | 47 | 96 | 114 | 66
C-6 | N50 | 29 | 223 | 41 | 87 | 126 | 76
C-7 | N50 | 3 | 242 | 36 | 96 | 65 | 57
C-7 | N50 | 6 | 248 | 42 | 90 | 60 | 60
C-7 | N50 | 12 | 246 | 34 | 87 | 67 | 66
C-7 | N50 | 26 | 233 | 36 | 88 | 62 | 65
C-7 | N50 | 36 | 214 | 39 | 89 | 72 | 61
C-8 | N50 | 4 | 251 | 43 | 95 | 55 | 71
C-8 | N50 | 9 | 255 | 48 | 98 | 74 | 55
C-8 | N50 | 15 | 249 | 34 | 97 | 65 | 58
C-8 | N50 | 40 | 196 | 37 | 87 | 57 | 57
C-8 | N50 | 47 | 228 | 47 | 88 | 57 | 73
C-9 | N60 | 5 | 225 | 36 | 94 | 63 | 61
C-9 | N60 | 10 | 210 | 49 | 89 | 60 | 58
C-9 | N60 | 23 | 207 | 33 | 99 | 73 | 63
C-9 | N60 | 29 | 203 | 41 | 91 | 65 | 72
C-9 | N60 | 40 | 183 | 37 | 100 | 73 | 67
C-10 | N60 | 6 | 196 | 49 | 94 | 76 | 58
C-10 | N60 | 14 | 180 | 47 | 90 | 65 | 66
C-10 | N60 | 20 | 208 | 49 | 95 | 54 | 64
C-10 | N60 | 32 | 184 | 42 | 91 | 64 | 60
C-10 | N60 | 41 | 218 | 39 | 92 | 69 | 63
C-11 | N60 | 2 | 349 | 82 | 137 | 123 | 129
C-11 | N60 | 8 | 374 | 98 | 151 | 126 | 121
C-11 | N60 | 14 | 316 | 83 | 144 | 113 | 111
C-11 | N60 | 20 | 364 | 96 | 145 | 118 | 118
C-11 | N60 | 32 | 292 | 96 | 139 | 112 | 134
C-12 | N60 | 3 | 327 | 98 | 140 | 121 | 115
C-12 | N60 | 12 | 335 | 94 | 151 | 121 | 125
C-12 | N60 | 19 | 351 | 98 | 145 | 120 | 124
C-12 | N60 | 23 | 384 | 78 | 144 | 111 | 118
C-12 | N60 | 34 | 289 | 98 | 140 | 121 | 107
C-13 | N60 | 1 | 450 | 97 | 138 | 126 | 108
C-13 | N60 | 4 | 438 | 87 | 141 | 118 | 109
C-13 | N60 | 19 | 446 | 99 | 149 | 122 | 133
C-13 | N60 | 29 | 347 | 81 | 152 | 126 | 109
C-13 | N60 | 40 | 464 | 88 | 152 | 120 | 121
C-14 | N60 | 5 | 335 | 100 | 141 | 108 | 121
C-14 | N60 | 9 | 420 | 96 | 139 | 124 | 107
C-14 | N60 | 15 | 400 | 90 | 146 | 116 | 115
C-14 | N60 | 30 | 398 | 99 | 144 | 108 | 108
C-14 | N60 | 43 | 483 | 94 | 140 | 117 | 110
C-15 | N60 | 6 | 380 | 87 | 150 | 119 | 136
C-15 | N60 | 15 | 433 | 87 | 141 | 123 | 125
C-15 | N60 | 26 | 481 | 88 | 151 | 125 | 108
C-15 | N60 | 35 | 477 | 90 | 151 | 124 | 136
C-15 | N60 | 44 | 469 | 99 | 149 | 123 | 115
Table 7. Experimental results of the MKP dataset in terms of NRS and TNS.
Instance | NRS (MBABC-NM, M1, M2, M3, M4) | TNS (MBABC-NM, M1, M2, M3, M4)
kn250_2 | 288.45, 125.77, 120.09, 3.77, 61.9 | 304.96, 156.16, 201.62, 194.21, 198
kn250_3 | 549.19, 191.68, 183.63, 0.69, 92.2 | 663.75, 207.34, 376.82, 1472.45, 925
kn250_4 | 742.71, 215.63, 207.90, 1.28, 104.6 | 791.22, 254.71, 617.90, 3958.62, 2288
kn500_2 | 5019.36, 1505.53, 1500.14, 1222.24, 1361.2 | 5198.89, 1543.83, 1761.34, 257.31, 1009
kn500_3 | 6751.66, 2986.63, 2980.58, 1827.03, 2403.8 | 6935.97, 3032.85, 3601.36, 2368.15, 2985
kn500_4 | 17,156.62, 4282.56, 4277.53, 2258.97, 3268.2 | 17,255.48, 5455.92, 4930.68, 5705.94, 5318
kn750_2 | 18,236.52, 4247.75, 4240.96, 3765.41, 4003.2 | 20,515.01, 6087.35, 4525.30, 6362.16, 5444
kn750_3 | 33,682.95, 8035.34, 8029.33, 5336.95, 6683.1 | 34,520.30, 9102.70, 8297.50, 7915.18, 8106
kn750_4 | 58,129.46, 11,307.37, 11,299.73, 6515.64, 8907.7 | 60,293.70, 13,065.30, 11,648.42, 6976.39, 9312
Table 8. Experimental results of the MKP dataset in terms of the reference solution and Davg.
Instance | |R| | Davg (MBABC-NM) | Davg (M1) | Davg (M2) | Davg (M3) | Davg (M4)
kn250_2 | 320 | 2.10 × 10−4 | 9.70 × 10−3 | 3.20 × 10−3 | 1.48 × 10−2 | 7.50 × 10−3
kn250_3 | 564 | 3.50 × 10−4 | 1.50 × 10−3 | 4.60 × 10−3 | 2.02 × 10−2 | 1.06 × 10−2
kn250_4 | 778 | 1.00 × 10−4 | 3.14 × 10−3 | 3.10 × 10−3 | 3.24 × 10−2 | 6.32 × 10−3
kn500_2 | 8844 | 7.20 × 10−4 | 1.78 × 10−2 | 4.50 × 10−3 | 1.61 × 10−2 | 2.36 × 10−2
kn500_3 | 11,978 | 6.00 × 10−4 | 1.41 × 10−2 | 2.20 × 10−3 | 3.24 × 10−2 | 1.28 × 10−2
kn500_4 | 33,374 | 2.50 × 10−3 | 1.01 × 10−2 | 3.60 × 10−3 | 5.60 × 10−2 | 3.00 × 10−3
kn750_2 | 34,890 | 6.40 × 10−3 | 2.46 × 10−2 | 6.80 × 10−3 | 3.17 × 10−2 | 2.15 × 10−2
kn750_3 | 74,504 | 9.50 × 10−3 | 3.12 × 10−2 | 7.80 × 10−3 | 2.80 × 10−2 | 9.80 × 10−2
kn750_4 | 105,161 | 7.20 × 10−3 | 2.93 × 10−2 | 1.32 × 10−2 | 3.18 × 10−2 | 1.01 × 10−2
