Article

A Hybrid Genetic-Hierarchical Algorithm for the Quadratic Assignment Problem

by Alfonsas Misevičius * and Dovilė Verenė
Department of Multimedia Engineering, Kaunas University of Technology, Studentu st. 50-400/416a, LT-51368 Kaunas, Lithuania
* Author to whom correspondence should be addressed.
Entropy 2021, 23(1), 108; https://doi.org/10.3390/e23010108
Submission received: 11 December 2020 / Revised: 6 January 2021 / Accepted: 11 January 2021 / Published: 14 January 2021
(This article belongs to the Section Multidisciplinary Applications)
Figure 1. Graphical illustration of a crossover.
Figure 2. Visual conceptual interpretation of hierarchicity.
Figure 3. Illustration of the mutation procedure (n = 9, ξ = 5). The mutation process steps are as follows: (1) element 3 is interchanged with element 4; (2) element 3 (in position 3) is interchanged with element 1; (3) element 3 (in position 4) is interchanged with element 8; (4) element 3 (in position 6) is interchanged with element 5 (element 3 is eventually in position 8).
Figure A1. Histograms of the frequency of the objective function values for the instance bl64 for different examined scenarios: (a) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 640; (b) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 768; (c) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 896; (d) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 1024. The histograms visualize the frequency of the average deviation over 10 discrete sub-intervals of the interval [0.0, 1.0): [0.0, 0.1); [0.1, 0.2); [0.2, 0.3); …; [0.9, 1.0). It can be seen that the average deviation from (pseudo-)optimal solutions stably decreases as the number of search iterations (Q_hier) increases.

Abstract

In this paper, we present a hybrid genetic-hierarchical algorithm for solving the quadratic assignment problem. The main distinguishing feature of the proposed algorithm is its original hierarchical architecture: the genetic algorithm is combined with the so-called hierarchical (self-similar) iterated tabu search algorithm, which serves as a powerful local optimizer (local improvement algorithm) for the offspring solutions produced by the crossover operator of the genetic algorithm. The results of the conducted computational experiments demonstrate the promising performance and competitiveness of the proposed algorithm.

1. Introduction

The quadratic assignment problem (QAP) [1,2,3,4,5,6] is mathematically formulated as follows. Given two non-negative integer matrices $A = (a_{ij})_{n \times n}$, $B = (b_{kl})_{n \times n}$, and the set $\Pi_n$ of all possible permutations of the integers from 1 to $n$, find a permutation $p = (p(1), p(2), \dots, p(n)) \in \Pi_n$ that minimizes the following objective function:

$$z(p) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} b_{p(i)p(j)}. \qquad (1)$$
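For illustration, a minimal Python sketch of this objective function (our own code, not from the paper; permutations are 0-indexed):

def qap_cost(A, B, p):
    # z(p) = sum over i, j of a_ij * b_{p(i)p(j)} (Formula (1)); p is 0-indexed
    n = len(p)
    return sum(A[i][j] * B[p[i]][p[j]] for i in range(n) for j in range(n))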
One example of the applications of the quadratic assignment problem is the placement of electronic components on printed circuit boards [7,8]. In this context, the entries of the matrix $A$ are associated with the numbers of electrical connections between pairs of components, and the entries of the matrix $B$ correspond to the distances between the fixed positions on the board. The permutation $p = (p(1), p(2), \dots, p(n))$ can be interpreted as a particular configuration of the arrangement of components in the positions; the element $p(i)$ indicates the number of the position to which the $i$th component is assigned. In this way, $z$ (or, more precisely, $z/2$) can be understood as the total (weighted) length of the connections between the components when all $n$ components are placed into the corresponding $n$ positions.
The other important areas of applications of the QAP are as follows: assigning runners in relay teams [9], clustering [10], computer/telephone keyboard design [11,12,13], planning of airport terminals [14], facility location [15], formation of chemical molecular compounds [16], formation of grey patterns [17], index assignment [18], microarray layout [19], numerical analysis [20], office assignment and planning of buildings [21,22], seed orchard design [23], turbine balancing [24], website structure design [25]. More examples of the practical applications of the QAP can be found in [4,26].
The quadratic assignment problem is also a complicated theoretical-mathematical problem: it has been proven that the QAP belongs to the class of NP-hard optimization problems [27]. The QAP can be solved exactly only if the problem size ($n$) is quite small ($n < 30$) [28,29,30,31,32,33,34,35], although some special-case QAP examples of larger sizes ($n = 36$ [36], $n = 64$ [10]) have also been solved exactly. For this reason, heuristic optimization algorithms are widely used. Although these algorithms do not guarantee the optimality of the obtained solutions, they provide high-quality (near-optimal) solutions within a reasonable computation time [37].
Classical single-solution local search (LS) and related algorithms were intensively used for the QAP in the early period of application of heuristic algorithms (1960–1980) [38,39,40,41]. Later, improved local search algorithms have been employed [42,43,44,45]. The so-called breakout local search has been empirically proven to be quite efficient [46,47].
Simulated annealing (SA)-based algorithms usually provide better quality results, compared to pure, deterministic local search algorithms. This applies to both the early variants of SA algorithms [48,49,50] and improved SA algorithm modifications [51,52,53].
Further performance gains were achieved by adopting algorithms based on the tabu search (TS) methodology. The fast-running tabu search algorithm developed by Taillard [54] in the early 1990s is still considered one of the most successful single-solution-based heuristic algorithms for the QAP. Since then, a number of improved variations of TS algorithms have been proposed: reactive tabu search [55], concentric tabu search [56], enhanced tabu search [57], tabu search with hardware acceleration [58], self-controlling tabu search [59], repeated iterated tabu search [60,61], parallel tabu search [62], and other variants [58,63]. The performance of LS and TS algorithms can be increased by extending these algorithms to their ameliorated "siblings", namely, the iterated LS (ILS) [64,65] and iterated TS (ITS) algorithms [66]. Iterated search algorithms have some similarities with multistart methods [63,67,68,69] as well as greedy randomized adaptive search procedures (GRASPs) [70].
Population-based/evolutionary algorithms constitute another important class of efficient heuristic algorithms for the QAP. The advantage of this class is that such algorithms operate with sets of solutions instead of single solutions, and this property is of prime importance when it comes to the solution of the QAP and related problems. In particular, genetic algorithms (GAs) appear to be among the most powerful heuristic algorithms for solving the QAP, among them: the greedy genetic algorithm [71], genetic-local search algorithms [72,73,74], the genetic algorithm using cohesive crossover [75], the improved genetic algorithm [76], the parallel genetic algorithm [77], the memetic algorithm [78], the genetic algorithm on graphics processing units [79], the quantum genetic algorithm [80], and other GA modifications [81,82,83,84,85,86,87,88,89]. Note that population-based algorithms are usually hybridized with single-solution-based algorithms (local search, tabu search, iterated local/tabu search, GRASP). Overall, a significant part of the algorithms for the QAP are, in essence, hybrid algorithms [71,73,75,76,78,79,82,83,88,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104]. It is the hybrid genetic-iterated tabu search and genetic-breakout local search algorithms [76,78] that have achieved the most promising results.
Swarm intelligence algorithms simulate the collective intelligent behaviour of physical/biological entities (agents), such as particles (particle swarm optimization algorithms [105,106]), ants (ant colony optimization algorithms [107]), and bees (artificial bee colony algorithms [108,109]). Finally, algorithms inspired by real-world phenomena (including those using metaphors) are also applicable to the QAP [90,96,98,110,111,112,113,114,115,116,117,118]. For more extensive surveys and literature reviews on the QAP, the reader is referred to [4,119].
The main contribution of this article is an innovative hierarchicity-based genetic algorithm hybridized with a multi-level hierarchical iterated tabu search (HITS) algorithm, which serves as a powerful optimizer of the offspring solutions. The basic idea of HITS is, in turn, the multiple (re)use of iterated tabu search (ITS) while simultaneously moving through many different locally optimal solutions. The other important novelty is the original crossover and mutation operators. The crossover operator is distinguished by its universality, versatility, and flexibility, while the mutation operation is integrated within the HITS algorithm and is based on combined deterministic and probabilistic (controlled random) moves between solutions during the tabu search process. We have also employed a modified population replacement rule. Finally, we have incorporated a population restart mechanism to avoid search stagnation. All these new features have led to a high-performance genetic algorithm with excellent results.
The remainder of the paper is structured as follows: In Section 2, some basic definitions are given. Then, in Section 3, the detailed description of the genetic-hierarchical algorithm and its constituent parts is provided. In Section 4, the results of the computational experiments with the proposed algorithm are presented. The paper is completed with concluding remarks.

2. Basic Definitions

To begin with, we provide some preliminary (basic) formal definitions.
Suppose that $p(u)$ ($u = 1, \dots, n$) and $p(v)$ ($v = 1, \dots, n$, $u \ne v$) are two elements of the permutation $p$. Then $p_{u,v}$ is defined in the following way:

$$p_{u,v}(i) = \begin{cases} p(i), & i \ne u, v \\ p(u), & i = v \\ p(v), & i = u \end{cases} \qquad i = 1, \dots, n. \qquad (2)$$

This means that the permutation $p_{u,v}$ is obtained from the permutation $p$ by interchanging exactly the elements $p(u)$ and $p(v)$. The formal operator (move, or transition operator) $\phi(p, u, v): \Pi_n \times \mathbb{N} \times \mathbb{N} \to \Pi_n$ swaps the $u$th and $v$th elements of the given permutation, so that $p_{u,v} = \phi(p, u, v)$. Note that the following equations hold: $p_{u,u} = p$, $p_{u,v} = p_{v,u}$, $(p_{u,v})_{u,v} = p$.
The difference in the objective function ($z$) values when the permutation elements $p(u)$ and $p(v)$ are interchanged is calculated according to the following formula:

$$\Delta(p_{u,v}, p) = z(p_{u,v}) - z(p) = (a_{uu} - a_{vv})(b_{p(v)p(v)} - b_{p(u)p(u)}) + (a_{uv} - a_{vu})(b_{p(v)p(u)} - b_{p(u)p(v)}) + \sum_{k=1, k \ne u,v}^{n} \left[ (a_{uk} - a_{vk})(b_{p(v)p(k)} - b_{p(u)p(k)}) + (a_{ku} - a_{kv})(b_{p(k)p(v)} - b_{p(k)p(u)}) \right]. \qquad (3)$$

The difference in the objective function values can be computed much faster, provided that the previously calculated differences $\Delta(p_{i,j}, p)$ ($i, j = 1, \dots, n$) are memorized (stored in a matrix $\Xi$). In that case, the difference value is obtained using $O(1)$ operations [54,120]:

$$\Delta(p_{u,v}, p) = z(p_{u,v}) - z(p) = \Xi(u, v). \qquad (4)$$
After the interchange of the elements $p(u)$ and $p(v)$ has been performed, the new differences $\Xi(i, j)$ ($i, j = 1, \dots, n$, $i \ne u$, $i \ne v$, $j \ne u$, $j \ne v$) are calculated according to the following equation:

$$\Xi(i, j) := \Xi(i, j) + (a_{iu} - a_{iv} + a_{jv} - a_{ju})(b_{p(i)p(u)} - b_{p(i)p(v)} + b_{p(j)p(v)} - b_{p(j)p(u)}) + (a_{ui} - a_{vi} + a_{vj} - a_{uj})(b_{p(u)p(i)} - b_{p(v)p(i)} + b_{p(v)p(j)} - b_{p(u)p(j)}). \qquad (5)$$

If $i = u$, $i = v$, $j = u$, or $j = v$, then Formula (3) is applied. Thus, all the differences are updated using only $O(n^2)$ operations. Still, the initial calculation of the values of the matrix $\Xi$ requires $O(n^3)$ operations (but only once, before starting the optimization algorithm).
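To make the update scheme concrete, here is our Python sketch of Formulas (3)-(5) (0-indexed permutations; update_xi assumes the swap of positions u and v has already been applied to p, matching the wording of Formula (5)):

def delta_full(A, B, p, u, v):
    # Formula (3): O(n) evaluation of z(p_{u,v}) - z(p)
    d = ((A[u][u] - A[v][v]) * (B[p[v]][p[v]] - B[p[u]][p[u]])
         + (A[u][v] - A[v][u]) * (B[p[v]][p[u]] - B[p[u]][p[v]]))
    for k in range(len(p)):
        if k not in (u, v):
            d += ((A[u][k] - A[v][k]) * (B[p[v]][p[k]] - B[p[u]][p[k]])
                  + (A[k][u] - A[k][v]) * (B[p[k]][p[v]] - B[p[k]][p[u]]))
    return d

def update_xi(Xi, A, B, p, u, v):
    # Formula (5): O(1) update of each stored difference Xi[i][j]; entries with
    # i or j in {u, v} are recomputed from scratch via Formula (3).
    # p must already reflect the swap of positions u and v.
    n = len(p)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if i in (u, v) or j in (u, v):
                Xi[i][j] = delta_full(A, B, p, i, j)   # Formula (3) case
            else:
                Xi[i][j] += ((A[i][u] - A[i][v] + A[j][v] - A[j][u])
                             * (B[p[i]][p[u]] - B[p[i]][p[v]] + B[p[j]][p[v]] - B[p[j]][p[u]])
                             + (A[u][i] - A[v][i] + A[v][j] - A[u][j])
                             * (B[p[u]][p[i]] - B[p[v]][p[i]] + B[p[v]][p[j]] - B[p[u]][p[j]]))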
If the matrix $A$ and/or the matrix $B$ is symmetric, then Formula (3) becomes simpler. Assume that the matrix $B$ is symmetric. Then the (asymmetric) matrix $A$ can be transformed into a symmetric matrix $A'$, and we get the following formula:

$$\Delta(p_{u,v}, p) = \sum_{k=1, k \ne u,v}^{n} (a'_{uk} - a'_{vk})(b_{p(v)p(k)} - b_{p(u)p(k)}); \qquad (6)$$

here, $a'_{uk} = a_{uk} + a_{ku}$, $u = 1, \dots, n$, $v = 1, \dots, n$, $u \ne v$. Analogously, Formula (5) turns into:

$$\Xi(i, j) := \Xi(i, j) + (a'_{iu} - a'_{iv} + a'_{jv} - a'_{ju})(b_{p(i)p(u)} - b_{p(i)p(v)} + b_{p(j)p(v)} - b_{p(j)p(u)}). \qquad (7)$$

If $i = u$ (or $v$) or $j = u$ (or $v$), Equation (6) is applied.
In addition to this, suppose that we dispose of the three-dimensional matrices $\mathbf{A} = (a_{uvk})_{n \times n \times n}$ and $\mathbf{B} = (b_{lrt})_{n \times n \times n}$, where $a_{uvk} = a'_{uk} - a'_{vk}$ and $b_{lrt} = b_{lt} - b_{rt}$, with $l = p(j)$, $r = p(i)$, $t = p(k)$. Then we can apply the following formulas for the calculation of the differences in the objective function values:

$$\Delta(p_{u,v}, p) = \sum_{k=1, k \ne u,v}^{n} a_{uvk}\, b_{p(v)p(u)p(k)}; \qquad (8)$$

$$\Xi(i, j) := \Xi(i, j) + (a_{iju} - a_{ijv})(b_{p(j)p(i)p(v)} - b_{p(j)p(i)p(u)}). \qquad (9)$$
Using the matrices $\mathbf{A}$ and $\mathbf{B}$ allows saving up to 20% of the computation (CPU) time [66]. The (Hamming) distance between two permutations $p$ and $p'$ is defined as:

$$\delta(p, p') = |\{i : p(i) \ne p'(i)\}|. \qquad (10)$$

The following equations hold: $\delta(p, p) = 0$, $\delta(p, p') \ne 1$, $0 \le \delta(p, p') \le n$, $\delta(p, p') = \delta(p', p)$, and $\delta(p, p_{u,v}) = 2$ for any $u, v$ ($u \ne v$). Given $k$ different numbers $u_1, u_2, \dots, u_k$, the following equation holds: $\delta(p, (\dots((p_{u_1,u_2})_{u_2,u_3})\dots)_{u_{k-1},u_k}) = k$.
The neighbourhood function $\Theta: \Pi_n \to 2^{\Pi_n}$ assigns to each $p \in \Pi_n$ its neighbourhood (the set of neighbouring solutions) $\Theta(p) \subseteq \Pi_n$. The 2-exchange neighbourhood function $\Theta_2$ is defined in the following way:

$$\Theta_2(p) = \{p' : p' \in \Pi_n,\ \delta(p, p') = 2\}; \qquad (11)$$

where $\delta(p, p')$ is the Hamming distance between the solutions $p$ and $p'$. A neighbouring solution $p' \in \Theta_2(p)$ can be obtained from the current solution $p$ by applying the operator $\phi(p, \cdot, \cdot)$. The computational complexity of the exploration of the neighbourhood $\Theta_2$ is proportional to $O(n^2)$.
A solution $p_{loc\_opt} \in \Pi_n$ is said to be locally optimal with respect to the neighbourhood $\Theta$ if, for every solution $p$ from the neighbourhood $\Theta(p_{loc\_opt})$, the following inequality holds: $z(p_{loc\_opt}) \le z(p)$.
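These definitions translate directly into code; a small Python sketch (ours):

def hamming(p, q):
    # delta(p, q): number of positions in which the permutations differ (Equation (10))
    return sum(1 for i in range(len(p)) if p[i] != q[i])

def two_exchange_neighbours(p):
    # Theta_2(p): all n(n - 1)/2 permutations at Hamming distance 2 from p
    n = len(p)
    for u in range(n - 1):
        for v in range(u + 1, n):
            q = list(p)
            q[u], q[v] = q[v], q[u]   # the move operator phi(p, u, v)
            yield q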

3. Hybrid Genetic-Hierarchical Algorithm for the Quadratic Assignment Problem

Our proposed genetic algorithm (for more thorough information on genetic algorithms, we refer the reader to [121]) is based on the hybrid genetic algorithm framework, where the explorative (global) search works in tandem with the exploitative (local) search. The most important feature of our genetic algorithm is that it is hybridized with the so-called hierarchical (self-similar) iterated tabu search (HITS) algorithm (see Section 3.4).
The permutation elements $p(1), p(2), \dots, p(n)$ are directly linked to the individuals' chromosomes, such that the chromosomes' genes correspond to the single elements $p(1), p(2), \dots, p(n)$ of the solution $p$; no encoding is needed. The fitness of an individual is associated with the objective function value of the corresponding solution, $z(p)$.
The following are the main essential components (parts) of our genetic-hierarchical algorithm: (1) initial population construction; (2) selection of parents for crossover procedure; (3) crossover procedure; (4) local improvement of the offspring; (5) population replacement; (6) population restart. The top-level pseudo-code of the genetic-hierarchical algorithm is presented in Algorithm 1 (Notes: (1) The subroutine GetBestMember returns the best solution of the given population. (2) The mutation process is integrated within the k-level hierarchical iterated tabu search algorithm. The mutation process depends on the mutation variant parameter MutVar.).
Algorithm 1. High-level pseudo-code of the genetic-hierarchical algorithm.
Genetic_Hierarchical_Algorithm procedure;
/ input: n—problem size, A, B—data matrices,
/     PS—population size, Ngen—total number of generations,
/     InitPopVar—initial population variant, SelectVar—parents selection variant, CrossVar—crossover variant,
/     MutVar—mutation variant, RepVar—population replacement variant,
/     σ—selection factor, DT—distance threshold, Lidle_gen—idle generations limit
/ output: p★—the best found solution (final solution)
begin
 / create a starting population P of size PS, depending on the initial population variant InitPopVar
 switch (InitPopVar)
  1: create the initial population P by applying the algorithm k-Level_Hierarchical_Iterated_Tabu_Search;
  2: create the initial population P by applying a copy of Genetic_Hierarchical_Algorithm using InitPopVar = 1
 endswitch;
 p★ := GetBestMember(P); / initialization of the best so far solution
 for i := 1 to Ngen do begin / main loop
  sort the members of the population P in the ascending order of the values of the objective function;
  select parents p′, p″ ∈ P for reproduction (crossover), depending on the selection variant SelectVar and the selection factor σ;
  perform the crossover operator on the solution-parents p′, p″ and get the offspring p‴, taking into account the crossover variant CrossVar;
  apply the improvement procedure k-Level_Hierarchical_Iterated_Tabu_Search to the offspring p‴ and get the (improved) offspring p‴;
  get the new population P from the union of the existing parents' population and the offspring, P ∪ {p‴} (such that |P| = PS) (the minimum distance criterion and the population replacement variant RepVar are taken into account);
  if z(p‴) < z(p★) then p★ := p‴; / the best found solution is memorized
  if the number of idle generations exceeds the predefined limit Lidle_gen then begin
   perform the population restart process;
   if z(GetBestMember(P)) < z(p★) then p★ := GetBestMember(P)
  endif
 endfor;
 return p★
end.
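To summarize the control flow, the following compact Python sketch (ours, deliberately simplified) mirrors Algorithm 1: random initialization stands in for InitPopVar, a greedy 2-exchange descent stands in for the k-level HITS optimizer, and select_positions, uniform_crossover, mutate_m1, replace_mu_plus_lambda_eps, and qap_cost refer to the illustrative sketches given in the corresponding (sub)sections of this paper:

import random

def local_swap_improve(p, cost):
    # crude stand-in for the HITS local optimizer: greedy 2-exchange descent
    p, improved = list(p), True
    while improved:
        improved = False
        for u in range(len(p) - 1):
            for v in range(u + 1, len(p)):
                q = list(p)
                q[u], q[v] = q[v], q[u]
                if cost(q) < cost(p):
                    p, improved = q, True
    return p

def genetic_hierarchical(A, B, ps=10, n_gen=100, l_idle_gen=3):
    n = len(A)
    cost = lambda p: qap_cost(A, B, p)                   # objective sketch, Section 1
    P = [random.sample(range(n), n) for _ in range(ps)]  # stand-in for InitPopVar
    best = min(P, key=cost)
    idle = 0
    for _ in range(n_gen):
        P.sort(key=cost)                                  # ascending objective values
        k1, k2 = select_positions(ps, 1.5)                # Section 3.2 sketch
        child = uniform_crossover(P[k1 - 1], P[k2 - 1])   # Section 3.3.3 sketch
        child = local_swap_improve(child, cost)           # stand-in for k-level HITS
        P = replace_mu_plus_lambda_eps(P, child, cost, dt=2)  # Section 3.5 sketch
        if cost(child) < cost(best):
            best, idle = list(child), 0
        else:
            idle += 1
        if idle > l_idle_gen:                             # restart: multi-mutation (Section 3.6)
            P = [mutate_m1(p, max(2, n // 5)) for p in P] # Section 3.4.2.1 sketch
            best = min(P + [best], key=cost)
            idle = 0
    return best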

3.1. Creation of Initial Population

There are two main population construction phases: in the first, the pre-initial population is constructed and improved; in the second, the culling of the improved population is performed. Firstly, $PS' = PS \times C_1$ members of the pre-initial population are created using the version of the GRASP algorithm [70] implemented by the authors; here, $PS$ denotes the population size, and $C_1$ ($C_1 \ge 1$) is a user-defined parameter that regulates the size of the pre-initial population.
There are several options for the population construction in the first phase, controlled by the parameter InitPopVar. If InitPopVar = 1, then every generated solution is improved by the hierarchical iterated tabu search algorithm. A few conditions apply. If the improved solution is better than the best solution found so far in the population $P$, then the improved solution replaces that best solution. Otherwise, it is tested whether the minimum mutual distance between the improved solution and the existing population members is greater than or equal to the predefined distance threshold $DT$. If this is the case, the improved solution is added to the population $P$; otherwise, the improved solution is disregarded and simply a random solution is added instead. (Recall that the distance between solutions is calculated using Equation (10).) The distance threshold is obtained from the following equation: $DT = \max\{2, \varepsilon n\}$, where $\varepsilon$ denotes the distance threshold factor ($0 < \varepsilon \le 1$). This scheme is intended to ensure a high level of diversity among the population members and, at the same time, to enhance the searching ability of the genetic algorithm. To obtain a better initial population, the HITS algorithm with an increased number of iterations is used during the initial population formation. This is similar to the compounded approach proposed in [122].
The second option (InitPopVar = 2) is almost identical to the first one, except that the genetic algorithm itself (a de facto copy of the hybrid genetic-hierarchical algorithm) is employed for the creation of the initial population instead of the HITS algorithm. As an alternative option (InitPopVar = 3), a two-level genetic-hierarchical algorithm (master-slave genetic algorithm) can be employed for the initial population improvement.
The second phase is very simple: the $(C_1 - 1) \times PS$ worst members of the pre-initial population are truncated, and only the $PS$ best members survive for the subsequent generations.

3.2. Selection of Parents

The selection of parents is performed using the parametrized rank-based selection rule [123]. In this case, the positions ($\kappa_1$, $\kappa_2$) of the parents within the sorted population are determined according to the following formulas: $\kappa_1 = (\varsigma_1)^{\sigma}$, $\kappa_2 = (\varsigma_2)^{\sigma}$, $\kappa_1 \ne \kappa_2$, where $\varsigma_1$, $\varsigma_2$ are uniform (pseudo-)random numbers in the interval $[1, PS^{1/\sigma}]$, $PS$ denotes the population size, and $\sigma$ is a real number from the interval $[1, 2]$ (referred to as the selection factor). Clearly, the better the individual, the larger its selection probability.
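In code, this rule might look as follows (a Python sketch; rounding κ down to an integer rank is our own assumption, not stated in the paper):

import random

def select_positions(ps, sigma):
    # parametrized rank-based selection: kappa = floor(zeta ** sigma) with zeta
    # uniform in [1, ps ** (1 / sigma)], so kappa lands in [1, ps] and smaller
    # (better) ranks are more likely for sigma > 1
    def one():
        zeta = random.uniform(1.0, ps ** (1.0 / sigma))
        return max(1, int(zeta ** sigma))
    k1 = one()
    k2 = one()
    while k2 == k1:
        k2 = one()
    return k1, k2   # 1-based positions in the population sorted by objective value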

3.3. Crossover Operators

A two-parent crossover is formally defined using an operator $\Psi: \Pi_n \times \Pi_n \to \Pi_n$ such that:

$$p^{\circ} = \Psi(p', p''), \qquad p^{\circ} \ne p',\ p^{\circ} \ne p'',\ p' \ne p''; \qquad (12)$$

where $p'$, $p''$ denote the parental solutions and $p^{\circ}$ is the offspring solution. (In practice, the child can coincide with one of the parents if, for example, the parents are very similar.) The crossover operator must ensure that the chromosome of the offspring necessarily inherits those genes that are common to both parent chromosomes, i.e., (see also Figure 1):

$$p'(i) = p''(i) \implies p^{\circ}(i) = p'(i) = p''(i), \quad i = 1, 2, \dots, n; \qquad (13)$$

here, $p'$, $p''$, $p^{\circ}$ refer to the parents and the offspring, respectively.
In our work, the crossover procedure takes place at every generation of the genetic-hierarchical algorithm, i.e., the crossover probability is equal to 1 . Several crossover operators were implemented and examined. Short descriptions of the crossover procedures are provided below (see also [124,125]).

3.3.1. One-Point Crossover—1PX

1PX is among the most popular genetic crossover operators. Briefly, the basic idea is as follows: a single crossover point (position, or locus) is chosen in one of the two chromosomes. The position $x$ can be determined by generating a uniformly distributed (pseudo-)random number within the interval $[1, n - 1]$ ($n$ is the chromosome length). The offspring is obtained by copying $x$ genes from one parent; the remaining genes are copied from the opposite parent. If empty loci remain, they are filled in randomly; in addition, the feasibility of the offspring (a valid permutation) must be preserved.

3.3.2. Two-Point Crossover—2PX

Two-point crossover works similarly to the one-point crossover, except that two crossover points $x_1$ and $x_2$ ($1 \le x_1 < x_2 < n$) are used.

3.3.3. Uniform Crossover—UX

In this case, the common genes are copied to the offspring's chromosome. Then, the unoccupied positions in the offspring's chromosome are scanned from left to right, and the empty loci are assigned genes—one at a time—from one of the parents with probability $1/2$, i.e., $p^{\circ}(i) = p'(i)$ if $\varsigma < \frac{1}{2}$, and $p^{\circ}(i) = p''(i)$ otherwise; here, $\varsigma$ is a (pseudo-)random number from the interval $[0, 1]$. Each assigned gene must be unique.
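A feasibility-preserving sketch of UX in Python (ours; duplicates are skipped, and any loci still empty at the end are filled with the unused genes at random, as described above):

import random

def uniform_crossover(p1, p2):
    # common genes are inherited directly; other loci take either parent's
    # gene with probability 1/2, provided that gene is not yet used
    n = len(p1)
    child = [p1[i] if p1[i] == p2[i] else None for i in range(n)]
    used = {g for g in child if g is not None}
    for i in range(n):
        if child[i] is None:
            g = p1[i] if random.random() < 0.5 else p2[i]
            if g not in used:
                child[i] = g
                used.add(g)
    unused = [g for g in range(n) if g not in used]
    random.shuffle(unused)
    for i in range(n):
        if child[i] is None:          # fill remaining loci randomly
            child[i] = unused.pop()
    return child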

3.3.4. Shuffle Crossover—SX

The shuffle crossover is obtained by randomly reordering the parents’ genes before applying the uniform crossover. The same rearrangement rule must be used for both parents. After the uniform crossover is finished, the same (initial) rearrangement rule is again applied.

3.3.5. Partially-Mapped Crossover—PMX

Partially-mapped crossover can be seen as a modified variant of the k-point (multi-point) crossover. The basic principle relies on the so-called mapping sections (the chromosome segments between mapping points). At first, the segments of the chromosome of one parent are moved to the offspring's chromosome; the same is done for the other parent. Finally, the empty loci (if any) are filled in randomly.

3.3.6. Swap-Path Crossover (SPX)

3.3.6.1. Swap-Path Crossover (Basic Version)—SPX1

The main distinguishing feature of SPX is that, instead of transferring genes from parents to a child, the genes are, so to speak, rearranged in the chromosomes of the parents. Let $(p', p'')$ be a pair of parents. The process starts from an arbitrary position, and the positions are scanned from left to right. The process continues until a predefined number of swaps, $s$ ($s < n$), have been performed. If, in the current position, the genes are the same for both parents, then one moves to the next position; otherwise, a pairwise interchange of genes is accomplished in both parents' chromosomes. For example, if the current position is $i$ and $a = p'(i)$, $b = p''(i)$, then there exists a position $j$ such that $b = p'(j)$, $a = p''(j)$; after a swap, $p'(i) = b$, $p''(i) = a$ and $p'(j) = a$, $p''(j) = b$. Consequently, new chromosomes, say $p'''$ and $p''''$, are produced. In the next iteration, the pair $(p''', p'''')$ is considered, and so on.

3.3.6.2. Swap-Path Crossover (Modified Version I)—SPX2

This modification is achieved when the best offspring (with respect to the fitness of the offspring) is retained in the course of gene interchanges.

3.3.6.3. Swap-Path Crossover (Modified Version II)—SPX3

The essential feature of this crossover procedure is that the offspring fitness is dynamically evaluated: only the gene interchanges that improve the value of the objective function are accepted.

3.3.7. Cycle Crossover—CX

The cycle crossover is based on pairwise gene interchanges. The key property of CX is the ability to produce the offspring without distortion of the genetic code; in other words, CX enables creating a chromosome with no random (foreign) genes. The negative aspect of CX is that the offspring may be genetically very close to their predecessors.

3.3.8. Cohesive Crossover—COHX

The cohesive crossover was proposed by Z. Drezner [75] to efficiently recombine individuals' genes by taking into account the problem data, in particular, the distances between the objects' locations. Among several recombinations of genes, the one that minimizes the objective function is selected.

3.3.9. Multi-Parent Crossover—MPX

In the multi-parent crossover, several (or all) members of a population participate in the creation of the offspring. More precisely, the $i$th position (locus) of the offspring's chromosome $p^{\circ}$ is assigned the value $j$ with probability $P(p^{\circ}(i) = j)$ (provided that the value $j$ has not been utilized before).
The probability that $p^{\circ}(i) = j$ is calculated according to the formula $P(p^{\circ}(i) = j) = q_{ij} / \sum_{j=1}^{n} q_{ij}$, where $q_{ij}$ is an element of the matrix $Q = (q_{ij})_{n \times n}$; here, $q_{ij}$ denotes the number of times the $i$th locus takes the value $j$ in the parental chromosomes. If several values ($j_1$, $j_2$, …) have the same probability, one of them is chosen randomly.
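A Python sketch of MPX under these definitions (ours; when no unused gene has a positive count for a locus, we fall back to a uniformly random unused gene):

import random

def multi_parent_crossover(parents, n):
    # Q[i][j] counts how often locus i holds gene j among the parents;
    # locus i of the offspring is drawn proportionally to Q[i][j],
    # restricted to genes not used yet
    Q = [[0] * n for _ in range(n)]
    for p in parents:
        for i in range(n):
            Q[i][p[i]] += 1
    child, used = [], set()
    for i in range(n):
        candidates = [j for j in range(n) if j not in used and Q[i][j] > 0]
        if candidates:
            g = random.choices(candidates, weights=[Q[i][j] for j in candidates])[0]
        else:
            g = random.choice([j for j in range(n) if j not in used])
        child.append(g)
        used.add(g)
    return child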

3.3.10. Universal Crossover—UNIVX

The universal crossover (UNIVX) [124] is distinguished by its versatility and the possibility of flexible usage depending on the specific needs of the user. It is somewhat similar to what is known as simulated binary crossover [126].
Our operator is based on the use of a random mask. There are four controlling parameters: $\chi_1$, $\chi_2$, $\chi_3$, $\chi_4$. The mask length is equal to $\chi_1$, where $\chi_1$ is a (pseudo-)random number within the interval $[\varepsilon_1, n]$; $n$ is the length of the chromosome, $\varepsilon_1 = r \times n$, and $r$ is a user parameter close to 1 (for example, 0.9). The mask contains binary values 0 and 1, where 1 signals that the corresponding gene of the first parent's chromosome must be chosen, and 0 indicates that the second parent's gene is to be replicated. The degree of randomness of the mask is controlled by the parameters $\chi_2$ and $\chi_3$. The parameter $\chi_2$ ($\chi_2 \in [\varepsilon_2, \varepsilon_3]$, $0 < \varepsilon_2 \le \varepsilon_3 < 1$) dictates how many 0's and 1's there are in the mask: the higher the value of $\chi_2$, the bigger the total number of 1's, and vice versa. The juxtaposition of bits is regulated by the parameter $\chi_3$. The bit generation itself is performed using a kind of "anytime" min-max sorting algorithm; that is, if the sorting algorithm is interrupted at some random moment, the result is a randomized ("quasi-sorted") sequence of bits. The moment of interruption is defined by the number $\eta$, where $\eta = \chi_3 w$; here $\chi_3$ is a (pseudo-)random real number from the interval $[0, 1]$, and $w$ denotes the maximum number of iterations required to fully sort all the bits. (As an example, if the bits "0000001111" are to be sorted in descending order and the algorithm is stopped at $\chi_3 = 0.9$, then a random mask similar, for example, to "1011000100" would be generated.) Once the mask has been generated, the decision is made as to which genes are transmitted to the offspring's chromosome. The index of the starting locus of the transferred genes, $\chi_4$, is generated randomly, in such a way that $\chi_4 \in [1, n]$. Eventually, the empty loci (if any) are filled in randomly.

3.4. Offspring Improvement

3.4.1. Hierarchical Iterated Tabu Search Algorithm

Every created offspring is improved using the hierarchical iterated tabu search algorithm, which draws inspiration from the philosophy of iterated local search [127] and also from the spirit of self-similarity—one of the fundamental properties of nature (see Figure 2). Basically, this means that the algorithm is (almost) exactly similar to a part of itself. In other words, the main idea is the repeated, cyclical adoption (reuse) of the iterated tabu search algorithm; that is, the iterated tabu search can be reused multiple times. This idea is not entirely new, and some variants of hierarchical-like algorithms have already been investigated [128,129,130,131,132,133,134].
The paradigm of the hierarchicity-based (self-similar) algorithm is as follows:
(1) Set the number of levels, k (1 ≤ k ≤ k_max).
(2) Generate an initial solution p.
(3) Apply the (k−1)-level algorithm to the solution p. Let p′ be the improved solution.
(4) Memorize the best found solution.
(5) Set p = p′ or select a solution p from the history of solutions.
(6) Apply a perturbation procedure to the solution p. Let p̃ be the perturbed solution.
(7) Set p = p̃.
(8) If the termination criterion is not satisfied, then go to Step (3); otherwise, stop the algorithm.
The k-level hierarchical iterated tabu search algorithm consists of three basic components: (1) invocation of the k–1-level hierarchical iterated tabu search algorithm to improve a given solution; (2) acceptance of the candidate (improved) solution for perturbation, i.e., mutation; (3) mutation of the accepted solution.
Perturbation (mutation) is applied to a chosen optimized solution that is selected by the defined candidate acceptance rule (see Section 3.4.3). The mutated solution serves as an input for the self-contained TS procedure. The TS procedure returns an improved solution, and so on. The overall process continues until a pre-defined number of iterations have been performed (see Algorithm 2 (Note. The iterated tabu search procedure (see Algorithm 3) is in the role of the 0-level hierarchical iterated tabu search algorithm.)). The best found solution is regarded as the resulting solution of HITS.
Algorithm 2. Pseudocode of the k-level hierarchical iterated tabu search algorithm.
k-Level_Hierarchical_Iterated_Tabu_Search procedure;
/ input: p—current solution
/ output: p★—the best found solution
/ parameter: Q〈k〉—number of iterations of the k-level HITS algorithm
begin
 p★ := p;
 for q〈k〉 := 1 to Q〈k〉 do begin
  apply k–1-Level_Hierarchical_Iterated_Tabu_Search to p and get p′;
  if z(p′) < z(p★) then p★ := p′; / the best found solution is memorized
  if q〈k〉 < Q〈k〉 then begin
   p := Candidate_Acceptance(p′, p★);
   apply mutation procedure to p
  endif
 endfor
end.
Algorithm 3. Pseudocode of the iterated tabu search algorithm.
Iterated_Tabu_Search procedure;
/ 0-level hierarchical iterated tabu search algorithm
/ input: p—current solution
/ output: p〈0〉—the best found solution
/ parameter: Q〈0〉—number of iterations of the ITS algorithm
begin
 p〈0〉 := p;
 for q〈0〉 := 1 to Q〈0〉 do begin
  apply Tabu_Search to p and get p′;
  if z(p′) < z(p〈0〉) then p〈0〉 := p′; / the best found solution is memorized
  if q〈0〉 < Q〈0〉 then begin
   p := Candidate_Acceptance(p′, p〈0〉);
   apply mutation procedure to p
  endif
 endfor
end.
The 0-level HITS algorithm is, in fact, simply the iterated tabu search algorithm (for more details on the ITS algorithm, see [135]) (see Algorithm 3 (Note: the tabu search procedure is in the role of the self-contained algorithm.)), which, in turn, uses a self-contained tabu search algorithm—the "kernel" tabu search procedure. It is this procedure that directly improves a given solution; it thus plays the role of the search intensification mechanism, while the mutation procedure is responsible for the diversification of the search process. Although the structure of the individual hierarchical levels of the HITS algorithm is quite simple, the overall efficacy of the resulting multi-level algorithm increases significantly, as demonstrated by the computational experiments. Of course, the run time increases as well, but this is compensated by the higher quality of the final results.
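The self-similar structure can be captured in a few lines. Below is our minimal recursive Python sketch (not the authors' code): tabu_search, accept, and mutate are injected callables standing for the self-contained TS, the candidate acceptance rule (Section 3.4.3), and the mutation procedure, while q_iters[level] plays the role of Q〈k〉:

def hits(p, level, q_iters, tabu_search, accept, mutate, cost):
    # level 0 applies the self-contained tabu search directly (plain ITS);
    # level k > 0 reuses the (k-1)-level algorithm in each iteration.
    # The best solution is kept; an accepted candidate is mutated in between.
    best = None
    for q in range(q_iters[level]):
        if level == 0:
            improved = tabu_search(p)
        else:
            improved = hits(p, level - 1, q_iters, tabu_search, accept, mutate, cost)
        if best is None or cost(improved) < cost(best):
            best = list(improved)
        if q < q_iters[level] - 1:
            p = mutate(accept(improved, best))
    return best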
An interesting analogy can be perceived between intensification and diversification (on the one side) and the phenomenon of entropy (on the other side). Indeed, the intensification process can be thought of as a process of decreasing entropy, while diversification can be viewed as increasing entropy. Similar processes are seen in open real-world physical systems. An example is the evolution of stars, where the formation (birth) of stars (along with planets, organic matter, etc.) can be linked to an apparent decrease of entropy, while the death of stars (supernovae) may be associated with a significant increase of entropy.
The self-contained tabu search procedure (for a more detailed description of the principles of TS algorithms, the reader is referred to [136]) iteratively analyses the neighbourhood of the current solution $p$ (in our case, $\Theta_2(p)$) and performs the non-prohibited move that most improves the value of the objective function. If there are no improving moves, the one that least degrades the objective value is accepted. In order to eliminate search cycles, the return to recently visited solutions is disabled for a specified period. The list of prohibitions—the tabu list $T$—is implemented as a two-dimensional matrix of size $n \times n$. The entry $t_{ij}$ stores the sum of the current iteration number and the tabu tenure $h$; this value thus indicates from which iteration the $i$th and $j$th elements of a given solution may again be interchanged. The value of the parameter $h$ depends on the problem size $n$ and is chosen equal to $0.3n$. The tabu status is ignored at random moments with a very low probability $\alpha$ ($\alpha \approx 0.05$). This slightly increases the number of non-tabu solutions and avoids limiting the available search directions too much. The tabu condition is also ignored when the aspiration criterion is met, i.e., when the currently obtained solution is better than the best solution found so far. The resulting formal tabu and aspiration criteria are as follows:
$$tabu\_criterion(i, j) = \begin{cases} \text{TRUE}, & (t_{ij} \ge q) \text{ and } (\varsigma \ge \alpha) \text{ and } (HT((z(p) + \Delta(p_{i,j}, p)) \bmod HashSize) = \text{TRUE}) \\ \text{FALSE}, & \text{otherwise;} \end{cases}$$

$$aspiration\_criterion(i, j) = \begin{cases} \text{TRUE}, & z(p) + \Delta(p_{i,j}, p) < z^{\star} \\ \text{FALSE}, & \text{otherwise;} \end{cases}$$

where $i$, $j$ are the current elements' indices, $q$ denotes the current iteration number, $\varsigma$ is a (pseudo-)random real number within the interval $[0, 1]$, and $z^{\star}$ denotes the best value of the objective function found so far. $HT$ denotes the hash table, which is simply a one-dimensional array, and $HashSize$ is the capacity of the hash table.
In addition, our tabu search procedure uses a so-called secondary memory $\Gamma$ [137] to avoid stagnation during the search process. The purpose of this memory is to gather high-quality solutions which, although rated very good, are not chosen. In particular, the secondary memory includes the solutions left "second" after the exploration of the neighbourhood $\Theta_2$. If the best found solution does not change for more than $\beta\tau$ iterations, the tabu list is cleared and the search is restarted from one of the "second" solutions in the secondary memory $\Gamma$ (here, $\tau$ denotes the number of iterations of the TS procedure, and $\beta$ is a so-called idle iterations limit factor such that $1 \le \beta\tau \le \tau$). The TS procedure is completed as soon as the total number of iterations, $\tau$, has been performed.
The time complexity of the TS algorithm is proportional to $O(n^2)$, since the exploration of the neighbourhood $\Theta_2$ requires $n(n-1)/2$ operations, and the differences of the objective function must be recalculated after each transition from a given solution to a new one.
The pseudo-code of the tabu search algorithm is shown in Algorithm 4. (Notes: (1) The immediate if function iif($x$, $y_1$, $y_2$) returns $y_1$ if $x$ = TRUE; otherwise, it returns $y_2$. (2) The function random() returns a pseudo-random number uniformly distributed in $[0, 1]$. (3) The function random($x_1$, $x_2$) returns a pseudo-random number in $[x_1, x_2]$. (4) The values of the matrix $\Xi$ are recalculated according to Formula (9). (5) $\beta$ denotes a random access parameter (we used $\beta = 0.8$).)
Algorithm 4. Pseudo-code of the tabu search algorithm.
Tabu_Search procedure;
/ input: n—problem size,
/     p—current solution, Ξ—difference matrix
/ output: p★—the best found solution (along with the corresponding difference matrix)
/ parameters: τ—total number of tabu search iterations, h—tabu tenure, α—randomization coefficient,
/     Lidle_iter—idle iterations limit
begin
 clear tabu list TabuList and hash table HashTable;
 p★ := p; k := 1; k′ := 1; secondary_memory_index := 0; improved := FALSE;
 while (k ≤ τ) or (improved = TRUE) do begin / main cycle
  Δ′min := ∞; Δ″min := ∞; u′ := 1; v′ := 1;
  for i := 1 to n − 1 do
   for j := i + 1 to n do begin / n(n − 1)/2 neighbours of p are scanned
    Δ := Ξ(i, j);
    forbidden := iif(((TabuList(i, j) ≥ k) and (HashTable((z(p) + Δ) mod HashSize) = TRUE) and
        (random() ≥ α)), TRUE, FALSE);
    aspired := iif(z(p) + Δ < z(p★), TRUE, FALSE);
    if ((Δ < Δ′min) and (forbidden = FALSE)) or (aspired = TRUE) then begin
     if Δ < Δ′min then begin Δ″min := Δ′min; u″ := u′; v″ := v′; Δ′min := Δ; u′ := i; v′ := j endif
     else if Δ < Δ″min then begin Δ″min := Δ; u″ := i; v″ := j endif
    endif
   endfor;
  if Δ″min < ∞ then begin / archiving the second solution (p, Ξ, u″, v″)
   secondary_memory_index := secondary_memory_index + 1; Γ(secondary_memory_index) ← p, Ξ, u″, v″
  endif;
  if Δ′min < ∞ then begin / replacement of the current solution and recalculation of the values of Ξ
   p := ϕ(p, u′, v′);
   recalculate the values of the matrix Ξ;
   if z(p) < z(p★) then begin p★ := p; k′ := k endif; / the best so far solution is memorized
   TabuList(u′, v′) := k + h; / the elements p(u′), p(v′) become tabu
   HashTable(z(p) mod HashSize) := TRUE / the value of the new current solution is hashed
  endif;
  improved := iif(Δ′min < 0, TRUE, FALSE);
  if (improved = FALSE) and (k − k′ > Lidle_iter) and (k < τ − Lidle_iter) then begin
   / retrieving a solution from the secondary memory
   random_access_index := random(β × secondary_memory_index, secondary_memory_index);
   p, Ξ, u″, v″ ← Γ(random_access_index);
   p := ϕ(p, u″, v″);
   recalculate the values of the matrix Ξ;
   clear tabu list TabuList;
   TabuList(u″, v″) := k + h; / the elements p(u″), p(v″) become tabu
   k′ := k
  endif;
  k := k + 1
 endwhile
end.

3.4.2. Mutation

Each solution found by the tabu search algorithm is subject to perturbation in the mutation procedure. Recall that, formally, the mutation process can be defined by an operator $\varphi: \Pi_n \to \Pi_n$; thus, if $\tilde{p} = \varphi(p)$, then $\tilde{p} \in \Pi_n$, $\tilde{p} \ne p$. A more formalized operator can be described as $\varphi(p, \xi): \Pi_n \times \mathbb{N} \to \Pi_n$, which transforms the current solution $p$ into a new solution $\tilde{p}$ such that $\delta(p, \tilde{p}) = \delta(p, \varphi(p)) = \xi$. In this way, $100\xi/n$ per cent of the solution elements are affected. The parameter $\xi$ ($2 \le \xi \le n$) regulates the strength of the mutation and is referred to as the mutation rate. (In our algorithm, $\xi = 0.2n$.) Clearly, for any $p$, $\tilde{p}$ (such that $p \ne \tilde{p}$, $\delta(p, \tilde{p}) = \xi$) there always exists a sequence of distinct integers $u_1, u_2, \dots, u_\xi$ such that $\tilde{p} = (\dots((p_{u_1,u_2})_{u_2,u_3})\dots)_{u_{\xi-1},u_\xi}$.
Choosing the right value of the mutation rate $\xi$ plays a very important role in the mutation procedure, the HITS algorithm, and, at the same time, the whole genetic algorithm. A proper compromise between two extreme cases must be achieved: (1) the value of $\xi$ is (very) low (close to 0); (2) the value of $\xi$ is (very) high (close to $n$). In the first case, the mutation would not guarantee that the mutated solution is "far" enough away from the given solution, which would lead to cycling search trajectories. In the second case, useful accumulated information would be lost, and the algorithm would become very similar to a crude random multi-start method.
It should be stressed that the mutation processes are quite different from the crossover procedures. Mutation processes are, by their nature, purely random processes. Whereas crossover procedures only recombine the genetic code contained in the parents, the mutation processes generate, in essence, new information that did not previously exist in the predecessors. It is the mutation process that is implicitly a truly creative process and potentially produces real novelty. In our work, twelve different mutation procedures and their modifications were proposed and tested.

3.4.2.1. Mutation Based on Random Pairwise Interchanges (M1)

In the beginning, a sequence $r = (r(1), r(2), \dots, r(\xi))$ of random integers $r(i) \in \{1, \dots, n\}$ is generated. Then, starting with the pair $(r(1), r(2))$, the elements $p(r(1))$ and $p(r(2))$ are interchanged; next, the elements $p(r(2))$ and $p(r(3))$ are interchanged, and so on. This is repeated $\xi - 1$ times, where $\xi$ is the value of the mutation rate defined by the algorithm's user. The result of the mutation procedure is thus a solution $\tilde{p}$ satisfying the conditions $\tilde{p} \in \Pi_n$, $\delta(p, \tilde{p}) = \xi$ (see Figure 3).
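A Python sketch of M1 (ours; the ξ positions are drawn as distinct values, which is what guarantees δ(p, p̃) = ξ):

import random

def mutate_m1(p, xi):
    # chain-swap along xi distinct random positions: (r1, r2), (r2, r3), ...;
    # the result is at Hamming distance xi from p (Section 3.4.2.1)
    r = random.sample(range(len(p)), xi)
    q = list(p)
    for l in range(xi - 1):
        q[r[l]], q[r[l + 1]] = q[r[l + 1]], q[r[l]]
    return q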
On the basis of the random pairwise interchanges, other modified mutation procedures can be developed [138].

3.4.2.2. Random Pairwise Interchanges Using Random Key (M2)

In this case, the mutation process consists of two basic steps: (1) random pairwise interchanges; (2) shuffling of the interchanged elements using a random key. A random key, $rk$, is a list of random indices of size $\xi$: $rk(1), rk(2), \dots, rk(\xi)$. The random key values simply determine which elements are interchanged again. The intention is to obtain a more "deeply" mutated solution and to avoid returning to previously visited solutions.

3.4.2.3. Mutation Using Opposite Values (M3)

In this mutation procedure, a position index, say $k$, is randomly determined. Then the element $e = p(k)$ is replaced by the following opposite value: $o = ((p(k) + \lceil n/2 \rceil - 1) \bmod n) + 1$, where mod denotes the modulo operation. After this replacement, the solution element that was previously equal to $o$ must also be changed. After both changes, $p(k)$ becomes equal to $o$, and $p(l)$ becomes equal to $e$, where $l$ indicates the position whose element was equal to $o$. The process is repeated $\lceil \xi/2 \rceil$ times, where $\xi$ is the mutation rate.

3.4.2.4. Distance-Based Mutation (M4)

In this procedure, the indices of the pairs of elements to be interchanged are generated in such a way that the "distance" ($d$) between those indices is as large as possible. The following formula for generating the indices $k_1, k_2, \dots, k_\xi$ is used: $k_l = ((d q_l + \varsigma - 1) \bmod n) + 1$, where $d = \lceil n/\xi \rceil$, $\varsigma$ is a (pseudo-)random number from the interval $[0, 1]$, $q_l = (q_{l-1} \bmod n) + 1$, $l = 1, 2, \dots, \xi$; the initial value $q_0$ is a random integer from the interval $[1, n]$.

3.4.2.5. Modified Random Pairwise Interchanges—Variant I (M5)

This is similar to the random pairwise interchanges. A sequence of random real-coded values from the interval $[0, 1]$ is generated. The generated numbers, along with their corresponding indices, are sorted in ascending order (the smallest-position-value approach). The resulting indices determine the elements to be interchanged.

3.4.2.6. Modified Random Pairwise Interchanges—Variant II (M6)

The list of random indices is obtained by directly generating random integers from the interval $[1, n]$. The integers may duplicate each other; to avoid duplications, the integers are sorted in ascending order. The indices corresponding to the sorted numbers indicate the elements that are to be interchanged.

3.4.2.7. Combined Mutation (M7)

This mutation procedure consists of two combined mutation procedures. Initially, the list of indices of the pairs of elements to be interchanged is constructed (see Section 3.4.2.6). The selected elements are then changed using opposite values (see Section 3.4.2.3).

3.4.2.8. Greedy Adaptive Search Based Mutation (M8)

The basic principle of this mutation procedure is that the solution is disintegrated in some way and then reconstructed. The mutation process consists of two steps: (1) disintegration of the solution, which is random; (2) reconstruction of the solution, which is greedy-deterministic. In the first step, $\xi$ elements are disregarded. In the second step, a greedy constructive algorithm is applied, which tries to find the best possible solution out of the $\xi!$ available options. The value of $\xi$ should be quite small in this case to prevent a large increase in the run time of the mutation procedure. This mutation procedure (as well as the other procedures described below) is no longer problem-independent, as problem domain-specific knowledge is taken into account.

3.4.2.9. Greedy Randomized Adaptive Search Based Mutation (M9)

This mutation procedure resembles the one described above. The difference is that a greedy randomized adaptive search procedure (GRASP) [70] is used in the partial solution reconstruction phase to obtain an improved solution.

3.4.2.10. Randomized Local Search Based Mutation—Variant I (M10)

In this case, a quick procedure based on random pairwise interchanges is initially performed (see Section 3.4.2.1). Then, a set of randomly selected elements is formed. A local search is then performed using the constructed set, i.e., only transitions between solutions that improve the value of the objective function are accepted.

3.4.2.11. Randomized Local Search Based Mutation—Variant II (M11)

This mutation variant is similar to the previous randomized local search variant, except that the randomly constructed neighbourhood is fully explored in a systematic way. Again, only improving transitions between solutions are accepted.

3.4.2.12. Randomized Tabu Search Based Mutation (M12)

Let

$$p^{(1)} = \operatorname*{argmin}_{\substack{i = 1, \dots, n-1;\ j = i+1, \dots, n \\ move\_acceptance\_criterion(i,j) = \text{TRUE}}} \{z(p_{i,j})\}$$

and

$$p^{(2)} = \operatorname*{argmin}_{\substack{i = 1, \dots, n-1;\ j = i+1, \dots, n \\ move\_acceptance\_criterion(i,j) = \text{TRUE}}} \{z(p_{i,j}) \mid p_{i,j} \ne p^{(1)}\},$$

where:

$$move\_acceptance\_criterion(i, j) = \begin{cases} \text{TRUE}, & (tabu\_criterion(i, j) = \text{FALSE}) \text{ or } (aspiration\_criterion(i, j) = \text{TRUE}) \\ \text{FALSE}, & \text{otherwise,} \end{cases}$$

$$tabu\_criterion(i, j) = \begin{cases} \text{TRUE}, & (t_{ij} \ge q) \text{ and } (\varsigma \ge \alpha) \\ \text{FALSE}, & \text{otherwise,} \end{cases}$$

$$aspiration\_criterion(i, j) = \begin{cases} \text{TRUE}, & z(p) + \Delta(p_{i,j}, p) < z^{\star} \\ \text{FALSE}, & \text{otherwise;} \end{cases}$$

$q$ denotes the current iteration number, $\varsigma$ is a (pseudo-)random number within the interval $[0, 1]$, $\alpha$ denotes the randomization coefficient, and $z^{\star}$ denotes the best value of the objective function found so far. Then, in the randomized tabu search procedure, the best achieved solution (the "winner solution") $p^{(1)}$ is accepted with probability $\gamma$, while the second solution $p^{(2)}$ is chosen with probability $1 - \gamma$ (note that, in the case of $\gamma = 1$, we get the standard (deterministic) tabu search). In our algorithm, we use $\gamma = 0.2$. The central idea of the randomized tabu search is thus this quasi-random mixing of the "winner solution" and the "next to the winner solution" in the course of the tabu search process. Based on extensive experimentation, we found this type of mutation to be the most promising among all the procedures examined. A plausible explanation is that this type of mutation is gentler, more moderate, and more controlled than the other mutation procedures.
Finally, note that the computational complexity of all our mutation procedures is proportional to $O(\xi n^2)$. This is because our mutation procedures recalculate the differences of the objective function (i.e., the values of the matrix $\Xi$) approximately $\xi$ times (see Algorithm 5 (Note: the values of the matrix $\Xi$ are recalculated using Equation (9).)). So, the smaller the value of $\xi$, the faster the mutation procedure. Also note that the difference matrix $\Xi$ is (permanently) stored in RAM (operating memory), so there is no need to recalculate the differences of the objective function from scratch.
Algorithm 5. Pseudo-code of the procedure for recalculation of the differences of the objective function.
Recalculation_of_the_Differences_of_the_Objective_Function procedure;
/ input: ξ—mutation rate, p—initial permutation before mutation, r—random index array, Ξ—current differences
/ output: Ξ—new differences
begin
 for l := 1 to ξ − 1 do begin
  u := r(l); v := r(l + 1); p := ϕ(p, u, v);
  recalculate the values of the matrix Ξ
 endfor
end.

3.4.3. Candidate Acceptance

Regarding the candidate solution acceptance rule, we always choose the most recently (newly) found improved solution (the latest result of the HITS (or TS) algorithm) instead of the overall best found solution. Such an approach is thought to allow the exploration of potentially larger regions of the solution space.

3.5. Population Replacement

For the population replacement, a modified rule is used that respects not only the quality of the solutions, but also the difference (distance) between solutions.
We have, in particular, implemented an extended variant of the well-known "μ + λ" update rule [139]. The new, advanced replacement rule is denoted by "μ + λ, ε". (This rule is also used for the initial population construction (see Section 3.1).) Recall that if the minimum mutual distance between the population members and the newly obtained offspring is less than the distance threshold DT, then the offspring is omitted. The only exception is the case where the offspring is better than the best population member. Otherwise, the offspring enters the current population, but only under the condition that it is better than the worst population member; the worst individual is removed in this case. This replacement strategy ensures that only individuals that are diverse enough survive to the further generations.
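A Python sketch of the "μ + λ, ε" rule for RepVar = 1 (ours; cost maps a solution to its objective value, and hamming is the distance sketch from Section 2):

def replace_mu_plus_lambda_eps(P, child, cost, dt):
    # accept the offspring if it beats the best member; otherwise require it
    # to be at Hamming distance >= DT from every member and better than the
    # worst member, which it then replaces
    P = sorted(P, key=cost)
    if cost(child) < cost(P[0]):
        return [child] + P[:-1]               # beats the best: worst drops out
    if min(hamming(child, p) for p in P) < dt:
        return P                              # too similar to the population: omitted
    if cost(child) < cost(P[-1]):
        return P[:-1] + [child]               # replaces the worst member
    return P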
There are a few replacement variations (options), depending on the parameter RepVar. If RepVar = 1, then exactly the above replacement strategy is adopted. If RepVar = 2, then the new offspring replaces the best member of the current population whenever the offspring is better than the best individual; if the offspring is worse than the best individual, then RepVar = 2 is identical to RepVar = 3. If RepVar = 3, then the offspring replaces the worst member of the population, ignoring the fitness of the worst individual. In all cases, the minimum distance criterion must be taken into account.

3.6. Population Restart

An important feature of our genetic algorithm is the use of a population restart mechanism to avoid premature convergence and search stagnation. The restart process is triggered in situations where the solutions of the population do not change at all for some number of consecutive generations. This is operationalized through an a priori parameter called the idle generations limit, L_idle_gen, where L_idle_gen = max{L_min, ω N_gen}; here, L_min is a constant (we use L_min = 3), ω denotes a stagnation control parameter and N_gen is the total number of generations of the genetic algorithm. (The standard value of ω is 0.05.) The restart itself is performed by applying a so-called multi-mutation, where the mutation process is applied to all members of the stagnated population (see the sketch below). Such an approach is preferable to completely destroying the population, which seems too aggressive.
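A compact sketch of the restart trigger is shown below; the names (population, idleGenerations, mutate) are illustrative assumptions, while L_min = 3 and ω = 0.05 follow the text.

using System;
using System.Collections.Generic;

// Sketch of the population restart mechanism: if the population has not
// changed for L_idle_gen consecutive generations, apply a "multi-mutation",
// i.e., mutate every member instead of destroying the population.
static void RestartIfStagnated(List<int[]> population, ref int idleGenerations,
                               int nGen, Action<int[]> mutate)
{
    const int LMin = 3;
    const double Omega = 0.05;
    int lIdleGen = Math.Max(LMin, (int)(Omega * nGen));

    if (idleGenerations < lIdleGen) return;

    foreach (int[] member in population) mutate(member);  // multi-mutation
    idleGenerations = 0;
}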

4. Computational Experiments

Our genetic-hierarchical algorithm is implemented in the C# programming language. The computational experiments were carried out on a personal computer with a 3.1 GHz Intel Core i5-3450 CPU running Windows 7 Enterprise.
The algorithm was tested on small-, medium- and large-scale QAP benchmark instances of sizes from n = 10 to n = 128. Most instances are from the online QAP library QAPLIB [29]. The other instances are from [14,19] (see also http://mistic.heig-vd.ch/taillard/problemes.dir/qap.dir/qap.html).
In particular, the following benchmark instances taken from QAPLIB were examined:
(a) random, unstructured instances (denoted by rou20, tai10a, tai12a, tai15a, tai17a, tai20a, tai25a, tai30a, tai35a, tai40a, tai50a, tai60a, tai80a, tai100a);
(b) randomly generated, grid-based instances (denoted by had20, nug30, scr20, sko42, sko49, sko56, sko64, sko72, sko81, sko90, sko100a..sko100f, tho30, tho40, wil50, wil100);
(c) real-life, structured instances from practical applications (denoted by chr25a, els19, esc32a..esc32h, esc64a, esc128, kra30a, kra30b, ste36a..ste36c, tai64c);
(d) real-life like (pseudo-random), structured instances (denoted by tai10b, tai12b, tai15b, tai20b, tai25b, tai30b, tai35b, tai40b, tai50b, tai60b, tai80b, tai100b);
(e) instances with known optimal solutions (denoted by lipa20a, lipa20b, lipa30a, lipa30b, lipa40a, lipa40b, lipa50a, lipa50b, lipa60a, lipa60b, lipa70a, lipa70b, lipa80a, lipa80b, lipa90a, lipa90b).
In addition, the instances introduced by de Carvalho and Rahmann [19] were investigated. These instances are extremely difficult to solve. They are denoted by bl36, bl49, bl64, bl81, bl100 (the border length minimization instances) and ci36, ci49, ci64, ci81, ci100 (the conflict index minimization instances).
We also tested the instances dre15, dre18, dre21, dre24, dre28, dre30, dre42, dre56, dre72, dre90, tai27e1, tai27e2, tai27e3, tai45e1, tai45e2, tai45e3, tai75e1, tai75e2, tai75e3 proposed by Drezner and Taillard in [14].
In the initial computational experiments, we used the following “learning set” of the QAP benchmark instances of sizes from n = 35 to n = 70 : bl49, bl64, ci49, ci64, dre42, dre56, lipa70a, lipa70b, sko56, sko64, tai35a, tai35b, tai40a, tai40b, tai45e1, tai50a, tai50b, tai60a, tai60b, wil50. These particular instances were chosen based on our preliminary experience.
As a performance criterion, we adopt the average relative percentage deviation (θ̄) of the yielded solutions from the best known solution (BKS). It is calculated by the formula θ̄ = 100(z̄ − z_bkv)/z_bkv [%], where z̄ is the average objective function value over 10 runs of the algorithm and z_bkv denotes the best known value (BKV) of the objective function, which corresponds to the BKS (BKVs are from [14,29,86]).
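As a small worked example (the average value here is hypothetical, for illustration only): for the instance bl49 with z_bkv = 4548, an average objective value of z̄ = 4581 over 10 runs would give θ̄ = 100 × (4581 − 4548)/4548 ≈ 0.726%.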
In each run, the algorithm is applied to the particular instance and starts from a new random initial population. The algorithm terminates when either the maximum number of generations, N_gen, has been reached or the best known solution has been found.
In the experiments, the goal was to empirically test the performance of the basic setup of our algorithm, as well as its various variants, in terms of solution quality and run time. To do so, we identified the essential components (ingredients) of the algorithm (namely, "initial population", "selection", "crossover", "local improvement (hierarchical tabu search)", "mutation" and "population replacement") in order to reveal their influence on the efficiency of GHA and to "synthesize" the preferable fine-tuned architecture of the algorithm. The following combination of the particular options (parameters) related to these components is declared as the basic version of GHA: {InitPopVar = 1, PS = 10, N_gen = 100, σ = 1.5, CrossVar = 1PX, τ = 20, Q_hier = 2⁸ = 256, MutVar = M1, RepVar = 1}; here, Q_hier denotes the total cumulative number of hierarchical iterations (Q_hier = Q(0) × Q(1) × Q(2) × Q(3) × Q(4) × Q(5) × Q(6) × Q(7)), where Q(0), …, Q(7) denote, respectively, the numbers of iterations of the 0th-level, …, 7th-level hierarchical iterated tabu search algorithms. The prescribed default values of the control parameters corresponding to the basic version of GHA are shown in Table 1 and Table 2. (These values were later overridden in particular separate experiments.)
In the initial experiments, twelve crossover procedures (1PX, 2PX, UX, SX, PMX, SPX1, SPX2, SPX3, CX, COHX, MPX, UNIVX) were compared against each other. The obtained results (presented in Table 3) demonstrate that our proposed universal crossover (UNIVX) with tuned control parameters yields the most promising results.
In further experiments, the different mutation procedures (M1, M2, M3, M4, M5, M6, M7, M8, M9, M10, M11, M12) were examined. This time, we found that the randomized tabu search based mutation (M12) is clearly the best among all tested mutation variants (see Table 4).
Further, we were interested in how various options (configurations) of the initial population construction affect the performance of the genetic-hierarchical algorithm. The configurations differ with respect to the population construction option (InitPopVar), the size of the pre-initial population (PS) and the number of TS iterations during population initialization (τ). In particular, the following variants were investigated: (1) InitPopVar = 1, PS = 10, τ = 20; (2) InitPopVar = 1, PS = 20, τ = 40; (3) InitPopVar = 1, PS = 40, τ = 80; (4) InitPopVar = 1, PS = 100, τ = 200; (5) InitPopVar = 2, PS = 10, τ = 20; (6) InitPopVar = 2, PS = 20, τ = 40; (7) InitPopVar = 2, PS = 40, τ = 80; (8) InitPopVar = 2, PS = 100, τ = 200; (9) InitPopVar = 3, PS = 10, τ = 20; (10) InitPopVar = 3, PS = 20, τ = 40; (11) InitPopVar = 3, PS = 40, τ = 80; (12) InitPopVar = 3, PS = 100, τ = 200.
We observed that maintaining higher-quality initial populations, in general, significantly increases the overall efficiency of GHA compared to lower-quality initial populations (see Table 5).
Additionally, we experimented with several population replacement options. The particular population replacement variants are as follows: (1) PS = 10, τ = 20, RepVar = 1; (2) PS = 10, τ = 20, RepVar = 2; (3) PS = 10, τ = 20, RepVar = 3; (4) PS = 20, τ = 40, RepVar = 1; (5) PS = 20, τ = 40, RepVar = 2; (6) PS = 20, τ = 40, RepVar = 3; (7) PS = 40, τ = 80, RepVar = 1; (8) PS = 40, τ = 80, RepVar = 2; (9) PS = 40, τ = 80, RepVar = 3; (10) PS = 100, τ = 200, RepVar = 1; (11) PS = 100, τ = 200, RepVar = 2; (12) PS = 100, τ = 200, RepVar = 3.
It was observed that the aggressive strategy of replacing the best population member (RepVar = 2) seems to be superior to the other options (see Table 6). More extensive experiments are, however, required to strengthen this conjecture.
In addition, we tested several other scenarios (regimes) in order to unveil possible tendencies in the behaviour of the HITS algorithm. The following scenarios were investigated: (1) the scenario of "quick search": a small value of τ and a small value of Q_hier; (2) the scenario of "diversified quick search": a small value of τ and a large value of Q_hier; (3) the scenario of "extensive search": a large value of τ and a small value of Q_hier; (4) the scenario of "diversified extensive search": a large value of τ and a large value of Q_hier. Note that, in these scenarios, the number of generations of GHA should be balanced accordingly in order to stay within a fixed run time. The corresponding values of the control parameters are as follows: (1-a) τ = 20, Q_hier = 2⁸ = 256, N_gen = 100; (1-b) τ = 25, Q_hier = 2⁸ = 256, N_gen = 80; (1-c) τ = 40, Q_hier = 2⁸ = 256, N_gen = 50; (2-a) τ = 50, Q_hier = 2⁸ = 256, N_gen = 40; (2-b) τ = 10, Q_hier = 2⁹ = 512, N_gen = 100; (2-c) τ = 20, Q_hier = 2⁹ = 512, N_gen = 50; (3-a) τ = 10, Q_hier = 5 × 2⁷ = 640, N_gen = 80; (3-b) τ = 20, Q_hier = 5 × 2⁷ = 640, N_gen = 40; (3-c) τ = 10, Q_hier = 2¹⁰ = 1024, N_gen = 50; (4-a) τ = 20, Q_hier = 2¹⁰ = 1024, N_gen = 25; (4-b) τ = 10, Q_hier = 10 × 2⁷ = 1280, N_gen = 40; (4-c) τ = 20, Q_hier = 10 × 2⁷ = 1280, N_gen = 20. (In every case, the product τ × Q_hier × N_gen is kept constant at 512,000.)
The results of the experiments (see Table 7) demonstrate that the scenario of diversified extensive search is clearly preferable to the other examined scenarios.
Additional scenarios were examined to reveal the reaction of GHA when the cumulative number of iterations of the hierarchical iterated tabu search algorithm, Q_hier, is increased extensively. The computational budget is no longer constant ("balanced"), but grows as the value of Q_hier increases. The following scenarios were tried: (1) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 5 × 2⁷ = 640; (2) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 6 × 2⁷ = 768; (3) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 7 × 2⁷ = 896; (4) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 8 × 2⁷ = 1024; (5) PS = 20, PS′ = 300, τ′ = 300, Q_hier = 5 × 2⁷ = 640; (6) PS = 20, PS′ = 300, τ′ = 300, Q_hier = 6 × 2⁷ = 768; (7) PS = 20, PS′ = 300, τ′ = 300, Q_hier = 7 × 2⁷ = 896; (8) PS = 20, PS′ = 300, τ′ = 300, Q_hier = 8 × 2⁷ = 1024; (9) PS = 10, PS′ = 200, τ′ = 400, Q_hier = 7 × 2⁷ = 896; (10) PS = 10, PS′ = 200, τ′ = 400, Q_hier = 8 × 2⁷ = 1024; (11) PS = 20, PS′ = 400, τ′ = 400, Q_hier = 7 × 2⁷ = 896; (12) PS = 20, PS′ = 400, τ′ = 400, Q_hier = 8 × 2⁷ = 1024; here, PS′ denotes the size of the pre-initial population and τ′ denotes the number of TS iterations during the construction of the initial population.
The results confirm that, as expected, there exists a clear correlation between the number of improving iterations (the number of TS/HITS iterations) and the quality of the obtained solutions (see Table 8).
To view the obtained results from a different perspective, and in particular to demonstrate the stability and robustness properties of our algorithm, we constructed histograms of the frequency of the objective function values for one of the most difficult instances of the "learning set", bl64 (see Figure A1 in Appendix A). More precisely, we created histograms of the frequency of the average percentage deviation, θ̄, over 10 algorithm runs within the interval [0.0, 1.0), where 0.0 stands for zero deviation and 1.0 means the maximum possible deviation. (Note that the average deviation never exceeded 1.0 for the instance bl64 (see Table 8).) A sketch of the binning used is given below.
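The following minimal C# sketch shows the binning we describe; the input array of per-run average deviations is, of course, illustrative.

using System;

// Sketch: count how many average deviations (all assumed to lie in [0.0, 1.0))
// fall into each of the ten sub-intervals [0.0, 0.1), [0.1, 0.2), ..., [0.9, 1.0),
// as visualized in the Figure A1 histograms.
static int[] HistogramBins(double[] deviations)
{
    int[] bins = new int[10];
    foreach (double d in deviations)
        bins[Math.Min((int)(d * 10), 9)]++;
    return bins;
}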
(Regarding the selection factor, σ , the obtained results are quite “flat” and not statistically significant, so they are omitted).
On the whole, we found the best known solutions in 9191 out of 14,400 runs (64% of cases). The BKS was found at least once for every examined instance. The cumulative average percentage deviation is 1.452% and the cumulative average CPU time per run is approximately 65.9 s. The average deviation is less than 0.5 in 73% of cases, and less than 1.0 in 89% of cases. Fourteen instances (ci49, ci64, dre42, lipa70a, lipa70b, sko56, sko64, tai35a, tai35b, tai40b, tai45e1, tai50b, tai60b, wil50) were solved to pseudo-optimality in more than 300 runs.
After experimenting with the "learning set" of instances, the remaining instances (the "testing set") were examined using the fine-tuned parameters in order to find out how quickly the genetic-hierarchical algorithm converges to the best known/optimal solutions. The obtained results are presented in Table 9. It can be seen that all 88 tested instances are solved to pseudo-optimality within a very short computation time.
We also compared our algorithm with the memetic algorithm (MA) proposed by Benlic and Hao [78], which is, to the best of our knowledge, the best heuristic algorithm for the QAP so far. The results of the comparison are presented in Table 10, Table 11 and Table 12. It appears that our genetic-hierarchical algorithm outperforms MA. Additionally, we used the genetic algorithms by Drezner et al. [14] and by Drezner and Misevičius [86] for a further comparison (see Table 13, Table 14, Table 15 and Table 16). Again, our algorithm compares favourably to the algorithms of both Drezner et al. and Drezner and Misevičius.

5. Concluding Remarks

In this paper, we have presented the hybrid genetic-hierarchical algorithm for the solution of the quadratic assignment problem. The key feature of the proposed algorithm is that the genetic algorithm is hybridized with the hierarchicity-based (self-similar) iterated tabu search algorithm, which serves as a powerful local optimizer of the offspring solutions produced by the crossover operator.
The algorithm was examined on QAP benchmark instances of various sizes and complexities. The results obtained from the experiments demonstrate the excellent performance of the genetic-hierarchical algorithm. Our algorithm seems to outperform other state-of-the-art heuristic algorithms on many of the examined QAP instances, or is at least highly competitive with them. A more pronounced improvement in the quality of the results might be achieved by a thorough calibration of the algorithm's parameters.
The following are some possible future research directions: balancing of the number of tabu search iterations and the number of hierarchical iterated tabu search iterations, as well as the number of hierarchical levels; extensive experimental analysis of the particular components and configurations of the genetic-hierarchical algorithm; designing and implementing a multi-level hierarchical (master-slave) genetic algorithm.

Author Contributions

The proposed algorithm was designed and implemented by A.M. All sections and experiments were described by both authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Faculty of Informatics of Kaunas University of Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Histograms of the frequency of the objective function values for the instance bl64 for the different examined scenarios: (a) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 640; (b) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 768; (c) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 896; (d) PS = 10, PS′ = 150, τ′ = 300, Q_hier = 1024. The histograms are developed in such a way that the frequency of the average deviation is visualized over 10 discrete sub-intervals of the interval [0.0, 1.0): [0.0, 0.1); [0.1, 0.2); [0.2, 0.3); …; [0.9, 1.0). It can be seen that the average deviation from (pseudo-)optimal solutions stably decreases as the number of search iterations (Q_hier) increases.

References

1. Burkard, R.E.; Çela, E.; Pardalos, P.M.; Pitsoulis, L.S. The quadratic assignment problem. In Handbook of Combinatorial Optimization; Du, D.Z., Pardalos, P.M., Eds.; Kluwer: Dordrecht, The Netherlands, 1998; Volume 3, pp. 241–337.
2. Burkard, R.E.; Dell’amico, M.; Martello, S. Assignment Problems; SIAM: Philadelphia, PA, USA, 2009.
3. Çela, E. The Quadratic Assignment Problem: Theory and Algorithms; Kluwer: Dordrecht, The Netherlands, 1998.
4. Drezner, Z. The quadratic assignment problem. In Location Science; Laporte, G., Nickel, S., Saldanha da Gama, F., Eds.; Springer: Cham, Switzerland, 2015; pp. 345–363.
5. Koopmans, T.; Beckmann, M. Assignment problems and the location of economic activities. Econometrica 1957, 25, 53–76.
6. Rendl, F. The quadratic assignment problem. In Facility Location: Applications and Theory; Drezner, Z., Hamacher, H., Eds.; Springer: Berlin, Germany, 2002; pp. 439–457.
7. Hanan, M.; Kurtzberg, J.M. Placement techniques. In Design Automation of Digital Systems: Theory and Techniques; Breuer, M.A., Ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 1972; Volume 1, pp. 213–282.
8. Steinberg, L. The backboard wiring problem: A placement algorithm. SIAM Rev. 1961, 3, 37–50.
9. Heffley, D.R. Assigning runners to a relay team. In Optimal Strategies in Sports; Ladany, S.P., Machol, R.E., Eds.; North-Holland: Amsterdam, The Netherlands, 1977; pp. 169–171.
10. Drezner, Z. Finding a cluster of points and the grey pattern quadratic assignment problem. OR Spectr. 2006, 28, 417–436.
11. Burkard, R.E.; Offermann, J. Entwurf von schreibmaschinentastaturen mittels quadratischer zuordnungsprobleme. Z. Oper. Res. 1977, 21, 121–132.
12. Dell’amico, M.; Díaz, J.C.D.; Iori, M.; Montanari, R. The single-finger keyboard layout problem. Comput. Oper. Res. 2009, 36, 3002–3012.
13. Herthel, A.B.; Subramanian, A. Optimizing single-finger keyboard layouts on smartphones. Comput. Oper. Res. 2020, 120, 104947.
14. Drezner, Z.; Hahn, P.M.; Taillard, E.D. Recent advances for the quadratic assignment problem with special emphasis on instances that are difficult for metaheuristic methods. Ann. Oper. Res. 2005, 139, 65–94.
15. Francis, R.L.; White, J.A. Facility Layout and Location: An Analytical Approach; Prentice Hall: Englewood Cliffs, NJ, USA, 1998.
16. Phillips, A.T.; Rosen, J.B. A quadratic assignment formulation of the molecular conformation problem. J. Glob. Optim. 1994, 4, 229–241.
17. Taillard, E.D. Comparison of iterative searches for the quadratic assignment problem. Locat. Sci. 1995, 3, 87–105.
18. Ben-David, G.; Malah, D. Bounds on the performance of vector-quantizers under channel errors. IEEE Trans. Inf. Theory 2005, 51, 2227–2235.
19. De Carvalho, S.A., Jr.; Rahmann, S. Microarray layout as a quadratic assignment problem. In German Conference on Bioinformatics, GCB 2006, Lecture Notes in Informatics–Proceedings; Huson, D., Kohlbacher, O., Lupas, A., Nieselt, K., Zell, A., Eds.; Gesellschaft für Informatik: Bonn, Germany, 2006; Volume P-83, pp. 11–20.
20. Brusco, M.J.; Stahl, S. Using quadratic assignment methods to generate initial permutations for least-squares unidimensional scaling of symmetric proximity matrices. J. Classif. 2000, 17, 197–223.
21. Dickey, J.W.; Hopkins, J.W. Campus building arrangement using TOPAZ. Transp. Res. 1972, 6, 59–68.
22. Elshafei, A.N. Hospital layout as a quadratic assignment problem. Oper. Res. Q. 1977, 28, 167–179.
23. Lstibůrek, M.; Stejskal, J.; Misevičius, A.; Korecký, J.; El-Kassaby, Y. Expansion of the minimum-inbreeding seed orchard design to operational scale. Tree Genet. Genomes 2015, 11, 1–8.
24. Laporte, G.; Mercure, H. Balancing hydraulic turbine runners: A quadratic assignment problem. Eur. J. Oper. Res. 1988, 35, 378–381.
25. Saremi, H.Q.; Abedin, B.; Kermani, A.M. Website structure improvement: Quadratic assignment problem approach and ant colony metaheuristic technique. Appl. Math. Comput. 2008, 195, 285–298.
26. Abdel-Basset, M.; Manogaran, G.; Rashad, H.; Zaied, A.N.H. A comprehensive review of quadratic assignment problem: Variants, hybrids and applications. J. Ambient Intell. Hum. Comput. 2018, 9, 1–24.
27. Sahni, S.; Gonzalez, T. P-complete approximation problems. J. ACM 1976, 23, 555–565.
28. Anstreicher, K.M.; Brixius, N.W.; Gaux, J.P.; Linderoth, J. Solving large quadratic assignment problems on computational grids. Math. Program. 2002, 91, 563–588.
29. Burkard, R.E.; Karisch, S.; Rendl, F. QAPLIB—A quadratic assignment problem library. J. Glob. Optim. 1997, 10, 391–403.
30. Date, K.; Nagi, R. Level 2 reformulation linearization technique–based parallel algorithms for solving large quadratic assignment problems on graphics processing unit clusters. INFORMS J. Comput. 2019, 31, 771–789.
31. Ferreira, J.F.S.B.; Khoo, Y.; Singer, A. Semidefinite programming approach for the quadratic assignment problem with a sparse graph. Comput. Optim. Appl. 2018, 69, 677–712.
32. Gonçalves, A.D.; Pessoa, A.A.; Bentes, C.; Farias, R.; De Drummond, A.L.M. A graphics processing unit algorithm to solve the quadratic assignment problem using level-2 reformulation-linearization technique. INFORMS J. Comput. 2017, 29, 676–687.
33. Hahn, P.M.; Zhu, Y.-R.; Guignard, M.; Hightower, W.L.; Saltzman, M.J. A level-3 reformulation-linearization technique-based bound for the quadratic assignment problem. INFORMS J. Comput. 2012, 24, 202–209.
34. Nyberg, A.; Westerlund, T. A new exact discrete linear reformulation of the quadratic assignment problem. Eur. J. Oper. Res. 2012, 220, 314–319.
35. Rendl, F.; Sotirov, R. Bounds for the quadratic assignment problem using the bundle method. Math. Program. 2007, 109, 505–524.
36. Nyström, M. Solving Certain Large Instances of the Quadratic Assignment Problem: Steinberg’s Examples; Tech. Rep.; California Institute of Technology: Pasadena, CA, USA, 1999.
37. Martí, R.; Pardalos, P.M.; Resende, M.G.C. (Eds.) Handbook of Heuristics; Springer: Cham, Switzerland, 2018.
38. Armour, G.C.; Buffa, E.S. A heuristic algorithm and simulation approach to relative location of facilities. Manag. Sci. 1963, 9, 294–304.
39. Buffa, E.S.; Armour, G.C.; Vollmann, T.E. Allocating facilities with CRAFT. Harv. Bus. Rev. 1964, 42, 136–158.
40. Murtagh, B.A.; Jefferson, T.R.; Sornprasit, V. A heuristic procedure for solving the quadratic assignment problem. Eur. J. Oper. Res. 1982, 9, 71–76.
41. Nugent, C.E.; Vollmann, T.E.; Ruml, J. An experimental comparison of techniques for the assignment of facilities to locations. J. Oper. Res. 1968, 16, 150–173.
42. Angel, E.; Zissimopoulos, V. On the quality of local search for the quadratic assignment problem. Discret. Appl. Math. 1998, 82, 15–25.
43. Chiang, W.-C.; Chiang, C. Intelligent local search strategies for solving facility layout problems with the quadratic assignment problem formulation. Eur. J. Oper. Res. 1998, 106, 457–488.
44. Murthy, K.A.; Li, Y.; Pardalos, P.M. A local search algorithm for the quadratic assignment problem. Informatica 1992, 3, 524–538.
45. Pardalos, P.M.; Murthy, K.A.; Harrison, T.P. A computational comparison of local search heuristics for solving quadratic assignment problems. Informatica 1993, 4, 172–187.
46. Aksan, Y.; Dokeroglu, T.; Cosar, A. A stagnation-aware cooperative parallel breakout local search algorithm for the quadratic assignment problem. Comput. Ind. Eng. 2017, 103, 105–115.
47. Benlic, U.; Hao, J.-K. Breakout local search for the quadratic assignment problem. Appl. Math. Comput. 2013, 219, 4800–4815.
48. Burkard, R.E.; Rendl, F. A thermodynamically motivated simulation procedure for combinatorial optimization problems. Eur. J. Oper. Res. 1984, 17, 169–174.
49. Connolly, D.T. An improved annealing scheme for the QAP. Eur. J. Oper. Res. 1990, 46, 93–100.
50. Wilhelm, M.; Ward, T. Solving quadratic assignment problems by simulated annealing. IIE Trans. 1987, 19, 107–119.
51. Bölte, A.; Thonemann, U.W. Optimizing simulated annealing schedules with genetic programming. Eur. J. Oper. Res. 1996, 92, 402–416.
52. Misevičius, A. A modified simulated annealing algorithm for the quadratic assignment problem. Informatica 2003, 14, 497–514.
53. Paul, G. An efficient implementation of the simulated annealing heuristic for the quadratic assignment problem. arXiv 2011, arXiv:1111.1353.
54. Taillard, E.D. Robust taboo search for the QAP. Parallel Comput. 1991, 17, 443–455.
55. Battiti, R.; Tecchiolli, G. The reactive tabu search. ORSA J. Comput. 1994, 6, 126–140.
56. Drezner, Z. The extended concentric tabu for the quadratic assignment problem. Eur. J. Oper. Res. 2005, 160, 416–422.
57. Misevicius, A. A tabu search algorithm for the quadratic assignment problem. Comput. Optim. Appl. 2005, 30, 95–111.
58. Zhu, W.; Curry, J.; Marquez, A. SIMD tabu search for the quadratic assignment problem with graphics hardware acceleration. Int. J. Prod. Res. 2010, 48, 1035–1047.
59. Fescioglu-Unver, N.; Kokar, M.M. Self controlling tabu search algorithm for the quadratic assignment problem. Comput. Ind. Eng. 2011, 60, 310–319.
60. Sergienko, I.V.; Shylo, V.P.; Chupov, S.V.; Shylo, P.V. Solving the quadratic assignment problem. Cybern. Syst. Anal. 2020, 56, 53–57.
61. Shylo, P.V. Solving the quadratic assignment problem by the repeated iterated tabu search method. Cybern. Syst. Anal. 2017, 53, 308–311.
62. Abdelkafi, O.; Derbel, B.; Liefooghe, A. A parallel tabu search for the large-scale quadratic assignment problem. In Proceedings of the IEEE Congress on Evolutionary Computation, IEEE CEC 2019, Wellington, New Zealand, 10–13 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3070–3077.
63. Czapiński, M. An effective parallel multistart tabu search for quadratic assignment problem on CUDA platform. J. Parallel Distrib. Comput. 2013, 73, 1461–1468.
64. Ramkumar, A.S.; Ponnambalam, S.G.; Jawahar, N.; Suresh, R.K. Iterated fast local search algorithm for solving quadratic assignment problems. Robot. Comput. Integr. Manuf. 2008, 24, 392–401.
65. Stützle, T. Iterated local search for the quadratic assignment problem. Eur. J. Oper. Res. 2006, 174, 1519–1539.
66. Misevicius, A. An implementation of the iterated tabu search algorithm for the quadratic assignment problem. OR Spectr. 2012, 34, 665–690.
67. Dokeroglu, T.; Cosar, A. A novel multistart hyper-heuristic algorithm on the grid for the quadratic assignment problem. Eng. Appl. Artif. Intell. 2016, 52, 10–25.
68. Fleurent, C.; Glover, F. Improved constructive multistart strategies for the quadratic assignment problem using adaptive memory. INFORMS J. Comput. 1999, 11, 198–204.
69. Wang, J. A multistart simulated annealing algorithm for the quadratic assignment problem. In Proceedings of the 2012 Third International Conference on Innovations in Bio-Inspired Computing and Applications, IBICA 2012, Kaohsiung, Taiwan, 26–28 September 2012; IEEE: Los Alamitos, CA, USA; Washington, DC, USA; Tokyo, Japan, 2012; pp. 19–23.
70. Li, Y.; Pardalos, P.M.; Resende, M.G.C. A greedy randomized adaptive search procedure for the quadratic assignment problem. In Quadratic Assignment and Related Problems. DIMACS Series in Discrete Mathematics and Theoretical Computer Science; Pardalos, P.M., Wolkowicz, H., Eds.; AMS: Providence, RI, USA, 1994; Volume 16, pp. 237–261.
71. Ahuja, R.K.; Orlin, J.B.; Tiwari, A. A greedy genetic algorithm for the quadratic assignment problem. Comput. Oper. Res. 2000, 27, 917–934.
72. Lim, M.H.; Yuan, Y.; Omatu, S. Efficient genetic algorithms using simple genes exchange local search policy for the quadratic assignment problem. Comput. Optim. Appl. 2000, 15, 249–268.
73. Merz, P.; Freisleben, B. Fitness landscape analysis and memetic algorithms for the quadratic assignment problem. IEEE Trans. Evol. Comput. 2000, 4, 337–352.
74. Migkikh, V.V.; Topchy, A.A.; Kureichik, V.M.; Tetelbaum, A.Y. Combined genetic and local search algorithm for the quadratic assignment problem. In Proceedings of the First International Conference on Evolutionary Computation and Its Applications (EvCA’96); Russian Academy of Sciences: Moscow, Russia, 1996; pp. 335–341.
75. Drezner, Z. A new genetic algorithm for the quadratic assignment problem. INFORMS J. Comput. 2003, 15, 320–330.
76. Misevicius, A. An improved hybrid genetic algorithm: New results for the quadratic assignment problem. Knowl.-Based Syst. 2004, 17, 65–73.
77. Tosun, U.; Dokeroglu, T.; Cosar, A. A robust island parallel genetic algorithm for the quadratic assignment problem. Int. J. Prod. Res. 2013, 51, 4117–4133.
78. Benlic, U.; Hao, J.-K. Memetic search for the quadratic assignment problem. Expert Syst. Appl. 2015, 42, 584–595.
79. Özçetin, E.; Öztürk, G. A hybrid genetic algorithm for the quadratic assignment problem on graphics processing units. Anadolu Univ. J. Sci. Technol. A Appl. Sci. Eng. 2016, 17, 167–180.
80. Chmiel, W.; Kwiecień, J. Quantum-inspired evolutionary approach for the quadratic assignment problem. Entropy 2018, 20, 781.
81. Ahmed, Z.H. A multi-parent genetic algorithm for the quadratic assignment problem. OPSEARCH 2015, 52, 714–732.
82. Ahmed, Z.H. A hybrid algorithm combining lexisearch and genetic algorithms for the quadratic assignment problem. Cogent Eng. 2018, 5, 1423743.
83. Baldé, M.A.M.T.; Gueye, S.; Ndiaye, B.M. A greedy evolutionary hybridization algorithm for the optimal network and quadratic assignment problem. Oper. Res. 2020, 1–28.
84. Chmiel, W. Evolutionary algorithm using conditional expectation value for quadratic assignment problem. Swarm Evol. Comput. 2019, 46, 1–27.
85. Drezner, Z.; Drezner, T.D. The alpha male genetic algorithm. IMA J. Manag. Math. 2019, 30, 37–50.
86. Drezner, Z.; Misevičius, A. Enhancing the performance of hybrid genetic algorithms by differential improvement. Comput. Oper. Res. 2013, 40, 1038–1046.
87. Harris, M.; Berretta, R.; Inostroza-Ponta, M.; Moscato, P. A memetic algorithm for the quadratic assignment problem with parallel local search. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 838–845.
88. Tang, J.; Lim, M.-H.; Ong, Y.S.; Er, M.J. Parallel memetic algorithm with selective local search for large scale quadratic assignment problems. Int. J. Innov. Comput. Inf. Control 2006, 2, 1399–1416.
89. Tosun, U. A new recombination operator for the genetic algorithm solution of the quadratic assignment problem. Procedia Comput. Sci. 2014, 32, 29–36.
90. Abdel-Basset, M.; Manogaran, G.; El-Shahat, D.; Mirjalili, S. Integrating the whale algorithm with tabu search for quadratic assignment problem: A new approach for locating hospital departments. Appl. Soft Comput. 2018, 73, 530–546.
91. Acan, A.; Ünveren, A. A great deluge and tabu search hybrid with two-stage memory support for quadratic assignment problem. Appl. Soft Comput. 2015, 36, 185–203.
92. Drezner, Z.; Drezner, T.D. Biologically inspired parent selection in genetic algorithms. Ann. Oper. Res. 2020, 287, 161–183.
93. Fleurent, C.; Ferland, J.A. Genetic hybrids for the quadratic assignment problem. In Quadratic Assignment and Related Problems. DIMACS Series in Discrete Mathematics and Theoretical Computer Science; Pardalos, P.M., Wolkowicz, H., Eds.; AMS: Providence, RI, USA, 1994; Volume 16, pp. 173–188.
94. James, T.; Rego, C.; Glover, F. Sequential and parallel path-relinking algorithms for the quadratic assignment problem. IEEE Intell. Syst. 2005, 20, 58–65.
95. Lalla-Ruiz, E.; Expósito-Izquierdo, C.; Melián-Batista, B.; Moreno-Vega, M.J. A hybrid biased random key genetic algorithm for the quadratic assignment problem. Inf. Process. Lett. 2016, 116, 513–520.
96. Lim, W.L.; Wibowo, A.; Desa, M.I.; Haron, H. A biogeography-based optimization algorithm hybridized with tabu search for the quadratic assignment problem. Comput. Intell. Neurosci. 2016, 2016, 5803893.
97. Misevičius, A.; Rubliauskas, D. Testing of hybrid genetic algorithms for structured quadratic assignment problems. Informatica 2009, 20, 255–272.
98. Ng, K.M.; Tran, T.H. A parallel water flow algorithm with local search for solving the quadratic assignment problem. J. Ind. Manag. Optim. 2019, 15, 235–259.
99. Oliveira, C.A.S.; Pardalos, P.M.; Resende, M.G.C. GRASP with path-relinking for the quadratic assignment problem. In Efficient and Experimental Algorithms, WEA 2004, Lecture Notes in Computer Science; Ribeiro, C.C., Martins, S.L., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3059, pp. 237–261.
100. Rodriguez, J.M.; Macphee, F.C.; Bonham, D.J.; Bhavsar, V.C. Solving the quadratic assignment and dynamic plant layout problems using a new hybrid meta-heuristic approach. In Proceedings of the 18th Annual International Symposium on High Performance Computing Systems and Applications (HPCS), Winnipeg, MB, Canada, 16–19 May 2004; Eskicioglu, M.R., Ed.; Curran Associates: Red Hook, NY, USA, 2004; pp. 9–16.
101. Tseng, L.-Y.; Liang, S.-C. A hybrid metaheuristic for the quadratic assignment problem. Comput. Optim. Appl. 2006, 34, 85–113.
102. Vázquez, M.; Whitley, L.D. A hybrid genetic algorithm for the quadratic assignment problem. In Proceedings of the 2nd Annual Conference on Genetic and Evolutionary Computation, Las Vegas, NV, USA, 8–12 July 2000; pp. 135–142.
103. Xu, Y.-L.; Lim, M.H.; Ong, Y.S.; Tang, J. A GA-ACO-local search hybrid algorithm for solving quadratic assignment problem. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, Seattle, WA, USA, 8–12 June 2006; Volume 1, pp. 599–605.
104. Zhang, H.; Liu, F.; Zhou, Y.; Zhang, Z. A hybrid method integrating an elite genetic algorithm with tabu search for the quadratic assignment problem. Inf. Sci. 2020, 539, 347–374.
105. Hafiz, F.; Abdennour, A. Particle swarm algorithm variants for the quadratic assignment problems—A probabilistic learning approach. Expert Syst. Appl. 2016, 44, 413–431.
106. Szwed, P.; Chmiel, W.; Kadłuczka, P. OpenCL implementation of PSO algorithm for the quadratic assignment problem. In Artificial Intelligence and Soft Computing, ICAISC 2015, Lecture Notes in Computer Science; Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L., Zurada, J., Eds.; Springer: Cham, Switzerland, 2015; Volume 9120, pp. 223–234.
107. Gambardella, L.M.; Taillard, E.D.; Dorigo, M. Ant colonies for the quadratic assignment problem. J. Oper. Res. Soc. 1999, 50, 167–176.
108. Dokeroglu, T.; Sevinc, E.; Cosar, A. Artificial bee colony optimization for the quadratic assignment problem. Appl. Soft Comput. 2019, 76, 595–606.
109. Samanta, S.; Philip, D.; Chakraborty, S. A quick convergent artificial bee colony algorithm for solving quadratic assignment problems. Comput. Ind. Eng. 2019, 137, 106070.
110. Abdel-Baset, M.; Wu, H.; Zhou, Y.; Abdel-Fatah, L. Elite opposition-flower pollination algorithm for quadratic assignment problem. J. Intell. Fuzzy Syst. 2017, 33, 901–911.
111. Dokeroglu, T. Hybrid teaching–learning-based optimization algorithms for the quadratic assignment problem. Comput. Ind. Eng. 2015, 85, 86–101.
112. Duman, E.; Uysal, M.; Alkaya, A.F. Migrating birds optimization: A new metaheuristic approach and its performance on quadratic assignment problem. Inf. Sci. 2012, 217, 65–77.
113. Ismail, M.M.; Hezam, I.M.; El-Sharkawy, E. Enhanced cuckoo search algorithm with SPV rule for quadratic assignment problem. Int. J. Comput. Appl. 2017, 158, 39–42.
114. Kiliç, H.; Yüzgeç, U. Tournament selection based antlion optimization algorithm for solving quadratic assignment problem. Eng. Sci. Technol. Int. J. 2019, 22, 673–691.
115. Mzili, I.; Riffi, M.E.; Benzekri, F. Penguins search optimization algorithm to solve quadratic assignment problem. In Proceedings of the 2nd International Conference on Big Data, Cloud and Applications, Tetouan, Morocco, 29–30 March 2017; ACM: New York, NY, USA, 2017; pp. 1–6.
116. Qawqzeh, Y.K.; Jaradat, G.; Al-Yousef, A.; Abu-Hamdah, A.; Almarashdeh, I.; Alsmadi, M.; Tayfour, M.; Shaker, K.; Haddad, F. Applying the big bang-big crunch metaheuristic to large-sized operational problems. Int. J. Electr. Comput. Eng. 2020, 10, 2484–2502.
117. Riffi, M.E.; Saji, Y.; Barkatou, M. Incorporating a modified uniform crossover and 2-exchange neighborhood mechanism in a discrete bat algorithm to solve the quadratic assignment problem. Egypt. Inform. J. 2017, 18, 221–232.
118. Zamani, R.; Amirghasemi, M. A self-adaptive nature-inspired procedure for solving the quadratic assignment problem. In Frontier Applications of Nature Inspired Computation. Springer Tracts in Nature-Inspired Computing; Khosravy, M., Gupta, N., Patel, N., Senjyu, T., Eds.; Springer: Singapore, 2020; pp. 119–147.
119. Loiola, E.M.; De Abreu, N.M.M.; Boaventura-Netto, P.O.; Hahn, P.; Querido, T. A survey for the quadratic assignment problem. Eur. J. Oper. Res. 2007, 176, 657–690.
120. Frieze, A.M.; Yadegar, J.; El-Horbaty, S.; Parkinson, D. Algorithms for assignment problems on an array processor. Parallel Comput. 1989, 11, 151–162.
121. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley: Reading, MA, USA, 1989.
122. Drezner, Z. Compounded genetic algorithms for the quadratic assignment problem. Oper. Res. Lett. 2005, 33, 475–480.
123. Tate, D.M.; Smith, A.E. A genetic approach to the quadratic assignment problem. Comput. Oper. Res. 1995, 22, 73–83.
124. Misevičius, A.; Kuznecovaitė, D.; Platužienė, J. Some further experiments with crossover operators for genetic algorithms. Informatica 2018, 29, 499–516.
125. Pavai, G.; Geetha, T.V. A survey on crossover operators. ACM Comput. Surv. 2017, 49, 1–43.
126. Deb, K.; Agrawal, R.B. Simulated binary crossover for continuous search space. Compl. Syst. 1995, 9, 115–148.
127. Lourenco, H.R.; Martin, O.; Stützle, T. Iterated local search. In Handbook of Metaheuristics; Glover, F., Kochenberger, G., Eds.; Kluwer: Norwell, MA, USA, 2002; pp. 321–353.
128. Ahmed, A.K.M.F.; Sun, J.U. A novel approach to combine the hierarchical and iterative techniques for solving capacitated location-routing problem. Cogent Eng. 2018, 5, 1463596.
129. Battarra, M.; Benedettini, S.; Roli, A. Leveraging saving-based algorithms by master–slave genetic algorithms. Eng. Appl. Artif. Intell. 2011, 24, 555–566.
130. Garai, G.; Chaudhuri, B.B. A distributed hierarchical genetic algorithm for efficient optimization and pattern matching. Pattern Recognit. 2007, 40, 212–228.
131. Hauschild, M.; Bhatia, S.; Pelikan, M. Image segmentation using a genetic algorithm and hierarchical local search. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, Philadelphia, PA, USA, 7–11 July 2012; Soule, T., Ed.; ACM Press: New York, NY, USA, 2012; pp. 633–639.
132. Hussin, M.S.; Stützle, T. Hierarchical iterated local search for the quadratic assignment problem. In Hybrid Metaheuristics, HM 2009, Lecture Notes in Computer Science; Blesa, M.J., Blum, C., Di Gaspero, L., Roli, A., Sampels, M., Schaerf, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5818, pp. 115–129.
133. Rokbani, N.; Kromer, P.; Twir, I.; Alimi, A.M. A hybrid hierarchical heuristic-ACO with local search applied to travelling salesman problem. Int. J. Syst. Dyn. Appl. 2020, 9, 58–73.
134. Schaefer, R.; Byrski, A.; Kołodziej, J.; Smołka, M. An agent-based model of hierarchic genetic search. Comput. Math. Appl. 2012, 64, 3763–3776.
135. Smyth, K.; Hoos, H.H.; Stützle, T. Iterated robust tabu search for MAX-SAT. In Advances in Artificial Intelligence, Proceedings of the 16th Conference of the Canadian Society for Computational Studies of Intelligence. Lecture Notes in Artificial Intelligence, Halifax, NS, Canada, 11–13 June 2003; Xiang, Y., Chaib-Draa, B., Eds.; Springer: Berlin, Germany, 2003; Volume 2671, pp. 129–144.
136. Glover, F.; Laguna, M. Tabu Search; Kluwer: Dordrecht, The Netherlands, 1997.
137. Dell’amico, M.; Trubian, M. Solution of large weighted equicut problems. Eur. J. Oper. Res. 1998, 106, 500–521.
138. Lim, S.M.; Sultan, A.B.M.; Sulaiman, M.N.; Mustapha, A.; Leong, K.Y. Crossover and mutation operators of genetic algorithms. Int. J. Mach. Learn. Comput. 2017, 7, 9–12.
139. Sivanandam, S.N.; Deepa, S.N. Introduction to Genetic Algorithms; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2008.
140. Misevičius, A. Letter: New best known solution for the most difficult QAP instance “tai100a”. Memet. Comput. 2019, 11, 331–332.
Table 1. Values of the control parameters of the basic version of GHA used in the experiments.
Parameter | Value | Remarks
Population size, PS | 10 |
Number of generations, N_gen | 100 |
Initial population variant, InitPopVar | 1 |
Selection factor, σ | 1.5 |
Distance threshold, DT | max{2, 0.05n} | 0 < DT ≤ n
Idle generations limit, L_idle_gen | max{3, 0.05 N_gen} | 0 < L_idle_gen ≤ N_gen
Population replacement variant, RepVar | 1 |
Table 2. Standard values of the control parameters of the hierarchical iterated tabu search algorithm.
Parameter | Value | Remarks
Number of hierarchical iterated tabu search iterations, Q_hier | 256 | Q_hier = Q(0) × Q(1) × Q(2) × Q(3) × Q(4) × Q(5) × Q(6) × Q(7)
Number of tabu search iterations, τ | 20 |
Idle iterations limit, L_idle_iter | 0.2τ | 0 < L_idle_iter ≤ τ
Tabu tenure, h | 0.3n | h > 0
Randomization coefficient for tabu search, α | 0.02 | 0 ≤ α < 1
Mutation rate, ξ | 0.2n | 2 ≤ ξ ≤ n
Note: Q(0) = Q(1) = Q(2) = Q(3) = Q(4) = Q(5) = Q(6) = Q(7) = 2.
Table 3. Comparison of crossover procedures.
Instance | BKV | θ̄ for the crossovers 1PX, 2PX, UX, SX, PMX, SPX1, SPX2, SPX3, CX, COHX, MPX, UNIVX | Time (s)
bl4945480.7300.7650.7120.7560.7560.8120.7920.7300.7650.7120.8530.68863.9
bl6459881.0090.7750.7681.2221.0620.7951.1361.0960.8820.9621.4160.962126.7
ci492363550340.0040.0020.0090.0000.0120.0040.0000.0000.0030.0130.0030.00510.7
ci643256710350.0920.0860.1030.0870.0810.0970.0660.0550.0850.0680.1100.09865.4
dre427648.77015.05213.66511.99011.12618.7709.7388.58614.05814.6077.6967.40818.1
dre56108628.78528.67436.20637.66135.47039.28238.12238.47129.96324.19946.33526.29859.6
lipa70a1697550.1650.1650.1650.1650.1650.1650.1650.1650.1650.1650.1650.16523.3
lipa70b46032000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0001.8
sko56344580.0020.0010.0180.0000.0170.0210.0020.0000.0020.0000.0140.00116.9
sko64484980.0060.0000.0220.0000.0000.0160.0000.0000.0000.0000.0150.0068.4
tai35a24220020.3550.4160.3320.2630.2330.5060.3130.2390.3860.2520.2150.48416.0
tai35b2833154450.0000.0000.0000.0000.0000.0190.0000.0000.0000.0000.0000.0001.5
tai40a31393700.4830.4170.5560.4770.4820.6860.4640.4620.5130.6220.5340.60139.1
tai40b6372509480.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0002.0
tai45e164123.2531.3662.9441.3071.2852.6820.1721.5973.5062.3276.3231.48229.2
tai50a49387960.8100.8340.8720.7420.7120.9600.7840.8840.8290.7970.8380.91267.1
tai50b4588215170.0000.0330.0330.0000.0000.0000.0000.0330.0330.0660.1130.0333.9
tai60a72059620.8190.8260.9040.8940.8580.9760.8990.8650.9710.7620.9010.888103.4
tai60b6082150540.0370.0000.0000.0000.0370.0000.0000.0000.0000.0000.0400.0006.8
wil50488160.0020.0030.0130.0070.0070.0110.0080.0000.0050.0070.0070.0116.2
Average:2.2662.4712.8662.7792.6153.2902.6332.6592.6082.2783.2792.001
Notes. Time denotes the average CPU time per one run. In all cases, the first mutation variant (M1) is used.
Table 4. Comparison of mutation procedures.
Instance | BKV | θ̄ for the mutation variants M1, M2, M3, M4, M5, M6, M7, M8, M9, M10, M11, M12 | Time (s)
bl4945480.7300.8000.9940.7120.7390.6421.0990.9761.0201.0820.6240.79263.8
bl6459881.0091.0751.3360.8550.8621.0491.2291.3761.2292.0570.7551.336125.2
ci492363550340.0040.0040.0800.0210.0000.0010.0970.0030.0030.0000.0010.0019.6
ci643256710350.0920.0850.2180.0890.0740.0920.1870.2560.1100.0830.0750.04966.3
dre427648.77011.20422.9841.46615.0267.01625.91620.52414.34616.3358.0630.00018.9
dre56108628.78532.43140.82926.53829.70537.45941.19739.57634.49456.81426.20616.77758.1
lipa70a1697550.1650.1650.1650.1650.1650.1650.1650.1650.1650.1650.1650.16524.0
lipa70b46032000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0002.1
sko56344580.0020.0000.0820.0190.0000.0000.0960.0640.0030.0010.0000.00016.4
sko64484980.0060.0000.0020.0060.0000.0000.0070.0260.0010.0000.0060.0008.3
tai35a24220020.3550.3860.7070.3770.2400.3650.6720.5200.5130.0000.1970.03417.2
tai35b2833154450.0000.0000.0000.0840.0000.0000.0000.0000.0370.0000.0000.0281.3
tai40a31393700.4830.5010.7970.5080.4870.5200.7710.6520.6990.3370.4480.28940.1
tai40b6372509480.0000.2010.0000.0000.0000.0000.0000.2010.4020.0000.0000.6032.2
tai45e164123.2531.37610.23111.7842.7142.2469.4548.45617.0340.0001.30115.49030.5
tai50a49387960.8100.7181.1930.8760.8130.8461.0641.0440.8990.6010.7370.48767.5
tai50b4588215170.0000.0000.0780.2530.0000.0330.0190.0330.0350.0000.0330.1233.7
tai60a72059620.8190.8791.1230.9080.8830.8821.2001.0410.9350.6490.8300.487103.7
tai60b6082150540.0370.0000.0020.2010.0000.0000.0370.0050.0000.0000.0000.4096.9
wil50488160.0020.0050.0170.0180.0030.0030.0250.0110.0140.0000.0070.0006.6
Average:2.2662.4924.0422.2442.5863.2904.1623.7463.5973.9061.9721.854
Note. In all cases, the 1PX crossover is used.
Table 5. Comparison of different variants of the initial population construction.
Instance | BKV | θ̄ for the variants (1)–(12) | Time (s)
bl4945480.6070.5890.5540.6070.6680.5980.5890.6240.6160.5980.5190.493115.2
bl6459880.6010.8350.7410.6610.7410.5010.7680.6810.7350.6810.7150.661243.1
ci492363550340.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00016.3
ci643256710350.0550.0190.0290.0000.0510.0600.0280.0000.0080.0070.0000.00081.6
dre427644.2673.2726.4660.0006.3094.8694.2930.0001.3350.0000.0000.00016.7
dre56108623.46220.62612.3024.49413.13122.11813.94111.01320.16619.1534.8073.757129.8
lipa70a1697550.0550.0550.0550.0550.0550.0550.0550.0550.0550.0550.0550.05518.8
lipa70b46032000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0007.8
sko56344580.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0009.9
sko64484980.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00011.5
tai35a24220020.2010.1690.0760.0000.1270.0810.0000.0000.0000.0000.0000.00022.0
tai35b2833154450.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0009.4
tai40a31393700.4430.3770.3350.2190.5120.2770.2630.0830.3110.2310.0880.06766.8
tai40b6372509480.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0002.2
tai45e164120.7300.7300.0000.0000.0000.0000.0000.0000.0000.0000.0000.00036.6
tai50a49387960.6470.7150.6280.4500.6200.6100.5600.3520.5770.4880.3720.191120.4
tai50b4588215170.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0004.7
tai60a72059620.7210.6720.6430.5190.6670.6600.5490.4630.6330.5060.4600.353194.9
tai60b6082150540.0370.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00060.1
wil50488160.0030.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0005.4
Average:1.5911.4031.0910.3501.1441.4911.0520.6641.2221.0860.3510.279
Note. In all cases, the UNIVX crossover and the twelfth mutation variant (M12) are used.
Table 6. Comparison of different variants of population replacement.
Instance | BKV | θ̄ for the variants (1)–(12) | Time (s)
bl4945480.6070.6240.6770.5890.6420.5800.5540.6240.6160.6070.5890.58962.1
bl6459880.6010.6350.7150.8350.7150.6880.7410.6950.7680.6610.6350.755123.0
ci492363550340.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00010.2
ci643256710350.0550.0350.0400.0190.0510.0360.0290.0320.0270.0000.0000.00067.7
dre427644.2676.9638.2463.2721.4922.0946.4661.4665.4190.0000.0000.00019.6
dre56108623.46215.4889.68720.62611.32618.25012.3028.12210.0554.4945.7837.03557.0
lipa70a1697550.0550.0550.0550.0550.0550.0550.0550.0550.0550.0550.0550.05524.3
lipa70b46032000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0003.1
sko56344580.0000.0010.0000.0000.0000.0000.0000.0000.0000.0000.0000.00017.2
sko64484980.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0007.7
tai35a24220020.2010.2220.1420.1690.1300.1650.0760.0760.0320.0000.0000.00017.8
tai35b2833154450.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0002.1
tai40a31393700.4430.4100.4440.3770.4150.4380.3350.3260.2960.2190.2190.22238.5
tai40b6372509480.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0001.9
tai45e164120.7302.9202.4920.7301.0230.6050.0000.0000.4590.0000.0000.00033.2
tai50a49387960.6470.7010.6480.7150.6710.6850.6280.6110.6060.4500.4500.42466.7
tai50b4588215170.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0001.2
tai60a72059620.7210.7400.6870.6720.6900.7440.6430.6270.6270.5190.4690.501105.9
tai60b6082150540.0370.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0006.1
wil50488160.0030.0020.0030.0000.0000.0000.0000.0000.0000.0000.0000.0006.7
Average:1.5911.4401.1921.4030.8611.2171.0910.6320.9480.3500.4100.479
Note. In all cases, the UNIVX crossover and the mutation variant M12 are used. Also, the initial population construction option I n i t P o p V a r = 1 is used.
Table 7. Comparison of different variants of the hierarchical iterated tabu search algorithm.
Instance | BKV | θ̄ for the scenarios (1-a), (1-b), (1-c), (2-a), (2-b), (2-c), (3-a), (3-b), (3-c), (4-a), (4-b), (4-c) | Time (s)
bl4945480.5980.5450.6330.6330.4400.5890.5280.4750.5280.5100.4750.510125.1
bl6459880.7350.6750.7880.9350.7210.7150.6610.8080.6950.6810.6750.755350.5
ci492363550340.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00015.3
ci643256710350.0310.0140.0190.0210.0110.0170.0040.0110.0070.0000.0040.00065.1
dre427643.6131.4660.0002.6700.0001.3350.0000.0000.0000.0000.0000.00027.2
dre5610868.69213.29714.51216.13317.01719.22713.2237.9749.8165.5066.8515.672180.7
lipa70a1697550.0520.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00023.4
lipa70b46032000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00018.4
sko56344580.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00029.3
sko64484980.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00028.4
tai35a24220020.1080.1360.0000.0000.0770.0200.0630.0780.0000.0000.0670.00037.0
tai35b2833154450.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00019.8
tai40a31393700.3420.3370.3500.2770.3620.2640.3150.2750.3220.2630.2670.201132.7
tai40b6372509480.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00011.9
tai45e164120.5860.7300.2932.0150.0000.2930.0000.0000.0000.0000.0000.00067.7
tai50a49387960.4870.5990.5590.4990.6130.6120.6140.5270.5040.5240.4810.491281.1
tai50b4588215170.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00033.3
tai60a72059620.6490.6070.6200.5820.5730.5640.5620.6350.5430.5840.5420.511345.5
tai60b6082150540.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.00036.8
wil50488160.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0000.0008.9
Average:0.7950.9200.8891.1880.9911.1820.7990.5390.6210.4030.4680.407
Note. In all cases, the UNIVX crossover and the mutation variant M12 are used. Also, the initial population construction option I n i t P o p V a r = 1 and population replacement option R e p V a r = 2 are used.
Table 8. Results of the experiments with the increased number of iterations of the hierarchical iterated tabu search algorithm.
Columns (1)–(12) report the average deviation θ̄ obtained with the respective variant.
Instance | BKV | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) | (12) | Time (s)
bl49 | 4548 | 0.440 | 0.466 | 0.466 | 0.396 | 0.475 | 0.484 | 0.396 | 0.440 | 0.484 | 0.396 | 0.484 | 0.466 | 230.5
bl64 | 5988 | 0.615 | 0.608 | 0.568 | 0.541 | 0.548 | 0.528 | 0.574 | 0.635 | 0.508 | 0.521 | 0.648 | 0.581 | 650.0
ci49 | 236355034 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 17.3
ci64 | 325671035 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 66.8
dre42 | 764 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 28.4
dre56 | 1086 | 5.654 | 1.344 | 0.000 | 1.731 | 4.696 | 1.289 | 0.000 | 4.678 | 0.000 | 2.910 | 0.000 | 2.910 | 520.0
lipa70a | 169755 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 23.7
lipa70b | 4603200 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 19.3
sko56 | 34458 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 29.8
sko64 | 48498 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 29.6
tai35a | 2422002 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 40.2
tai35b | 283315445 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 19.3
tai40a | 3139370 | 0.015 | 0.030 | 0.037 | 0.037 | 0.022 | 0.037 | 0.030 | 0.037 | 0.037 | 0.037 | 0.022 | 0.022 | 199.5
tai40b | 637250948 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 12.9
tai45e1 | 6412 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 68.8
tai50a | 4938796 | 0.084 | 0.116 | 0.117 | 0.080 | 0.092 | 0.152 | 0.119 | 0.057 | 0.036 | 0.092 | 0.052 | 0.011 | 291.0
tai50b | 458821517 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 34.1
tai60a | 7205962 | 0.221 | 0.258 | 0.237 | 0.238 | 0.235 | 0.236 | 0.277 | 0.221 | 0.214 | 0.211 | 0.200 | 0.161 | 554.0
tai60b | 608215054 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 37.5
wil50 | 48816 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 10.1
Average: | | 0.351 | 0.141 | 0.071 | 0.151 | 0.303 | 0.136 | 0.070 | 0.303 | 0.064 | 0.208 | 0.070 | 0.208 |
Notes. In all cases, the UNIVX crossover and the mutation variant M12 are used. Also, the options InitPopVar = 3 and RepVar = 2 are used.
Table 9. Results of GHA for the set of 88 instances of QAPLIB [14,19,29].
Instance | BKV | θ̄ | Time (s) | Instance | BKV | θ̄ | Time (s)
bl36 | 3296 | 0.000 | 9.491 | lipa90b | 12490441 | 0.000 | 0.803
chr25a | 3796 | 0.000 | 1.936 | nug30 | 6124 | 0.000 | 0.122
ci36 | 168611971 | 0.000 | 1.279 | rou20 | 725522 | 0.000 | 0.089
ci49 | 236355034 | 0.000 | 6.864 | scr20 | 110030 | 0.000 | 0.022
ci64 | 325671035 | 0.000 | 46.818 | sko42 | 15812 | 0.000 | 0.592
ci81 | 427447820 | 0.000 | 250.470 | sko49 | 23386 | 0.000 | 7.989
dre15 | 306 | 0.000 | 0.003 | sko56 | 34458 | 0.000 | 7.145
dre18 | 332 | 0.000 | 0.034 | sko64 | 48498 | 0.000 | 7.833
dre21 | 356 | 0.000 | 0.033 | ste36a | 9526 | 0.000 | 0.830
dre24 | 396 | 0.000 | 0.178 | ste36b | 15852 | 0.000 | 0.175
dre28 | 476 | 0.000 | 0.470 | ste36c | 8239110 | 0.000 | 0.513
dre30 | 508 | 0.000 | 0.875 | tai10a | 135028 | 0.000 | 0.005
dre42 | 764 | 0.000 | 9.809 | tai10b | 1183760 | 0.000 | 0.003
dre56 | 1086 | 0.000 | 86.024 | tai12a | 224416 | 0.000 | 0.003
dre72 | 1452 | 0.000 | 489.877 | tai12b | 39464925 | 0.000 | 0.003
els19 | 17212548 | 0.000 | 0.023 | tai15a | 388214 | 0.000 | 0.006
esc32a | 130 | 0.000 | 0.237 | tai15b | 51765268 | 0.000 | 0.005
esc32b | 168 | 0.000 | 0.022 | tai17a | 491812 | 0.000 | 0.009
esc32c | 642 | 0.000 | 0.005 | tai20a | 703482 | 0.000 | 0.122
esc32d | 200 | 0.000 | 0.008 | tai20b | 122455319 | 0.000 | 0.014
esc32e | 2 | 0.000 | 0.003 | tai25a | 1167256 | 0.000 | 0.262
esc32f | 2 | 0.000 | 0.005 | tai25b | 344355646 | 0.000 | 0.041
esc32g | 6 | 0.000 | 0.003 | tai27e1 | 2558 | 0.000 | 0.332
esc32h | 438 | 0.000 | 0.008 | tai27e2 | 2850 | 0.000 | 0.399
esc64a | 116 | 0.000 | 0.026 | tai27e3 | 3258 | 0.000 | 0.078
esc128 | 64 | 0.000 | 0.335 | tai30a | 1818146 | 0.000 | 0.392
had20 | 6922 | 0.000 | 0.013 | tai30b | 637117113 | 0.000 | 0.176
kra30a | 88900 | 0.000 | 0.304 | tai35a | 2422002 | 0.000 | 1.527
kra30b | 91420 | 0.000 | 0.643 | tai35b | 283315445 | 0.000 | 0.800
lipa20a | 3683 | 0.000 | 0.009 | tai40b | 637250948 | 0.000 | 0.900
lipa20b | 27076 | 0.000 | 0.002 | tai45e1 | 6412 | 0.000 | 1.346
lipa30a | 13178 | 0.000 | 0.038 | tai45e2 | 5734 | 0.000 | 5.713
lipa30b | 151426 | 0.000 | 0.008 | tai45e3 | 7438 | 0.000 | 2.471
lipa40a | 31538 | 0.000 | 0.190 | tai50b | 458821517 | 0.000 | 5.488
lipa40b | 476581 | 0.000 | 0.017 | tai60b | 608215054 | 0.000 | 5.036
lipa50a | 62093 | 0.000 | 0.473 | tai64c | 1855928 | 0.000 | 0.022
lipa50b | 1210244 | 0.000 | 0.062 | tai75e1 | 14488 | 0.000 | 52.287
lipa60a | 107218 | 0.000 | 4.446 | tai75e2 | 14444 | 0.000 | 25.134
lipa60b | 2520135 | 0.000 | 0.153 | tai75e3 | 14154 | 0.000 | 36.677
lipa70a | 169755 | 0.000 | 6.915 | tai80b | 818415043 | 0.000 | 29.161
lipa70b | 4603200 | 0.000 | 0.251 | tai100b | 1185996137 | 0.000 | 83.515
lipa80a | 253195 | 0.000 | 22.615 | tho30 | 149936 | 0.000 | 0.097
lipa80b | 7763962 | 0.000 | 0.579 | tho40 | 240516 | 0.000 | 3.928
lipa90a | 360630 | 0.000 | 81.371 | wil50 | 48816 | 0.000 | 3.133
Note. Time denotes the average CPU time per one run.
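For readers who wish to reproduce the summary statistics reported in Tables 9–16, the following minimal Python sketch computes them, assuming θ̄ denotes the average relative percentage deviation of the objective values from the best known value (BKV) over a series of runs, i.e., θ̄ = 100(f̄ − BKV)/BKV, where f̄ is the mean objective value. The helper names and the sample run data below are illustrative only and are not taken from the paper.

```python
# Illustrative helpers (not from the paper): compute the statistics that the
# result tables report for each instance, under the stated assumption that
# theta_bar is the average percentage deviation from the best known value.

def average_deviation(objective_values, bkv):
    """Average percentage deviation theta_bar = 100 * (f_avg - BKV) / BKV."""
    f_avg = sum(objective_values) / len(objective_values)
    return 100.0 * (f_avg - bkv) / bkv

def bks_hits(objective_values, bkv):
    """Number of runs that reached the best known solution value."""
    return sum(1 for f in objective_values if f == bkv)

# Hypothetical example: 10 runs on an instance with BKV = 4548 (bl49).
runs = [4548, 4548, 4556, 4560, 4552, 4548, 4564, 4556, 4552, 4560]
print(f"theta_bar = {average_deviation(runs, 4548):.3f}")
print(f"BKS found {bks_hits(runs, 4548)} times out of {len(runs)}")
```

A value of θ̄ = 0.000 thus means that every run ended at the best known value, while the parenthesized counts in Tables 11, 13, 15, and 16 correspond to the bks_hits quantity above.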
Table 10. Comparative results between GHA and memetic algorithm (MA) [78] (part I).
Instance | BKV | GHA θ̄ | GHA Time (s) | MA θ̄ | MA Time (s)
sko72 | 66256 | 0.000 | 29.380 | 0.000 | 240.000
sko81 | 90998 | 0.000 | 95.421 | 0.000 | 258.000
sko90 | 115534 | 0.000 | 229.456 | 0.000 | 918.000
sko100a | 152002 | 0.000 | 542.640 | 0.000 | 1338.000
sko100b | 153890 | 0.000 | 227.774 | 0.000 | 390.000
sko100c | 147862 | 0.000 | 400.697 | 0.000 | 720.000
sko100d | 149576 | 0.000 | 377.108 | 0.006 | 1254.000
sko100e | 149150 | 0.000 | 438.632 | 0.000 | 714.000
sko100f | 149036 | 0.000 | 790.550 | 0.000 | 1380.000
wil100 | 273038 | 0.000 | 600.566 | 0.000 | 870.000
Note. Time denotes the average CPU time per one run.
Table 11. Comparative results between GHA and memetic algorithm (MA) [78] (part II).
Instance | BKV | GHA θ̄ | GHA Time (s) | MA θ̄ | MA Time (s)
tai40a | 3139370 | 0.052 (3) | 204.916 | 0.059 (2) | 486.000
tai50a | 4938796 | 0.192 (2) | 268.705 | 0.131 (2) | 2520.000
tai60a | 7205962 | 0.215 (1) | 713.455 | 0.144 (2) | 4050.000
tai80a | 13499184 | 0.367 (0) | 3040.000 | 0.426 (0) | 3948.000
tai100a | 21043560 | 0.311 (0) | 6200.000 | 0.447 (0) | 2646.000
Notes. Time denotes the average CPU time per one run. In parentheses, we present the number of times that the BKS has been found. The best known value for tai100a is from [140].
Table 12. Comparative results between GHA and memetic algorithm (MA) [78] (part III).
Instance | BKV | GHA θ̄ | GHA Time (s) | MA θ̄ | MA Time (s)
tai50b | 458821517 | 0.000 | 5.488 | 0.000 | 72.000
tai60b | 608215054 | 0.000 | 5.036 | 0.000 | 312.000
tai80b | 818415043 | 0.000 | 29.161 | 0.000 | 1878.000
tai100b | 1185996137 | 0.000 | 83.515 | 0.000 | 816.000
Note. Time denotes the average CPU time per one run.
Table 13. Comparative results between GHA and hybrid genetic algorithm (HGA) [14] (part I).
Instance | BKV | GHA θ̄ | GHA Time (s) | HGA θ̄ | HGA Time (s)
dre30 | 508 | 0.000 (10) | 0.875 | 0.000 | 143.400
dre42 | 764 | 0.000 (10) | 9.809 | 1.340 | 547.800
dre56 | 1086 | 0.000 (10) | 86.024 | 17.460 | 1810.800
dre72 | 1452 | 0.000 (10) | 489.877 | 27.280 | 5591.400
dre90 | 1838 | 10.351 (2) | 9999.978 | 33.880 | 11557.800
Note. Time denotes the average CPU time per one run. In parentheses, we present the number of times that the BKS has been found.
Table 14. Comparative results between GHA and hybrid genetic algorithm (HGA) [14] (part II).
Instance | BKV | GHA θ̄ | GHA Time (s) | HGA θ̄ | HGA Time (s)
tai27e1 | 2558 | 0.000 | 0.332 | 0.000 | ~60.000
tai27e2 | 2850 | 0.000 | 0.399 | 0.000 | ~60.000
tai27e3 | 3258 | 0.000 | 0.078 | 0.000 | ~60.000
tai45e1 | 6412 | 0.000 | 1.346 | 0.000 | ~300.000
tai45e2 | 5734 | 0.000 | 5.713 | 0.000 | ~300.000
tai45e3 | 7438 | 0.000 | 2.471 | 0.000 | ~300.000
tai75e1 | 14488 | 0.000 | 52.287 | 0.000 | ~2220.000
tai75e2 | 14444 | 0.000 | 25.134 | 0.339 | ~2220.000
tai75e3 | 14154 | 0.000 | 36.677 | 0.000 | ~2220.000
Note. Time denotes the average CPU time per one run.
Table 15. Comparative results between GHA and hybrid genetic algorithm with differential improvement (HGA-DI) [86] (part I).
Instance | BKV | GHA θ̄ | GHA Time (s) | HGA-DI θ̄ | HGA-DI Time (s)
bl36 | 3296 | 0.000 (10) | 9.491 | 0.000 (10) | 51.000
bl49 | 4548 | 0.229 (2) | 217.540 | 0.334 (0) | 125.000
bl64 | 5988 | 0.294 (1) | 550.060 | 0.227 (0) | 356.000
bl81 | 7532 | 0.490 (0) | 1725.800 | 0.494 (0) | 937.000
bl100 | 9264 | 0.527 (0) | 4070.800 | 0.548 (0) | 2306.000
Note. Time denotes the average CPU time per one run. In parentheses, we present the number of times that the BKS has been found.
Table 16. Comparative results between GHA and hybrid genetic algorithm with differential improvement (HGA-DI) [86] (part II).
Instance | BKV | GHA θ̄ | GHA Time (s) | HGA-DI θ̄ | HGA-DI Time (s)
ci36 | 168611971 | 0.000 (10) | 1.279 | 0.000 (10) | 50.000
ci49 | 236355034 | 0.000 (10) | 6.864 | 0.000 (10) | 124.000
ci64 | 325671035 | 0.000 (10) | 46.818 | 0.000 (10) | 354.000
ci81 | 427447820 | 0.000 (10) | 250.470 | 0.000 (10) | 932.000
ci100 | 523146366 | 0.003 (7) | 4270.300 | 0.007 (3) | 2285.000
Note. Time denotes the average CPU time per one run. In parentheses, we present the number of times that the BKS has been found.