1. Introduction
Inspired by natural and physical phenomena, evolutionary algorithms are developing rapidly. These algorithms are highly effective in the search phase of optimization, substantially enhancing the capacity to locate globally optimal solutions. They emulate biological behaviors and physical processes that can be translated into efficient search mechanisms, and through many generations of iteration and refinement they steadily improve the quality of the solutions obtained. Evolutionary algorithms have been successfully applied to a wide range of optimization problems [1]-[3].
Currently, the field of optimization is dominated by two main approaches: traditional mathematical optimization methods and meta-heuristic algorithms [4], [5]. The latter category includes popular algorithms such as the genetic algorithm [6], the ladder spherical evolution search algorithm [7], and ant colony optimization [8]. Recently, many research groups have focused on finding efficient solutions to complex optimization problems arising in real-world scenarios, such as the dynamic location routing problem [9], the wave energy converter position optimization problem [10], and the training of artificial neural models [11]. To tackle these challenging problems, researchers have looked to nature for inspiration, leading to the development of various evolutionary algorithms [12], [13] that draw on principles and behaviors observed in natural phenomena.
Most evolutionary algorithms use a panmictic population structure, in which each individual has an equal chance to interact with every other [14]. In recent years, however, a distributed population structure has become increasingly popular in parallel computing [15]. This approach divides the individuals among multiple sub-populations, with a panmictic scheme usually applied within each sub-population. In contrast, a cellular structure only allows individuals to communicate with those in a predefined neighborhood [16], [17]. A hierarchical structure can facilitate more frequent information exchange among different sub-populations, thus improving the overall search efficiency of the algorithm [18]-[21]. Unlike these homogeneous population structures, the small-world [22]-[24] and scale-free [25] structures treat the population as a non-homogeneous network in which some individuals have a higher probability of interacting with others; these structures aim to accelerate the convergence speed of the search algorithm.
The integration of a population structure with metaheuristic optimization algorithms, incorporating mechanisms such as stratification, distribution, collaboration, and adaptive adjustment, enhances algorithmic performance by achieving a better balance between exploration and exploitation, fostering global search, and maintaining diversity. This approach has demonstrated impressive success when combined with various metaheuristic optimization algorithms, such as the distributed water wave optimization algorithm (DWSA) [26] and the multi-layered gravitational search algorithm (MLSGSA) [21]. The results indicate that the population structure mitigates issues related to local optima, significantly improving search capabilities.
The Gravitational Search Algorithm (GSA) [27] is an exceptionally efficient optimization algorithm that has garnered considerable attention in the field [28]-[34]. To enhance its performance and bolster its stability, several variants of GSA have been proposed, such as [35], [36]. CGSA-M represents a successful approach that integrates multiple chaotic maps into the GSA framework. Which chaotic map is used for each update is decided adaptively, with the aim of augmenting the search capability in different phases. Despite these improvements, however, CGSA-M still struggles to avoid local optima, resulting in suboptimal solutions.
This strategy of using chaotic maps to enhance the local search capability of the algorithm, while increasing its search power, also raises the possibility of the algorithm becoming trapped in local optima. To overcome the limitations of CGSA-M, we propose a novel memetic gravitational search algorithm with a hierarchical population structure, called CGSA-H. Our algorithm introduces, for the first time, a multi-level population structure into this framework. This design balances exploration and exploitation during the search process: the larger subpopulations are well suited to exploring the global solution space, while the smaller ones excel at the precise search of local solution spaces. The progressive optimization enabled by the hierarchical design helps prevent premature convergence to local optima. By utilizing the multi-level population structure, we can explore the solution space of the problem comprehensively, enhancing the algorithm's performance and efficiency on complex problems. The proposed hierarchical structure also allows different sub-populations to exchange information more frequently, which improves the overall search efficiency of CGSA-H.
We test CGSA-H on the IEEE CEC2017 benchmark optimization functions, and our experimental results demonstrate that it significantly outperforms its counterparts in terms of solution quality and convergence speed. Additionally, the search trajectories of CGSA-H on unimodal, multimodal, and mixed search landscapes show well-maintained exploitation and exploration capabilities. This study provides evidence that incorporating a multi-level population structure can effectively improve the performance of CGSA-M on complex optimization problems.
The main contributions of this study are as follows:
(1) To enhance the overall search efficiency of CGSA-M, we introduce a four-level hierarchical population structure. This design is intended to boost the frequency of information exchange among distinct subgroups, ultimately improving the algorithm’s performance.
(2) CGSA-H achieves improved accuracy without the need for parameter adjustments, showcasing its inherent capability to enhance algorithm performance.
2. A Succinct Overview of the Conventional Gravitational Search Algorithm (GSA)
The Gravitational Search Algorithm (GSA) is a population-based metaheuristic algorithm inspired by the gravitational law among objects. Within the GSA population, each individual is regarded as an object and is evaluated based on its mass as a measure of performance. The position of an individual corresponds to a solution of the optimization problem under consideration. Altering the position of an individual has the potential to lead to an enhancement in the quality of the solution.
In a formal context, each entity denoted as \(X_{i} =(x_{i}^{1}, \ldots, x_{i}^{d}, \ldots, x_{i}^{D})\), \((i=1,2,3,\ldots,N)\), within the system exerts gravitational forces upon the other entities in a \(D\)-dimensional exploration domain. Here, \(x_{i}^{d}\) denotes the positional coordinate of the \(i\)-th entity along the \(d\)-th dimension. The velocity associated with entity \(X_{i}\) is denoted as \(V_{i} =(v_{i}^{1} ,\ldots,v_{i}^{d} ,\ldots,v_{i}^{D})\). During iteration \(t\), the mass of every entity, represented as \(M_{i} (t)\), is computed using a fitness-based mapping procedure outlined as follows:
\[\begin{equation*} M_{i} (t)=\frac{fit(X_{i}(t))-worst(t)}{best(t)-worst(t)} \tag{1} \end{equation*}\]
In this context, the term denoted as \(fit(X_{i}(t))\) represents the fitness evaluation of agent \(X_{i}\), which is determined by the computation of the objective function. For a problem aiming at minimization, we establish the definitions of \(best(t)\) and \(worst(t)\) as provided below.
\[\begin{equation*} \begin{aligned} best(t)&=\min_{j=1,2,3,\ldots,N} fit(X_{j} (t)) \\ worst(t)&=\max_{j=1,2,3,\ldots,N} fit(X_{j} (t)) \end{aligned} \tag{2} \end{equation*}\]
The influence on the \(i\)-th agent by the \(j\)-th agent is described as:
\[\begin{equation*} F_{ij}^{d}(t) =G(t)\frac{M_{i}(t)\times M_{j}(t) }{R_{ij}(t)+\varepsilon } (x_{j}^{d} (t)-x_{i}^{d} (t)) \tag{3} \end{equation*}\]
where \(R_{ij} (t)\) represents the Euclidean distance between the positions of agents \(X_i\) and \(X_j\) at time \(t\), computed as \(R_{ij} (t)=\left \| x_{i}(t)- x_{j}(t) \right \|_{2}\). The parameter \(\varepsilon\) is a small constant introduced to avoid division by zero in the denominator of Eq. (3). Furthermore, the term \(G(t)\) denotes the gravitational constant at time \(t\), defined by:
\[\begin{equation*} G(t)=G_{0} \cdot e^{-\alpha \frac{t}{t_{max}}} \tag{4} \end{equation*}\]
The initial value of the gravitational constant is represented by \(G_{0}\), while \(\alpha\) serves as a shrinking constant, and \(t_{max}\) signifies the maximum number of iterations. Regarding the \(i\)-th agent, the collective force applied to it results from a summation of forces exerted by neighboring agents, with random weights.
\[\begin{equation*} F_{i}^{d} (t)=\sum_{j\in K_{best},\,j\ne i }rand_{j}F_{ij}^{d}(t) \tag{5} \end{equation*}\]
where \(K_{best}\) denotes the subset of the \(K\) agents with the best fitness and hence the greatest mass, and \(rand_{j}\) is a random number drawn from a uniform distribution on [0, 1]. Additionally,
\[\begin{equation*} K=\left \lfloor \left ( \beta +\left(1-\frac{t}{t_{max}}\right)(1-\beta ) \right ) N \right \rfloor \tag{6} \end{equation*}\]
where the initial value of \(K\) is set to \(N\) and is progressively reduced in a linear manner, under the influence of a constant parameter represented by \(\beta\). The symbol \(\left \lfloor . \right \rfloor\) signifies the floor function. Adhering to the principles of motion, the acceleration of the \(i\)-th agent is determined through the following equation:
\[\begin{equation*} a_{i}^{d} (t)=\frac{F_{i}^{d}(t) }{M_{i}(t) } \tag{7} \end{equation*}\]
Subsequently, the subsequent velocity of an agent is determined by adding a portion of its current velocity to the calculated acceleration. Consequently, updates to its position and velocity can be performed as outlined below:
\[\begin{eqnarray*} &&\!\!\!\!\! v_{i}^{d} (t+1)=rand_{i}\, v_{i}^{d}(t)+a_{i}^{d}(t) \tag{8} \\ &&\!\!\!\!\! x_{i}^{d} (t+1)= x_{i}^{d}(t)+v_{i}^{d}(t+1) \tag{9} \end{eqnarray*}\]
Here, the variable \(rand_{i}\) represents a random value sampled from the interval [0, 1]. It’s essential to emphasize that both \(rand_{i}\) and \(rand_{j}\) are generated as uniformly distributed random numbers, and they typically exhibit distinct values. Indeed, they serve as a means to introduce randomized traits into the search process, contributing to its exploration capabilities.
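To make the update rules above concrete, the following Python sketch implements one GSA iteration, Eqs. (1)-(9), for a minimization problem. The parameter values (`G0`, `alpha`, `beta`) and the per-agent random weighting are illustrative assumptions rather than the exact settings used in the paper.

```python
import numpy as np

def gsa_step(X, V, fit, t, t_max, G0=100.0, alpha=20.0, beta=0.02,
             eps=1e-9, rng=None):
    """One iteration of the basic GSA update, Eqs. (1)-(9).

    X, V : (N, D) position and velocity arrays; fit : (N,) fitness values
    (minimization). Parameter defaults are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape

    # Eq. (1): fitness-to-mass mapping (best/worst over the current population)
    best, worst = fit.min(), fit.max()
    M = (fit - worst) / (best - worst + eps)

    # Eq. (4): exponentially decaying gravitational constant
    G = G0 * np.exp(-alpha * t / t_max)

    # Eq. (6): linearly shrinking size of the Kbest elite set
    K = int(np.floor((beta + (1 - t / t_max) * (1 - beta)) * N))
    kbest = np.argsort(fit)[:max(K, 1)]      # indices of the K fittest agents

    # Eqs. (3) and (5): randomly weighted sum of forces from the Kbest agents
    F = np.zeros_like(X)
    for i in range(N):
        for j in kbest:
            if j == i:
                continue
            R = np.linalg.norm(X[i] - X[j])
            F[i] += rng.random() * G * M[i] * M[j] / (R + eps) * (X[j] - X[i])

    # Eq. (7): acceleration; Eqs. (8)-(9): velocity and position updates
    a = F / (M[:, None] + eps)
    V = rng.random((N, 1)) * V + a
    return X + V, V
```

One iteration moves every agent toward the randomly weighted "center of gravity" of the elite set, with the pull fading as \(G(t)\) decays and \(K\) shrinks.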
3. Multiple Chaos Embedded Gravitational Search Algorithm
Chaos characterizes non-linear dynamic systems, displaying bounded dynamic instability, pseudo-randomness, thorough exploration, and aperiodic behavior that depends on initial conditions and control parameters [37]. Chaotic systems change seemingly at random, yet over time they traverse all possible states. This trait is useful for building a search mechanism to optimize objective functions. However, chaotic optimization excels in smaller search spaces and becomes inefficient in larger ones [38], resulting in lengthy optimization periods. Hence, chaotic search is often integrated into global optimizers such as evolutionary algorithms to enhance their search efficiency [39]-[49]. Rather than merely substituting chaotic sequences for the random values of GSA's control parameters, chaotic local search significantly improves GSA performance [50], and studies frequently employ it for this purpose [40]-[49]. Consequently, CGSA-M adopts chaotic local search.
The definition of parallel chaotic local search involving multiple chaotic elements is as follows.
\[\begin{equation*} X_{g'}^{j} (t)=X_{g}(t)+r(t)(U-L)(z^{j}(t)-0.5) \tag{10} \end{equation*}\]
Here, \(X_{g'}^{j}\) for \(j = 1, 2, \ldots, 12\) represents a provisional candidate solution generated by the parallel chaotic local search, i.e., twelve candidate solutions are created simultaneously through distinct chaotic maps; \(U\) and \(L\) denote the upper and lower bounds of the search space, \(z^{j}(t)\) is the chaotic variable of the \(j\)-th map, and \(r(t)\) is a shrinking search radius. Subsequently, the best of the twelve candidates is selected and compared with the current global best solution, denoted \(X_{g}(t)\). If an improvement in fitness is observed, the original solution is replaced; otherwise, it remains unchanged. This updating process is expressed as follows:
\[\begin{eqnarray*} &&\!\!\!\!\! X_g(t+1)=\left\{ \begin{array}{rl} X_{g'}^{j_{min}}(t), & \mbox{if}\quad fit (X_{g'}^{j_{min}}(t)) \leq fit (X_g(t))\\ X_g(t), & \mbox{otherwise} \end{array} \right. \tag{11} \\ &&\!\!\!\!\! j_{min}=\mathop{\arg\min}_{j\in \left \{ 1,2,\ldots,12 \right \}} fit(X_{g'}^{j}(t)) \tag{12} \end{eqnarray*}\]
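A minimal Python sketch of the parallel chaotic local search of Eqs. (10)-(12). CGSA-M employs twelve chaotic maps; only three representative maps (logistic, tent, sine) are shown here for illustration, and, as a simplification, each scalar chaotic variable \(z^{j}\) is applied uniformly across all dimensions.

```python
import numpy as np

# Three representative chaotic maps on (0, 1). CGSA-M uses twelve such maps;
# this subset is an illustrative assumption, not the paper's exact list.
CHAOTIC_MAPS = [
    lambda z: 4.0 * z * (1.0 - z),                       # logistic map
    lambda z: z / 0.7 if z < 0.7 else (1.0 - z) / 0.3,   # tent map
    lambda z: np.sin(np.pi * z),                         # sine map
]

def chaotic_local_search(x_g, fit_fn, z, U, L, r):
    """Parallel chaotic local search, Eqs. (10)-(12).

    x_g : current global best solution (D,); z : per-map chaotic states in
    (0, 1); U, L : search-space bounds; r : shrinking search radius.
    Returns the (possibly improved) global best and the advanced states.
    """
    candidates = []
    for j, cmap in enumerate(CHAOTIC_MAPS):
        z[j] = cmap(z[j])                                    # advance sequence
        candidates.append(x_g + r * (U - L) * (z[j] - 0.5))  # Eq. (10)
    # Eq. (12): pick the best candidate; Eq. (11): greedy acceptance
    j_min = min(range(len(candidates)), key=lambda j: fit_fn(candidates[j]))
    if fit_fn(candidates[j_min]) <= fit_fn(x_g):
        x_g = candidates[j_min]
    return x_g, z
```

Because acceptance is greedy, the global best solution never worsens; the chaotic states carry over between calls so the candidates keep traversing new regions around \(X_g\).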
4. Memetic Gravitational Search Algorithm with Hierarchical Population Structure
Due to the presence of multiple chaos factors, CGSA-M tends to emphasize exploration excessively, often leading to suboptimal solutions in many scenarios. To achieve a more effective balance between exploitation and exploration, a novel four-layer hierarchical population structure, denoted CGSA-H, is introduced on the foundation of the original CGSA-M algorithm. The most valuable individual layer aims to enhance the algorithm's convergence speed, while the historical information layer mitigates the risk of local optimality that accompanies the improved exploitation capability. The layers are described below, and the main process of the proposed CGSA-H algorithm is outlined in Algorithm 1.
Most Valuable Individual Layer: Within the population, the individual possessing the most valuable information is selected and referred to as the pivotal individual. This pivotal individual influences the individuals in the Dielectric layer, directing them to explore in its vicinity, which accelerates the convergence of the algorithm and contributes to the overall performance. In the most valuable individual layer, we conduct an effective perturbation search based on the results obtained from the Dielectric layer, which significantly enhances the performance of the algorithm. In this layer, we utilize \(y_s\), the best individual in the population, to generate \(y_{s'}\) based on its optimal characteristics. The process is formulated as follows:
\[\begin{equation*} y_{s'} (t) = y_s(t) + p \cdot (Z_{r2}(t) - Z_{r1}(t)) \cdot rand(0, 1) \tag{13} \end{equation*}\]
where \(p\) is a constant, set to \(1\) in this study. Two individuals, denoted \(Z_{r1}\) and \(Z_{r2}\), are randomly chosen from the historical information layer. We illustrate that the algorithm maintains robust performance even when the search step size remains unchanged. In many algorithms, parameter adjustment is a crucial process that can affect performance and accuracy, yet it is often a challenging task, particularly for complex algorithms. An algorithm that performs well without parameter adjustment is therefore highly desirable: researchers can use it without investing additional time and effort in tuning, and can focus on other essential tasks.
Historical Information Layer: While the inclusion of \(y_s\) expedites the perturbation search in the proximity of the most valuable individuals, facilitating faster algorithm convergence, it also introduces an inherent risk of premature convergence to a local optimum. To mitigate this risk, we introduce a historical information layer capable of exchanging information with the most valuable individual layer. This design aims to strike a balance between exploitation and exploration within the most valuable individual segment, thus averting premature convergence to a local optimum. Here, \(y_s\) represents the best individual in the population, \(y_{s'}\) denotes a transient individual, and \(f\) represents the fitness function. If the fitness value of \(y_{s'}\) surpasses that of \(y_s\), then \(y_{s'}\) supersedes \(y_s\); otherwise, \(y_s\) is retained and persists into subsequent iterations. The formulation is expressed as follows:
\[\begin{equation*} y_s(t)=\left\{ \begin{array}{rl} y_{s'}(t), & \mbox{if}\quad f(y_{s'}(t)) \leq f(y_s(t))\\ y_s(t), & \mbox{otherwise} \end{array} \right. \tag{14} \end{equation*}\] |
Within the population set \(Z\), let the individual with the optimal fitness be denoted \(Z_{max}\). If the fitness of \(y_{s'}\) surpasses that of \(Z_{max}\), then \(Z_{max}\) is replaced with \(y_{s'}\), thereby recording historical information. This process is expressed as follows:
\[\begin{equation*} Z_{max}(t)= \begin{cases} y_{s'}(t), & \mbox{if}\quad f(y_{s'}(t)) \leq f(Z_{max}(t)),\\ Z_{max}(t), & \mbox{otherwise} \end{cases} \tag{15} \end{equation*}\]
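The interaction of the two added layers, Eqs. (13)-(15), can be sketched as follows for a minimization problem. The helper name `hierarchical_update` and the choice of which archive member to treat as \(Z_{max}\) are illustrative assumptions.

```python
import numpy as np

def hierarchical_update(y_s, Z, fit_fn, p=1.0, rng=None):
    """One perturbation step across the two added layers, Eqs. (13)-(15).

    y_s : best individual (most valuable individual layer), shape (D,)
    Z   : historical information layer, shape (K, D)
    Returns the possibly improved (y_s, Z).
    """
    rng = np.random.default_rng() if rng is None else rng
    K = Z.shape[0]

    # Eq. (13): perturb y_s along the difference of two random archive members
    r1, r2 = rng.choice(K, size=2, replace=False)
    y_trial = y_s + p * (Z[r2] - Z[r1]) * rng.random(y_s.shape)

    # Eq. (14): greedy acceptance into the most valuable individual layer
    if fit_fn(y_trial) <= fit_fn(y_s):
        y_s = y_trial

    # Eq. (15): record the trial in the archive if it beats the archive's best
    z_fit = np.array([fit_fn(z) for z in Z])
    i_best = int(np.argmin(z_fit))
    if fit_fn(y_trial) <= z_fit[i_best]:
        Z[i_best] = y_trial
    return y_s, Z
```

The greedy test in Eq. (14) guarantees the best individual never degrades, while the archive write-back of Eq. (15) preserves historical information for later difference vectors.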
CGSA-H is an enhanced version of CGSA-M designed to address the issues of low search accuracy and susceptibility to falling into local optima by incorporating a multi-level population structure.
As depicted in Fig. 1, CGSA-H constitutes a hierarchical structure comprising four layers. The initial CGSA-M is positioned in the first and second layers, while the top layer of the most valuable individual layer and the historical information layer are situated in the third and fourth layers, respectively. It is evident that there exists a bidirectional information flow between the historical information layer and the top layer of the most valuable individual layer. It is crucial to emphasize that, between the historical information layer and the top layer of the most valuable individual layer, white arrows signify the impact of optimal individual components on the information exchange components. Specifically, if the fitness of \(y_{s}'\) exceeds that of \(Z_{max}\), \(y_{s}'\) is employed to replace \(Z_{max}\). Blue arrows indicate the update of the best individual component in the historical information storage layer.
In CGSA-H, we adjust the population structure by adding two layers to the existing CGSA-M. The third layer, the most valuable individual layer, performs a perturbation search around the optimal solutions found in the second layer, enhancing the performance of the algorithm. This performance gain, however, inevitably increases the risk of the algorithm prematurely converging to local optima. To mitigate this risk, the fourth layer, the historical information layer, exchanges information with the most valuable individual layer.
The multi-level population structure in CGSA-H achieves a superior balance between exploitation and exploration by enabling information exchange between the most valuable individual layer and the historical information layer. By facilitating more frequent information exchange among different subpopulations, the algorithm attains higher search accuracy and avoids premature convergence. Algorithm 1 presents the pseudo-code of CGSA-H.
5. Experiment Results
We test the proposed CGSA-H on the IEEE CEC2017 benchmark problems to validate its performance. The IEEE CEC2017 set consists of 29 test problems, including 24 unimodal optimization problems and 5 multimodal optimization problems; a multimodal objective function has more than one peak rather than a single optimum. These test problems are designed to evaluate the performance of optimization algorithms on problems with different characteristics. It is worth noting that we exclude F2 from testing due to its instability, particularly on high-dimensional problems, and the significant differences in the performance of the same algorithm across MATLAB implementations.
The performance of CGSA-H is compared with several optimization algorithms, including the classical sine cosine algorithm (SCA), multiple chaos embedded gravitational search algorithm (CGSA-M), whale optimization algorithm (WOA), and GSA. We conduct the experiments using a problem dimension of \(D\) = 30/50/100, a population size of 100, and a maximum number of function computations of \({D*10^{4}}\) to ensure fair comparisons. The algorithms are run independently 51 times for each benchmark problem. The experiments are conducted on a computer with a 3.00 GHz Intel (R) Core i7-9700 processor, 8.00 GB of memory, and a 64-bit operating system.
In this study, we employ the Wilcoxon rank-sum test, a non-parametric statistical method, to compare the median differences between two independent samples; it is particularly suitable for sample data that do not follow a normal distribution. We use this test to compare the performance of our proposed algorithm with existing methods on the CEC2017 problem set and thirteen practical problems. To quantitatively assess the algorithms, we record the number of wins/ties/losses (W/T/L) on the specified problem set. A “win” counts the problems on which our proposed algorithm statistically outperforms the comparison algorithm, a “tie” counts the problems with no significant difference in performance between the two algorithms, and a “loss” counts the problems on which the proposed algorithm underperforms. This method allows us to determine the relative efficacy of our proposed algorithm against competitive algorithms across multiple performance metrics. The test results reveal that our algorithm demonstrates superior performance on most of the problems, as reflected by the number of wins, providing a solid foundation for the performance evaluation of this research.
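The W/T/L tally described above can be reproduced with a self-contained rank-sum test. This sketch uses the large-sample normal approximation (reasonable for the 51 runs per problem used here) rather than an exact test, and the function names are illustrative.

```python
import math
from statistics import median

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the large-sample normal
    approximation, with average ranks for tied values."""
    pooled = sorted(list(a) + list(b))
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j + 1) / 2.0   # average 1-based rank of the tie group
        i = j
    n1, n2 = len(a), len(b)
    R1 = sum(ranks[v] for v in a)              # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (R1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value

def wtl(results_a, results_b, alpha=0.05):
    """Win/tie/loss tally for algorithm A vs. B (minimization): one array of
    final fitness values per problem in each list."""
    w = t = l = 0
    for a, b in zip(results_a, results_b):
        if rank_sum_p(a, b) >= alpha:
            t += 1                             # no significant difference
        elif median(a) < median(b):
            w += 1                             # A significantly better (lower)
        else:
            l += 1
    return w, t, l
```

Feeding the 51 independent run results per problem for two algorithms yields the W/T/L triple reported in the comparison tables.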
The results of the experiments are summarized in Tables 1, 2, and 3, which demonstrate that the proposed CGSA-H significantly outperforms its peers in terms of solution quality. It is also found to be more suitable for dealing with high-dimensional optimization problems. The convergence diagrams and box-whisker diagrams shown in Figs. 2 and 5 further demonstrate the algorithm’s fast convergence and search capabilities.
Fig. 2 Box-whisker plots of algorithm performance metrics for functions F4 and F13 across multiple trials on the IEEE CEC2017 test suite.
In Table 4, we detail the specific content of thirteen real-world problems. Table 5 presents the comparative results of CGSA-H against other algorithms across these thirteen real-world scenarios. The results clearly demonstrate that CGSA-H exhibits excellent performance in practical applications.
Figure 3 illustrates the search trajectories of CGSA-H on F3 and F9 from the IEEE CEC2017 test set. The individual trajectories on the function landscape as the number of iterations increases demonstrate CGSA-H's ability to overcome local optima. Snapshots are recorded at 2 and 200 iterations for F3 and F9, respectively. In both cases the population narrows its search range and eventually converges to the region of the minimum, indicating CGSA-H's strong exploitation potential.
Figure 4 shows that CGSA-H maintains significant population diversity during the initial stages of the search, helping the algorithm break free of local optima and avert premature convergence. On F11 and F19, population diversity decreases rapidly from the start of the iteration and then stabilizes in the middle phase, showing that the algorithm balances exploration with exploitation. On F25, population diversity remains high from the beginning to the end of the run, demonstrating the algorithm's strong exploration capability.
Figure 6 illustrates the computational time required for running each algorithm once on 29 problems in the IEEE CEC2017 benchmark. Overall, CGSA-H exhibits a 6.50% reduction in computation time and a 55.17% increase in performance compared to CGSA-M. This validates the effectiveness of our algorithmic enhancements, particularly in higher dimensions, where CGSA-H not only performs better but also requires less computational time in comparison with other GSA variants.
6. Conclusions
To address the issue of CGSA-M falling into local optima, we propose a hierarchical multi-chaotic embedded gravitational search algorithm (CGSA-H). Two additional hierarchical components are added to the original CGSA-M, resulting in a four-layer hierarchical population structure. The most valuable individual layer improves the population’s interaction during the search process and records the optimal value of the main population. Experimental results demonstrate that the proposed hierarchical population structure significantly enhances the accuracy of CGSA-M, and the improved CGSA-H outperforms comparable algorithms in terms of solution quality.
To evaluate the effectiveness of CGSA-H, we compare it with other well-known heuristics. The population diversity plot of CGSA-H indicates that the algorithm possesses exploration ability while maintaining strong exploitation ability, demonstrating the efficacy of our modified scheme. In conclusion, CGSA-H is an algorithm with robust performance improvements. Future work may focus on the following important studies: 1) further improving the performance of the CGSA-H algorithm, 2) applying the population structure scheme to additional meta-heuristic algorithms (MHAs), and 3) conducting performance studies on practical applications, such as training neural networks and solving new energy optimization problems.
Acknowledgments
This research was partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant JP22H03643.
References
[1] A. Slowik and H. Kwasnicka, “Evolutionary algorithms and their applications to engineering problems,” Neural Comput. & Applic., vol.32, pp.12363-12379, 2020.
CrossRef
[2] P.A. Vikhar, “Evolutionary algorithms: A critical review and its future prospects,” 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC), pp.261-265, IEEE, 2016.
CrossRef
[3] M.G. Castillo Tapia and C.A. Coello Coello, “Applications of multi-objective evolutionary algorithms in economics and finance: A survey,” 2007 IEEE Congress on Evolutionary Computation, pp.532-539, 2007.
CrossRef
[4] J. Tang, G. Liu, and Q. Pan, “A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends,” IEEE/CAA J. Autom. Sinica, vol.8, no.10, pp.1627-1643, 2021.
CrossRef
[5] Z.H. Zhan, L. Shi, K.C. Tan, and J. Zhang, “A survey on evolutionary computation for complex continuous optimization,” Artif. Intell. Rev., vol.55, no.1, pp.59-110, 2022.
CrossRef
[6] S. Forrest, “Genetic algorithms,” ACM Comput. Surv. (CSUR), vol.28, no.1, pp.77-80, 1996.
CrossRef
[7] H. Yang, S. Gao, R.L. Wang, and Y. Todo, “A ladder spherical evolution search algorithm,” IEICE Trans Inf. & Syst., vol.E104-D, no.3, pp.461-464, March 2021.
CrossRef
[8] M. Dorigo, M. Birattari, and T. Stutzle, “Ant colony optimization,” IEEE Comput. Intell. Mag., vol.1, no.4, pp.28-39, 2006.
CrossRef
[9] S. Gao, Y. Wang, J. Cheng, Y. Inazumi, and Z. Tang, “Ant colony optimization with clustering for solving the dynamic location routing problem,” Applied Mathematics and Computation, vol.285, pp.149-173, 2016.
CrossRef
[10] H. Yang, Y. Yu, J. Cheng, Z. Lei, Z. Cai, Z. Zhang, and S. Gao, “An intelligent metaphor-free spatial information sampling algorithm for balancing exploitation and exploration,” Knowledge-Based Systems, vol.250, p.109081, 2022.
CrossRef
[11] Z. Wang, S. Gao, J. Wang, H. Yang, and Y. Todo, “A dendritic neuron model with adaptive synapses trained by differential evolution algorithm,” Computational Intelligence and Neuroscience, vol.2020, 2020.
CrossRef
[12] A. Tzanetos and G. Dounias, “Nature inspired optimization algorithms or simply variations of metaheuristics?,” Artif. Intell. Rev., vol.54, no.3, pp.1841-1862, 2021.
CrossRef
[13] Y. Wang, Y. Yu, S. Cao, X. Zhang, and S. Gao, “A review of applications of artificial intelligent algorithms in wind farms,” Artif. Intell. Rev., vol.53, no.5, pp.3447-3500, 2020.
CrossRef
[14] Z. Wu, Q. Xu, G. Ni, and G. Yu, “The study of genetic information flux network properties in genetic algorithms,” Int. J. Mod. Phys. C, vol.26, no.07, p.1550076, 2015.
CrossRef
[15] Y.J. Gong, W.N. Chen, Z.H. Zhan, J. Zhang, Y. Li, Q. Zhang, and J.J. Li, “Distributed evolutionary algorithms and their models: A survey of the state-of-the-art,” Applied Soft Computing, vol.34, pp.286-300, 2015.
CrossRef
[16] E. Alba and B. Dorronsoro, “The exploration/exploitation tradeoff in dynamic cellular genetic algorithms,” IEEE Trans. Evol. Comput., vol.9, no.2, pp.126-142, 2005.
CrossRef
[17] Y. Shi, H. Liu, L. Gao, and G. Zhang, “Cellular particle swarm optimization,” Information Sciences, vol.181, no.20, pp.4460-4493, 2011.
CrossRef
[18] C.C. Lai and C.Y. Chang, “A hierarchical evolutionary algorithm for automatic medical image segmentation,” Expert Systems with Applications, vol.36, no.1, pp.248-259, 2009.
CrossRef
[19] G. Chen, Y. Li, K. Zhang, X. Xue, J. Wang, Q. Luo, C. Yao, and J. Yao, “Efficient hierarchical surrogate-assisted differential evolution for high-dimensional expensive optimization,” Information Sciences, vol.542, pp.228-246, 2021.
CrossRef
[20] Y. Wang, Y. Yu, S. Gao, H. Pan, and G. Yang, “A hierarchical gravitational search algorithm with an effective gravitational constant,” Swarm and Evolutionary Computation, vol.46, pp.118-139, 2019.
CrossRef
[21] Y. Wang, S. Gao, M. Zhou, and Y. Yu, “A multi-layered gravitational search algorithm for function optimization and real-world problems,” IEEE/CAA J. Autom. Sinica, vol.8, no.1, pp.94-109, 2021.
CrossRef
[22] H. Du, X. Wu, and J. Zhuang, “Small-world optimization algorithm for function optimization,” Advances in Natural Computation: Second International Conference, ICNC 2006, Xi’an, China, Sept. 2006. Proceedings, Part II 2, pp.264-273, Springer, 2006.
CrossRef
[23] M. Vora and T. Mirnalinee, “Dynamic small world particle swarm optimizer for function optimization,” Nat. Comput., vol.17, pp.901-917, 2018.
CrossRef
[24] H. Dai, S. Gao, Y. Yang, and Z. Tang, “Effects of “rich-gets-richer” rule on small-world networks,” Neurocomputing, vol.73, no.10-12, pp.2286-2289, 2010.
CrossRef
[25] C. Zhang and Z. Yi, “Scale-free fully informed particle swarm optimization algorithm,” Information Sciences, vol.181, no.20, pp.4550-4568, 2011.
CrossRef
[26] H. Li, H. Yang, B. Zhang, H. Zhang, and S. Gao, “Swarm exploration mechanism-based distributed water wave optimization,” Int. J. Comput. Intell. Syst., vol.16, no.1, pp.1-26, 2023.
[27] E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, “GSA: A gravitational search algorithm,” Information Sciences, vol.179, no.13, pp.2232-2248, 2009.
[28] W.S. Tan, M.Y. Hassan, H.A. Rahman, M.P. Abdullah, and F. Hussin, “Multi-distributed generation planning using hybrid particle swarm optimisation-gravitational search algorithm including voltage rise issue,” IET Generation, Transmission & Distribution, vol.7, no.9, pp.929-942, 2013.
[29] Z.K. Feng, S. Liu, W.J. Niu, S.S. Li, H.J. Wu, and J.Y. Wang, “Ecological operation of cascade hydropower reservoirs by elite-guide gravitational search algorithm with Lévy flight local search and mutation,” J. Hydrology, vol.581, p.124425, 2020.
[30] E. Rashedi, E. Rashedi, and H. Nezamabadi-pour, “A comprehensive survey on gravitational search algorithm,” Swarm and Evolutionary Computation, vol.41, pp.141-158, 2018.
[31] Y. Kumar and G. Sahoo, “A review on gravitational search algorithm and its applications to data clustering & classification,” Int. J. Intelligent Systems and Applications, vol.6, no.6, pp.79-93, 2014.
[32] N. Siddique and H. Adeli, “Applications of gravitational search algorithm in engineering,” J. Civil Engineering and Management, vol.22, no.8, pp.981-990, 2016.
[33] E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, “Filter modeling using gravitational search algorithm,” Engineering Applications of Artificial Intelligence, vol.24, no.1, pp.117-122, 2011.
[34] A. Hatamlou, S. Abdullah, and H. Nezamabadi-Pour, “Application of gravitational search algorithm on data clustering,” Rough Sets and Knowledge Technology: 6th International Conference, RSKT 2011, Banff, Canada, Oct. 2011. Proceedings 6, pp.337-346, Springer, 2011.
[35] E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, “BGSA: Binary gravitational search algorithm,” Nat. Comput., vol.9, pp.727-745, 2010.
[36] Y. Wang, S. Gao, Y. Yu, Z. Cai, and Z. Wang, “A gravitational search algorithm with hierarchy and distributed framework,” Knowledge-Based Systems, vol.218, p.106877, 2021.
[37] J. Gleick, Chaos: Making a New Science, Penguin, 2008.
[38] M.S. Tavazoei and M. Haeri, “Comparison of different one-dimensional maps as chaotic search pattern in chaos optimization algorithms,” Applied Mathematics and Computation, vol.187, no.2, pp.1076-1085, 2007.
[39] T. Xiang, X. Liao, and K. Wong, “An improved particle swarm optimization algorithm combined with piecewise linear chaotic map,” Applied Mathematics and Computation, vol.190, no.2, pp.1637-1645, 2007.
[40] S. Saremi, S. Mirjalili, and A. Lewis, “Biogeography-based optimisation with chaos,” Neural Comput. & Applic., vol.25, pp.1077-1097, 2014.
[41] A.A. Heidari, R. Ali Abbaspour, and A. Rezaee Jordehi, “An efficient chaotic water cycle algorithm for optimization tasks,” Neural Comput. & Applic., vol.28, pp.57-85, 2017.
[42] M. Mitić, N. Vuković, M. Petrović, and Z. Miljković, “Chaotic fruit fly optimization algorithm,” Knowledge-Based Systems, vol.89, pp.446-458, 2015.
[43] G.G. Wang, L. Guo, A.H. Gandomi, G.S. Hao, and H. Wang, “Chaotic Krill Herd algorithm,” Information Sciences, vol.274, pp.17-34, 2014.
[44] A.H. Gandomi and X.S. Yang, “Chaotic bat algorithm,” J. Computational Science, vol.5, no.2, pp.224-232, 2014.
[45] D. Jia, G. Zheng, and M. Khurram Khan, “An effective memetic differential evolution algorithm based on chaotic local search,” Information Sciences, vol.181, no.15, pp.3175-3187, 2011.
[46] R. Arul, G. Ravi, and S. Velusami, “Chaotic self-adaptive differential harmony search algorithm based dynamic economic dispatch,” International Journal of Electrical Power & Energy Systems, vol.50, pp.85-96, 2013.
[47] A. Kazem, E. Sharifi, F.K. Hussain, M. Saberi, and O.K. Hussain, “Support vector regression with chaos-based firefly algorithm for stock market price forecasting,” Applied Soft Computing, vol.13, no.2, pp.947-958, 2013.
[48] Y. Li, Q. Wen, L. Li, and H. Peng, “Hybrid chaotic ant swarm optimization,” Chaos, Solitons & Fractals, vol.42, no.2, pp.880-889, 2009.
[49] S. Talatahari, B. Farahmand Azar, R. Sheikholeslami, and A. Gandomi, “Imperialist competitive algorithm combined with chaos for global optimization,” Communications in Nonlinear Science and Numerical Simulation, vol.17, no.3, pp.1312-1319, 2012.
[50] S. Gao, C. Vairappan, Y. Wang, Q. Cao, and Z. Tang, “Gravitational search algorithm combined with chaos for unconstrained numerical optimization,” Applied Mathematics and Computation, vol.231, pp.48-62, 2014.