
CN115222006A - Numerical function optimization method based on improved particle swarm optimization algorithm - Google Patents

Numerical function optimization method based on improved particle swarm optimization algorithm

Info

Publication number
CN115222006A
CN115222006A
Authority
CN
China
Prior art keywords
algorithm
particles
particle swarm
population
particle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110403163.4A
Other languages
Chinese (zh)
Inventor
熊聪聪
杨晓艺
王丹
赵青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University of Science and Technology
Original Assignee
Tianjin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University of Science and Technology filed Critical Tianjin University of Science and Technology
Priority to CN202110403163.4A priority Critical patent/CN115222006A/en
Publication of CN115222006A publication Critical patent/CN115222006A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/15 Correlation function computation including computation of convolution operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the field of intelligent computing, and in particular to a numerical function optimization method based on an improved particle swarm optimization (PSO) algorithm. Its main technical features are as follows: the particle swarm optimization algorithm is improved so that population diversity is enhanced and the search range of the particles is widened. Under the search mode of the particle swarm optimization algorithm, the initial search space loses its guiding value for the whole algorithm after only a few iterations, which easily leads to problems such as convergence to local minima, overly rapid loss of diversity, and parameter sensitivity. The improved algorithm addresses these issues from two directions, widening the search range and enhancing swarm diversity, by adding a spatial search strategy that combines local and global search: Bernstein particles and an opposition-based (reverse) learning strategy are used to modify the initialization stage and the velocity and position iteration formulas of the particle swarm algorithm.

Description

A Numerical Function Optimization Method Based on an Improved Particle Swarm Optimization Algorithm

Technical Field

The invention belongs to the field of intelligent computing, and in particular relates to a numerical function optimization method based on an improved particle swarm optimization algorithm.

Background Art

In recent years, an emerging evolutionary computing technique known as swarm intelligence has attracted the attention of a growing number of researchers. It is closely related to artificial life and, in particular, to evolution strategies and genetic algorithms. Swarm intelligence exploits the advantages of a population and offers new ideas for solving complex problems without centralized control and without a global model. Swarm intelligence algorithms have already been applied in practical scenarios. A large body of literature also shows that research on algorithm optimization using a spatially parallel search, combining a spatial local search strategy with a spatial global search strategy, remains uncommon in swarm intelligence. It is not hard to see that, for a genetic algorithm, the initial search space loses its guiding value for the whole algorithm after only a few iterations; the reason is that crossover within the population confines individuals to the region delimited by the individuals already known to the population. For the same reason, optimizing an algorithm by combining a spatial local search strategy with a global search strategy can be expected to be reflected in swarm intelligence optimization algorithms as well.

The invention takes the particle swarm optimization algorithm, a representative swarm intelligence algorithm, as its basis and remedies its shortcomings. Considering both the widening of the search range and the enhancement of population diversity, a spatial search strategy combining local and global search is added, and an improved particle swarm optimization algorithm is proposed for numerical function optimization. The feasibility and effectiveness of the improved algorithm are verified on standard benchmark functions.

Summary of the Invention

The purpose of the invention is to propose a numerical function optimization method based on an improved particle swarm algorithm that addresses the problems encountered in numerical function optimization: slow convergence, low convergence accuracy, a tendency to converge to local minima, an overly rapid loss of diversity, and parameter sensitivity. A spatially parallel search that combines a spatial local search strategy with a spatial global search strategy is added to the existing particle swarm optimization algorithm, and the particle update formulas are modified accordingly; this alleviates, to a certain extent, the tendency to become trapped in local optima during numerical function optimization. The convergence of the algorithm is better than that of the traditional particle swarm optimization algorithm, and for a given task scale the cost and time required to complete numerical function optimization are lower.

Particle swarm optimization (PSO) is a global stochastic optimization algorithm based on swarm intelligence, proposed by Eberhart and Kennedy in 1995. It imitates the foraging behaviour of birds: the search space of the problem is treated as the flight space of a flock, each bird is abstracted as a particle representing a candidate solution, and the optimal solution being sought corresponds to the food being searched for. The algorithm assigns each particle an initial position and velocity, and each particle updates its position by updating its velocity. Through iterative search the population keeps finding better particle positions and thus obtains a better solution to the optimization problem. In PSO a particle has two attributes, velocity and position: the velocity determines how fast it moves and the position determines where it is. Each particle searches the space for an optimum on its own and records it as its current personal best; the personal bests are shared with the rest of the swarm, and the best among them becomes the current global best of the whole swarm. All particles then adjust their velocities and positions according to their own current personal best and the shared current global best. The particle swarm optimization formulas are as follows:

$$v_i^{t+1} = \omega\,v_i^{t} + c_1 r_1\,\big(p_i^{t} - x_i^{t}\big) + c_2 r_2\,\big(p_g^{t} - x_i^{t}\big), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1}$$

where $\omega$ is the inertia weight; $r_1$ and $r_2$ are random numbers uniformly distributed in $(0,1)$; $c_1$ and $c_2$ are learning factors; $v_i^{t}$ and $p_i^{t}$ are the velocity and the historical best position of particle $i$ at the $t$-th iteration; and $p_g^{t}$ is the best position of the entire population at the $t$-th iteration.
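For illustration only (this code is not part of the original filing), a minimal NumPy sketch of this standard velocity and position update might look as follows; the function name pso_step and the default parameter values are assumptions chosen for readability.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One standard PSO update: x, v, pbest are (N, D) arrays, gbest is a (D,) array."""
    n, d = x.shape
    r1 = np.random.rand(n, d)   # r1, r2 ~ U(0, 1), drawn per particle and dimension
    r2 = np.random.rand(n, d)
    # velocity update: inertia term + cognitive (pbest) term + social (gbest) term
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # position update: each particle moves along its new velocity
    x = x + v
    return x, v
```

A full run would simply loop pso_step, re-evaluating the fitness of every particle and refreshing pbest and gbest after each call.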

In the PSO search mode, $v_i$ denotes the velocity, which randomly influences the direction and position of the current point so that the algorithm searches over a given region. If the evolutionary iteration of the algorithm is understood as an adaptive process, the particle position $x_i$ is not replaced by new particles but changes adaptively according to the velocity vector $v_i$. What makes this scheme distinctive is that at every iteration each particle only flies in directions that the collective experience of the swarm considers good; in other words, the basic particle swarm algorithm performs a kind of "conscious" evolution.

In the standard particle swarm optimization algorithm, the direction of motion of a particle is determined mainly by its historical best position and the global best position. To increase population diversity and widen the search range, the proposed algorithm introduces a combination of a spatial local search strategy and a spatial global search strategy: the idea of opposition-based (reverse) learning is added in the initialization stage, and during the evolution stage the Bernstein particle of the population-best particle at the $t$-th iteration and the reverse particle of a randomly selected particle in the population are added as guiding particles. This increases population diversity, widens the search range, and helps the algorithm find the global optimum quickly.

The proposed algorithm adjusts the search mode of the particle swarm optimization algorithm; the velocity and position iteration formulas are modified as follows:

$$v_i^{t+1} = \omega\,v_i^{t} + c_1 r_1\,\big(p_i^{t} - x_i^{t}\big) + c_2 r_2\,\big(p_g^{t} - x_i^{t}\big) + c_3 r_3\,\big(B^{t} - x_i^{t}\big), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1}$$

The improved particle swarm optimization algorithm increases population diversity in the initialization stage and raises the convergence speed of the population; in the evolution stage, the velocity and position iteration formulas ensure that the particles follow the search mode of the algorithm and that positions are adjusted according to velocities, so the search range is widened and the convergence speed is improved.

To verify whether the improvement is effective, two classes of international Benchmark test functions, unimodal and multimodal, were selected in the experiments to evaluate the optimization effect of the improved algorithm. By increasing population diversity and raising the probability of finding an optimum, the invention helps the algorithm escape from local optima. Experiments show that the invention converges faster and with higher accuracy than several other algorithms.

Description of the Drawings

Figure 1 is a flow chart of the particle swarm algorithm of the present invention.

Figure 2 compares the results of the present invention and the original algorithm on unimodal test functions.

Figure 3 compares the results of the present invention and the original algorithm on multimodal test functions.

Detailed Description of Embodiments

The invention is a numerical function optimization method based on an improved particle swarm optimization algorithm; the method comprises the following steps:

Step 1: generate the initial population. The quality of the initial population affects the search speed: a good initial population helps the algorithm find the optimal solution quickly. When solving for x, the initial value of x is usually a guess based on accumulated experience or purely random. On this basis, the opposite value of x can also be used to try to obtain a better solution, so that the next generation of x approaches the optimal solution faster. The spatial global search strategy, namely the idea of opposition-based (reverse) learning, is adopted: during population evolution, each time a particle finds a current best position it also generates a corresponding reverse position, and if the fitness of the reverse position is better, the particles with better fitness are selected to form the initial population. Let $X_i(t) = (x_{i1}, x_{i2}, \ldots, x_{iD})$ be the position of the $i$-th particle in the population at the $t$-th iteration; the position of the corresponding reverse particle is defined as:

[reverse-particle position formula: given only as an image in the original publication]

where $x_{ij} \in [a_j, b_j]$; $k$, $k_1$ and $k_2$ are random numbers in $(0,1)$; and $[a_j, b_j]$ is the interval of the $j$-th dimension of $x_{ij}$, expressed as $a_j(t) = \min_i\big(x_{ij}(t)\big)$, $b_j(t) = \max_i\big(x_{ij}(t)\big)$.
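As an illustrative sketch (not part of the original filing), the opposition-based initialization of Step 1 could be implemented roughly as below. The exact reverse-position formula in the filing is given only as an image, so the commonly used generalized rule x* = k(a + b) - x with dynamic bounds a, b is assumed here; the function name opposition_init and the clipping to the search bounds are likewise assumptions.

```python
import numpy as np

def opposition_init(pop_size, dim, lower, upper, fitness):
    """Opposition-based initialization (a sketch; the filing's exact formula is an image).

    A random population is generated, opposite points are formed with the assumed
    generalized rule x* = k * (a + b) - x with k ~ U(0, 1) and dynamic bounds a, b,
    and the fitter half of the combined set is kept as the initial population.
    """
    x = np.random.uniform(lower, upper, (pop_size, dim))
    a = x.min(axis=0)                                  # dynamic per-dimension lower bound a_j
    b = x.max(axis=0)                                  # dynamic per-dimension upper bound b_j
    k = np.random.rand(pop_size, dim)
    x_opp = np.clip(k * (a + b) - x, lower, upper)     # assumed opposition rule, kept in bounds
    candidates = np.vstack([x, x_opp])
    f = np.array([fitness(p) for p in candidates])
    keep = np.argsort(f)[:pop_size]                    # minimization: keep the fittest candidates
    return candidates[keep]
```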

Step 2: the inertia weight and the learning factors control how the particles behave during the search. The weight effectively controls the convergence speed of the particle swarm, while the learning factors make full use of the "genetic knowledge" of the whole population and of the individuals, guiding the direction of particle motion through the social interaction between particles. The proposed algorithm replaces the fixed learning factors with a semi-fixed scheme in which the learning factor $c_3$ is generated and controlled by a Bernstein polynomial, so that $c_1$ and $c_2$ preserve the independence of the particles while the reduced second learning factor lowers the probability of particle aggregation. The Bernstein polynomial is expressed as:

$$B_{k,3}(\beta) = \binom{3}{k}\,\beta^{k}\,(1-\beta)^{3-k}, \qquad \binom{3}{k} = \frac{3!}{k!\,(3-k)!}$$

where $\beta \sim U(0,1)$, $k_1 \sim U(0,1)$, and $k \in U\{1,2,3\}$.
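As a hedged illustration (not part of the original filing), the semi-fixed learning factor of Step 2 could be sampled as sketched below; since the filing gives the Bernstein expression only as an image, the degree-3 Bernstein basis and the scaling by k1 are assumptions.

```python
import numpy as np
from math import comb

def bernstein_c3():
    """Sample a semi-fixed learning factor c3 from a degree-3 Bernstein basis term.

    Assumptions (the exact expression in the filing is an image): beta ~ U(0, 1),
    k1 ~ U(0, 1), k drawn uniformly from {1, 2, 3}, and c3 = k1 * B_{k,3}(beta).
    """
    beta = np.random.rand()
    k1 = np.random.rand()
    k = np.random.randint(1, 4)                              # k in {1, 2, 3}
    b_k3 = comb(3, k) * beta ** k * (1 - beta) ** (3 - k)    # Bernstein basis B_{k,3}(beta)
    return k1 * b_k3
```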

Step 3: in the standard particle swarm optimization algorithm, the direction of motion of a particle is determined mainly by its historical best position and the global best position. To increase population diversity, the proposed algorithm introduces a spatial local search strategy: the Bernstein particle of the population-best particle at the $t$-th iteration and the reverse particle of a randomly selected particle in the population are added as guiding particles, which widens the search range and helps the algorithm find the global optimum quickly. The search mode of the particle swarm algorithm is adjusted accordingly, so in the improved PSO algorithm the velocity and position iteration formulas are modified as follows:

$$v_i^{t+1} = \omega\,v_i^{t} + c_1 r_1\,\big(p_i^{t} - x_i^{t}\big) + c_2 r_2\,\big(p_g^{t} - x_i^{t}\big) + c_3 r_3\,\big(B^{t} - x_i^{t}\big), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1}$$

where $c_1$, $c_2$ and $c_3$ are learning factors; $\omega$ is the inertia weight; $r_1$, $r_2$ and $r_3$ are random numbers in $(0,1)$; $v_i^{t}$ and $p_i^{t}$ are the velocity and the historical best position of particle $i$ at the $t$-th iteration; $p_g^{t}$ is the best position of the entire population at the $t$-th iteration; and $B^{t}$ is the Bernstein particle of the population-best particle at the $t$-th iteration.
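A minimal sketch of the modified update follows, again for illustration only; because the filing shows the formula only as an image, the sketch assumes a single extra guiding term toward the Bernstein particle weighted by c3 and r3, and the name bern_gbest is an illustrative placeholder.

```python
import numpy as np

def improved_pso_step(x, v, pbest, gbest, bern_gbest,
                      w=0.7, c1=2.0, c2=2.0, c3=1.0):
    """One iteration of the modified update (a sketch; the filing gives the formula as an image).

    bern_gbest stands for the Bernstein particle derived from the population-best
    particle at the current iteration; it enters as a third guiding term weighted
    by c3 and a fresh random factor r3.
    """
    n, d = x.shape
    r1, r2, r3 = np.random.rand(3, n, d)   # three independent U(0, 1) factors
    v = (w * v
         + c1 * r1 * (pbest - x)
         + c2 * r2 * (gbest - x)
         + c3 * r3 * (bern_gbest - x))     # extra pull toward the Bernstein guide particle
    x = x + v
    return x, v
```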

Step 4: to verify whether the improvement is effective, two classes of international Benchmark test functions, unimodal and multimodal, are selected in the experiments to evaluate the optimization effect of the improved algorithm.
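The filing does not list the specific benchmark functions, so the two functions below are only typical representatives of the unimodal and multimodal classes mentioned in Step 4 (Sphere and Rastrigin), given here for illustration.

```python
import numpy as np

def sphere(x):
    """Unimodal benchmark: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal benchmark: many local minima, global minimum 0 at the origin."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```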

Claims (4)

1. A numerical function optimization method based on an improved particle swarm optimization algorithm is characterized by comprising the following steps:
Step 1: the quality of the initial population affects the search speed, and a good initial population helps the algorithm find the optimal solution quickly. When solving for x, the initial value of x is usually a guess based on accumulated experience or purely random; on this basis, the opposite value of x can also be used to try to obtain a better solution, so that the next generation of x approaches the optimal solution faster. A spatial global search strategy, namely the idea of opposition-based (reverse) learning, is adopted: during population evolution, each time a particle finds a current best position it also generates a corresponding reverse position, and if the fitness of the reverse position is better, the particles with better fitness are selected to form the initial population.
Step 2: the inertia weight and the learning factors control how the particles behave during the search. The weight effectively controls the convergence speed of the particle swarm, while the learning factors make full use of the "genetic knowledge" of the whole population and of the individuals, guiding the direction of particle motion through the social interaction between particles. The proposed algorithm replaces the fixed learning factors with a semi-fixed scheme in which the learning factor $c_3$ is generated and controlled by a Bernstein polynomial, so that $c_1$ and $c_2$ preserve the independence of the particles while the reduced second learning factor lowers the probability of particle aggregation.
Step 3: in the standard particle swarm optimization algorithm, the direction of motion of a particle is determined mainly by its historical best position and the global best position. To increase population diversity, the proposed algorithm introduces a spatial local search strategy: the Bernstein particle of the population-best particle at the $t$-th iteration and the reverse particle of a randomly selected particle in the population are added as guiding particles, which widens the search range and helps the algorithm find the global optimum quickly.
Step 4: to verify whether the improvement is effective, two classes of international Benchmark test functions, unimodal and multimodal, are selected for testing, and the optimization effect of the improved algorithm is verified.
2. The numerical function optimization method based on the improved particle swarm optimization algorithm according to claim 1, further comprising generating an initial population using the opposition-based (reverse) learning strategy in step 1. Let $X_i(t) = (x_{i1}, x_{i2}, \ldots, x_{iD})$ be the position of the $i$-th particle in the population at the $t$-th iteration; the position of the corresponding reverse particle is then defined by a formula that is given only as an image in the original publication, where $x_{ij} \in [a_j, b_j]$; $k$, $k_1$ and $k_2$ are random numbers in $(0,1)$; and $[a_j, b_j]$ is the interval of the $j$-th dimension of $x_{ij}$, expressed as

$$a_j(t) = \min_i\big(x_{ij}(t)\big), \qquad b_j(t) = \max_i\big(x_{ij}(t)\big) \quad (2)$$
3. The numerical function optimization method based on the improved particle swarm optimization algorithm according to claim 1, further comprising, in step 2, generating the learning factor $c_3$ from a Bernstein polynomial expressed as

$$B_{k,3}(\beta) = \binom{3}{k}\,\beta^{k}\,(1-\beta)^{3-k}, \qquad \binom{3}{k} = \frac{3!}{k!\,(3-k)!}$$

where $\beta \sim U(0,1)$, $k_1 \sim U(0,1)$, and $k \in U\{1,2,3\}$.
4. The numerical function optimization method based on the improved particle swarm optimization algorithm according to claim 1, further comprising, in step 3, adjusting the search mode of the particle swarm algorithm, so that in the improved PSO algorithm the velocity and position iteration formulas are modified as follows:

$$v_i^{t+1} = \omega\,v_i^{t} + c_1 r_1\,\big(p_i^{t} - x_i^{t}\big) + c_2 r_2\,\big(p_g^{t} - x_i^{t}\big) + c_3 r_3\,\big(B^{t} - x_i^{t}\big), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1}$$

where $c_1$, $c_2$ and $c_3$ are learning factors; $\omega$ is the inertia weight; $r_1$, $r_2$ and $r_3$ are random numbers in $(0,1)$; $v_i^{t}$ and $p_i^{t}$ are the velocity and the historical best position of particle $i$ at the $t$-th iteration; $p_g^{t}$ is the best position of the entire population at the $t$-th iteration; and $B^{t}$ is the Bernstein particle of the population-best particle at the $t$-th iteration.
CN202110403163.4A 2021-04-15 2021-04-15 Numerical function optimization method based on improved particle swarm optimization algorithm Pending CN115222006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110403163.4A CN115222006A (en) 2021-04-15 2021-04-15 Numerical function optimization method based on improved particle swarm optimization algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110403163.4A CN115222006A (en) 2021-04-15 2021-04-15 Numerical function optimization method based on improved particle swarm optimization algorithm

Publications (1)

Publication Number Publication Date
CN115222006A true CN115222006A (en) 2022-10-21

Family

ID=83605430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110403163.4A Pending CN115222006A (en) 2021-04-15 2021-04-15 Numerical function optimization method based on improved particle swarm optimization algorithm

Country Status (1)

Country Link
CN (1) CN115222006A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116432687A (en) * 2022-12-14 2023-07-14 江苏海洋大学 Group intelligent algorithm optimization method
CN116362521A (en) * 2023-05-29 2023-06-30 天能电池集团股份有限公司 Intelligent factory application level production scheduling method for high-end battery
CN116362521B (en) * 2023-05-29 2023-08-22 天能电池集团股份有限公司 Intelligent factory application level production scheduling method for high-end battery
CN117575002A (en) * 2023-11-13 2024-02-20 昆明理工大学 A two-stage competitive group optimization method based on multi-strategy

Similar Documents

Publication Publication Date Title
Qolomany et al. Parameters optimization of deep learning models using particle swarm optimization
CN115222006A (en) Numerical function optimization method based on improved particle swarm optimization algorithm
Bacanin et al. Artificial bee colony (ABC) algorithm for constrained optimization improved with genetic operators
CN107272679B (en) Path Planning Method Based on Improved Ant Colony Algorithm
CN107506821A (en) A kind of improved particle group optimizing method
Luo et al. Novel grey wolf optimization based on modified differential evolution for numerical function optimization
CN111553469A (en) A wireless sensor network data fusion method, device and storage medium
CN113408610B (en) Image identification method based on adaptive matrix iteration extreme learning machine
CN115100864A (en) A Traffic Signal Control Optimization Method Based on Improved Sparrow Search Algorithm
CN112469050A (en) WSN three-dimensional coverage enhancement method based on improved wolf optimizer
CN106162663A (en) A kind of based on the sensing node covering method improving ant colony algorithm
CN109657147A (en) Microblogging abnormal user detection method based on firefly and weighting extreme learning machine
CN109753680A (en) A Particle Swarm Intelligence Method Based on Chaos Optimizing Mechanism
Talal Comparative study between the (ba) algorithm and (pso) algorithm to train (rbf) network at data classification
CN106447022A (en) Multi-objective particle swarm optimization based on globally optimal solution selected according to regions and individual optimal solution selected according to proximity
Silva et al. Chasing the swarm: a predator prey approach to function optimisation
CN114662638A (en) Mobile robot path planning method based on improved artificial bee colony algorithm
CN112381271B (en) A Distributed Multi-objective Optimization Acceleration Method for Fast Against Deep Belief Networks
CN107169594B (en) An optimization method and device for vehicle routing problem
CN110852435A (en) Neural evolution calculation model
CN113487870A (en) Method for generating anti-disturbance to intelligent single intersection based on CW (continuous wave) attack
Basu Improved particle swarm optimization for global optimization of unimodal and multimodal functions
CN109492744A (en) A kind of mixed running optimal control method that discrete binary particle swarm algorithm is coupled with fuzzy control
CN102479338A (en) Particle Swarm Optimization Algorithm Using Sine Functions to Describe Nonlinear Inertial Weights
Hongru et al. A hybrid PSO based on dynamic clustering for global optimization

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20221021