CN103676649A - Local self-adaptive WNN (Wavelet Neural Network) training system, device and method - Google Patents
Local self-adaptive WNN (Wavelet Neural Network) training system, device and method Download PDFInfo
- Publication number
- CN103676649A CN103676649A CN201310466382.2A CN201310466382A CN103676649A CN 103676649 A CN103676649 A CN 103676649A CN 201310466382 A CN201310466382 A CN 201310466382A CN 103676649 A CN103676649 A CN 103676649A
- Authority
- CN
- China
- Prior art keywords
- wnn
- module
- matrix
- parameter
- function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 78
- 238000012549 training Methods 0.000 title claims abstract description 71
- 238000013528 artificial neural network Methods 0.000 title claims description 35
- 239000011159 matrix material Substances 0.000 claims description 100
- 238000004422 calculation algorithm Methods 0.000 claims description 26
- 239000013598 vector Substances 0.000 claims description 24
- 238000013401 experimental design Methods 0.000 claims description 10
- 238000007781 pre-processing Methods 0.000 claims description 6
- 238000009826 distribution Methods 0.000 claims description 5
- 238000012986 modification Methods 0.000 claims description 2
- 230000004048 modification Effects 0.000 claims description 2
- 241000238876 Acari Species 0.000 claims 1
- 238000013480 data collection Methods 0.000 claims 1
- 239000003550 marker Substances 0.000 claims 1
- 238000010606 normalization Methods 0.000 claims 1
- 230000008859 change Effects 0.000 abstract description 3
- 238000004519 manufacturing process Methods 0.000 abstract description 3
- 230000002035 prolonged effect Effects 0.000 abstract 1
- 230000006870 function Effects 0.000 description 76
- 230000003044 adaptive effect Effects 0.000 description 22
- 238000005457 optimization Methods 0.000 description 21
- 238000012706 support-vector machine Methods 0.000 description 11
- 210000002569 neuron Anatomy 0.000 description 10
- 238000000354 decomposition reaction Methods 0.000 description 8
- 238000004364 calculation method Methods 0.000 description 6
- 230000008569 process Effects 0.000 description 6
- 238000013519 translation Methods 0.000 description 4
- 230000008901 benefit Effects 0.000 description 3
- 230000007423 decrease Effects 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 3
- 230000009467 reduction Effects 0.000 description 3
- 230000007547 defect Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000010187 selection method Methods 0.000 description 2
- 230000006978 adaptation Effects 0.000 description 1
- 230000032683 aging Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000005094 computer simulation Methods 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000002790 cross-validation Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
- 238000003379 elimination reaction Methods 0.000 description 1
- 238000003050 experimental design method Methods 0.000 description 1
- 230000002068 genetic effect Effects 0.000 description 1
- 238000011478 gradient descent method Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000006386 neutralization reaction Methods 0.000 description 1
- 238000005312 nonlinear dynamic Methods 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Images
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Description
技术领域 technical field
本发明涉及一种局部自适应小波神经网络训练系统、设备及方法,尤其是一种系统平稳性高的局部自适应小波神经网络训练系统、设备及方法。 The invention relates to a local adaptive wavelet neural network training system, equipment and method, in particular to a local adaptive wavelet neural network training system, equipment and method with high system stability.
背景技术 Background technique
设有个样本点,其输入-输出关系由小波神经网络(WNN)模型表示 (1) There are sample points, and its input-output relationship is represented by a wavelet neural network (WNN) model (1)
这里,是径向基小波函数,其形式为,和分别是的平移参数和尺度参数,为WNN的隐含节点数目。在训练WNN时,首先建立小波神经元候选集,候选集中小波神经元参数,小波函数参数初始值由数据集聚类的结果进行确定,细节请参考[Stephen A. Billings, Hua-Liang Wei. A new class of wavelet networks for nonlinear system identification. IEEE Transaction on Neural Networks, 16(4):862-870, 2005]。 Here, is the radial basis wavelet function, its form is, and are respectively the translation parameter and the scale parameter, and is the hidden node number of WNN. When training WNN, first establish a wavelet neuron candidate set, the wavelet neuron parameters in the candidate set, and the initial value of the wavelet function parameters are determined by the results of data set clustering. For details, please refer to [Stephen A. Billings, Hua-Liang Wei. A new class of wavelet networks for nonlinear system identification. IEEE Transaction on Neural Networks, 16(4):862-870, 2005].
显然,WNN是一种3层结构神经网络,其模型参数包括隐含层节点数量、隐含层-输出层连接权重及WNN隐含节点参数。WNN具有良好的多个尺度逼近特性和推广性能,并且小波节点参数具有明确的物理含义,小波函数具有优良的局部支撑性,广泛应用于非线性动态系统的建模、非线性分类器建模等领域。推广性能(即对新样本的预测精度)是衡量WNN性能的重要指标,而推广性能直接取决于WNN的结构复杂性(小波网络隐含层节点数目)、小波网络隐含节点参数的选取。 Obviously, WNN is a 3-layer neural network, and its model parameters include the number of hidden layer nodes, hidden layer-output layer connection weights and WNN hidden node parameters. WNN has good multi-scale approximation characteristics and generalization performance, and wavelet node parameters have clear physical meanings, wavelet functions have excellent local support, and are widely used in nonlinear dynamic system modeling, nonlinear classifier modeling, etc. field. Generalization performance (that is, the prediction accuracy of new samples) is an important index to measure the performance of WNN, and the generalization performance directly depends on the structural complexity of WNN (the number of nodes in the hidden layer of the wavelet network) and the selection of hidden node parameters of the wavelet network.
目前WNN训练方法主要分为如下几类方法: At present, WNN training methods are mainly divided into the following categories:
(1)基于AIC、BIC等模型准则的WNN训练方法。 (1) WNN training method based on model criteria such as AIC and BIC.
该类方法主要有两种方法: There are mainly two methods of this type of method:
a. 首先选取大量冗余小波节点,保持小波节点参数不变条件下使用上述模型选择准则确定WNN结构。由于选取合适的小波节点参数是一个非常困难的问题,这种方法难以求取最优的WNN模型; a. First select a large number of redundant wavelet nodes, and use the above model selection criteria to determine the WNN structure while keeping the wavelet node parameters unchanged. Since it is a very difficult problem to select the appropriate wavelet node parameters, this method is difficult to find the optimal WNN model;
b. 使用遗传算法(GA)基于上述模型选择准则选取最优的小波节点参数和WNN结构。这种方法计算代价太大,在实际中很难应用。 b. Use the genetic algorithm (GA) to select the optimal wavelet node parameters and WNN structure based on the above model selection criteria. This method is computationally expensive and difficult to apply in practice.
(2)基于梯度下降方法的经验风险最小化WNN训练方法。 (2) Empirical risk minimization WNN training method based on gradient descent method.
由于该优化问题是多变量、非线性、非凸优化问题,该方法与常规的基于梯度下降的神经网络训练方法的缺陷相同,均存在训练速度慢、易于收敛到局部极小点、难以确定合适的WNN结构等缺陷。尽管有研究者使用共轭梯下降法能在一定程度上提高WNN训练速度,但是上述弊端仍然没有解决。 Since the optimization problem is a multi-variable, nonlinear, non-convex optimization problem, this method has the same defects as the conventional neural network training method based on gradient descent, such as slow training speed, easy convergence to local minimum points, and difficulty in determining the appropriate WNN structure and other defects. Although some researchers use the conjugate ladder descent method to improve the WNN training speed to a certain extent, the above drawbacks are still not resolved.
(3)使用支持向量机(SVM)理论方法提高WNN的泛化性能。 (3) Use the support vector machine (SVM) theoretical method to improve the generalization performance of WNN.
是统计学习理论发展的标志性成果,它不仅具有坚实的理论基础,而且具有良好的推广性能,受到学术界和工业界的极大关注。由于SVM与WNN具有相同的结构,申请人证明了径向基向基核函数就是一种满足Mercer条件的核函数,并提出了一种类似WNN的多尺度小波SVM(WSVM)建模方法,申请者相关成果发表在2008年4期《电路与系统学报》。由于受到计算量的限制,该方法只给出了两个尺度上的WNN建模方法。 It is a landmark achievement in the development of statistical learning theory. It not only has a solid theoretical foundation, but also has good generalization performance, and has attracted great attention from academia and industry. Since SVM and WNN have the same structure, the applicant proved that the radial basis kernel function is a kind of kernel function satisfying the Mercer condition, and proposed a multi-scale wavelet SVM (WSVM) modeling method similar to WNN. The relevant results of the author were published in the 4th issue of "Journal of Circuits and Systems" in 2008. Due to the limitation of calculation amount, this method only gives the WNN modeling method on two scales.
(4)基于多核学习理论的WNN改进方法。 (4) WNN improvement method based on multi-core learning theory.
近年来有学者提出了多核SVM方法。该方法使用多个核函数的线性组合表示SVM的核函数,通过求解优化问题得到最优的核函数的权重以及SVM模型参数。当核函数为径向基小波函数时,多尺度SVM及多核SVM本质上具有WNN的多尺度逼近特性。虽然该方法继承了SVM和多尺度逼近的优点,但是无法调整核函数参数,严重影响模型的性能。 In recent years, some scholars have proposed a multi-core SVM method. This method uses a linear combination of multiple kernel functions to represent the kernel function of SVM, and obtains the weight of the optimal kernel function and the parameters of the SVM model by solving an optimization problem. When the kernel function is radial basis wavelet function, multi-scale SVM and multi-kernel SVM essentially have the multi-scale approximation characteristics of WNN. Although this method inherits the advantages of SVM and multi-scale approximation, it cannot adjust the parameters of the kernel function, which seriously affects the performance of the model.
应该指出的是,上述方法均无法在线调整WNN模型结构和参数,在实际应用中会出现样本分布不均匀(有些模态的建模训练数据不是充分的或完备的)、设备工况变化、输入干扰、外界环境、设备老化等不确定因素,导致已经训练好的WNN模型预测精度下降,因而需要在线进行训练WNN模型。 It should be pointed out that none of the above methods can adjust the structure and parameters of the WNN model online. In practical applications, there will be uneven distribution of samples (the modeling training data of some modalities is not sufficient or complete), changes in equipment working conditions, input Uncertain factors such as interference, external environment, and equipment aging lead to a decline in the prediction accuracy of the trained WNN model, so it is necessary to train the WNN model online.
虽然建模对象在各个工作模态(工作区域)的过程特性表现形式有所差异,但是综合看来,所有过程特性具有类似的潜在特性,即“共同模式(Common part)”部分,以及描述工作模态特有的潜在特性的“变化模式(Specific part)”部分组成。因此本发明根据过程系统的模式变化,采取不同的模型更新策略,可以大大减少模型学习的时间和计算复杂度,从而实现模型的在线学习。 Although the process characteristics of modeling objects in each working mode (working area) are different, but in general, all process characteristics have similar potential characteristics, namely the "common part" part, and the description work Modality-specific latent properties of the "variation pattern (Specific part)" part. Therefore, the present invention adopts different model update strategies according to the mode change of the process system, which can greatly reduce the time and computational complexity of model learning, thereby realizing online learning of models.
附录:相关背景知识。 Appendix: Relevant background knowledge.
最优实验设计 Optimum准则。 Optimum criteria for optimal experimental design.
对给定的数据集,假设它们存在下面的线性关系; For a given data set, assume that they have the following linear relationship;
其中,是高斯噪声(注意,是随机变量,其均值为0,方差为)。最优实验设计就是选择包含最多信息的实验数据学习预测函数,使得预测误差最小。令,,则。最优的估计方法就是使用最小均方误差作为代价函数,其最优解为。 Among them, is Gaussian noise (note that it is a random variable with a mean of 0 and a variance of ). The optimal experimental design is to choose the experimental data that contains the most information to learn the prediction function, so that the prediction error is the smallest. order, then. The optimal estimation method is to use the minimum mean square error as the cost function, and its optimal solution is .
为了保证回归模型的泛化性,我们还期望模型参数的方差最小。由于 In order to ensure the generalization of the regression model, we also expect the variance of the model parameters to be minimal. because
可以看出上式所示的模型参数估计是无偏估计,其协方差可以表示为 It can be seen that the model parameter estimation shown in the above formula is an unbiased estimation, and its covariance can be expressed as
因此,预测模型的预测方差为 Therefore, the forecast variance of the predictive model is
由上式可见,预测方差最小等价于模型估计参数方差最小,即。目前已经出现了多个优化准则度量模型参数方差,其中和Optimum优化准则引起人们广泛的关注。Optimum优化准则就是最小化参数协方差矩阵的迹,其等价平均方差。Optimum优化准则方法就是使得参数协方差矩阵的行列式(determinant)最小。 It can be seen from the above formula that the minimum prediction variance is equivalent to the minimum variance of the model estimated parameters, ie. At present, there have been multiple optimization criteria to measure the variance of model parameters, among which and Optimum optimization criterion has aroused widespread concern. The Optimum optimization criterion is to minimize the trace of the parameter covariance matrix, and its equivalent mean variance. The Optimum optimization criterion method is to minimize the determinant of the parameter covariance matrix.
由于WNN输出与隐含节点输出呈线性关系,如果把WNN的隐含节点看出数据特征,显然控制WNN模型复杂度问题转化为特征选择问题。因此,使用Optimum方法消除WNN冗余隐含节点可以保证WNN的推广性能。 Since the output of WNN has a linear relationship with the output of hidden nodes, if the hidden nodes of WNN can be seen as data characteristics, it is obvious that the problem of controlling the complexity of the WNN model is transformed into a feature selection problem. Therefore, using the Optimum method to eliminate WNN redundant hidden nodes can guarantee the generalization performance of WNN.
流形正则化。 Manifold regularization.
流形学习算法能够从高维数据揭示低维数据的内蕴几何结构,每个内蕴维对应某个解释变量,这样可以根据少量的隐藏变量解释高维数据。根据帕累托定律可知,非线性系统的绝大部分重要特性都是由非线性系统的局部性刻画的。因此,考虑数据集的局部几何性质(如距离,角度)是提高WNN性能的有效途径。流形学习算法与线性投影方法一样均依赖相似度矩阵计算,但其计算复杂度并没有很大的增加。 Manifold learning algorithms can reveal the intrinsic geometric structure of low-dimensional data from high-dimensional data, and each intrinsic dimension corresponds to an explanatory variable, so that high-dimensional data can be explained according to a small number of hidden variables. According to Pareto's law, most of the important characteristics of nonlinear systems are described by the locality of nonlinear systems. Therefore, considering the local geometric properties (such as distance, angle) of the dataset is an effective way to improve the performance of WNN. Like the linear projection method, the manifold learning algorithm relies on the calculation of the similarity matrix, but its computational complexity does not increase greatly.
对回归数据集,定义.相似度矩阵使用图矩阵定义,图边权重矩阵定义如下: For regression datasets, define .The similarity matrix is defined using a graph matrix, and the graph edge weight matrix is defined as follows:
这里,为的类标号;为的K-近邻集合。假设样本 Here, is the class label of ; is the K-nearest neighbor set of . hypothetical sample
在嵌入低维流形上的表示为 The representation on the embedded low-dimensional manifold is
根据谱图理论,谱图正则化因子可以使用矩阵度量低维表示的平滑性。Laplacian 流形正则化因子可以表示为 According to the spectrogram theory, the spectrogram regularization factor can use a matrix to measure the smoothness of the low-dimensional representation. The Laplacian manifold regularization factor can be expressed as
其中,对角矩阵,,。通过最小化Laplacian 流形正则化因子,低维空间内的数据集保持了高维原始数据集的局部几何结构,有效提高学习机的性能。 where, the diagonal matrix, , . By minimizing the Laplacian manifold regularization factor, the data set in the low-dimensional space maintains the local geometric structure of the high-dimensional original data set, effectively improving the performance of the learning machine.
发明内容 Contents of the invention
为了使系统的平稳性更高,本发明提供了一种系统平稳性高的局部自适应小波神经网络训练设备、系统及方法。 In order to make the system more stable, the invention provides a local adaptive wavelet neural network training device, system and method with high system stability.
为实现该目的,本发明提供了一种局部自适应小波神经网络训练系统, To achieve this purpose, the present invention provides a local adaptive wavelet neural network training system,
该局部自适应小波神经网络训练系统由信号连接的离线WNN训练模块和在线更新WNN模块组成。 The local adaptive wavelet neural network training system is composed of offline WNN training module connected by signal and online updating WNN module.
优选地,该离线WNN训练模块建立WNN初始模型; Preferably, the offline WNN training module establishes a WNN initial model;
该在线更新WNN模块根据新来数据的分布特性,采用不同WNN模型更新策略对数据进行预测。 The online update WNN module uses different WNN model update strategies to predict the data according to the distribution characteristics of the new data.
本发明还提供了一种局部自适应小波神经网络训练设备,该局部自适应小波神经网络训练设备由信号连接的数据预处理模块、在线满意G-K模糊聚类模块、小波函数参数设置模块、WNN更新策略选择模块、隐含节点选择模块、扩展卡尔曼滤波(EKF)训练模块、Laplacian 流形正则化LSSVM模块、最优实验设计Optimum模块、样本增加WNN权重更新模块、样本移除WNN权重更新模块、WNN预测模块组成。 The present invention also provides a local self-adaptive wavelet neural network training device, the local self-adaptive wavelet neural network training device is composed of a signal-connected data preprocessing module, an online satisfactory G-K fuzzy clustering module, a wavelet function parameter setting module, and a WNN updater Strategy selection module, hidden node selection module, extended Kalman filter (EKF) training module, Laplacian manifold regularization LSSVM module, optimal experimental design Optimum module, sample increase WNN weight update module, sample removal WNN weight update module, WNN prediction module composition.
优选地,该数据预处理模块的功能与作用:函数输入参数为数据集,输出参数为规格化数据集; Preferably, the function and function of the data preprocessing module: the function input parameter is a data set, and the output parameter is a normalized data set;
该在线满意G-K模糊聚类模块的功能与作用: The functions and functions of the online satisfaction G-K fuzzy clustering module:
输入参数:数据集,初始隶属度矩阵,聚类数量; Input parameters: dataset, initial membership matrix, number of clusters;
输出参数:聚类数量,隶属度矩阵; Output parameters: number of clusters, membership matrix;
该小波函数参数设置模块的功能与作用: The functions and functions of the wavelet function parameter setting module:
输入参数:隶属度矩阵,数据集,聚类数量,节点函数生成策略; Input parameters: membership matrix, data set, number of clusters, node function generation strategy;
输出参数:小波函数参数矩阵; Output parameter: wavelet function parameter matrix;
该WNN更新策略选择模块的功能与作用: The functions and functions of the WNN update strategy selection module:
输入参数:过去时刻隶属度矩阵,过去时刻聚类数量,当前隶属度矩阵,当前聚类数量; Input parameters: membership degree matrix at past time, number of clusters at past time, current membership degree matrix, current number of clusters;
输出参数:隶属度矩阵,聚类数量; Output parameters: membership matrix, number of clusters;
该隐含节点选择模块的功能与作用: The functions and functions of the implicit node selection module:
输入参数:候选节点集合,WNN权重向量,拟合数据集; Input parameters: candidate node set, WNN weight vector, fitting data set;
输出参数:小波节点参数,对应权重; Output parameters: wavelet node parameters, corresponding weights;
该扩展卡尔曼滤波(EKF)训练模块的功能与作用: The functions and functions of the Extended Kalman Filter (EKF) training module:
输入参数:小波节点参数,算法终止阈值,训练数据集; Input parameters: wavelet node parameters, algorithm termination threshold, training data set;
输出参数:小波节点参数,对应权重; Output parameters: wavelet node parameters, corresponding weights;
该Laplacian流形正则化LSSVM模块的功能与作用: The functions and functions of the Laplacian manifold regularization LSSVM module:
输入参数:训练数据集,模型参数、,矩阵L; Input parameters: training data set, model parameters, matrix L;
输出参数:权重向量; Output parameter: weight vector;
该最优实验设计Optimum模块的功能与作用: The optimal experimental design The functions and functions of the Optimum module:
输入参数:训练数据集,权重向量,WNN隐含节点组成的参数矩阵,待选节点标记向量; Input parameters: training data set, weight vector, parameter matrix composed of WNN hidden nodes, candidate node label vector;
输出参数:节点选择标记向量; Output parameters: node selection flag vector;
该样本增加WNN权重更新模块的功能与作用: This sample increases the functions and functions of the WNN weight update module:
滑动宽口数据集,新加数据,Q矩阵,R矩阵WNN隐含节点参数矩阵; Sliding wide-mouth data set, newly added data, Q matrix, R matrix WNN implicit node parameter matrix;
输出参数:更新Q矩阵,更新R矩阵; Output parameters: update Q matrix, update R matrix;
该样本移除WNN权重更新模块的功能与作用: This sample removes the functions and functions of the WNN weight update module:
输入参数:Q矩阵,R矩阵,移除数据编号; Input parameters: Q matrix, R matrix, remove data number;
输出参数:更新Q矩阵,更新R矩阵; Output parameters: update Q matrix, update R matrix;
该WNN预测模块的功能与作用: The functions and functions of the WNN prediction module:
输入数据,WNN隐含节点参数矩阵,权重矩阵,输入数据向量; Input data, WNN hidden node parameter matrix, weight matrix, input data vector;
输出参数:预测输出数据。 Output parameter: Predicted output data.
本发明又提供了一种局部自适应小波神经网络训练方法,该局部自适应小波神经网络训练方法包括: The present invention provides a kind of local adaptive wavelet neural network training method again, this local adaptive wavelet neural network training method comprises:
S31、在线局部自适应WNN结构调整; S31. Online local adaptive WNN structure adjustment;
S32、在线更新WNN权重; S32. Updating the WNN weight online;
S33、WNN更新选择策略。 S33. The WNN updates the selection strategy.
优选地,S31、在线局部自适应WNN结构调整,具体包括: Preferably, S31, online local adaptive WNN structure adjustment, specifically includes:
S311、选取WNN隐含节点; S311. Select WNN hidden nodes;
S312、控制WNN模型复杂度。 S312. Control the complexity of the WNN model.
优选地,该S312、控制WNN模型复杂度,具体包括: Preferably, the S312, controlling the WNN model complexity, specifically includes:
S3121、基于Laplacian流形正则化LSSVM的WNN权重估计; S3121, WNN weight estimation based on Laplacian manifold regularization LSSVM;
S3122、基于Optimum的WNN隐含节点序贯选择。 S3122, based on Optimum's WNN implies sequential selection of nodes.
优选地,该S32、在线更新WNN权重,具体包括: Preferably, the S32, updating the WNN weight online, specifically includes:
S321、样本增加更新阶段; S321, sample adding and updating stage;
S322、样本移除更新阶段。 S322. Sample removal and update phase.
优选地,该S33、WNN更新选择策略,具体包括: Preferably, the S33, WNN update selection strategy specifically includes:
S331、初始化; S331, initialization;
S332、根据隶属度矩阵进行聚类; S332. Perform clustering according to the membership degree matrix;
S333、判断是否结束; S333, judging whether to end;
S334、寻找与样本中心最不相似的样本作为新聚类中心 S334. Find the sample least similar to the sample center as the new cluster center
S335、计算相应的新的初始隶属度矩阵; S335. Calculate a corresponding new initial membership degree matrix;
S336、令, 转S332。 S336, order, transfer to S332.
本发明实施例提供的技术方案带来的有益效果是: The beneficial effects brought by the technical solution provided by the embodiments of the present invention are:
1). 基于上述思想,在WNN建模过程中描述“共同模式”部分的WNN节点保持不变,只需调整局部WNN结构拟合“变化模式”,大大减少训练WNN所需的计算量,适合在线WNN建模; 1). Based on the above ideas, in the WNN modeling process, the WNN nodes describing the "common mode" part remain unchanged, and only need to adjust the local WNN structure to fit the "changing mode", which greatly reduces the amount of calculation required for training WNN, suitable for Online WNN modeling;
2). 迭代地从WNN隐含节点候选集选取小波神经元加入到WNN,并采用扩展卡尔曼滤波(EKF)方法调整新增加小波节点的参数及相关权重; 2). Iteratively select wavelet neurons from the WNN hidden node candidate set to add to WNN, and use the Extended Kalman Filter (EKF) method to adjust the parameters and related weights of the newly added wavelet nodes;
3). 基于滑窗QR分解的在线更新WNN权重算法, 通过样本增加和消减的递推算法在线修正模型权重; 3). The online update WNN weight algorithm based on the sliding window QR decomposition, and the model weight is corrected online through the recursive algorithm of sample increase and decrease;
4). 引入流形学习思想,首次提出Laplacian正则化与Optimum优化准则结合的WNN复杂度控制方法,较好地考虑了训练数据集的几何结构,保证了WNN的推广性能。 4). Introduce the idea of manifold learning, and propose Laplacian regularization and The WNN complexity control method combined with the Optimum optimization criterion better considers the geometric structure of the training data set and ensures the generalization performance of WNN.
这样实际应用中结合预测误差和先验知识,通过在线调整WNN结构以及控制模型复杂度或者WNN权重更新,保证了WNN的预测精度,有效地克服了现有WNN算法难以在线学习、推广性能难以保证的问题。 In this way, in practical applications, combining prediction error and prior knowledge, by adjusting the WNN structure online and controlling the model complexity or WNN weight update, the prediction accuracy of WNN is guaranteed, which effectively overcomes the difficulty of online learning and generalization performance of existing WNN algorithms. The problem.
附图说明 Description of drawings
通过下面结合附图对本发明的一个优选实施例进行的描述,本发明的技术方案及其技术效果将变得更加清楚,且更加易于理解。其中: Through the following description of a preferred embodiment of the present invention in conjunction with the accompanying drawings, the technical solution and technical effects of the present invention will become clearer and easier to understand. in:
图1示出了本发明的实施例的局部自适应小波神经网络训练系统的结构示意图; Fig. 1 shows the structural representation of the local adaptive wavelet neural network training system of the embodiment of the present invention;
图2示出了本发明的实施例的局部自适应小波神经网络训练设备的结构示意图; Fig. 2 shows the structural representation of the local adaptive wavelet neural network training equipment of the embodiment of the present invention;
图3示出了本发明的实施例的局部自适应小波神经网络训练方法的方法流程图; Fig. 3 shows the method flowchart of the local adaptive wavelet neural network training method of the embodiment of the present invention;
图4示出了图3中的控制WNN模型复杂度的方法流程图; Fig. 4 shows the flow chart of the method for controlling the complexity of the WNN model in Fig. 3;
图5示出了图3中的在线更新WNN权重的方法流程图; Fig. 5 shows the flow chart of the method for updating WNN weight online in Fig. 3;
图6示出了图3中的WNN更新选择策略的方法流程图。 FIG. 6 shows a flow chart of the method for WNN updating and selecting a strategy in FIG. 3 .
具体实施方式 Detailed ways
以下将结合所附的附图对本发明的优选实施例进行描述。需要指明的是,其中“左”、“右”仅仅是为了结合附图便于说明和描述的目的,并非仅限于此。 Preferred embodiments of the present invention will be described below with reference to the accompanying drawings. It should be pointed out that the "left" and "right" are only for the purpose of illustration and description in conjunction with the accompanying drawings, and are not limited thereto.
第一实施例 first embodiment
图1示出了本发明的第一实施例的局部自适应小波神经网络训练系统的结构示意图。 Fig. 1 shows a schematic structural diagram of a local adaptive wavelet neural network training system according to the first embodiment of the present invention.
该局部自适应小波神经网络训练系统由信号连接的离线WNN训练模块和在线更新WNN模块组成。 The local adaptive wavelet neural network training system is composed of offline WNN training module connected by signal and online updating WNN module.
离线WNN训练模块主要建立WNN初始模型。 The offline WNN training module mainly establishes the WNN initial model.
其步骤是,首先使用基于在线满意G-K模糊聚类方法对训练数据集进行聚类,并根据聚类结果确定多个小波函数的参数。小波函数尺度参数和平移参数根据聚类的中心、半径和方差结果随机生成,这些小波节点函数组成WNN隐含节点候选集合;然后使用现有WNN训练算法建立初始WNN模型作为当前WNN。 The steps are as follows: firstly cluster the training data set by using the online satisfactory G-K fuzzy clustering method, and determine the parameters of multiple wavelet functions according to the clustering results. The wavelet function scale parameters and translation parameters are randomly generated according to the cluster center, radius and variance results, and these wavelet node functions form the WNN hidden node candidate set; then use the existing WNN training algorithm to establish the initial WNN model as the current WNN.
在线更新WNN模块根据新来数据的聚类结果,采用不同WNN模型更新策略对WNN更新并对新数据进行预测。 The online update WNN module uses different WNN model update strategies to update WNN and predict new data according to the clustering results of new data.
其步骤是,使用在线满意G-K模糊聚类方法对新来数据进行聚类,根据聚类结果,采用如下3种策略对数据进行预测; The steps are to use the online satisfactory G-K fuzzy clustering method to cluster the new data, and use the following three strategies to predict the data according to the clustering results;
S11、如果新数据与以前时刻数据均属于相同聚类且隶属度>0.5,则仍然使用当前WNN对新数据进行预测。 S11. If the new data and the previous time data belong to the same cluster and the membership degree is >0.5, the current WNN is still used to predict the new data.
、如果新数据聚类结果是新增的聚类或者新数据转移到其它聚类(隶属度>0.5)或对原来聚类隶属度<0.2(偏离了当前工况模型),则使用在线局部自适应WNN更新算法,即首先从候选集中选择最优小波函数递归添加到当前WNN模型的隐含层,并使用EKF训练新增隐含节点的参数,直至满足误差阈值;然后Laplacian 正则化与Optimum优化准则结合选择WNN隐含节点,并把从WNN模型删除的冗余的节点加入到WNN隐含节点候选集。 , If the new data clustering result is a newly added cluster or the new data is transferred to other clusters (membership degree>0.5) or the original cluster membership degree<0.2 (deviating from the current working condition model), use the online local automatic Adapt to the WNN update algorithm, that is, first select the optimal wavelet function from the candidate set and recursively add it to the hidden layer of the current WNN model, and use EKF to train the parameters of the newly added hidden nodes until the error threshold is met; then Laplacian regularization and Optimum optimization Criteria combine to select WNN hidden nodes, and add redundant nodes deleted from the WNN model to the candidate set of WNN hidden nodes.
、如果新数据对原来聚类的隶属度≥0.2且≤0.5时,表明对象受到了系统动态不确定因素的影响,只需更新WNN模型权重参数。该方法通过在滑动窗口内增加样本更新WNN权重阶段以及老样本移除后WNN权重更新阶段实现WNN权重更新。如果使用S11或者S12更新WNN模型时,需要把更新后的WNN模型替换已有的WNN模型作为当前WNN预测数据。 . If the membership degree of the new data to the original cluster is ≥0.2 and ≤0.5, it indicates that the object is affected by system dynamic uncertainties, and only the weight parameters of the WNN model need to be updated. This method realizes the WNN weight update by adding samples in the sliding window to update the WNN weight stage and the WNN weight update stage after the old samples are removed. If using S11 or S12 to update the WNN model, it is necessary to replace the existing WNN model with the updated WNN model as the current WNN prediction data.
本发明实施例提供的技术方案可以充分利用历史知识(候选WNN隐含节点集合)快速更新WNN模型,便于在线训练WNN。 The technical solution provided by the embodiment of the present invention can make full use of historical knowledge (candidate WNN hidden node set) to quickly update the WNN model, so as to facilitate online training of WNN.
第二实施例 second embodiment
图2示出了本发明的第二实施例的局部自适应小波神经网络训练设备的结构示意图。 Fig. 2 shows a schematic structural diagram of a locally adaptive wavelet neural network training device according to a second embodiment of the present invention.
该局部自适应小波神经网络训练设备由信号连接的数据预处理模块、在线满意G-K模糊聚类模块、小波函数参数设置模块、WNN更新策略选择模块、隐含节点选择模块、扩展卡尔曼(EKF)训练模块、Laplacian 正则化LSSVM模块、实验设计Optimum模块、样本增加WNN权重更新模块、样本移除WNN权重更新模块、WNN预测模块组成。 The local adaptive wavelet neural network training equipment consists of a signal-connected data preprocessing module, an online satisfactory G-K fuzzy clustering module, a wavelet function parameter setting module, a WNN update strategy selection module, a hidden node selection module, and an extended Kalman (EKF) It is composed of training module, Laplacian regularization LSSVM module, experimental design Optimum module, sample addition WNN weight update module, sample removal WNN weight update module, and WNN prediction module.
数据预处理模块的功能与作用:函数输入参数为数据集,输出参数为规格化数据集。 The function and role of the data preprocessing module: the input parameter of the function is a data set, and the output parameter is a normalized data set.
在线满意G-K模糊聚类模块的功能与作用: The functions and functions of the online satisfaction G-K fuzzy clustering module:
输入参数:数据集,初始隶属度矩阵,聚类数量; Input parameters: dataset, initial membership matrix, number of clusters;
输出参数:聚类数量,隶属度矩阵。 Output parameters: number of clusters, membership matrix.
小波函数参数设置模块的功能与作用: The functions and functions of the wavelet function parameter setting module:
输入参数:隶属度矩阵,数据集,聚类数量,节点函数生成策略; Input parameters: membership matrix, data set, number of clusters, node function generation strategy;
输出参数:小波函数参数矩阵。 Output parameter: wavelet function parameter matrix.
更新策略选择模块的功能与作用: The functions and functions of the update strategy selection module:
输入参数:过去时刻隶属度矩阵,过去时刻聚类数量,当前隶属度矩阵,当前聚类数量; Input parameters: membership degree matrix at past time, number of clusters at past time, current membership degree matrix, current number of clusters;
输出参数:隶属度矩阵,聚类数量。 Output parameters: membership matrix, number of clusters.
隐含节点选择模块的功能与作用: The functions and functions of the implicit node selection module:
输入参数:候选WNN隐含节点集合,WNN权重向量,拟合数据集; Input parameters: Candidate WNN hidden node set, WNN weight vector, fitting data set;
输出参数:小波节点参数,对应权重。 Output parameters: wavelet node parameters, corresponding weights.
扩展卡尔曼(EKF)训练模块的功能与作用: Functions and functions of the Extended Kalman (EKF) training module:
输入参数:小波节点参数,算法终止阈值,训练数据集; Input parameters: wavelet node parameters, algorithm termination threshold, training data set;
输出参数:小波节点参数,对应权重。 Output parameters: wavelet node parameters, corresponding weights.
正则化LSSVM模块的功能与作用: The functions and functions of the regularized LSSVM module:
输入参数:训练数据集,模型参数(交叉验证方法进行选择),(建议选择范围为[2,20]),矩阵(近邻数量在[3,5]内选择); Input parameters: training data set, model parameters (selected by cross-validation method), (recommended selection range is [2,20]), matrix (the number of neighbors is selected within [3,5]);
输出参数:权重向量。 Output parameter: weight vector.
最优实验设计Optimum模块的功能与作用: optimal experimental design The functions and functions of the Optimum module:
输入参数:训练数据集,权重向量,WNN隐含节点组成的参数矩阵,待选节点标记向量(0-该节点移除,1-该节点待选); Input parameters: training data set, weight vector, parameter matrix composed of WNN hidden nodes, candidate node label vector (0-the node is removed, 1-the node is to be selected);
输出参数:节点选择标记向量。 Output Parameters: Vector of node selection flags.
样本增加WNN权重更新模块的功能与作用: The functions and functions of the sample increase WNN weight update module:
滑动宽口数据集,新加数据,Q矩阵,R矩阵WNN隐含节点参数矩阵; Sliding wide-mouth data set, newly added data, Q matrix, R matrix WNN implicit node parameter matrix;
输出参数:更新Q矩阵,更新R矩阵。 Output parameters: update Q matrix, update R matrix.
样本移除WNN权重更新模块的功能与作用: The functions and functions of the sample removal WNN weight update module:
输入参数:Q矩阵,R矩阵,移除数据编号; Input parameters: Q matrix, R matrix, remove data number;
输出参数:更新Q矩阵,更新R矩阵。 Output parameters: update Q matrix, update R matrix.
预测模块的功能与作用: The functions and functions of the prediction module:
输入参数:WNN隐含节点参数矩阵,权重矩阵,输入数据向量; Input parameters: WNN hidden node parameter matrix, weight matrix, input data vector;
输出参数:预测输出数据。 Output parameter: Predicted output data.
根据图2中模块的关系,结合上述模块的输入和输出,很容易分析出各个模块之间的信息交换和处理关系,技术人员无需创造性劳动即可实现本发明方法。 According to the relationship of the modules in Fig. 2, combined with the input and output of the above modules, it is easy to analyze the information exchange and processing relationship between each module, and the technical personnel can realize the method of the present invention without creative labor.
该设备由上述程序模块完成,无需用户设置系统参数,使用非常方便,具有较好的维护性,在实际应用中无需增加硬件设备,方便对现有系统进行改造。 The device is completed by the above program modules, and does not require users to set system parameters. It is very convenient to use and has good maintainability. It does not need to add hardware devices in practical applications, and it is convenient to modify the existing system.
第三实施例 third embodiment
图3示出了本发明的第三实施例的局部自适应小波神经网络训练方法的方法流程图。 Fig. 3 shows a method flow chart of a locally adaptive wavelet neural network training method according to a third embodiment of the present invention.
该局部自适应小波神经网络训练方法包括: The local adaptive wavelet neural network training method includes:
S31、在线局部自适应WNN结构调整; S31. Online local adaptive WNN structure adjustment;
S32、在线更新WNN权重; S32. Updating the WNN weight online;
S33、WNN更新选择策略。 S33. The WNN updates the selection strategy.
该S31、在线局部自适应WNN结构调整,具体包括: The S31, online local adaptive WNN structure adjustment, specifically includes:
S311、选取WNN隐含节点; S311. Select WNN hidden nodes;
S312、控制WNN模型复杂度。 S312. Control the complexity of the WNN model. the
S311、选取WNN隐含节点,即逐步增加WNN的隐含节点并调整节点参数,直至拟合误差满足事先设定阈值;具体包括: S311. Select WNN hidden nodes, that is, gradually increase the hidden nodes of WNN and adjust node parameters until the fitting error meets the preset threshold; specifically include:
在线局部自适应WNN调整方法用于克服因系统工作模态变换导致系统非线性结构变化而引起的WNN模型与实际系统失配问题。其方法就是从小波节点候选集中逐步选取能够使得WNN逼近误差下降最大的小波节点加入WNN,然后使用EKF方法调整新加隐含节点参数及权重值。其详细描述如下: The online local adaptive WNN adjustment method is used to overcome the mismatch between the WNN model and the actual system caused by the nonlinear structure change of the system due to the transformation of the system working mode. The method is to gradually select wavelet nodes that can reduce the WNN approximation error from the wavelet node candidate set to join WNN, and then use the EKF method to adjust the parameters and weights of the newly added hidden nodes. Its detailed description is as follows:
假设当前WNN已经有个小波神经元,对于训练样本集,令,为该WNN关于的输出,为WNN逼近误差,,记。首先估计出候选节点集中每个节点的重要性,其重要性通过在上投影产生的误差下降程度进行估计。在上的投影为 Assuming that the current WNN already has a wavelet neuron, for the training sample set, let , be the output of the WNN, and be the WNN approximation error, , denote. Firstly, the importance of each node in the candidate node set is estimated, and its importance is estimated by the degree of error reduction generated by the upper projection. The projection on
(2) (2)
其中,为规格化向量。注意,上式(2)可以看作在向量上的逼近,因此其相应的投影(逼近)误差为 where is a normalized vector. Note that the above formula (2) can be regarded as an approximation on the vector, so its corresponding projection (approximation) error is
(3) (3)
很显然,选择误差减少最大的节点,即 Obviously, choose the node with the largest error reduction, that is,
(4) (4)
这里,是很小的正数,可以看作是正则化因子,目的是防止因太小而导致过拟合问题。很显然,增加小波神经元会减小WNN对样本的逼近误差。可以证明,该算法是收敛的(分析过程省略),其收敛速度由下面的定理1(证明省略)给出。 Here, is a very small positive number, which can be regarded as a regularization factor, and the purpose is to prevent overfitting problems caused by being too small. Obviously, adding wavelet neurons will reduce the approximation error of WNN to samples. It can be proved that the algorithm is convergent (the analysis process is omitted), and its convergence rate is given by Theorem 1 below (the proof is omitted).
定理1. 设样本集是满足某种有界未知的有界函数关系,若存在正数,使得对所有的成立,则对于任意的逼近误差阈值,本文算法至多经过 Theorem 1. Suppose the sample set satisfies some bounded and unknown bounded function relationship. If there is a positive number, so that it is true for all, then for any approximation error threshold, the algorithm in this paper passes at most
(5) (5)
次迭代增加小波神经元后,拟合误差不超过,即。其中,;为取整算子。其中,(修改本意是表示节点在数据集上的输出,为与前面一致,故改为)为节点与样本集之间的相关程度,越大表示函数与之间关联程度越大。 After increasing the wavelet neurons for 2 iterations, the fitting error does not exceed, ie. Among them, ; is the rounding operator. Among them, (the intention of the modification is to indicate the output of the node on the data set, which is consistent with the previous one, so it is changed to) is the degree of correlation between the node and the sample set, and the larger the value, the greater the degree of correlation between the function and the sample set.
定理1表明,可以通过递增WNN隐含节点使得误差达到设定值。然而在实际中,难以事先设定候选集中小波函数的参数,不可避免地导致隐含节点过多。由奥卡姆剃须刀原理可知,冗余的隐含节点会降低WNN模型的泛化性能。令为具有个隐含节点WNN的拟合误差,增加隐含小波节点后WNN的拟合误差以速率减少,即 Theorem 1 shows that the error can reach the set value by increasing the hidden nodes of WNN. However, in practice, it is difficult to pre-set the parameters of the wavelet function in the candidate set, which inevitably leads to too many hidden nodes. According to the principle of Occam's razor, redundant hidden nodes will reduce the generalization performance of the WNN model. Let be the fitting error of WNN with hidden nodes, and the fitting error of WNN decreases at a rate after adding hidden wavelet nodes, namely
(6) (6)
由于相关程度决定误差的下降速度,进而影响WNN隐含节点的数量。因此,有必要对当前选择小波神经元参数进行局部调整,最大程度提高局部小波节点的相关性,从而降低样本拟合误差。本发明采用EKF方法搜索新增加小波节点的最优参数和相关权重。局部小波神经元的平移和尺度以及权重,记。为加快训练算法的收敛速度,本文增加参数学习率因子,基于EKF算法的参数训练方式如下: Since the degree of correlation determines the speed of error reduction, it affects the number of hidden nodes in WNN. Therefore, it is necessary to locally adjust the parameters of the currently selected wavelet neurons to maximize the correlation of local wavelet nodes, thereby reducing the sample fitting error. The invention adopts the EKF method to search for the optimal parameters and related weights of the newly added wavelet nodes. The translation, scale and weight of the local wavelet neuron, remember. In order to speed up the convergence speed of the training algorithm, this paper increases the parameter learning rate factor. The parameter training method based on the EKF algorithm is as follows:
(7) (7)
(8) (8)
(9) (9)
(10) (10)
其中,与分别为局部小波神经元输出和期望输出;是训练误差;;为学习速率;是Kalman增益向量;估计的噪声协方差,可以由进行递归求取;是状态估计误差协方差矩阵;是EKF算法的迭代次数;。为自适应调整学习速率以加快算法的收敛性,定理2给出了学习速率须要满足的条件,较好地解决了速率自适应选择的问题。 Among them, and are respectively the local wavelet neuron output and the expected output; is the training error; is the learning rate; is the Kalman gain vector; the estimated noise covariance can be obtained by recursion; is the state estimation error covariance matrix; is The number of iterations of the EKF algorithm; . In order to adaptively adjust the learning rate to speed up the convergence of the algorithm, Theorem 2 gives the conditions that the learning rate must satisfy, which better solves the problem of adaptive selection of the rate.
定理2. 令为隐含节点的参数,对给定的新样本为隐含节点的预测输出,是隐含节点的期望输出,是相应参数的学习速率,是EKF增益向量。如果学习速率满足 Theorem 2. Let be the parameter of the hidden node, the predicted output of the hidden node for a given new sample, is the expected output of the hidden node, is the learning rate of the corresponding parameter, and is the EKF gain vector. If the learning rate satisfies
(11) (11)
那么基于EKF的WNN训练算法是一致收敛的,其中。 Then the WNN training algorithm based on EKF is consistent convergence, among them.
我们选取Lyapunov函数很容易证明定理2,其中,为WNN对样本的预测输出。 It is easy to prove Theorem 2 by choosing the Lyapunov function, where is the predicted output of the sample by WNN.
由于该方法逐步增加新的隐含节点并使用变步长EKF算法调整局部小波节点参数,收敛速度很快,算法平均运行 40s就能取得满意的建模误差。 Because the method gradually adds new hidden nodes and uses the variable step size EKF algorithm to adjust the local wavelet node parameters, the convergence speed is very fast, and the algorithm can obtain a satisfactory modeling error in an average of 40 seconds.
在实际应用中,逐步增加WNN隐含节点数量虽然能够有效降低建模误差,但是也不可避免会会出现冗余隐含节点。根据奥姆卡剃须刀原理,WNN过多的冗余节点会降低模型的泛化性能。利用数据局部几何结构信息是提高WNN性能的重要手段。借鉴流形学习理论,本发明提出Laplacian正则化与Optimum结合的WNN模型复杂度控制方法。 In practical applications, although gradually increasing the number of hidden nodes in WNN can effectively reduce the modeling error, redundant hidden nodes will inevitably appear. According to the principle of Omka's razor, too many redundant nodes in WNN will reduce the generalization performance of the model. Utilizing the local geometric structure information of data is an important means to improve the performance of WNN. Drawing on manifold learning theory, the present invention proposes Laplacian regularization and A WNN model complexity control method combined with Optimum.
、控制WNN模型复杂度,即基于Laplacian正则化与Optimum优化准则结合的WNN复杂度控制方法。 , Control the complexity of the WNN model, that is, based on Laplacian regularization and WNN complexity control method combined with Optimum optimization criterion.
根据最优实验设计方法,在保持WNN隐含节点参数不变的前提下,最小化期望模型参数的方差等价于选择数据集的重要特征。基于最优实验设计optimum准则,考虑到数据集的局部几何结构,本发明提出两步WNN隐含节点选择方法:首先基于Laplacian流形正则化的最小二乘支持向量机(LSSVM)估计回归模型WNN权重参数,然后根据估计参数方差最小的Optimum优化准则序贯地选择WNN隐含节点。 According to the optimal experimental design method, under the premise of keeping the hidden node parameters of WNN unchanged, minimizing the variance of the expected model parameters is equivalent to selecting important features of the data set. Based on optimal experimental design Optimum criteria, considering the local geometric structure of the data set, the present invention proposes a two-step WNN hidden node selection method: firstly estimate the weight parameters of the regression model WNN based on the least squares support vector machine (LSSVM) of Laplacian manifold regularization, and then according to Estimated parameters with the smallest variance The Optimum optimization criterion sequentially selects WNN hidden nodes.
如图4所示,该S312、控制WNN模型复杂度,具体包括: As shown in Figure 4, the S312, controlling the complexity of the WNN model, specifically includes:
S3121、基于Laplacian正则化LSSVM的WNN权重估计; S3121, WNN weight estimation based on Laplacian regularization LSSVM;
对上面数据集,假设WNN存在个隐含节点,设输入样本,,,WNN输出及其隐含节点输出记为和,。记,令为数据集第个特征组成的向量,特征集。很显然,数据集的第个特征对应WNN的第个隐含节点。因此选择个重要的WNN节点就等价于从数据集选择个重要特征。假设选择个特征为。令为选择特征后组成的样本集,其中,。令,,,其定义如下: For the above data set, it is assumed that there are hidden nodes in WNN, and the input samples,,, WNN output and its hidden node output are recorded as sum, . Note, Let be the vector composed of the first feature of the data set, feature set. Obviously, the th feature of the data set corresponds to the th hidden node of WNN. Therefore, selecting an important WNN node is equivalent to selecting an important feature from the data set. Suppose you choose a feature of . Let be the sample set formed after selecting features, where . Order,,, its definition is as follows:
(12) (12)
则。那么在选择的特征空间中回归模型为 but. Then the regression model in the selected feature space is
其中,是未知的均值为0的误差。假设不同数据样本的误差是相互独立的,但是有相同的方差。为保证选择节点的泛化性能并考虑到数据的局部几何结构,我们使用基于Laplacian正则化的LSSVM求取的最优值,其优化问题为如下形式: where is the unknown error with a mean of 0. It is assumed that the errors of different data samples are independent of each other, but have the same variance. In order to ensure the generalization performance of the selected nodes and take into account the local geometric structure of the data, we use the optimal value obtained by the LSSVM based on Laplacian regularization, and the optimization problem is as follows:
(13) (13)
其中,为正则化因子,矩阵和权重定义可参考背景技术相关部分。目标函数对求导并等于0可得的最优解: Wherein, for regularization factor, matrix and weight definition, please refer to the relevant part of the background technology. The optimal solution obtained by deriving the objective function and being equal to 0:
其中,为的单位矩阵。令,那么 where is the identity matrix of . order, then
(14) (14)
S3122、基于Optimum的WNN隐含节点序贯选择。 S3122, based on Optimum's WNN implies sequential selection of nodes.
注意到、正定对称性以及,令 Note that the positive definite symmetry and, let
则估计参数的有偏性和方差为 The bias and variance of the estimated parameters are then
(15) (15)
(16) (16)
根据式(14)可知预测值为,则预测误差的方差为 According to formula (14), it can be seen that the predicted value is, then the variance of the forecast error is
(17) (17)
注意到和,,根据式(16),参数估计误差的方差为 Note that and , according to formula (16), the variance of the parameter estimation error is
(18) (18)
把上式带入式(17)得 Put the above formula into formula (17) to get
一般来说,正则化系数设置比较小,而误差惩罚系数设置较大,注意到和是正定矩阵,且,我们有 Generally speaking, the regularization coefficient is set to be relatively small, while the error penalty coefficient is set to be large. Note that and are positive definite matrices, and we have
(19) (19)
类似地 Similarly
(20) (20)
基于最优实验设计原理,我们期望选择的特征子集能够使得估计参数的协方差矩阵最小。而最小化也能够使得新样本的预测误差最小,该问题可以等价于Optimim优化准则:。 Based on the principle of optimal experimental design, we expect the selected feature subset to minimize the covariance matrix of the estimated parameters. And minimization can also minimize the prediction error of new samples, which can be equivalent to Optimim optimization guidelines: .
对矩阵,L是正半定矩阵,是正定、可逆矩阵。令,根据Woodbury公式得 For matrices, L is a positive semidefinite matrix, which is a positive definite, invertible matrix. Let, according to the Woodbury formula
(21) (twenty one)
注意到,我们可以得到 Note that we can get
(22) (twenty two)
考虑到是常数,因此选择WNN隐含节点问题就转化为如下优化问题: Considering it is a constant, the problem of selecting WNN hidden nodes is transformed into the following optimization problem:
(23) (twenty three)
注意到只包含个选择特征:,因此可以写为。由于,则,因此式(23)所示优化问题转化为 Note that there are only selection features: , so it can be written as . Since, then, the optimization problem shown in formula (23) is transformed into
(24) (twenty four)
我们使用序贯优化方法对上述优化问题进行求解。首先假设已经选择个节点,可以通过下面的优化问题选取第个节点: We use a sequential optimization method to solve the above optimization problem. First, assuming that a node has been selected, the first node can be selected through the following optimization problem:
(25) (25)
令,根据Woodbury和Sherman-Morrison公式可得 Order, according to Woodbury and Sherman-Morrison formula can be obtained
(26) (26)
由于和是常矩阵,因此式(25)所示的序贯优化问题转化为 Since the sum is a constant matrix, the sequential optimization problem shown in formula (25) is transformed into
(27) (27)
这样,通过求解上面的优化问题可以依次选取WNN重要的隐含节点。 In this way, important hidden nodes of WNN can be selected sequentially by solving the above optimization problem.
该方法是申请者首次提出,与现有技术相比,本方法思想和技术方法先进,能够在线调整WNN复杂度,并求取WNN最优参数值,尤其适用于系统非线性结构变化、存在未建模动态不确定性、具有流形结构数据集的系统辨识场合。 This method is proposed by the applicant for the first time. Compared with the existing technology, this method has advanced ideas and technical methods. It can adjust the complexity of WNN online and obtain the optimal parameter value of WNN. It is especially suitable for systems with nonlinear structural changes and uncertainties. Modeling dynamic uncertainty, system identification situations with manifold-structured datasets.
、在线更新WNN权重。 , Updating WNN weights online.
为克服工业对象在运行过程中受到设备动态、不确定性因素影响,避免使用大量计算机内存,降低老样本对模型的影响,本发明采用基于固定长度滑动窗口的在线更新权重算法。滑动窗口在增加新样本后需要从训练样本中移除一个最不重要的样本。在线更新WNN权重算法中,假设WNN模型首先根据训练样本集得到含有个小波神经元的WNN模型。 In order to overcome the influence of equipment dynamics and uncertain factors during the operation of industrial objects, avoid using a large amount of computer memory, and reduce the influence of old samples on the model, the present invention adopts an online update weight algorithm based on a fixed-length sliding window. The sliding window needs to remove one of the least important samples from the training samples after adding a new sample. In the online update WNN weight algorithm, it is assumed that the WNN model first obtains a WNN model containing wavelet neurons according to the training sample set.
如图5所示,S32、在线更新WNN权重,具体包括: As shown in Figure 5, S32, online update WNN weights, specifically include:
S321、样本递增更新阶段; S321, sample increment update stage;
S322、样本移除更新阶段。 S322. Sample removal and update phase.
、样本递增更新阶段,具体包括: , Sample incremental update stage, specifically including:
设新样本为,WNN隐含节点输出为, 。设的QR分解为,那么的QR分解为的上三角矩阵可逐行通过下式求得: Let the new sample be, the WNN hidden node output be, . Assuming that QR is decomposed into, then the upper triangular matrix of QR decomposed into can be obtained row by row through the following formula:
(28) (28)
其中,为矩阵的第列。假设要移除第个样本,由于最终预测模型与样本的次序无关,首先把样本与进行对调,然后给出移除样本后的样本递推形式。设矩阵A进行QR分解为。如果对矩阵A的第行和第行交换后矩阵的QR分解为,其中为正交矩阵,为上三角矩阵,那么。这里为矩阵Q第行和第行交换后的矩阵。这样通过上述方法把最不重要的样本移至到第1行,这样只需移除第一行样本即可。 where is the first column of the matrix. Assuming that the first sample is to be removed, since the final prediction model has nothing to do with the order of the samples, the sample is first swapped with , and then the sample recursion form after the sample is removed is given. Suppose the matrix A is decomposed by QR. If the QR decomposition of the matrix A after exchanging the first row and the first row is, where is an orthogonal matrix and is an upper triangular matrix, then. Here is the matrix after the row and row of matrix Q have been exchanged. In this way, the least important sample is moved to the first row through the above method, so that only the first row of samples is removed.
、样本移除更新阶段,具体包括: , The sample removal update stage, specifically including:
移除第一行样本后系统矩阵的递推QR分解。已知样本对应WNN隐含节点输出向量的形式为,的QR分解为 ,假设的QR分解为,那么矩阵的QR分解的上三角矩阵可逐行由下式得到: Recursive QR decomposition of the system matrix after removing the first row of samples. The form of the output vector of the known sample corresponding to the hidden node of WNN is, and the QR decomposition of it is , the assumed QR decomposition is, then the upper triangular matrix of the QR decomposition of the matrix can be obtained row by row as follows:
其中,为矩阵的第列。通过上面方法很容易求取移除样本后系统矩阵的递推QR分解形式,进而求出更新后的权值。如果只是从个训练样本中移除最老的样本,那么可以把上述样本递增和样本消除的取值更新步骤合并为一步。 where is the first column of the matrix. Through the above method, it is easy to obtain the recursive QR decomposition form of the system matrix after removing the samples, and then obtain the updated weights. If only the oldest sample is removed from the training samples, the value update steps of the above sample increment and sample elimination can be combined into one step.
经过上述两个阶段的计算后,WNN权重更新由下式计算 After the calculation of the above two stages, the WNN weight update is calculated by the following formula
本算法步骤消除对矩阵和求逆的计算,只需递归计算它们的QR分解,且注意到上三角矩阵求逆计算量很小,完全满足在线学习要求。 This algorithm step eliminates the calculation of the matrix and inversion, and only needs to recursively calculate their QR decomposition, and notice that the calculation of the upper triangular matrix inversion is very small, which fully meets the online learning requirements.
、WNN更新选择策略。 , WNN update selection strategy.
对新来的数据样本,需要根据一定的策略确定WNN模型是否更新以及采取的上述更新策略。现有的更新判别准则就是根据在设定时间段内相对预测误差大于事先定义的阈值判别结果,确定是否更新以及更新的策略。 For new data samples, it is necessary to determine whether the WNN model is updated and the above-mentioned update strategy adopted according to a certain strategy. The existing update criterion is to determine whether to update and the update strategy according to the judgment result that the relative prediction error is greater than the threshold value defined in advance within the set time period.
常规方法需要事先确定判别时间段长度参数以及两个阈值参数,如何选取这些参数仍然是一个开放的问题。考虑复杂类别数据之间存在大量的交叉重叠,数据聚类结果物理意义明显、可靠,能够很好地解释数据分布特性,本发明使用基于满意的在线满意G-K模糊聚类方法[李拧等,利用模糊满意聚类建立PH中和过程模型. 控制与决策,2002]确定WNN权重参数或WNN结构调整学习策略。与现有的更新策略选择准则相比,本发明使用提出数据聚类结果准确选择WNN更新策略的方法,适合复杂数据分布聚类的在线实现,无需选择聚类数量参数,克服了现有方法的缺点。该方法还另外一个显著的优点,根据聚类结果选择小波函数的尺度参数和平移参数,能够大大减少调整参数所需时间。 Conventional methods need to determine the length of the discriminant period and two threshold parameters in advance, and how to select these parameters is still an open problem. Considering that there are a large number of cross-overlaps among complex category data, the physical meaning of the data clustering results is obvious and reliable, and can well explain the data distribution characteristics. Fuzzy satisfactory clustering to establish PH neutralization process model. Control and Decision, 2002] to determine WNN weight parameters or WNN structure adjustment learning strategy. Compared with the existing update strategy selection criteria, the present invention uses a method for accurately selecting the WNN update strategy for data clustering results, which is suitable for the online realization of complex data distribution clustering, without the need to select cluster number parameters, and overcomes the limitations of the existing methods. shortcoming. This method also has another significant advantage. The scale parameters and translation parameters of the wavelet function are selected according to the clustering results, which can greatly reduce the time required for parameter adjustment.
假设其样本表示为,其中为系统的输入,为系统的输出。如果把系统的输入和输出看作一个样本,即,则样本集表示为。 Assume that its sample is expressed as , where is the input of the system and is the output of the system. If the input and output of the system are regarded as a sample, ie, the sample set is expressed as .
如图6所示,S33、WNN更新选择策略,即基于在线满意G-K模糊聚类,其实现步骤如下: As shown in Figure 6, S33 and WNN update the selection strategy, which is based on online satisfaction G-K fuzzy clustering, and its implementation steps are as follows:
S331、初始化:设初始聚类的个数以及算法结束阈值,初始隶属度矩阵。 S331. Initialization: set the number of initial clusters, the algorithm end threshold, and the initial membership degree matrix.
、根据隶属度矩阵聚类:根据初始隶属度矩阵,求解满意G-K模糊聚类优化问题 , Clustering according to the membership degree matrix: according to the initial membership degree matrix, solve the satisfactory G-K fuzzy clustering optimization problem
; ;
其中,隶属度值,;为样本;为聚类中心;m为模糊度,,其中协方差, 。计算得到隶属度矩阵,然后根据样本所属各聚类的隶属度选取最大值进行分类, 将样本集分为c个子集。 Among them, the membership value,; is the sample; is the cluster center; m is the ambiguity, where the covariance , . Calculate the membership degree matrix, and then select the maximum value for classification according to the membership degree of each cluster to which the sample belongs, and divide the sample set into c subsets.
、判断是否结束:计算给定的系统性能指标的当前值,当 (为预设阈值)时算法结束,否则算法转到下一步。一般取作为性能指标,为爹代次数,一般取 , Judging whether to end: calculate the current value of the given system performance index, when ( is the preset threshold), the algorithm ends, otherwise the algorithm goes to the next step. It is generally taken as a performance index, and it is the number of generations, which is generally taken as
S334、寻找新聚类;根据隶属度矩阵并按找出一个与各聚类均不相似样本。为避免噪声, 一般应找出几个类似的样本, 求其平均值作为新的聚类中心。 S334. Find a new cluster; find a sample that is not similar to each cluster according to the membership degree matrix. In order to avoid noise, it is generally necessary to find several similar samples and calculate their average value as the new cluster center.
、令为新的聚类初始中心,计算相应的新的初始隶属度矩阵。 , Let be the new clustering initial center, and calculate the corresponding new initial membership degree matrix.
、令, 转S332。 ,make , turn to S332.
对于新来的样本,本发明更新策略选择方法如下: For newly arrived samples, the updating strategy selection method of the present invention is as follows:
(1)保持WNN不变:计算样本属于当前聚类的隶属度大于0.5; (1) Keep WNN unchanged: the membership degree of the calculated sample belonging to the current cluster is greater than 0.5;
(2)需要调整WNN的局部结构:需要增加一个新的聚类时或者属于另外一个聚类; (2) The local structure of WNN needs to be adjusted: when a new cluster needs to be added or it belongs to another cluster;
(3)WNN权重更新:当样本在多个聚类的交叉重叠处(一般取当前聚类的隶属度是否位于区间(0.2, 0.5))。 (3) WNN weight update: when the sample is at the intersection of multiple clusters (generally, whether the membership degree of the current cluster is in the interval (0.2, 0.5)).
本发明对CPU性能要求不高,可以在嵌入式系统中实现和应用,极大地提高该方法的适用范围,如模式识别中的分类器,复杂非线性系统的插值和拟合等。 The invention does not require high CPU performance, can be implemented and applied in embedded systems, and greatly improves the scope of application of the method, such as classifiers in pattern recognition, interpolation and fitting of complex nonlinear systems, and the like.
本发明能够在线更新WNN模型,且能够保证模型的推广性能,很好地克服实际中系统受到各种不确定性因素、工况变化而导致的WNN模型适配的问题,用于工业过程控制中可以增加系统运行的平稳性,降低产品质量的波动性,提高设备的寿命,获得良好的经济效益。 The invention can update the WNN model on-line, and can ensure the generalization performance of the model, and can well overcome the problem of WNN model adaptation caused by various uncertain factors and changes in working conditions in the actual system, and is used in industrial process control It can increase the stability of system operation, reduce the fluctuation of product quality, improve the life of equipment, and obtain good economic benefits.
对于所属技术领域的技术人员而言,随着技术的发展,本发明构思可以不同方式实现。本发明的实施方式并不仅限于以上描述的实施例,而且可在权利要求的范围内进行变化。 For those skilled in the art, with the development of technology, the inventive concept can be implemented in different ways. The embodiments of the invention are not limited to the examples described above but may vary within the scope of the claims.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310466382.2A CN103676649A (en) | 2013-10-09 | 2013-10-09 | Local self-adaptive WNN (Wavelet Neural Network) training system, device and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310466382.2A CN103676649A (en) | 2013-10-09 | 2013-10-09 | Local self-adaptive WNN (Wavelet Neural Network) training system, device and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103676649A true CN103676649A (en) | 2014-03-26 |
Family
ID=50314559
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310466382.2A Pending CN103676649A (en) | 2013-10-09 | 2013-10-09 | Local self-adaptive WNN (Wavelet Neural Network) training system, device and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103676649A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104598552A (en) * | 2014-12-31 | 2015-05-06 | 大连钜正科技有限公司 | Method for learning incremental update-supported big data features |
CN104915566A (en) * | 2015-06-17 | 2015-09-16 | 大连理工大学 | Design method for depth calculation model supporting incremental updating |
CN105490764A (en) * | 2015-12-11 | 2016-04-13 | 中国联合网络通信集团有限公司 | Channel model correction method and apparatus |
CN108226887A (en) * | 2018-01-23 | 2018-06-29 | 哈尔滨工程大学 | A kind of waterborne target rescue method for estimating state in the case of observed quantity transient loss |
CN111433689A (en) * | 2017-11-01 | 2020-07-17 | 卡里尔斯公司 | Generation of control systems for target systems |
CN112417722A (en) * | 2020-11-13 | 2021-02-26 | 华侨大学 | Sliding window NPE-based linear time-varying structure working mode identification method |
CN113093540A (en) * | 2021-03-31 | 2021-07-09 | 中国科学院光电技术研究所 | Sliding mode disturbance observer design method based on wavelet threshold denoising |
CN113420815A (en) * | 2021-06-24 | 2021-09-21 | 江苏师范大学 | Semi-supervised RSDAE nonlinear PLS intermittent process monitoring method |
WO2022121030A1 (en) * | 2020-12-10 | 2022-06-16 | 广州广电运通金融电子股份有限公司 | Central party selection method, storage medium, and system |
CN118732588A (en) * | 2024-09-02 | 2024-10-01 | 成都创科升电子科技有限责任公司 | A filtering optimization method, system, computer device, readable medium and vehicle |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5268834A (en) * | 1991-06-24 | 1993-12-07 | Massachusetts Institute Of Technology | Stable adaptive neural network controller |
GB2386437A (en) * | 2002-02-07 | 2003-09-17 | Fisher Rosemount Systems Inc | Adaptation of Advanced Process Control Blocks in Response to Variable Process Delay |
CN103064292A (en) * | 2013-01-15 | 2013-04-24 | 镇江市江大科技有限责任公司 | Biological fermentation adaptive control system and control method based on neural network inverse |
CN103279038A (en) * | 2013-06-19 | 2013-09-04 | 河海大学常州校区 | Self-adaptive control method of sliding formwork of micro gyroscope based on T-S fuzzy model |
CN103324091A (en) * | 2013-06-03 | 2013-09-25 | 上海交通大学 | Multi-model self-adaptive controller and control method of zero-order closely-bounded nonlinear multivariable system |
-
2013
- 2013-10-09 CN CN201310466382.2A patent/CN103676649A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5268834A (en) * | 1991-06-24 | 1993-12-07 | Massachusetts Institute Of Technology | Stable adaptive neural network controller |
GB2386437A (en) * | 2002-02-07 | 2003-09-17 | Fisher Rosemount Systems Inc | Adaptation of Advanced Process Control Blocks in Response to Variable Process Delay |
CN103064292A (en) * | 2013-01-15 | 2013-04-24 | 镇江市江大科技有限责任公司 | Biological fermentation adaptive control system and control method based on neural network inverse |
CN103324091A (en) * | 2013-06-03 | 2013-09-25 | 上海交通大学 | Multi-model self-adaptive controller and control method of zero-order closely-bounded nonlinear multivariable system |
CN103279038A (en) * | 2013-06-19 | 2013-09-04 | 河海大学常州校区 | Self-adaptive control method of sliding formwork of micro gyroscope based on T-S fuzzy model |
Non-Patent Citations (2)
Title |
---|
S. CHEN等: "Orthogonal-least-squares regression: a unified approach for data modeling", 《NEUROCOMPUTING》, 31 December 2009 (2009-12-31), pages 2670 - 2681 * |
王高峰: "火电厂燃烧系统预测控制技术研究", 《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》, no. 12, 15 December 2011 (2011-12-15), pages 26 - 40 * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104598552A (en) * | 2014-12-31 | 2015-05-06 | 大连钜正科技有限公司 | Method for learning incremental update-supported big data features |
CN104915566A (en) * | 2015-06-17 | 2015-09-16 | 大连理工大学 | Design method for depth calculation model supporting incremental updating |
CN105490764A (en) * | 2015-12-11 | 2016-04-13 | 中国联合网络通信集团有限公司 | Channel model correction method and apparatus |
CN105490764B (en) * | 2015-12-11 | 2018-05-11 | 中国联合网络通信集团有限公司 | A kind of channel model bearing calibration and device |
CN111433689A (en) * | 2017-11-01 | 2020-07-17 | 卡里尔斯公司 | Generation of control systems for target systems |
CN108226887B (en) * | 2018-01-23 | 2021-06-01 | 哈尔滨工程大学 | Water surface target rescue state estimation method under condition of transient observation loss |
CN108226887A (en) * | 2018-01-23 | 2018-06-29 | 哈尔滨工程大学 | A kind of waterborne target rescue method for estimating state in the case of observed quantity transient loss |
CN112417722B (en) * | 2020-11-13 | 2023-02-03 | 华侨大学 | Sliding window NPE-based linear time-varying structure working mode identification method |
CN112417722A (en) * | 2020-11-13 | 2021-02-26 | 华侨大学 | Sliding window NPE-based linear time-varying structure working mode identification method |
WO2022121030A1 (en) * | 2020-12-10 | 2022-06-16 | 广州广电运通金融电子股份有限公司 | Central party selection method, storage medium, and system |
CN113093540A (en) * | 2021-03-31 | 2021-07-09 | 中国科学院光电技术研究所 | Sliding mode disturbance observer design method based on wavelet threshold denoising |
CN113093540B (en) * | 2021-03-31 | 2022-06-28 | 中国科学院光电技术研究所 | A Design Method of Sliding Mode Disturbance Observer Based on Wavelet Threshold Denoising |
CN113420815A (en) * | 2021-06-24 | 2021-09-21 | 江苏师范大学 | Semi-supervised RSDAE nonlinear PLS intermittent process monitoring method |
CN113420815B (en) * | 2021-06-24 | 2024-04-30 | 江苏师范大学 | Nonlinear PLS intermittent process monitoring method of semi-supervision RSDAE |
CN118732588A (en) * | 2024-09-02 | 2024-10-01 | 成都创科升电子科技有限责任公司 | A filtering optimization method, system, computer device, readable medium and vehicle |
CN118732588B (en) * | 2024-09-02 | 2024-11-12 | 成都创科升电子科技有限责任公司 | A filtering optimization method, system, computer device, readable medium and vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103676649A (en) | Local self-adaptive WNN (Wavelet Neural Network) training system, device and method | |
CN105205224B (en) | Time difference Gaussian process based on fuzzy curve analysis returns soft-measuring modeling method | |
Ghaseminezhad et al. | A novel self-organizing map (SOM) neural network for discrete groups of data clustering | |
Wu et al. | Using radial basis function networks for function approximation and classification | |
CN107967542B (en) | A prediction method of electricity sales based on long short-term memory network | |
Khormali et al. | A novel approach for recognition of control chart patterns: Type-2 fuzzy clustering optimized support vector machine | |
CN106600059A (en) | Intelligent power grid short-term load predication method based on improved RBF neural network | |
CN102831269A (en) | Method for determining technological parameters in flow industrial process | |
CN113076996B (en) | Radiation source signal identification method for improved particle swarm extreme learning machine | |
CN107480815A (en) | A kind of power system taiwan area load forecasting method | |
CN107609667B (en) | Heating load prediction method and system based on Box_cox transform and UFCNN | |
Tzeng | Design of fuzzy wavelet neural networks using the GA approach for function approximation and system identification | |
CN110286586A (en) | A Hybrid Modeling Method for Magnetorheological Damper | |
CN109063892A (en) | Industry watt-hour meter prediction technique based on BP-LSSVM combination optimization model | |
CN104156943B (en) | Multi objective fuzzy cluster image change detection method based on non-dominant neighborhood immune algorithm | |
CN105243454A (en) | Big data-based electrical load prediction system | |
CN107818340A (en) | Two-stage Air-conditioning Load Prediction method based on K value wavelet neural networks | |
CN112287990A (en) | Model optimization method of edge cloud collaborative support vector machine based on online learning | |
CN113361785A (en) | Power distribution network short-term load prediction method and device, terminal and storage medium | |
CN107481523A (en) | Method and system for predicting traffic flow speed | |
CN107798383A (en) | Improved core extreme learning machine localization method | |
CN111242867A (en) | Distributed online reconstruction method of graph signal based on truncated Taylor series approximation | |
CN116415177A (en) | A Classifier Parameter Identification Method Based on Extreme Learning Machine | |
CN105913078A (en) | Multi-mode soft measurement method for improving adaptive affine propagation clustering | |
CN107578101B (en) | A Data Flow Load Prediction Method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20140326 |