CN108932671A - LSTM wind power load forecasting method using deep Q neural network parameter tuning - Google Patents
LSTM wind power load forecasting method using deep Q neural network parameter tuning
- Publication number
- CN108932671A CN201810575699.2A CN201810575699A
- Authority
- CN
- China
- Prior art keywords
- lstm
- prediction model
- parameter
- wind
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 26
- 238000013277 forecasting method Methods 0.000 title claims abstract description 14
- 230000005611 electricity Effects 0.000 title claims abstract 12
- 238000012549 training Methods 0.000 claims abstract description 25
- 238000000034 method Methods 0.000 claims abstract description 13
- 230000007613 environmental effect Effects 0.000 claims abstract description 7
- 238000005457 optimization Methods 0.000 claims abstract description 4
- 230000006870 function Effects 0.000 claims description 19
- 230000003044 adaptive effect Effects 0.000 claims description 5
- 230000003831 deregulation Effects 0.000 abstract 1
- 238000011084 recovery Methods 0.000 abstract 1
- 230000009471 action Effects 0.000 description 18
- 230000015654 memory Effects 0.000 description 8
- 230000002787 reinforcement Effects 0.000 description 6
- 230000000306 recurrent effect Effects 0.000 description 5
- 230000008901 benefit Effects 0.000 description 4
- 238000010586 diagram Methods 0.000 description 3
- 230000004913 activation Effects 0.000 description 2
- 125000004122 cyclic group Chemical group 0.000 description 2
- 230000008034 disappearance Effects 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000011156 evaluation Methods 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000006403 short-term memory Effects 0.000 description 2
- 206010048669 Terminal state Diseases 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000001364 causal effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000004880 explosion Methods 0.000 description 1
- 238000011478 gradient descent method Methods 0.000 description 1
- 238000012804 iterative process Methods 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Economics (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Strategic Management (AREA)
- Human Resources & Organizations (AREA)
- General Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- Tourism & Hospitality (AREA)
- Marketing (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Primary Health Care (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Public Health (AREA)
- Water Supply & Treatment (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Development Economics (AREA)
- Game Theory and Decision Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Description
Technical Field
The invention relates to the technical field of electric power information, and in particular to an LSTM wind power load forecasting method that uses a deep Q neural network for hyperparameter tuning.
Background Art
Wind power load forecasting is an important part of power dispatching, and the quality of the forecast directly determines whether wind power can be connected to the grid. Wind power load is a time series that is continuously updated over time. An RNN (Recurrent Neural Network) with an LSTM (Long Short-Term Memory network) structure can effectively solve the vanishing-gradient problem that plain RNNs face over time, and the special network structure of the RNN gives it a unique advantage on time-series data.
A recurrent neural network has a special structure: the hidden layer receives not only the input at the current moment but also the hidden state from the previous moment, as shown in Figure 1. In Figure 1, x, x1, and x2 are the inputs at different time steps, o, o1, and o2 are the corresponding outputs, and U and V are linear weight matrices shared across the entire RNN. Data related to wind power load (time, wind speed at the wind farm, real-time power, frequency, wind direction, and outdoor temperature) are used as the inputs of the prediction model. The network computes an output o, which is compared with the corresponding measured wind load to obtain the error. The model is then trained with gradient descent and BPTT (Back-Propagation Through Time), which back-propagates the gradient through time and updates the network weights. When the recurrence is unrolled, each layer passes information to the next, which is why RNNs have an advantage in processing time-series data. Not all unrolled copies need separate training; only one layer's parameters are trained, since the parameters are all shared.
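As a sketch of this shared-weight recurrence, the following minimal NumPy example unrolls the forward pass; the hidden-to-hidden matrix W is an added assumption, since the text names only the shared matrices U and V, and all dimensions are illustrative:

```python
import numpy as np

def rnn_forward(inputs, U, W, V, h0):
    """Unrolled forward pass of a plain RNN with weights shared across
    time steps. U maps input to hidden, V maps hidden to output; the
    hidden-to-hidden matrix W is an assumption."""
    h, outputs = h0, []
    for x in inputs:
        h = np.tanh(U @ x + W @ h)   # hidden state mixes current input and past state
        outputs.append(V @ h)        # output o_t, compared against the measured load
    return outputs, h

# Illustrative dimensions: 6 input features (time, wind speed, real-time
# power, frequency, wind direction, outdoor temperature), 8 hidden units,
# 1 output (load).
rng = np.random.default_rng(0)
U = rng.normal(size=(8, 6))
W = rng.normal(size=(8, 8))
V = rng.normal(size=(1, 8))
outs, _ = rnn_forward([rng.normal(size=6) for _ in range(4)], U, W, V, np.zeros(8))
```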
A plain RNN may suffer from vanishing or exploding gradients over long time spans. LSTM preserves the error signal for back-propagation through time and layers, keeping the error at a more constant level so that the recurrent network can learn over many time steps (more than 1000), which opens the way to establishing long-range causal connections.
LSTMs store information in gated cells outside the normal information flow of the recurrent network. These cells can store, write, and read information, much like data in computer memory. The cell decides, by opening and closing its gates, which information to store and when reading, writing, or clearing is allowed. Unlike the digital memory in a computer, however, these gates are analog: they are implemented as element-wise multiplications by sigmoid functions whose outputs all lie between 0 and 1. Compared with digital storage, analog values have the advantage of being differentiable, which makes them suitable for back-propagation. The gates open and close according to the signals they receive, and, like the nodes of a neural network, they filter information with their own sets of weights, deciding whether to let information pass according to its strength and content. These weights, like the weights that modulate the input and hidden state, are adjusted during the learning process of the recurrent network. That is, the memory cell learns when to let data enter, leave, or be erased through an iterative process of guessing, back-propagating the error, and adjusting the weights by gradient descent. The structure is shown in Figure 2. The three arrows at the bottom of Figure 2 indicate information flowing into the memory cell from multiple points. The current input and the past cell state are fed not only into the cell itself but also into its three gates, drawn as black dots; by multiplying by different coefficients, the gates decide when new input is allowed in (y_in), when the current cell state is cleared, and when the cell state is allowed to influence the network output at the current time step (y_out). S_c is the current state of the memory cell, and g·y_in is the current input. Each gate can open or close, and the gates recombine their open/closed states at every time step. At each time step the memory cell can decide whether to forget its state, whether to allow writing, and whether to allow reading.
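The gating described above corresponds to the standard LSTM cell equations; the following is a textbook single-step sketch rather than the patent's exact formulation, and the weight names W and b are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell. The input/forget/output gates
    play the roles of y_in, the state-clearing gate, and y_out above;
    W and b are hypothetical dicts of weight matrices and bias vectors."""
    z = np.concatenate([x, h_prev])
    i = sigmoid(W['i'] @ z + b['i'])   # input gate: let new input in
    f = sigmoid(W['f'] @ z + b['f'])   # forget gate: clear the cell state
    o = sigmoid(W['o'] @ z + b['o'])   # output gate: expose the state
    g = np.tanh(W['g'] @ z + b['g'])   # candidate update to the cell
    c = f * c_prev + i * g             # analog, element-wise gating (differentiable)
    h = o * np.tanh(c)                 # new hidden state / output
    return h, c
```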
The accuracy of an LSTM forecast is directly related to its hyperparameters, so suitable hyperparameters let the prediction model reach, or come very close to, the global optimum. The prior art usually adopts the Q-Learning algorithm, whose flow is as follows:

Initialize Q(s, a) for all a ∈ A(s) to arbitrary values, with Q(terminal state, ·) = 0;

Repeat (for each episode):

Initialize state S;

Repeat (for each step of the episode):

Using a policy such as ε-greedy, select an action based on state S and execute it;

After executing the action, observe the reward and the new state S′;

Q(S_t, A_t) ← Q(S_t, A_t) + α(R_(t+1) + λ·max_a Q(S_(t+1), a) − Q(S_t, A_t))

S ← S′

Loop until termination.
In the algorithm, α is the learning rate, which controls how much of the difference between the previous Q value and the newly proposed Q value is taken into account. Q denotes the corresponding Q value, and λ is the discount factor: when the discount factor is 0, the prediction model tends to make decisions from the current table; when it is 1, it tends to try actions it has not tried before in order to expand the contents of the Q table. In general the discount factor is taken as a number between 0 and 1 to balance immediate reward against exploration. R_(t+1) + λ·max_a Q(S_(t+1), a) is the target Q value, and the Q-Learning algorithm essentially drives Q(S_t, a) toward this target. This helps optimize the wind power forecasting model so that it adapts to different regions. However, wind power load is strongly affected by the regional environment, and model parameters differ considerably between regions; when the prediction model is applied in a different region, professionals are needed to retune it, and tuning the prediction model's parameters is labor-intensive and inconvenient.
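A minimal tabular sketch of this flow, assuming a hypothetical environment object with reset(), step(a), and actions(s) methods:

```python
import random
from collections import defaultdict

def q_learning(env, episodes, alpha=0.1, lam=0.9, epsilon=0.1):
    """Tabular Q-Learning following the flow listed above. alpha is the
    learning rate, lam the discount factor, epsilon the exploration
    probability of the epsilon-greedy policy."""
    Q = defaultdict(float)               # unseen (s, a) pairs start at 0,
                                         # so Q(terminal state) = 0 holds
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:                  # for each step of the episode
            acts = env.actions(s)
            if random.random() < epsilon:
                a = random.choice(acts)  # explore
            else:
                a = max(acts, key=lambda x: Q[(s, x)])  # exploit
            s2, r, done = env.step(a)    # observe reward and new state S'
            best_next = max((Q[(s2, a2)] for a2 in env.actions(s2)), default=0.0)
            Q[(s, a)] += alpha * (r + lam * best_next - Q[(s, a)])
            s = s2                       # S <- S'
    return Q
```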
Summary of the Invention
The purpose of the present invention is to overcome the above defects of the prior art by providing an LSTM wind power load forecasting method with deep Q neural network parameter tuning that tunes its parameters automatically, improves forecasting efficiency, and adapts to different regions.

The purpose of the present invention can be achieved through the following technical solution:

An LSTM wind power load forecasting method using deep Q neural network parameter tuning, comprising the following steps:

S1: Collect raw data from the power system environment and select the training set and prediction set;

S2: Use LSTM as the prediction model and use a DQN (deep Q network) to tune the hyperparameters of the prediction model;

Using the DQN to tune the parameters of the prediction model specifically involves environment parameter adjustment, state adjustment, action selection, and a reinforcement learning reward for adjusting the learning rate. Environment parameter adjustment combines the LSTM prediction model with a series of actions to form a Markov decision model; state adjustment, action selection, and the reinforcement learning reward for adjusting the learning rate are realized on the basis of this Markov decision model.
Among them, the specifics of environment parameter adjustment are as follows:

A learning-rate adjustment function f(x) adapts the learning rate, and a regularization adjustment function g(x) adapts the regularization parameter. Let (p, y) be a training sample, where p is the input, including the learning rate x_t and the regularization parameter z_t, y is the expected output, and a is the actual output. Both adjustment functions take the cross-entropy cost form:

f(x_t) = g(z_t) = −(1/n) Σ_(p,y) [ y ln a + (1 − y) ln(1 − a) ]

where n is the number of samples.
The specifics of state adjustment are as follows:

The state is represented by a feature vector containing six state features: the hyperparameter to be adjusted, the candidate iterate's objective value, the maximum objective value over the past M steps, the dot product between the descent direction and the gradient, the MIN/MAX encoding, the number of function evaluations, and the alignment measure. Then:
Let F_(t−1) = {f_1, …, f_M} be the list of the M lowest objective values obtained up to time t−1. The state encoding [S_t] is determined by:

[S_t] = 1, if f(x_t) < min F_(t−1); 0, if f(x_t) falls among the previous M values of F; −1, otherwise.

The state adjustment [s_t]_alignment is given by:

[s_t]_alignment = (d_t · ∇f(x_(t−1))) / (‖d_t‖ ‖∇f(x_(t−1))‖)

The descent direction d_t is expressed as:

d_t = −x̄ ∇f(x_t)

where ∇f(x_t) is the gradient of f with respect to x_t and x̄ is the mean value of the learning rate.
The specifics of action selection are as follows:

For a given state, action selection uses the strategy of resetting the learning rate or the regularization parameter to its initial value after an accepted iteration. When controlling the learning rate there are two actions: keep the learning rate, or halve it. For adjusting the regularization coefficient, besides these two choices it is also allowed to increase by a quarter.
The reinforcement learning reward r_id(f, x_t) for adjusting the learning rate is expressed as:

r_id(f, x_t) = c / (f(x_t) − f_lb)

where f_lb is the target lower bound of the function value and c is a constant associated with that bound.
S3: Substitute the training set into the prediction model with the adjusted parameters, feed the training results back into the DQN for parameter optimization, and obtain the optimal LSTM prediction model;

The training stage uses the technique of experience replay: every time the parameters of the neural network are updated, some previous training results are drawn at random from the stored data and used to update the DQN, from which the optimal LSTM prediction model is obtained.

S4: Use the optimal LSTM prediction model to forecast the wind power load.

Compared with the prior art, the present invention uses a DQN so that the prediction model learns to tune its hyperparameters by itself. The method adapts the wind power prediction model to different regions without requiring professionals to retune it for each region, greatly improving forecasting efficiency.
Description of the Drawings
Figure 1 is the structure diagram of the RNN;

Figure 2 is the structure diagram of the LSTM;

Figure 3 is a schematic flow chart of the method of the present invention;

Figure 4 shows the convergence of the DQN in the embodiment of the present invention with a learning rate of 0.05 and e_greedy = 0.01;

Figure 5 compares the accuracy of prediction models trained with Q gradient descent and with ordinary gradient descent in the embodiment of the present invention;

Figure 6 compares the error-reduction convergence of prediction models trained with Q gradient descent and with ordinary gradient descent in the embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Embodiment
As shown in Figure 3, the present invention relates to an LSTM wind power load forecasting method using deep Q neural network parameter tuning. The main content of the method is:

1) Collect raw data from the power system environment and select the training set and prediction set.

2) Use LSTM as the prediction model and use a DQN to dynamically adapt the hyperparameters of the prediction model, obtaining the model's output. Using the DQN to dynamically adapt the hyperparameters specifically involves environment parameter adjustment, state adjustment, action selection, and the reinforcement learning reward for adjusting the learning rate.

3) Substitute the training set into the prediction model with the adjusted parameters, feed the training results back into the DQN for parameter optimization, and obtain the optimal LSTM prediction model.

4) Use the optimal LSTM prediction model to forecast the wind power load.
With LSTM as the prediction model, a deep Q neural network (DQN) is used to dynamically adapt the hyperparameters of the prediction model. Whenever the DQN takes an action, it picks a value of the learning rate; this value is fed into the prediction model, which then produces an output, and a reward is estimated for it. The DQN records the action and the corresponding reward estimate in a Q table. Because the amount of data is huge, a deep network is needed to record the results of previous attempts, so that the DQN can learn hyperparameter tuning skills from the table. The environment, actions, and rewards are defined as follows:
1. Environment
The learning-rate adjustment function f(x_t) and the regularization adjustment function g(z_t) narrow the gap between the output of the prediction model and the expected output under different learning rates and regularization parameters; x_t and z_t denote the learning rate and the regularization parameter respectively. (p, y) is a training sample, n is the number of samples, and p is the input, which includes the learning rate x_t and the regularization parameter z_t; y is the expected output and a is the actual output. Here both the adjustment function f(x_t) and the regularization adjustment function g(z_t) adopt the cross-entropy cost function,

f(x_t) = g(z_t) = −(1/n) Σ_(p,y) [ y ln a + (1 − y) ln(1 − a) ]

so that the weights update quickly when the error is large and slowly when the error is small. f(x) is used to adapt the learning rate and g(x) to adapt the regularization parameter. The environment combines the prediction model with a series of actions and the other necessary elements to form a Markov decision model, i.e. the choice of action depends only on the current state and has nothing to do with previous history. State adjustment, action selection, and the reinforcement learning reward for adjusting the learning rate are carried out on the basis of this Markov decision model.
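As a concrete illustration, the Markov decision model just described might be wrapped as a small environment class; the sketch below is an assumption about the interface, with train_step standing in for one round of LSTM training that returns the loss f(x_t):

```python
class TuningEnv:
    """Hypothetical MDP wrapper: the 'environment' couples the LSTM
    predictor with the hyperparameter actions. train_step(lr, reg) is
    assumed to run one training round and return the resulting loss."""

    def __init__(self, train_step, lr0=0.05, reg0=1e-4):
        self.train_step = train_step
        self.lr0, self.reg0 = lr0, reg0

    def reset(self):
        self.lr, self.reg = self.lr0, self.reg0
        self.loss = self.train_step(self.lr, self.reg)
        return (self.lr, self.reg, self.loss)

    def step(self, action):
        # Markov property: the next state depends only on the current
        # state and the chosen action, not on the earlier history.
        self.lr, self.reg = action(self.lr, self.reg)
        self.loss = self.train_step(self.lr, self.reg)
        reward = 1.0 / (self.loss + 1e-12)   # placeholder; see the reward in section 4
        return (self.lr, self.reg, self.loss), reward, False
```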
2. State
The state is represented by a feature vector with six state features: the hyperparameter we expect to adjust, the candidate iterate's objective value, the maximum objective value over the past M steps, the dot product between the descent direction and the gradient, the MIN/MAX encoding, the number of function evaluations, and the alignment measure. The first four features can be obtained directly; for the last two:
Let F_(t−1) = {f_1, …, f_M} be the list of the M lowest objective values obtained up to time t−1. The state encoding [S_t] is determined by:

[S_t] = 1, if f(x_t) is smaller than the previous minimum of F_(t−1); 0, if f(x_t) falls among the previous M values of F; −1, otherwise.

Let d_t be the descent direction, with:

d_t = −x̄ ∇f(x_t)

where ∇f(x_t) is the gradient of f with respect to x_t and x̄ is the mean value of the learning rate.

The state adjustment [s_t]_alignment is given by:

[s_t]_alignment = (d_t · ∇f(x_(t−1))) / (‖d_t‖ ‖∇f(x_(t−1))‖)
In addition, to make the state features independent of the particular objective function, all features are transformed into the interval [−1, 1].
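Putting the pieces together, here is a sketch of the state-vector construction under the assumptions noted above (range membership for the 0 code, cosine alignment against the previous gradient, and tanh squashing as one concrete mapping into [−1, 1]):

```python
import numpy as np

def min_max_code(f_xt, F_prev):
    """MIN/MAX encoding described above; F_prev holds the M lowest
    objective values seen up to time t-1. 'Falls among the previous M
    values' is read here as lying within their range (an assumption)."""
    if f_xt < min(F_prev):
        return 1
    if f_xt <= max(F_prev):
        return 0
    return -1

def state_features(hyper, f_cand, F_prev, grad, grad_prev, d):
    """Assemble the state feature vector; the exact alignment and
    normalization forms are assumptions."""
    dot = float(np.dot(d, grad))
    align = float(np.dot(d, grad_prev)) / (
        np.linalg.norm(d) * np.linalg.norm(grad_prev) + 1e-12)
    feats = np.array([hyper, f_cand, max(F_prev), dot,
                      min_max_code(f_cand, F_prev), align])
    return np.tanh(feats)   # scale-independent features in (-1, 1)
```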
3. Actions
For a given state, an action is a combined change to the learning rate and the regularization parameter. In general, the learning rate and the regularization parameter are very small. Therefore, the strategy of resetting the learning rate or the regularization parameter to its initial value after an accepted iteration is adopted. When controlling the learning rate there are thus two actions: keep the learning rate, or halve it. For adjusting the regularization coefficient, besides these two choices it is also allowed to increase by a quarter.
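Written out as code, this action set might look as follows; forming every combination of a learning-rate choice with a regularization choice is an assumption about the encoding. Each returned callable can be passed as the action argument of the TuningEnv sketch above:

```python
def make_actions():
    """Discrete action set: {keep, halve} for the learning rate crossed
    with {keep, halve, +25%} for the regularization coefficient."""
    lr_ops = [lambda lr: lr, lambda lr: lr * 0.5]
    reg_ops = [lambda r: r, lambda r: r * 0.5, lambda r: r * 1.25]
    # Default-argument capture binds each (f, g) pair at definition time.
    return [
        (lambda lr, reg, f=f, g=g: (f(lr), g(reg)))
        for f in lr_ops for g in reg_ops
    ]
```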
4. Reward
To tune the learning rate, the reward is defined as the inverse distance from the target net training loss to a lower bound. The reinforcement learning reward r_id(f, x_t) for adjusting the learning rate is:

r_id(f, x_t) = c / (f(x_t) − f_lb)

where f_lb is the target lower bound of the function value, which in practice can simply be set to zero as the target for the sum of the losses, and c is a constant associated with that bound.
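A one-line sketch of this inverse-distance reward; the epsilon guard against division by zero is an added assumption:

```python
def reward_id(f_xt, f_lb=0.0, c=1.0, eps=1e-12):
    """Inverse distance from the training loss f(x_t) to its target
    lower bound f_lb; c scales the reward."""
    return c / (f_xt - f_lb + eps)
```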
5. Experience Replay
During training, the technique of experience replay is applied: every time the parameters of the neural network are updated, a small batch of previous training results is drawn at random from the stored data to help train the network.

An experience is a tuple (s_i, a_i, r_(i+1), s_(i+1), label)_j, where i is the time step and j is the value of e_greedy. These tuples are stored in the experience memory E. Besides updating the DQN with the most recent experience, a subset S ⊆ E is drawn from memory to update the DQN in mini-batches.
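A minimal replay memory along these lines; the capacity and batch size are illustrative:

```python
import random
from collections import deque

class ReplayMemory:
    """Experience replay as described: store (s_i, a_i, r_{i+1}, s_{i+1},
    label) tuples and sample random mini-batches to update the DQN."""

    def __init__(self, capacity=10000):
        self.memory = deque(maxlen=capacity)

    def push(self, s, a, r, s_next, label):
        self.memory.append((s, a, r, s_next, label))

    def sample(self, batch_size=32):
        # Random sampling breaks the temporal correlation of consecutive
        # training results, which stabilizes the DQN update.
        return random.sample(self.memory, min(batch_size, len(self.memory)))
```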
6. Training Results
In this embodiment the LSTM prediction model is configured with six inputs (time, wind speed at the wind farm, real-time power, frequency, wind direction, outdoor temperature) and one output (load); the recurrent network has 3 layers with 128 hidden units, and the softsign activation function is used. The experiment uses 8932 load records, of which 80% serve as training data and 20% as the test set. The discount factor λ is initially set to 0.99, and the exploration probability starts at 1 and decays uniformly to 0.1 over 100 steps. With a DQN learning rate of 0.05 and e_greedy of 0.01, the DQN loss converges clearly, as shown on the vertical axis of Figure 4. After 8 hours of training and 100 iteration steps, the DQN reaches roughly 40% accuracy, 10% above the baseline, and its loss also converges faster than the baseline (gradient descent). In Figure 5 the horizontal axis is the number of iteration steps and the vertical axis is the prediction model accuracy; compared with plain gradient descent, the accuracy of the Q gradient descent model rises faster and further. In Figure 6 the horizontal axis is the number of iteration steps and the vertical axis is the prediction error of the Q gradient descent and plain gradient descent models; the error of the Q gradient descent model falls more rapidly. Limited by available computing power, we iterated only 100 steps, but even so the DQN's learning ability makes the accuracy of the target network rise quickly, with good error convergence.
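For reference, the reported configuration could be written as the following sketch; the framework (tf.keras here), optimizer, and loss are not named in the patent and are assumptions:

```python
import tensorflow as tf

def build_predictor(timesteps):
    """3-layer LSTM with 128 hidden units and softsign activation,
    6 input features per time step, one load output."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, 6)),
        tf.keras.layers.LSTM(128, activation='softsign', return_sequences=True),
        tf.keras.layers.LSTM(128, activation='softsign', return_sequences=True),
        tf.keras.layers.LSTM(128, activation='softsign'),
        tf.keras.layers.Dense(1),                     # predicted load
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05),
                  loss='mse')
    return model
```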
The above is only a specific embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and such modifications or replacements shall all fall within the scope of protection of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810575699.2A CN108932671A (en) | 2018-06-06 | 2018-06-06 | LSTM wind power load forecasting method using deep Q neural network parameter tuning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810575699.2A CN108932671A (en) | 2018-06-06 | 2018-06-06 | LSTM wind power load forecasting method using deep Q neural network parameter tuning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108932671A true CN108932671A (en) | 2018-12-04 |
Family
ID=64449976
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810575699.2A Pending CN108932671A (en) | 2018-06-06 | 2018-06-06 | A kind of LSTM wind-powered electricity generation load forecasting method joined using depth Q neural network tune |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108932671A (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109657874A (en) * | 2018-12-29 | 2019-04-19 | 安徽数升数据科技有限公司 | A kind of electric power Mid-long term load forecasting method based on long memory models in short-term |
CN109882996A (en) * | 2019-01-25 | 2019-06-14 | 珠海格力电器股份有限公司 | A kind of method and apparatus of control |
CN110245742A (en) * | 2019-05-08 | 2019-09-17 | 上海电力学院 | A kind of adaptive deep learning model optimization method based on Keras platform |
CN110474339A (en) * | 2019-08-07 | 2019-11-19 | 国网福建省电力有限公司 | A kind of electric network reactive-load control method based on the prediction of depth generation load |
CN110516889A (en) * | 2019-09-03 | 2019-11-29 | 广东电网有限责任公司 | A kind of load Comprehensive Prediction Method and relevant device based on Q-learning |
CN110674993A (en) * | 2019-09-26 | 2020-01-10 | 广东电网有限责任公司 | User load short-term prediction method and device |
CN110909941A (en) * | 2019-11-26 | 2020-03-24 | 广州供电局有限公司 | Power load prediction method, device and system based on LSTM neural network |
CN111598721A (en) * | 2020-05-08 | 2020-08-28 | 天津大学 | A real-time load scheduling method based on reinforcement learning and LSTM network |
CN111651220A (en) * | 2020-06-04 | 2020-09-11 | 上海电力大学 | A method and system for automatic optimization of Spark parameters based on deep reinforcement learning |
CN111768028A (en) * | 2020-06-05 | 2020-10-13 | 天津大学 | A GWLF model parameter adjustment method based on deep reinforcement learning |
CN111884213A (en) * | 2020-07-27 | 2020-11-03 | 国网北京市电力公司 | Power distribution network voltage adjusting method based on deep reinforcement learning algorithm |
CN112288157A (en) * | 2020-10-27 | 2021-01-29 | 华能酒泉风电有限责任公司 | A wind farm power prediction method based on fuzzy clustering and deep reinforcement learning |
CN112308278A (en) * | 2019-08-02 | 2021-02-02 | 中移信息技术有限公司 | Optimization method, device, equipment and medium for prediction model |
CN112488452A (en) * | 2020-11-06 | 2021-03-12 | 中国电子科技集团公司第十八研究所 | Energy system management multi-time scale optimal decision method based on deep reinforcement learning |
CN112614009A (en) * | 2020-12-07 | 2021-04-06 | 国网四川省电力公司电力科学研究院 | Power grid energy management method and system based on deep expected Q-learning |
CN112712385A (en) * | 2019-10-25 | 2021-04-27 | 北京达佳互联信息技术有限公司 | Advertisement recommendation method and device, electronic equipment and storage medium |
CN113361768A (en) * | 2021-06-04 | 2021-09-07 | 重庆科技学院 | Grain depot health condition prediction method, storage device and server |
CN113988414A (en) * | 2021-10-27 | 2022-01-28 | 内蒙古工业大学 | A wind power output power prediction method based on P_LSTNet and weighted Markov verification |
CN114124460A (en) * | 2021-10-09 | 2022-03-01 | 广东技术师范大学 | Industrial control system intrusion detection method, device, computer equipment and storage medium |
CN114219182A (en) * | 2022-01-20 | 2022-03-22 | 天津大学 | A wind power prediction method for abnormal weather scenarios based on reinforcement learning |
CN118247532A (en) * | 2024-05-27 | 2024-06-25 | 贵州航天智慧农业有限公司 | Intelligent monitoring and regulating method and system for plant growth environment |
CN118278914A (en) * | 2024-04-10 | 2024-07-02 | 南京恒星自动化设备有限公司 | Method for realizing equipment fault rush repair based on GIS (geographic information System) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106096729A (en) * | 2016-06-06 | 2016-11-09 | 天津科技大学 | A kind of towards the depth-size strategy learning method of complex task in extensive environment |
CN106557462A (en) * | 2016-11-02 | 2017-04-05 | 数库(上海)科技有限公司 | Name entity recognition method and system |
US20170213126A1 (en) * | 2016-01-27 | 2017-07-27 | Bonsai AI, Inc. | Artificial intelligence engine configured to work with a pedagogical programming language to train one or more trained artificial intelligence models |
CN107241213A (en) * | 2017-04-28 | 2017-10-10 | 东南大学 | A kind of web service composition method learnt based on deeply |
CN107370188A (en) * | 2017-09-11 | 2017-11-21 | 国网山东省电力公司莱芜供电公司 | A kind of power system Multiobjective Scheduling method of meter and wind power output |
CN107909227A (en) * | 2017-12-20 | 2018-04-13 | 北京金风慧能技术有限公司 | Ultra-short term predicts the method, apparatus and wind power generating set of wind power |
-
2018
- 2018-06-06 CN CN201810575699.2A patent/CN108932671A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170213126A1 (en) * | 2016-01-27 | 2017-07-27 | Bonsai AI, Inc. | Artificial intelligence engine configured to work with a pedagogical programming language to train one or more trained artificial intelligence models |
CN106096729A (en) * | 2016-06-06 | 2016-11-09 | 天津科技大学 | A kind of towards the depth-size strategy learning method of complex task in extensive environment |
CN106557462A (en) * | 2016-11-02 | 2017-04-05 | 数库(上海)科技有限公司 | Name entity recognition method and system |
CN107241213A (en) * | 2017-04-28 | 2017-10-10 | 东南大学 | A kind of web service composition method learnt based on deeply |
CN107370188A (en) * | 2017-09-11 | 2017-11-21 | 国网山东省电力公司莱芜供电公司 | A kind of power system Multiobjective Scheduling method of meter and wind power output |
CN107909227A (en) * | 2017-12-20 | 2018-04-13 | 北京金风慧能技术有限公司 | Ultra-short term predicts the method, apparatus and wind power generating set of wind power |
Non-Patent Citations (2)
Title |
---|
SAMANTHA HANSEN: "Using Deep Q-Learning to Control Optimization Hyperparameters", https://arxiv.org/abs/1602.04062 *
ZHAO Dongbin et al.: "A Survey of Deep Reinforcement Learning: Also on the Development of Computer Go", Control Theory & Applications *
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109657874A (en) * | 2018-12-29 | 2019-04-19 | 安徽数升数据科技有限公司 | A kind of electric power Mid-long term load forecasting method based on long memory models in short-term |
CN109882996A (en) * | 2019-01-25 | 2019-06-14 | 珠海格力电器股份有限公司 | A kind of method and apparatus of control |
CN110245742A (en) * | 2019-05-08 | 2019-09-17 | 上海电力学院 | A kind of adaptive deep learning model optimization method based on Keras platform |
CN112308278A (en) * | 2019-08-02 | 2021-02-02 | 中移信息技术有限公司 | Optimization method, device, equipment and medium for prediction model |
CN110474339A (en) * | 2019-08-07 | 2019-11-19 | 国网福建省电力有限公司 | A kind of electric network reactive-load control method based on the prediction of depth generation load |
CN110474339B (en) * | 2019-08-07 | 2022-06-03 | 国网福建省电力有限公司 | Power grid reactive power control method based on deep power generation load prediction |
CN110516889B (en) * | 2019-09-03 | 2023-07-07 | 广东电网有限责任公司 | Load comprehensive prediction method based on Q-learning and related equipment |
CN110516889A (en) * | 2019-09-03 | 2019-11-29 | 广东电网有限责任公司 | A kind of load Comprehensive Prediction Method and relevant device based on Q-learning |
CN110674993A (en) * | 2019-09-26 | 2020-01-10 | 广东电网有限责任公司 | User load short-term prediction method and device |
CN112712385B (en) * | 2019-10-25 | 2024-01-12 | 北京达佳互联信息技术有限公司 | Advertisement recommendation method and device, electronic equipment and storage medium |
CN112712385A (en) * | 2019-10-25 | 2021-04-27 | 北京达佳互联信息技术有限公司 | Advertisement recommendation method and device, electronic equipment and storage medium |
CN110909941A (en) * | 2019-11-26 | 2020-03-24 | 广州供电局有限公司 | Power load prediction method, device and system based on LSTM neural network |
CN110909941B (en) * | 2019-11-26 | 2022-08-02 | 广东电网有限责任公司广州供电局 | Power load prediction method, device and system based on LSTM neural network |
CN111598721A (en) * | 2020-05-08 | 2020-08-28 | 天津大学 | A real-time load scheduling method based on reinforcement learning and LSTM network |
CN111651220A (en) * | 2020-06-04 | 2020-09-11 | 上海电力大学 | A method and system for automatic optimization of Spark parameters based on deep reinforcement learning |
CN111651220B (en) * | 2020-06-04 | 2023-08-18 | 上海电力大学 | Spark parameter automatic optimization method and system based on deep reinforcement learning |
CN111768028B (en) * | 2020-06-05 | 2022-05-27 | 天津大学 | A GWLF model parameter adjustment method based on deep reinforcement learning |
CN111768028A (en) * | 2020-06-05 | 2020-10-13 | 天津大学 | A GWLF model parameter adjustment method based on deep reinforcement learning |
CN111884213A (en) * | 2020-07-27 | 2020-11-03 | 国网北京市电力公司 | Power distribution network voltage adjusting method based on deep reinforcement learning algorithm |
CN112288157A (en) * | 2020-10-27 | 2021-01-29 | 华能酒泉风电有限责任公司 | A wind farm power prediction method based on fuzzy clustering and deep reinforcement learning |
CN112488452A (en) * | 2020-11-06 | 2021-03-12 | 中国电子科技集团公司第十八研究所 | Energy system management multi-time scale optimal decision method based on deep reinforcement learning |
CN112614009A (en) * | 2020-12-07 | 2021-04-06 | 国网四川省电力公司电力科学研究院 | Power grid energy management method and system based on deep expected Q-learning |
CN112614009B (en) * | 2020-12-07 | 2023-08-25 | 国网四川省电力公司电力科学研究院 | Power grid energy management method and system based on deep expectation Q-learning |
CN113361768A (en) * | 2021-06-04 | 2021-09-07 | 重庆科技学院 | Grain depot health condition prediction method, storage device and server |
CN114124460A (en) * | 2021-10-09 | 2022-03-01 | 广东技术师范大学 | Industrial control system intrusion detection method, device, computer equipment and storage medium |
CN113988414A (en) * | 2021-10-27 | 2022-01-28 | 内蒙古工业大学 | A wind power output power prediction method based on P_LSTNet and weighted Markov verification |
CN113988414B (en) * | 2021-10-27 | 2024-05-28 | 内蒙古工业大学 | Wind power output power prediction method based on P_ LSTNet and weighted Markov verification |
CN114219182A (en) * | 2022-01-20 | 2022-03-22 | 天津大学 | A wind power prediction method for abnormal weather scenarios based on reinforcement learning |
CN114219182B (en) * | 2022-01-20 | 2024-08-20 | 天津大学 | Abnormal weather scene wind power prediction method based on reinforcement learning |
CN118278914A (en) * | 2024-04-10 | 2024-07-02 | 南京恒星自动化设备有限公司 | Method for realizing equipment fault rush repair based on GIS (geographic information System) |
CN118247532A (en) * | 2024-05-27 | 2024-06-25 | 贵州航天智慧农业有限公司 | Intelligent monitoring and regulating method and system for plant growth environment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108932671A (en) | LSTM wind power load forecasting method using deep Q neural network parameter tuning | |
CN114863248B (en) | A method for image object detection based on deep supervised self-distillation | |
CN110619430B (en) | A spatiotemporal attention mechanism approach for traffic prediction | |
CN110619420B (en) | A Short-Term Residential Load Forecasting Method Based on Attention-GRU | |
CN111680786B (en) | Time sequence prediction method based on improved weight gating unit | |
CN113505536A (en) | Optimized traffic flow prediction model based on space-time diagram convolution network | |
CN103105246A (en) | Greenhouse environment forecasting feedback method of back propagation (BP) neural network based on improvement of genetic algorithm | |
CN111027772A (en) | Multi-factor short-term load forecasting method based on PCA-DBILSTM | |
CN111861013A (en) | A kind of power load forecasting method and device | |
CN110929958A (en) | Short-term traffic flow prediction method based on deep learning parameter optimization | |
CN110222387B (en) | Multi-element drilling time sequence prediction method based on mixed leaky integration CRJ network | |
CN114511021A (en) | Extreme learning machine classification algorithm based on improved crow search algorithm | |
CN114118567B (en) | Power service bandwidth prediction method based on double-channel converged network | |
CN110276483A (en) | Prediction Method of Sugar Raw Materials Based on Neural Network | |
CN115438842A (en) | A Load Forecasting Method Based on Adaptive Improved Ephemera and BP Neural Network | |
CN115496290A (en) | Medium-and-long-term runoff time-varying probability prediction method based on 'input-structure-parameter' full-factor hierarchical combination optimization | |
CN115907200A (en) | Concrete dam deformation prediction method, computer equipment and storage medium | |
Chen et al. | Groundwater level prediction using SOM-RBFN multisite model | |
CN115796364A (en) | Intelligent interactive decision-making method for discrete manufacturing system | |
CN118643348A (en) | A model online learning method based on historical data | |
CN117407802A (en) | Runoff prediction method based on improved depth forest model | |
CN116681159A (en) | Short-term power load prediction method based on whale optimization algorithm and DRESN | |
CN114548350A (en) | Power load prediction method based on goblet sea squirt group and BP neural network | |
CN115794805B (en) | Method for supplementing measurement data of medium-low voltage distribution network | |
CN117318025A (en) | Short-term load prediction method based on weighted gray correlation projection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181204 |
|
RJ01 | Rejection of invention patent application after publication |