CN109492814B - Urban traffic flow prediction method, system and electronic equipment - Google Patents
- Publication number: CN109492814B (application CN201811357538.2A)
- Authority
- CN
- China
- Legal status: Active (assumption by Google Patents; not a legal conclusion)
Classifications
- G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06N3/045 — Neural networks; Combinations of networks
- G06Q50/40 — Business processes related to the transportation industry
Abstract
The application relates to an urban traffic flow prediction method, system and electronic equipment. The method comprises the following steps: step a: constructing an LSTM-RNN neural network; step b: performing parameter optimization on the LSTM-RNN neural network through a CSO algorithm to obtain initial parameter values of the LSTM-RNN neural network; step c: loading the initial parameter values into the LSTM-RNN neural network to obtain a CSO-LSTM-RNN neural network; step d: inputting traffic flow data into the CSO-LSTM-RNN neural network, and outputting a traffic flow prediction result through the CSO-LSTM-RNN neural network. The method performs prediction learning of traffic flow based on the Competitive Swarm Optimizer combined with a long short-term memory recurrent deep neural network model, and can effectively improve the prediction precision of the deep neural network, so that traffic flow prediction performance is remarkably improved.
Description
Technical Field
The application belongs to the technical field of intelligent traffic, and particularly relates to a method and a system for predicting urban traffic flow and electronic equipment.
Background
Real-time traffic flow prediction is the implementation basis of an Intelligent Transportation System (ITS). Accurate prediction results can provide travelers with effective path-planning services and efficient, safe road traffic conditions, and are of great significance for the real-time scheduling and effective operation of an intelligent transportation system.
Existing traffic flow prediction models can be divided into parametric and non-parametric types. In the 1980s, Iwao Okutani et al. constructed a short-term traffic flow model based on Kalman filtering theory by dynamically adjusting the model's parameters. Thereafter, researchers predicted flow with parametric models based on time series analysis: A. G. Hobeika et al. established a short-term traffic flow prediction model based on regression analysis, and Billy M. Williams et al. proposed a seasonal Autoregressive Integrated Moving Average (ARIMA) prediction model that effectively reduced the Mean Absolute Percentage Error (MAPE). Although ARIMA can achieve a compact prediction model and good approximation performance, complex and massive traffic flow data are highly random and nonlinear, and an accurate model description is difficult to realize with linear or parametric models such as ARIMA.
With the development of deep learning theory and computer hardware, artificial neural networks with strong adaptivity and self-organization provide an effective tool for modeling complex prediction systems. In recent years, scholars have proposed many model algorithms based on artificial neural networks. Compared with traditional neural networks and the plain Recurrent Neural Network (RNN), some scholars have adopted the Long Short-Term Memory (LSTM) model and the Gated Recurrent Unit (GRU) for short-term real-time traffic flow prediction and obtained better prediction results. However, models of this kind place high demands on model inputs and structural parameters, and how to select or construct a suitable deep neural network is of great significance for improving the accuracy and robustness of short-term traffic flow prediction.
Compared with various parametric and non-parametric models and the traditional artificial neural network, the LSTM-RNN deep neural network structure offers superior time-series prediction performance and has become the most popular model prediction tool. However, because the LSTM-RNN network has many structural layers and the massive traffic flow data input is tightly coupled with the initial value settings, an LSTM-RNN network model with unreasonable initial parameter values suffers a marked reduction in accuracy.
Disclosure of Invention
The application provides a method, a system and an electronic device for predicting urban traffic flow, which aim to solve at least one of the technical problems in the prior art to a certain extent.
In order to solve the above problems, the present application provides the following technical solutions:
an urban traffic flow prediction method comprises the following steps:
step a: constructing an LSTM-RNN neural network;
step b: performing parameter optimization on the LSTM-RNN neural network through a CSO algorithm to obtain an initial parameter value of the LSTM-RNN neural network;
step c: adding the initial parameter value into an LSTM-RNN neural network to obtain a CSO-LSTM-RNN neural network;
step d: and inputting traffic flow data into the CSO-LSTM-RNN neural network, and outputting a traffic flow prediction result through the CSO-LSTM-RNN neural network.
The technical scheme adopted by the embodiment of the application further comprises the following steps: the step a also comprises the following steps: acquiring historical traffic flow data of a target node, constructing a two-dimensional matrix according to the historical traffic flow data, and adding a time dimension to generate a three-dimensional tensor; and processing the two-dimensional matrix to obtain an input vector of the LSTM-RNN neural network.
The technical scheme adopted by the embodiment of the application further comprises the following steps: the step a also comprises the following steps: aggregating the historical traffic flow data according to a set time interval; and calculating the minimum value min and the maximum value max of the sample data in the aggregated historical traffic flow data, normalizing the sample data by using a min-max method, and dividing the normalized sample data into a training set and a test set.
The technical scheme adopted by the embodiment of the application further comprises the following steps: in the step b, performing parameter optimization on the LSTM-RNN neural network through the CSO algorithm to obtain initial parameter values of the LSTM-RNN neural network specifically comprises: constructing the LSTM-RNN neural network and randomly generating a population W_i, taking W_i as the initial values of the LSTM-RNN neural network, where W_i = (ω_i1, ω_i2, ..., ω_is)^T and s is the number of individuals in the population; the update strategy adopted by the CSO algorithm is:

V_l,k(t+1) = R_1(k,t) V_l,k(t) + R_2(k,t) (X_w,k(t) − X_l,k(t)) + φ R_3(k,t) (X̄_k(t) − X_l,k(t))

X_l,k(t+1) = X_l,k(t) + V_l,k(t+1)

In the above formulas, X_l,k(t) and V_l,k(t) respectively represent the position and velocity of the losing individual in the k-th dimension of the t-th generation population; X_w,k(t) represents the winning individual in the k-th dimension of the t-th generation population; R_1(k,t), R_2(k,t), R_3(k,t) ∈ [0,1]^n are three random numbers; X̄_k(t) represents the average position of all individuals in the population in the k-th dimension at generation t, and φ is a control parameter. The weights after CSO training are used as the initial parameter values of the LSTM-RNN neural network.
The technical scheme adopted by the embodiment of the application further comprises the following steps: in the step c, adding the initial parameter values to the LSTM-RNN neural network to obtain the CSO-LSTM-RNN neural network further comprises: inputting the normalized sample data into the CSO-LSTM-RNN neural network for forward calculation; and training the CSO-LSTM-RNN neural network by combining the back propagation algorithm with gradient descent parameter updates to obtain the CSO-LSTM-RNN neural network with the minimum error.
Another technical scheme adopted by the embodiment of the application is as follows: an urban traffic flow prediction system comprising:
a first model building module: used for constructing an LSTM-RNN neural network;
a model initialization module: the LSTM-RNN neural network is optimized through a CSO algorithm to obtain an initial parameter value of the LSTM-RNN neural network;
a second model building module: the system is used for adding the initial parameter value into the LSTM-RNN neural network to obtain a CSO-LSTM-RNN neural network;
a result output module: the traffic flow prediction device is used for inputting traffic flow data into the CSO-LSTM-RNN neural network and outputting a traffic flow prediction result through the CSO-LSTM-RNN neural network.
The technical scheme adopted by the embodiment of the application further comprises the following steps:
a data acquisition module: used for acquiring historical traffic flow data of a target node, constructing a two-dimensional matrix according to the historical traffic flow data, and adding a time dimension to generate a three-dimensional tensor;
a data processing module: and the two-dimensional matrix is used for processing the two-dimensional matrix to obtain an input vector of the LSTM-RNN neural network.
The technical scheme adopted by the embodiment of the application further comprises the following steps:
a data aggregation module: used for aggregating the historical traffic flow data according to a set time interval;
a normalization module: used for calculating the minimum value min and the maximum value max of sample data in the aggregated historical traffic flow data, normalizing the sample data by the min-max method, and dividing the normalized sample data into a training set and a test set.
The technical scheme adopted by the embodiment of the application further comprises the following steps: the model initialization module optimizes parameters of the LSTM-RNN neural network through the CSO algorithm, and the initial parameter values of the LSTM-RNN neural network are obtained as follows: constructing the LSTM-RNN neural network and randomly generating a population W_i, taking W_i as the initial values of the LSTM-RNN neural network, where W_i = (ω_i1, ω_i2, ..., ω_is)^T and s is the number of individuals in the population; the update strategy adopted by the CSO algorithm is:

V_l,k(t+1) = R_1(k,t) V_l,k(t) + R_2(k,t) (X_w,k(t) − X_l,k(t)) + φ R_3(k,t) (X̄_k(t) − X_l,k(t))

X_l,k(t+1) = X_l,k(t) + V_l,k(t+1)

In the above formulas, X_l,k(t) and V_l,k(t) respectively represent the position and velocity of the losing individual in the k-th dimension of the t-th generation population; X_w,k(t) represents the winning individual in the k-th dimension of the t-th generation population; R_1(k,t), R_2(k,t), R_3(k,t) ∈ [0,1]^n are three random numbers; X̄_k(t) represents the average position of all individuals in the population in the k-th dimension at generation t, and φ is a control parameter. The weights after CSO training are used as the initial parameter values of the LSTM-RNN neural network.
The technical scheme adopted by the embodiment of the application further comprises a back-calculation module;
the second model building module is further used for inputting the normalized sample data into the CSO-LSTM-RNN neural network for forward calculation; and the back-calculation module is used for training the CSO-LSTM-RNN neural network by combining the back propagation algorithm with gradient descent parameter updates to obtain the CSO-LSTM-RNN neural network with the minimum error.
The embodiment of the application adopts another technical scheme that: an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the following operations of the urban traffic flow prediction method described above:
step a: constructing an LSTM-RNN neural network;
step b: performing parameter optimization on the LSTM-RNN neural network through a CSO algorithm to obtain an initial parameter value of the LSTM-RNN neural network;
step c: adding the initial parameter value into an LSTM-RNN neural network to obtain a CSO-LSTM-RNN neural network;
step d: and inputting traffic flow data into the CSO-LSTM-RNN neural network, and outputting a traffic flow prediction result through the CSO-LSTM-RNN neural network.
Compared with the prior art, the embodiments of the application have the following beneficial effects: the urban traffic flow prediction method, system and electronic equipment of the embodiments establish a CSO-LSTM-RNN neural network and input traffic flow data collected in real time into it; the initial input parameters are obtained by training with the Competitive Swarm Optimizer heuristic, back-propagation training is then performed, the model results on the training set are output, and error evaluation is carried out. Compared with the prior art, the method and system have at least the following advantages:
1. The method performs prediction learning of traffic flow based on the Competitive Swarm Optimizer combined with a long short-term memory recurrent deep neural network model, and can effectively improve the prediction precision of the deep neural network, so that traffic flow prediction performance is remarkably improved.
2. Compared with a general LSTM prediction model, the method converges quickly to near-optimal initial parameter values and avoids being trapped in local optima.
3. The application is an application of artificial intelligence in the field of intelligent transportation systems; it has a degree of self-perception and learning capability with respect to traffic flow changes in a traffic network, and the obtained prediction results can effectively serve traffic control and pedestrian path guidance systems.
Drawings
Fig. 1 is a flowchart of an urban traffic flow prediction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an LSTM-RNN neural network structure according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an urban traffic flow prediction system according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a hardware device of a city traffic flow prediction method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Please refer to fig. 1, which is a flowchart illustrating an urban traffic flow prediction method according to an embodiment of the present application. The urban traffic flow prediction method comprises the following steps:
step 100: acquiring historical traffic flow data of a target node, constructing a two-dimensional matrix according to the historical traffic flow data, and adding a time dimension to generate a three-dimensional tensor;
in step 100, acquiring historical traffic flow data of the target node, constructing a two-dimensional matrix according to the historical traffic flow data, and adding a time dimension to generate a three-dimensional tensor specifically comprises: acquiring vehicle information uploaded in real time within a target region R through a real-time vehicle navigation satellite database; recording the positions of the places in region R where traffic flow is to be predicted, and representing each place to be predicted by a node in the set Node = {n1, n2, n3, n4, ..., nq}; constructing an N × N grid matrix, denoted Y, in which each element y_ij represents the traffic flow from node i to node j; and adding the time dimension T to obtain a three-dimensional tensor S of shape (N, N, T), in which each element s_tij represents the traffic flow from node i to node j at time t, where T is the total number of time steps. Traffic flow data between the places are collected over a specific observation horizon T set by the technician, entered into the generated matrix Y, and the final three-dimensional tensor S is generated by the above operations.
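The tensor construction described in step 100 can be sketched as follows; the `records` list and the node/interval sizes are hypothetical illustrations, not data from the application:

```python
import numpy as np

# Sketch of the tensor construction in step 100, assuming N nodes and T
# observation intervals; `records` is a hypothetical list of
# (origin_node, destination_node, interval_index) trip observations.
N, T = 4, 3
records = [(0, 1, 0), (0, 1, 0), (2, 3, 1), (1, 2, 2)]

# Three-dimensional tensor S: S[i, j, t] counts trips from node i to node j
# during interval t (the N x N grid matrix Y extended by the time dimension).
S = np.zeros((N, N, T), dtype=int)
for i, j, t in records:
    S[i, j, t] += 1

print(S[0, 1, 0])  # two trips from node 0 to node 1 in interval 0
```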
Step 200: processing the two-dimensional matrix to obtain an input vector of the LSTM-RNN neural network;
in step 200, the processing of the two-dimensional input matrix is specifically: for matrix Y, the number of columns is kept constant and all rows are added, yielding a new T × N two-dimensional grid matrix, denoted X. In the time dimension, each element of the row vector X_t is denoted x_kj (k = 1, 2, ..., N; j ∈ {1, ..., N}), meaning the total traffic flow from all nodes through node j at time t. Since the value of k does not change the meaning of an element, x_j = x_kj is written. The goal is to predict X̂_t from a series of historical data (X_{t−1}, X_{t−2}, X_{t−3}, ..., X_{t−T}), where T is the total number of time intervals. To simplify the expression, the full historical data set (X_{t−1}, X_{t−2}, X_{t−3}, ..., X_0) may be taken.
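The row aggregation of step 200 amounts to summing the tensor over the origin-node axis; a minimal sketch (shapes and values are illustrative):

```python
import numpy as np

# Sketch of step 200, assuming S is the (N, N, T) tensor from step 100.
# Summing over the origin axis keeps the number of columns constant and
# adds all rows, giving the T x N input matrix X whose element x_tj is the
# total flow from all nodes through node j at time t.
N, T = 3, 2
S = np.arange(N * N * T).reshape(N, N, T)  # hypothetical flow counts

X = S.sum(axis=0).T  # shape (T, N): row t is the input vector X_t
print(X.shape)
```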
Step 300: aggregating historical traffic flow data according to a set time interval;
in step 300, aggregating the historical traffic flow data according to a set time interval is specifically: counting the number of vehicles passing through a given node within a specified time interval. In the algorithm expressions, t is set to 1 h, i.e. the time interval corresponding to each input of the LSTM-RNN in the time dimension is 1 h; the interval can be set according to the practical application.
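The 1 h aggregation can be sketched as counting vehicles in hourly bins; the per-vehicle timestamps below are hypothetical:

```python
import numpy as np

# Sketch of the aggregation in step 300: given hypothetical per-vehicle
# timestamps (in hours) at one node, count vehicles per 1 h interval.
timestamps = np.array([0.2, 0.7, 0.9, 1.1, 2.5, 2.6, 2.9])  # hours

interval = 1.0  # the 1 h interval used for each LSTM-RNN input step
bins = (timestamps // interval).astype(int)
counts = np.bincount(bins)
print(counts)  # vehicles per hour
```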
Step 400: calculating the minimum value min and the maximum value max of sample data in the aggregated historical traffic flow data, normalizing the sample data by using a min-max method, and dividing the normalized sample data into a training set and a test set;
in step 400, all data used for training and testing in the aggregated historical traffic flow data are sample data. The normalization of the sample data is specifically: processing the aggregated historical traffic flow data by the min-max method so that the value range of the normalized sample data is [0, 1]; the normalization formula is:

x′_t,kj = (x_t,kj − min) / (max − min)   (1)

In formula (1), x_t,kj represents the total traffic flow from all nodes through node j at time t; max is the maximum of all x_t,kj; min is the minimum of all x_t,kj.
In the embodiment of the application, the proportion of the training set to the test set to the original data is determined to be 99% and 1% according to the data scale, and it can be understood that the proportion of the training set to the test set can be adjusted according to the actual original data scale.
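The min-max normalization of formula (1) and the 99% / 1% split used in this embodiment can be sketched as (sample values are hypothetical):

```python
import numpy as np

# Sketch of steps 300-400: min-max normalization of the aggregated samples
# and a 99% / 1% train-test split, per the ratio used in this embodiment.
samples = np.array([10.0, 55.0, 32.0, 90.0, 20.0] * 20)  # hypothetical flows

lo, hi = samples.min(), samples.max()
normalized = (samples - lo) / (hi - lo)  # value range becomes [0, 1]

split = int(len(normalized) * 0.99)
train, test = normalized[:split], normalized[split:]
print(len(train), len(test))
```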
Step 500: constructing an LSTM-RNN neural network;
in step 500, fig. 2 is a schematic diagram of the specific structure of the LSTM-RNN neural network according to an embodiment of the present application. The LSTM-RNN neural network includes an input layer, a hidden layer and an output layer; the hidden layer contains recursively fully-connected LSTM nodes, which contain LSTM-specific memory modules. A memory module comprises one or more self-connected memory cells and three gate controls: an input gate, an output gate and a forget gate. Controlled by these three gates, the memory module implements the functions of writing, reading and resetting.
In the embodiment of the application, the numbers of neurons in the input and output layers of the LSTM-RNN neural network are determined by the dimension of the input feature vector and the traffic flow state finally output, and the number of neurons in the hidden layer is determined through a hyperparameter tuning process. For example, the learning rate is 0.05, the population size of the Competitive Swarm Optimizer is 200, and the number of iterative competitive evolutions is 150. It will be appreciated that one skilled in the art can select and set the hyperparameters based on the number of nodes and the corresponding data sets to be employed.
Step 600: performing parameter optimization on the LSTM-RNN neural network by a CSO (Competitive Swarm Optimizer) algorithm to obtain an initial parameter value of the LSTM-RNN neural network;
in step 600, optimizing the parameters specifically includes: initializing the population size, number of iterations, weight values and thresholds in the CSO algorithm; constructing the LSTM-RNN neural network and randomly generating a population W_i, which is taken as the initial values of the neural network, where W_i = (ω_i1, ω_i2, ..., ω_is)^T and s is the number of individuals in the population.
The Competitive Swarm Optimizer is inspired by the particle swarm algorithm but achieves better results and scales to higher data dimensions and volumes. In CSO, the whole population is randomly divided into P/2 pairs of individuals (P is the population size). According to fitness, i.e. the value of the objective function, each pair is divided into a winner and a loser, which compete with each other: the loser learns from the winner under a certain strategy, updates its velocity and position, and the updated individual is passed to the next generation, while the winner is passed directly to the next generation. The update strategy employed by CSO can be expressed by the following equations:
V_l,k(t+1) = R_1(k,t) V_l,k(t) + R_2(k,t) (X_w,k(t) − X_l,k(t)) + φ R_3(k,t) (X̄_k(t) − X_l,k(t))

X_l,k(t+1) = X_l,k(t) + V_l,k(t+1)

In the above formulas, X_l,k(t) and V_l,k(t) respectively represent the position and velocity of the losing individual in the k-th dimension of the t-th generation population; X_w,k(t) represents the winning individual in the k-th dimension of the t-th generation population; R_1(k,t), R_2(k,t), R_3(k,t) ∈ [0,1]^n are three random numbers; X̄_k(t) represents the average position of all individuals in the population in the k-th dimension at generation t, and φ is a control parameter. The weights after CSO training are used as the initial parameter values of the LSTM-RNN neural network.
When the initial input weights are optimized by CSO, the optimization process uses the average position of all individuals in the population, which may reduce the local search precision after an individual updates its position; technicians can therefore adopt a suitable adaptive strategy and add adjustment parameters to further optimize the CSO algorithm. It can also be understood that those skilled in the art can maintain the diversity of the population during the update process by setting the proportion of individuals participating in competition at each step. The application combines the good global search capability of the Competitive Swarm Optimizer on large-scale functions with the ability of the LSTM to store short-term fluctuation information, dynamically determines the history length of the prediction model, and avoids, in the parameter initialization stage, the defects caused by local minima or oscillation effects, thereby improving the prediction precision, robustness and convergence rate for traffic flow.
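A minimal sketch of one CSO generation of pairwise competitions, assuming the fitness is the network's training loss evaluated at a candidate weight vector (a sphere function stands in for that loss here; `phi` and all names are illustrative, not from the patent text):

```python
import numpy as np

# Minimal sketch of the CSO update: random pairwise competitions, losers
# learn from winners and from the swarm mean, winners pass through unchanged.
rng = np.random.default_rng(0)

def fitness(x):
    return np.sum(x ** 2)  # stand-in objective (lower is better)

P, dim, phi = 20, 5, 0.1
X = rng.uniform(-5, 5, (P, dim))  # positions: candidate weight vectors
V = np.zeros((P, dim))            # velocities

def cso_step(X, V):
    mean = X.mean(axis=0)                      # average position of the swarm
    order = rng.permutation(P)
    for a, b in zip(order[::2], order[1::2]):  # P/2 random competitions
        w, l = (a, b) if fitness(X[a]) < fitness(X[b]) else (b, a)
        r1, r2, r3 = rng.random((3, dim))
        # loser learns from the winner and from the swarm mean
        V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (mean - X[l])
        X[l] = X[l] + V[l]
    return X, V

before = min(fitness(x) for x in X)
for _ in range(50):
    X, V = cso_step(X, V)
after = min(fitness(x) for x in X)
print(after <= before)
```

Because winners survive unchanged, the best fitness in the swarm never worsens from one generation to the next.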
Step 700: adding the initial parameter values obtained by the CSO algorithm to the LSTM-RNN neural network to obtain the competitive-swarm long short-term memory deep neural network (CSO-LSTM-RNN), inputting the normalized training set data into the CSO-LSTM-RNN neural network, and performing forward calculation;
in step 700, the CSO-LSTM-RNN neural network trains the data using a gradient descent approach. The forward propagation formula is as follows:
Γf = σ(ωf[a(t−1), x(t)] + bf)

Γi = σ(ωi[a(t−1), x(t)] + bi)

Γo = σ(ωo[a(t−1), x(t)] + bo)

C̃(t) = tanh(ωc[a(t−1), x(t)] + bc)

C(t) = Γf * C(t−1) + Γi * C̃(t)

a(t) = Γo * tanh C(t) (2)
in formula (2), x(t) is the input sequence at time t, * denotes element-wise multiplication, σ denotes the sigmoid function, ω denotes the weights of the hidden layer, and b denotes the offsets. The objective function of the training process, i.e. the loss function, is expressed as:

L = (1/n) Σ (ŷ_t^j − y_t^j)²   (3)

In formula (3), y_t^j represents the real observed value input for node j at time t; ŷ_t^j represents the predicted value output for node j at time t; n represents the number of instances contained in the training set. The parameters w and b in the CSO-LSTM-RNN neural network are updated by the well-known Back Propagation Through Time (BPTT) so as to minimize the objective function, achieving minimum loss and thus maximum precision.
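The forward calculation of formula (2) for a single LSTM step can be sketched in NumPy as follows; the dimensions and random parameter values are illustrative, and the candidate cell-state update follows the standard LSTM formulation:

```python
import numpy as np

# Sketch of one LSTM forward step per formula (2): three gates, candidate
# cell state, cell update, and hidden output. Values are illustrative.
rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid = 3, 4
# one weight matrix per gate, applied to the concatenation [a(t-1), x(t)]
w_f, w_i, w_o, w_c = (rng.standard_normal((n_hid, n_hid + n_in)) for _ in range(4))
b_f = np.zeros(n_hid)
b_i = np.zeros(n_hid)
b_o = np.zeros(n_hid)
b_c = np.zeros(n_hid)

def lstm_step(a_prev, c_prev, x_t):
    z = np.concatenate([a_prev, x_t])   # [a(t-1), x(t)]
    gf = sigmoid(w_f @ z + b_f)         # forget gate
    gi = sigmoid(w_i @ z + b_i)         # input gate
    go = sigmoid(w_o @ z + b_o)         # output gate
    c_tilde = np.tanh(w_c @ z + b_c)    # candidate cell state
    c_t = gf * c_prev + gi * c_tilde    # cell state update
    a_t = go * np.tanh(c_t)             # hidden state output
    return a_t, c_t

a, c = np.zeros(n_hid), np.zeros(n_hid)
a, c = lstm_step(a, c, rng.standard_normal(n_in))
print(a.shape)
```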
Step 800: training the CSO-LSTM-RNN neural network by combining the back-propagation algorithm with gradient-descent parameter updates to obtain the CSO-LSTM-RNN neural network with the minimum error;
In step 800, because the number of geographical location nodes selected for the traffic flow is very large in the present application, and the amount of input data in the time dimension is positively correlated with the number of nodes, L2 regularization is applied to the loss function in the specific implementation in order to avoid the overfitting that such complex data may cause. The loss function L is updated as:

L := L + (λ/2n) Σω²    (4)

where λ is the regularization coefficient.
In formula (4), the symbol := indicates that the expression on its right side is assigned to the expression on its left side.
When stochastic gradient descent is performed, the weight and the bias are updated as follows:

ω := ω − η·∂L/∂ω
b := b − η·∂L/∂b

where η is the learning rate.
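A minimal sketch of the L2-regularized gradient step described above; the values of the learning rate, the regularization coefficient and n are hypothetical:

```python
import numpy as np

def sgd_l2_update(w, b, grad_w, grad_b, lr=0.05, lam=1e-4, n=1000):
    """One stochastic-gradient step with the L2 penalty of formula (4) folded
    into the weights; the bias is conventionally left unregularized."""
    w_new = (1.0 - lr * lam / n) * w - lr * grad_w   # weight decay + gradient step
    b_new = b - lr * grad_b
    return w_new, b_new

w, b = np.ones(3), 0.5
w, b = sgd_l2_update(w, b, grad_w=np.zeros(3), grad_b=0.0)  # zero gradient: pure decay
```

With a zero gradient the weights shrink slightly toward zero while the bias is untouched, which is exactly the effect of the (λ/2n)Σω² penalty.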
Step 900: calculating the mean squared error (MSE) of the trained CSO-LSTM-RNN neural network on the test set, selecting the CSO-LSTM-RNN neural network with the minimum error on the test set to predict the traffic flow, and outputting a traffic flow prediction result;
In step 900, an advantage of the CSO-LSTM-RNN output is that, when the problem is cast as a time-series prediction solved by supervised learning, the traffic flow of a specific node at the next time step can be predicted. It will be appreciated that those skilled in the art can also select a multivariable (multi-node) output depending on the actual situation.
Step 1000: performing inverse normalization processing on the output traffic flow prediction result to obtain predicted values in real-world units.
The prediction of traffic flow by the CSO-LSTM-RNN neural network has been validated by extensive experiments; the results prove it feasible and effective, and the experiments show that the combined algorithm for optimizing the initial parameters is also superior to the classical PSO (Particle Swarm Optimization) and its variant algorithms. After combining the CSO and LSTM-RNN structures, based on algorithm development environments and toolkits such as TensorFlow and Keras, the CSO algorithm was rewritten in the Python language and combined with the LSTM-RNN structure; preliminary results were obtained and the effectiveness of the algorithm verified.
Please refer to fig. 3, which is a schematic structural diagram of an urban traffic flow prediction system according to an embodiment of the present application. The urban traffic flow prediction system comprises a data acquisition module, a data processing module, a data aggregation module, a normalization module, a first model construction module, a model initialization module, a second model construction module, a reverse calculation module, a result output module and a reverse normalization module.
A data acquisition module: used for acquiring historical traffic flow data of a target node, constructing a two-dimensional matrix according to the historical traffic flow data, and adding a time dimension to generate a three-dimensional tensor. Specifically: vehicle information uploaded in real time within a target region (region, R) is acquired through a real-time vehicle navigation satellite database; the locations in region R where traffic flow is to be predicted are recorded, and each location to be predicted is represented by the node set Node = {n1, n2, n3, n4, …, nq}; an N × N grid matrix is constructed and denoted Y, where each matrix element yij represents the traffic flow from node i to node j; the time dimension T is added to obtain a three-dimensional tensor S of shape (N, N, T), where each tensor element stij represents the traffic flow from node i to node j at time t and T is the total number of time steps. Traffic flow data between the locations are collected over the observation period T set by the technician, entered into the generated matrix Y, and the final three-dimensional tensor S is generated by the above operations.
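By way of illustration only, the construction of the three-dimensional tensor S described above may be sketched in Python with NumPy; the synthetic counts and array names are hypothetical stand-ins for real vehicle data:

```python
import numpy as np

N, T = 5, 24                          # number of nodes, number of time steps (hypothetical)
rng = np.random.default_rng(1)

# One N x N grid matrix Y per time step; Y[i, j] is the flow from node i to node j.
frames = [rng.integers(0, 50, size=(N, N)) for _ in range(T)]

# Stack along the time dimension to obtain the tensor S of shape (N, N, T);
# S[i, j, t] is the traffic flow from node i to node j at time t.
S = np.stack(frames, axis=-1)
```

Each observation interval fills one matrix Y; the stack over all intervals yields the tensor S consumed by the later processing steps.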
A data processing module: used for processing the two-dimensional matrix to obtain the input vector of the LSTM-RNN neural network. Specifically: for matrix Y, the number of columns is kept constant and all rows are summed, giving a new two-dimensional T × N grid matrix denoted X. In the time dimension, each element of the row vector Xt is obtained by summing xkj over k = 1, 2, 3, …, N (j ranging from 1 to N) and denotes the total traffic flow from all nodes through node j at time t; since the k index no longer distinguishes elements, it is written xj. The goal is to predict X̂t from a series of historical data (Xt−1, Xt−2, Xt−3, …, Xt−T, where T is the total time interval). To simplify the expression, the historical data set may be taken as (Xt−1, Xt−2, Xt−3, …, X0).
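The row-summing step above may be sketched as follows (illustrative only; the random tensor is a hypothetical stand-in for the collected data):

```python
import numpy as np

N, T = 4, 6
rng = np.random.default_rng(2)
S = rng.integers(0, 10, size=(N, N, T))   # S[i, j, t]: flow from node i to node j at time t

# For each time t, sum over the source nodes i (the rows of Y) while keeping the
# columns: entry X[t, j] is the total flow from all nodes through node j at time t.
X = S.sum(axis=0).T                        # shape (T, N); row X[t] is the vector Xt
```

Each row of the resulting T × N matrix is one input vector Xt for the LSTM-RNN.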
A data aggregation module: used for aggregating the historical traffic flow data according to a set time interval. Specifically: the traffic flow is defined as the number of vehicles passing through a given node within a specified time interval; t in the algorithm expression is set to 1 h, i.e., the time interval corresponding to each input of the LSTM-RNN in the time dimension is 1 h.
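A minimal sketch of the 1 h aggregation, assuming raw counts are available at a five-minute resolution (the resolution and the synthetic data are assumptions, not part of the embodiment):

```python
import numpy as np

rng = np.random.default_rng(3)
counts_5min = rng.integers(0, 20, size=288)   # one day of five-minute counts for one node

# Aggregate to the 1 h interval used as the LSTM-RNN input step:
# 12 five-minute bins per hour, summed.
hourly = counts_5min.reshape(24, 12).sum(axis=1)
```

The total vehicle count is preserved by the aggregation; only the temporal resolution changes.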
A normalization module: used for calculating the minimum value min and the maximum value max of the sample data in the aggregated historical traffic flow data, normalizing the sample data by the min-max method, and dividing the normalized sample data into a training set and a test set; all data used for training and testing in the aggregated historical traffic flow data are sample data. The normalization specifically comprises: processing the aggregated historical traffic flow data by the min-max method so that the value range of the normalized sample data is [0, 1]; the normalization formula is as follows:
x′tkj = (xtkj − min) / (max − min)    (1)

In formula (1), xtkj represents the total traffic flow from all nodes through node j at time t; max is the maximum of all xtkj; min is the minimum of all xtkj.
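Formula (1) and its inverse (used later for inverse normalization of the predictions) may be sketched as follows; the sample values are hypothetical:

```python
import numpy as np

def min_max_normalize(x):
    """Formula (1): scale sample data into the value range [0, 1]."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)

def min_max_restore(x_norm, lo, hi):
    """Inverse normalization: recover predictions in real-world units."""
    return x_norm * (hi - lo) + lo

x = np.array([10.0, 25.0, 40.0])     # hypothetical aggregated flows
x_norm = min_max_normalize(x)         # -> values in [0, 1]
```

The min and max computed on the sample data must be retained so that the prediction results can be mapped back to real-world units.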
In the embodiment of the application, the proportions of the training set and the test set relative to the original data are set to 99% and 1% according to the data scale; it can be understood that these proportions can be adjusted according to the actual scale of the original data.
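The 99% / 1% partition may be sketched as follows (the synthetic series is hypothetical):

```python
import numpy as np

samples = np.arange(1000, dtype=float)    # hypothetical normalized sample sequence
split = len(samples) * 99 // 100          # 99 % training, 1 % test
train, test = samples[:split], samples[split:]
```

Integer arithmetic is used for the split index so that the partition sizes are exact regardless of floating-point rounding.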
A first model building module: used for constructing the LSTM-RNN neural network. The LSTM-RNN neural network comprises an input layer, a hidden layer and an output layer; the hidden layer comprises recursively fully-connected LSTM nodes with the memory modules specific to the LSTM. Each memory module comprises one or more self-connected memory cells and gate controls, namely an input gate, an output gate and a forget gate; controlled by these three gates, the memory module implements the write, read and reset functions.
In the embodiment of the application, the numbers of neurons in the input and output layers of the LSTM-RNN neural network are determined by the dimension of the input feature vector and the finally output traffic flow state, and the number of neurons in the hidden layer is determined through a hyperparameter tuning process. For example, the learning rate is 0.05, the population size of the competitive swarm algorithm is 200, and the number of iterative competition evolutions is 150. It will be appreciated that one skilled in the art can select and set the hyperparameters based on the number of nodes and the corresponding data sets to be employed.
A model initialization module: used for optimizing the parameters of the LSTM-RNN neural network through the CSO algorithm to obtain the initial parameter values of the LSTM-RNN neural network. Optimizing the parameters specifically includes: initializing the population size, the number of iterations, the weights and the thresholds in the CSO algorithm; constructing the LSTM-RNN neural network and randomly generating a population Wi, with Wi used as the initial value of the neural network, where Wi = (ωi1, ωi2, …, ωis)T and s is the number of individuals in the population.
The competitive swarm algorithm is inspired by the particle swarm algorithm but achieves better results and scales to higher data dimensions and data volumes. In CSO, the whole population is randomly divided into P/2 pairs of individuals (P is the population size); within each pair, a winner and a loser are determined according to fitness, i.e., the objective function value. The two individuals compete: the loser learns from the winner under a certain strategy, updates its velocity and position, and the updated individual is passed to the next generation, while the winner is passed directly to the next generation. The update strategy employed by CSO can be expressed by the following equations:
Vl,k(t+1) = R1(k,t)·Vl,k(t) + R2(k,t)·(Xw,k(t) − Xl,k(t)) + φ·R3(k,t)·(X̄k(t) − Xl,k(t))
Xl,k(t+1) = Xl,k(t) + Vl,k(t+1)

In the above formulas, Xi,k(t) and Vi,k(t) respectively represent the position and the velocity of the i-th individual in dimension k of the t-th generation population; Xw,k(t) represents the winning individual and Xl,k(t) the losing individual of a pair in dimension k of the t-th generation population; R1(k,t), R2(k,t), R3(k,t) ∈ [0,1]^n are three random numbers; X̄k(t) represents the average position of all individuals of the t-th generation population in dimension k, and φ is its control weight parameter. The weights after CSO training are used as the initial parameter values of the LSTM-RNN neural network.
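One generation of the pairwise competition described above may be sketched as follows (illustrative only; the sphere function is a hypothetical stand-in for the network-loss fitness, and φ = 0.1 is an assumed control weight):

```python
import numpy as np

def cso_step(positions, velocities, fitness, rng, phi=0.1):
    """One CSO generation: losers learn from their pair's winner and from the
    swarm mean position; winners pass to the next generation unchanged."""
    P, d = positions.shape
    mean_pos = positions.mean(axis=0)                 # average position X̄(t)
    order = rng.permutation(P)                        # random pairing
    for a, b in order.reshape(P // 2, 2):
        w, l = (a, b) if fitness(positions[a]) <= fitness(positions[b]) else (b, a)
        r1, r2, r3 = rng.random((3, d))               # R1, R2, R3 in [0, 1]
        velocities[l] = (r1 * velocities[l]
                         + r2 * (positions[w] - positions[l])
                         + phi * r3 * (mean_pos - positions[l]))
        positions[l] = positions[l] + velocities[l]
    return positions, velocities

rng = np.random.default_rng(4)
pos = rng.standard_normal((20, 5))                    # population of 20, 5-dim weights
vel = np.zeros((20, 5))
sphere = lambda x: float(np.sum(x ** 2))              # hypothetical fitness: smaller is better
before = min(sphere(p) for p in pos)
for _ in range(50):
    pos, vel = cso_step(pos, vel, sphere, rng)
after = min(sphere(p) for p in pos)
```

Because the winner of each pair is carried over unchanged, the best fitness in the population never worsens from one generation to the next; in the combined method, the fitness would be the LSTM-RNN loss and the final positions would serve as the initial network parameters.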
When the initial input weights are optimized by CSO, the optimization process uses the average position of all individuals in the population, which may reduce local search precision after an individual updates its position; technicians may therefore adopt a suitable adaptive strategy that adds adjustment parameters to further optimize the CSO algorithm. It can be understood that those skilled in the art can also maintain population diversity during the update process of the competitive swarm algorithm by setting the proportion of the population that participates in each competition. The method combines the good global search capability of the competitive swarm algorithm in large-scale function solving with the ability of the LSTM to store short-term fluctuation information, dynamically determines the history length of the prediction model, and avoids in the parameter initialization stage the defects caused by local minima or oscillation effects, thereby improving the prediction precision, robustness and convergence rate for the traffic flow.
A second model building module: used for adding the initial parameter values obtained by the CSO algorithm into the LSTM-RNN neural network to obtain the CSO-LSTM-RNN neural network; the normalized training set data are input into the CSO-LSTM-RNN neural network for forward calculation. The CSO-LSTM-RNN neural network is trained on the data by gradient descent. The forward propagation formulas are as follows:
Γf = σ(ωf[a(t-1), x(t)] + bf)
Γi = σ(ωi[a(t-1), x(t)] + bi)
Γo = σ(ωo[a(t-1), x(t)] + bo)
C~(t) = tanh(ωc[a(t-1), x(t)] + bc)
C(t) = Γf*C(t-1) + Γi*C~(t)
a(t) = Γo*tanh C(t)    (2)
In formula (2), x(t) is the input sequence at time t, * represents element-wise multiplication of matrix elements, ω represents the weight of the hidden layer, and b represents the bias. The objective function of the training process, i.e. the loss function, is expressed as:
L = (1/n) Σt Σj (yjt − ŷjt)²    (3)

In formula (3), yjt represents the real observed value input at node j at time t; ŷjt represents the predicted value output at node j at time t; n represents the number of instances contained in the training set. The parameter ω and the parameter b in the CSO-LSTM-RNN neural network are updated by the well-known back-propagation through time (BPTT) algorithm so as to minimize the objective function, achieve the minimum loss, and thereby achieve the maximum precision.
A reverse calculation module: used for training the CSO-LSTM-RNN neural network with the back-propagation algorithm and gradient-descent parameter updates to obtain the CSO-LSTM-RNN neural network with the minimum error, and for calculating the mean squared error of that network on the test set. In the present application, because the number of geographical location nodes selected for the traffic flow is very large and the amount of input data in the time dimension is positively correlated with the number of nodes, L2 regularization is applied to the loss function in the specific implementation in order to avoid the overfitting that such complex data may cause. The loss function L is updated as:

L := L + (λ/2n) Σω²    (4)

where λ is the regularization coefficient.
In formula (4), the symbol := indicates that the expression on its right side is assigned to the expression on its left side.
When stochastic gradient descent is performed, the weight and the bias are updated as follows:

ω := ω − η·∂L/∂ω
b := b − η·∂L/∂b

where η is the learning rate.
A result output module: used for selecting the CSO-LSTM-RNN neural network with the minimum error on the test set to predict the traffic flow and outputting the traffic flow prediction result. An advantage of the CSO-LSTM-RNN neural network is that, when the problem is cast as a time-series prediction solved by supervised learning, the traffic flow of a specific node at the next time step can be predicted. It will be appreciated that those skilled in the art can also select a multivariable (multi-node) output depending on the actual situation.
An inverse normalization module: used for performing inverse normalization processing on the output traffic flow prediction result to obtain predicted values in real-world units.
Fig. 4 is a schematic structural diagram of a hardware device for the urban traffic flow prediction method according to an embodiment of the present application. As shown in fig. 4, the device includes one or more processors and a memory. Taking one processor as an example, the apparatus may further include an input system and an output system.
The processor, memory, input system, and output system may be connected by a bus or other means, as exemplified by the bus connection in fig. 4.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules. The processor executes various functional applications and data processing of the electronic device, i.e., implements the processing method of the above-described method embodiment, by executing the non-transitory software program, instructions and modules stored in the memory.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processing system over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input system may receive input numeric or character information and generate a signal input. The output system may include a display device such as a display screen.
The one or more modules are stored in the memory and, when executed by the one or more processors, perform the following for any of the above method embodiments:
step a: constructing an LSTM-RNN neural network;
step b: performing parameter optimization on the LSTM-RNN neural network through a CSO algorithm to obtain an initial parameter value of the LSTM-RNN neural network;
step c: adding the initial parameter value into an LSTM-RNN neural network to obtain a CSO-LSTM-RNN neural network;
step d: and inputting traffic flow data into the CSO-LSTM-RNN neural network, and outputting a traffic flow prediction result through the CSO-LSTM-RNN neural network.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
Embodiments of the present application provide a non-transitory (non-volatile) computer storage medium having stored thereon computer-executable instructions that may perform the following operations:
step a: constructing an LSTM-RNN neural network;
step b: performing parameter optimization on the LSTM-RNN neural network through a CSO algorithm to obtain an initial parameter value of the LSTM-RNN neural network;
step c: adding the initial parameter value into an LSTM-RNN neural network to obtain a CSO-LSTM-RNN neural network;
step d: and inputting traffic flow data into the CSO-LSTM-RNN neural network, and outputting a traffic flow prediction result through the CSO-LSTM-RNN neural network.
Embodiments of the present application provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the following:
step a: constructing an LSTM-RNN neural network;
step b: performing parameter optimization on the LSTM-RNN neural network through a CSO algorithm to obtain an initial parameter value of the LSTM-RNN neural network;
step c: adding the initial parameter value into an LSTM-RNN neural network to obtain a CSO-LSTM-RNN neural network;
step d: and inputting traffic flow data into the CSO-LSTM-RNN neural network, and outputting a traffic flow prediction result through the CSO-LSTM-RNN neural network.
According to the urban traffic flow prediction method, system and electronic equipment of the embodiments of the present application, a CSO-LSTM-RNN neural network is established; traffic flow data collected in real time are input into the CSO-LSTM-RNN neural network; the initial input parameters are obtained by training with the competitive swarm heuristic algorithm; back-propagation training is then performed, the model results are output on the training set, and error evaluation is carried out. Compared with the prior art, the method and the system have at least the following advantages:
1. the method is based on a competition group algorithm and combines a long-term and short-term memory recursive deep neural network model to carry out prediction learning of traffic flow, and can effectively improve the prediction precision of the deep neural network, so that the traffic flow prediction performance is remarkably improved.
2. Compared with a general LSTM prediction model, the method can quickly converge to near-optimal initial parameter values without becoming trapped in local minima.
3. The application belongs to the application of artificial intelligence in the field of intelligent traffic systems, has certain self-perception and learning capabilities on traffic flow changes in a traffic network, and can effectively serve a traffic control and pedestrian path guidance system according to an obtained prediction result.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (5)
1. The urban traffic flow prediction method is characterized by comprising the following steps:
step a: constructing an LSTM-RNN neural network;
step b: performing parameter optimization on the LSTM-RNN neural network through a CSO algorithm to obtain an initial parameter value of the LSTM-RNN neural network;
step c: adding the initial parameter value into an LSTM-RNN neural network to obtain a CSO-LSTM-RNN neural network;
step d: inputting traffic flow data into the CSO-LSTM-RNN neural network, and outputting a traffic flow prediction result through the CSO-LSTM-RNN neural network;
the step a also comprises the following steps: acquiring historical traffic flow data of a target node, constructing a two-dimensional matrix according to the historical traffic flow data, and adding a time dimension to generate a three-dimensional tensor; processing the two-dimensional matrix to obtain an input vector of the LSTM-RNN neural network;
the step a also comprises the following steps: aggregating the historical traffic flow data according to a set time interval; calculating the minimum value min and the maximum value max of sample data in the aggregated historical traffic flow data, normalizing the sample data by using a min-max method, and dividing the normalized sample data into a training set and a test set;
in the step a, the LSTM-RNN neural network comprises an input layer, an implicit layer and an output layer, and the implicit layer comprises recursive fully-connected LSTM nodes, wherein the implicit layer comprises a memory module specific to LSTM, the memory module comprises one or more self-connected memory units and a gate control, namely an input gate, an output gate and a forgetting gate, and the memory module is controlled by the three gates to realize the functions of writing, reading and resetting;
in the step c, adding the initial parameter value to the LSTM-RNN neural network to obtain the CSO-LSTM-RNN neural network further includes: inputting the sample data after normalization processing into a CSO-LSTM-RNN neural network for forward calculation; and training the CSO-LSTM-RNN neural network by combining a back propagation algorithm and gradient descent update parameters to obtain the CSO-LSTM-RNN neural network with the minimum error,
the forward propagation formula is as follows:
Γf = σ(ωf[a(t-1), x(t)] + bf)
Γi = σ(ωi[a(t-1), x(t)] + bi)
Γo = σ(ωo[a(t-1), x(t)] + bo)
C~(t) = tanh(ωc[a(t-1), x(t)] + bc)
C(t) = Γf*C(t-1) + Γi*C~(t)
a(t) = Γo*tanh C(t)
in the formulas, x(t) is the input sequence at time t, * represents element-wise multiplication of matrix elements, ω represents the weight of the hidden layer, and b represents the bias; the objective function of the training process, i.e. the loss function, is expressed as:
L = (1/n) Σt Σj (yjt − ŷjt)²
where yjt is the real observed value at node j at time t, ŷjt the predicted value, and n the number of training instances.
2. The method according to claim 1, wherein in the step b, the LSTM-RNN neural network is optimized by the CSO algorithm to obtain the initial parameter values of the LSTM-RNN neural network, specifically: constructing an LSTM-RNN neural network and randomly generating a population Wi, with Wi used as the initial value of the LSTM-RNN neural network, where Wi = (ωi1, ωi2, …, ωis)T and s is the number of individuals in the population; the update strategy adopted by the CSO algorithm is as follows:
Vl,k(t+1) = R1(k,t)·Vl,k(t) + R2(k,t)·(Xw,k(t) − Xl,k(t)) + φ·R3(k,t)·(X̄k(t) − Xl,k(t))
Xl,k(t+1) = Xl,k(t) + Vl,k(t+1)
in the above formulas, Xi,k(t) and Vi,k(t) respectively represent the position and the velocity of the i-th individual in dimension k of the t-th generation population; Xw,k(t) represents the winning individual and Xl,k(t) the losing individual of a pair in dimension k of the t-th generation population; R1(k,t), R2(k,t), R3(k,t) ∈ [0,1]^n are three random numbers; X̄k(t) represents the average position of all individuals of the t-th generation population in dimension k, and φ is its control weight parameter; the weights after CSO training are used as the initial parameter values of the LSTM-RNN neural network.
3. An urban traffic flow prediction system, comprising:
a first model building module: used for constructing an LSTM-RNN neural network;
a model initialization module: the LSTM-RNN neural network is optimized through a CSO algorithm to obtain an initial parameter value of the LSTM-RNN neural network;
a second model building module: the system is used for adding the initial parameter value into the LSTM-RNN neural network to obtain a CSO-LSTM-RNN neural network;
a result output module: the traffic flow prediction system is used for inputting traffic flow data into the CSO-LSTM-RNN neural network and outputting a traffic flow prediction result through the CSO-LSTM-RNN neural network;
further comprising:
a data acquisition module: the system comprises a data acquisition module, a data processing module and a data processing module, wherein the data acquisition module is used for acquiring historical traffic flow data of a target node, constructing a two-dimensional matrix according to the historical traffic flow data and adding a time dimension to generate a three-dimensional tensor;
a data processing module: the two-dimensional matrix is used for processing the two-dimensional matrix to obtain an input vector of the LSTM-RNN neural network;
further comprising:
a data aggregation module: used for aggregating the historical traffic flow data according to a set time interval;
a normalization module: used for calculating the minimum value min and the maximum value max of sample data in the aggregated historical traffic flow data, normalizing the sample data by a min-max method, and dividing the normalized sample data into a training set and a test set;
the device also comprises a reverse calculation module;
the second model building module is also used for inputting the sample data after the normalization processing into a CSO-LSTM-RNN neural network for forward calculation; the reverse calculation module is used for training the CSO-LSTM-RNN neural network by combining a reverse propagation algorithm and gradient descent update parameters to obtain the CSO-LSTM-RNN neural network with the minimum error,
the forward propagation formula is as follows:
Γf = σ(ωf[a(t-1), x(t)] + bf)
Γi = σ(ωi[a(t-1), x(t)] + bi)
Γo = σ(ωo[a(t-1), x(t)] + bo)
C~(t) = tanh(ωc[a(t-1), x(t)] + bc)
C(t) = Γf*C(t-1) + Γi*C~(t)
a(t) = Γo*tanh C(t)    (2)
in the formula (2), x(t) is the input sequence at time t, * represents element-wise multiplication of matrix elements, ω represents the weight of the hidden layer, and b represents the bias; the objective function of the training process, i.e. the loss function, is expressed as:
L = (1/n) Σt Σj (yjt − ŷjt)²
where yjt is the real observed value at node j at time t, ŷjt the predicted value, and n the number of training instances.
4. The urban traffic flow prediction system according to claim 3, wherein the model initialization module performs parameter optimization on the LSTM-RNN neural network through the CSO algorithm to obtain the initial parameter values of the LSTM-RNN neural network, specifically: constructing an LSTM-RNN neural network and randomly generating a population Wi, with Wi used as the initial value of the LSTM-RNN neural network, where Wi = (ωi1, ωi2, …, ωis)T and s is the number of individuals in the population; the update strategy adopted by the CSO algorithm is as follows:
Vl,k(t+1) = R1(k,t)·Vl,k(t) + R2(k,t)·(Xw,k(t) − Xl,k(t)) + φ·R3(k,t)·(X̄k(t) − Xl,k(t))
Xl,k(t+1) = Xl,k(t) + Vl,k(t+1)
in the above formulas, Xi,k(t) and Vi,k(t) respectively represent the position and the velocity of the i-th individual in dimension k of the t-th generation population; Xw,k(t) represents the winning individual and Xl,k(t) the losing individual of a pair in dimension k of the t-th generation population; R1(k,t), R2(k,t), R3(k,t) ∈ [0,1]^n are three random numbers; X̄k(t) represents the average position of all individuals of the t-th generation population in dimension k, and φ is its control weight parameter; the weights after CSO training are used as the initial parameter values of the LSTM-RNN neural network.
5. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the urban traffic flow prediction method of any of claims 1 to 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811357538.2A CN109492814B (en) | 2018-11-15 | 2018-11-15 | Urban traffic flow prediction method, system and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109492814A CN109492814A (en) | 2019-03-19 |
CN109492814B true CN109492814B (en) | 2021-04-20 |
Family
ID=65694914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811357538.2A Active CN109492814B (en) | 2018-11-15 | 2018-11-15 | Urban traffic flow prediction method, system and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109492814B (en) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110097755B (en) * | 2019-04-29 | 2021-08-17 | 东北大学 | Highway traffic flow state identification method based on deep neural network |
CN110299005B (en) * | 2019-06-10 | 2020-11-17 | 浙江大学 | Urban large-scale road network traffic speed prediction method based on deep ensemble learning |
CN110335466B (en) * | 2019-07-11 | 2021-01-26 | 青岛海信网络科技股份有限公司 | Traffic flow prediction method and apparatus |
CN110517488A (en) * | 2019-08-19 | 2019-11-29 | 南京理工大学 | The Short-time Traffic Flow Forecasting Methods with Recognition with Recurrent Neural Network are decomposed based on timing |
CN110580548A (en) * | 2019-08-30 | 2019-12-17 | 天津大学 | Multi-step traffic speed prediction method based on class integration learning |
CN110675623B (en) * | 2019-09-06 | 2020-12-01 | 中国科学院自动化研究所 | Short-term traffic flow prediction method, system and device based on hybrid deep learning |
CN110491129A (en) * | 2019-09-24 | 2019-11-22 | 重庆城市管理职业学院 | The traffic flow forecasting method of divergent convolution Recognition with Recurrent Neural Network based on space-time diagram |
CN110765980A (en) * | 2019-11-05 | 2020-02-07 | 中国人民解放军国防科技大学 | Abnormal driving detection method and device |
CN111179596B (en) * | 2020-01-06 | 2021-09-21 | 南京邮电大学 | Traffic flow prediction method based on group normalization and gridding cooperation |
CN110991775B (en) * | 2020-03-02 | 2020-06-26 | 北京全路通信信号研究设计院集团有限公司 | Deep learning-based rail transit passenger flow demand prediction method and device |
CN111369049B (en) * | 2020-03-03 | 2022-08-26 | 新疆大学 | Method and device for predicting traffic flow and electronic equipment |
CN111507530B (en) * | 2020-04-17 | 2022-05-31 | 集美大学 | RBF neural network ship traffic flow prediction method based on fractional order momentum gradient descent |
CN115486200A (en) * | 2020-04-22 | 2022-12-16 | 瑞典爱立信有限公司 | Managing nodes in a communication network |
CN111242394B (en) * | 2020-04-26 | 2021-03-23 | 北京全路通信信号研究设计院集团有限公司 | Method and system for extracting spatial correlation characteristics |
CN111709549B (en) * | 2020-04-30 | 2022-10-21 | 东华大学 | SVD-PSO-LSTM-based short-term traffic flow prediction navigation reminding method |
CN111709553B (en) * | 2020-05-18 | 2023-05-23 | 杭州电子科技大学 | Subway flow prediction method based on tensor GRU neural network |
CN111950697A (en) * | 2020-07-01 | 2020-11-17 | 燕山大学 | Cement product specific surface area prediction method based on gated cycle unit network |
CN111754775B (en) * | 2020-07-03 | 2021-05-25 | 浙江大学 | Traffic flow prediction method based on feature reconstruction error |
CN112036682A (en) * | 2020-07-10 | 2020-12-04 | 广西电网有限责任公司 | Early warning method and device for frequent power failure |
CN112257918B (en) * | 2020-10-19 | 2021-06-22 | 中国科学院自动化研究所 | Traffic flow prediction method based on recurrent neural network with embedded attention mechanism |
CN112653894A (en) * | 2020-12-15 | 2021-04-13 | 深圳万兴软件有限公司 | Inter-frame predictive coding search method and device, computer equipment and storage medium |
CN113537566B (en) * | 2021-06-16 | 2022-05-06 | 广东工业大学 | Ultra-short-term wind power prediction method based on DCCSO optimization deep learning model |
CN113689721B (en) * | 2021-07-30 | 2022-09-20 | 深圳先进技术研究院 | Automatic driving vehicle speed control method, system, terminal and storage medium |
CN113627089B (en) * | 2021-08-27 | 2022-08-02 | 东南大学 | Urban traffic flow simulation method based on mixed density neural network |
CN114791571B (en) * | 2022-04-18 | 2023-03-24 | 东北电力大学 | Lithium ion battery life prediction method and device based on improved CSO-LSTM network |
CN115762151A (en) * | 2022-11-11 | 2023-03-07 | 华东师范大学 | Traffic situation prediction method and device |
CN115577860B (en) * | 2022-11-21 | 2023-04-07 | 南京地铁运营咨询科技发展有限公司 | Intelligent maintenance method and system for rail transit based on adaptive control |
CN116203432B (en) * | 2023-03-23 | 2023-10-20 | 广东工业大学 | CSO optimization-based unscented Kalman filtering method for predicting battery state of charge |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106372722A (en) * | 2016-09-18 | 2017-02-01 | 中国科学院遥感与数字地球研究所 | Short-term subway passenger flow prediction method and apparatus |
- 2018-11-15 CN CN201811357538.2A patent/CN109492814B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106372722A (en) * | 2016-09-18 | 2017-02-01 | 中国科学院遥感与数字地球研究所 | Short-term subway passenger flow prediction method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN109492814A (en) | 2019-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109492814B (en) | Urban traffic flow prediction method, system and electronic equipment | |
CN112216108B (en) | Traffic prediction method based on attribute-enhanced space-time graph convolution model | |
Ahmed et al. | A comprehensive comparison of recent developed meta-heuristic algorithms for streamflow time series forecasting problem | |
CN114626512B (en) | High-temperature disaster forecasting method based on directed graph neural network | |
CN111210633B (en) | Short-term traffic flow prediction method based on deep learning | |
CN113053115B (en) | Traffic prediction method based on multi-scale graph convolution network model | |
CN112071065A (en) | Traffic flow prediction method based on global diffusion convolution residual error network | |
CN110794842A (en) | Reinforced learning path planning algorithm based on potential field | |
CN111754025B (en) | CNN+GRU-based short-time bus passenger flow prediction method | |
Zhu et al. | Coke price prediction approach based on dense GRU and opposition-based learning salp swarm algorithm | |
Liao et al. | Short-term power prediction for renewable energy using hybrid graph convolutional network and long short-term memory approach | |
CN113591380B (en) | Traffic flow prediction method, medium and equipment based on graph Gaussian process | |
Yu et al. | Error correction method based on data transformational GM (1, 1) and application on tax forecasting | |
CN111860787A (en) | Short-term prediction method and device for coupling directed graph structure flow data containing missing data | |
CN112766600B (en) | Urban area crowd flow prediction method and system | |
Massaoudi et al. | Performance evaluation of deep recurrent neural networks architectures: Application to PV power forecasting | |
CN114565187A (en) | Traffic network data prediction method based on graph space-time self-coding network | |
CN113362637A (en) | Regional multi-field-point vacant parking space prediction method and system | |
CN112766603A (en) | Traffic flow prediction method, system, computer device and storage medium | |
Yang et al. | Prediction of equipment performance index based on improved chaotic lion swarm optimization–LSTM | |
Wu et al. | Combined IXGBoost-KELM short-term photovoltaic power prediction model based on multidimensional similar day clustering and dual decomposition | |
Zuo | Integrated forecasting models based on LSTM and TCN for short-term electricity load forecasting | |
Li et al. | Learning high-order fuzzy cognitive maps via multimodal artificial bee colony algorithm and nearest-better clustering: Applications on multivariate time series prediction | |
Chen et al. | A Spark-based Ant Lion algorithm for parameters optimization of random forest in credit classification | |
CN114572229A (en) | Vehicle speed prediction method, device, medium and equipment based on graph neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||