CN110263836A - A kind of bad steering state identification method based on multiple features convolutional neural networks - Google Patents
A kind of bad steering state identification method based on multiple features convolutional neural networks - Download PDF - Info
- Publication number
- CN110263836A CN110263836A CN201910510060.0A CN201910510060A CN110263836A CN 110263836 A CN110263836 A CN 110263836A CN 201910510060 A CN201910510060 A CN 201910510060A CN 110263836 A CN110263836 A CN 110263836A
- Authority
- CN
- China
- Prior art keywords
- data
- data set
- layer
- convolution
- mobile phone
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a bad driving state identification method based on a multi-feature convolutional neural network, comprising: collecting inertial sensor data from a vehicle-mounted smartphone and preprocessing it to obtain a source data set; dividing the source data set into individual data units, performing statistical feature extraction on each unit, and labelling the results to produce a data set named the feature data set; building a multi-feature convolutional neural network, selecting suitable network parameters and an optimizer, and fully training the network on the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model; and classifying vehicle-mounted smartphone inertial sensor data with the trained model so as to identify the automobile's current driving state, judge whether it is a bad driving state, and record and process the data in the background. The invention offers fast computation, a high recognition rate, and strong resistance to environmental interference.
Description
Technical Field
The invention relates to the technical field of sensor data acquisition and deep learning, in particular to a bad driving state identification method based on a multi-feature convolutional neural network.
Background
With the rapid development of the automobile industry and the increasing popularity of automobiles, the automobile has become the most important means of transportation. However, some drivers still drive irregularly, and traffic control departments and some ride-hailing platforms wish to monitor drivers' driving state in order to evaluate their driving habits.
At present there are three main approaches to detecting bad driving states. The first detects dangerous driving by installing dedicated sensors or on-board computer systems in the automobile, thereby reducing driving risk. The second judges whether the driver's state is good from external cues such as eye movement, head nodding and physiological indices. The third identifies and classifies the vehicle's driving state using portable devices such as smartphones and smartwatches. Compared with the first two, analysing the driving state from portable-device sensor data is simpler, more convenient, and easier to popularize. However, existing methods of this kind mainly analyse instantaneously acquired data with sensor-change thresholds or traditional machine-learning algorithms, and both their robustness and their accuracy need improvement.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a bad driving state identification method based on a multi-feature convolutional neural network, comprising the following steps:
Step 1: collect and store inertial sensor data from the vehicle-mounted smartphone, preprocess the collected data, and label it to produce a data set, recorded as the source data set;
Step 2: divide the source data set into data units and perform statistical feature extraction on each data unit to obtain the feature data set;
Step 3: build a multi-feature convolutional neural network and fully train it with the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model;
Step 4: classify the vehicle-mounted smartphone inertial sensor data with the trained model, and judge from the classification result whether the automobile's current driving state is a bad driving state.
Further, the step 1 comprises:
step 1.1: acquiring data of an inertial sensor of the smart phone in various automobile driving states, and acquiring and storing various data of the vehicle-mounted smart phone sensor in various driving states, wherein the inertial sensor comprises an accelerometer and a gyroscope;
step 1.2: preprocessing the data acquired in the step 1.1 by adopting a data filtering, coordinate conversion and data centralization method to obtain preprocessed data;
step 1.3: label the preprocessed data obtained in step 1.2 according to the automobile's driving state at the time of acquisition, and name the labelled data set the source data set.
Further, the step 1.1 comprises:
the various driving states of the automobile comprise 10 types: normal driving, parking, normal acceleration, normal deceleration, normal left turn, normal right turn, sharp left turn, sharp right turn, sharp deceleration and sharp acceleration. In each of the 10 states, data are acquired from the vehicle-mounted smartphone's inertial sensors (accelerometer and gyroscope): the accelerometer provides the phone's triaxial acceleration acc_x, acc_y, acc_z, the gyroscope provides the phone's triaxial angular velocity gyr_x, gyr_y, gyr_z, and the acquisition time t is recorded. Each of the 10 driving states is sampled for D seconds at n₁ samples per second (n₁ is generally 100), yielding a data sequence that is stored in a file.
Further, the step 1.2 comprises:
step 1.2.1: filter the acquired data sequence with a Kalman filter to suppress noise;
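Step 1.2.1's filtering can be illustrated with a minimal one-dimensional Kalman filter over a single sensor axis. The patent does not specify the filter's state model or noise parameters, so the constant-state model and the values of `q` and `r` below are illustrative assumptions:

```python
import numpy as np

def kalman_1d(z, q=1e-3, r=0.1):
    """Smooth a 1-D signal with a constant-state Kalman filter.

    z : noisy measurements for one sensor axis
    q : process-noise variance (assumed value)
    r : measurement-noise variance (assumed value)
    """
    x, p = z[0], 1.0              # initial state estimate and covariance
    out = np.empty_like(z, dtype=float)
    for i, zi in enumerate(z):
        p = p + q                 # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (zi - x)      # update estimate with the new measurement
        p = (1 - k) * p           # update covariance
        out[i] = x
    return out
```

In practice each of the six accelerometer/gyroscope axes would be filtered independently, with `q` and `r` tuned to the handset's sensor noise.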
step 1.2.2: when the phone lies flat with its screen facing up, the phone coordinate system coincides with the geodetic coordinate system: the positive y axis points horizontally forward along the automobile's driving direction, the positive x axis points horizontally to the right of the driving direction, and the positive z axis is perpendicular to the x-y plane. Both the phone coordinate system and the geodetic coordinate system are right-handed;
step 1.2.3: if the phone cannot keep a horizontal attitude during data acquisition, matrix transformation is used to convert the data from the phone coordinate system into the geodetic coordinate system. The coordinate rotation matrices about the x, y and z axes are respectively:

$$R_x(\theta)=\begin{bmatrix}1&0&0\\0&\cos\theta&-\sin\theta\\0&\sin\theta&\cos\theta\end{bmatrix},\quad R_y(\varphi)=\begin{bmatrix}\cos\varphi&0&\sin\varphi\\0&1&0\\-\sin\varphi&0&\cos\varphi\end{bmatrix},\quad R_z(\psi)=\begin{bmatrix}\cos\psi&-\sin\psi&0\\\sin\psi&\cos\psi&0\\0&0&1\end{bmatrix}$$

where $R_x(\theta)$ is the x-axis rotation matrix, $R_y(\varphi)$ is the y-axis rotation matrix, $R_z(\psi)$ is the z-axis rotation matrix, and $\theta$, $\varphi$, $\psi$ are the angles between the x, y and z axes of the phone coordinate system and the corresponding axes of the geodetic coordinate system. The acquired acceleration and angular velocity data in the phone coordinate system are coordinate-converted by:

$$A_E=R_z(\psi)\,R_y(\varphi)\,R_x(\theta)\,A,\qquad G_E=R_z(\psi)\,R_y(\varphi)\,R_x(\theta)\,G$$

where $A$ is the acquired acceleration data and $G$ the acquired angular velocity data in the phone coordinate system, and $A_E$ and $G_E$ are the corresponding acceleration and angular velocity data in the geodetic coordinate system after conversion. The converted data are stored in data set $A_1$.
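The coordinate conversion of step 1.2.3 can be sketched by composing the three axis rotation matrices and applying them to a phone-frame vector. The z–y–x composition order used in `phone_to_earth` is an assumption, since the patent lists the three matrices individually without fixing the order in which they are multiplied:

```python
import numpy as np

def Rx(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def phone_to_earth(v, theta, phi, psi):
    # Rotate a phone-frame vector into the geodetic (earth) frame.
    # The z-y-x composition order is an assumed convention.
    return Rz(psi) @ Ry(phi) @ Rx(theta) @ v
```

With all angles zero the mapping is the identity, matching the case where the phone lies flat and the two frames coincide.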
Step 1.2.4: the data in data set $A_1$ are centralized using the following equation:

$$\tilde{X}_{ck}=X_{ck}-\frac{1}{e+1}\sum_{c=0}^{e}X_{ck}$$

where $X_{ck}$ is the datum in row c, column k of data set $A_1$, $e+1$ is the number of rows of $A_1$, and $\tilde{X}_{ck}$ is the datum in row c, column k after centralization. The result is the preprocessed data set $A_2$.
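The centralization of step 1.2.4 amounts to subtracting each column's mean, which a short NumPy sketch makes concrete (the function name `centralize` is illustrative):

```python
import numpy as np

def centralize(a1):
    # Subtract each column's mean so that every column of the
    # data set is zero-centred (step 1.2.4).
    return a1 - a1.mean(axis=0)
```

Each column of the returned array sums to zero while the array's shape is unchanged.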
Further, the step 1.3 includes:
The numbers 0-9 correspond respectively to the 10 automobile driving states. According to the driving state during the acquisition of each row, the preprocessed data set $A_2$ obtained in step 1.2 is tagged: a column is appended to $A_2$ whose content is the number 0-9 corresponding to each row's driving state. The labelled data set is recorded as the source data set, whose row structure is:

V = (acc'_x, acc'_y, acc'_z, gyr'_x, gyr'_y, gyr'_z, t, S)

where V is a row of the source data set; acc'_x, acc'_y and acc'_z are the preprocessed x-, y- and z-axis acceleration data of the phone; gyr'_x, gyr'_y and gyr'_z are the preprocessed x-, y- and z-axis angular velocity data of the phone; t is the acquisition time of the row; and S is the row's data label.
Further, the step 2 includes:
step 2.1: divide the source data set by acquisition time. Since each driving state is sampled for D seconds at 100 rows per second, the n₁ (= 100) rows collected within the same second form one data unit;
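The division of step 2.1 can be sketched as reshaping the source array into consecutive 100-row units (the function name and the drop-any-trailing-rows behaviour are illustrative assumptions):

```python
import numpy as np

def split_units(source, rows_per_unit=100):
    """Split the source data set into consecutive data units of
    rows_per_unit rows each; trailing incomplete rows are dropped."""
    m = source.shape[0] // rows_per_unit
    return source[: m * rows_per_unit].reshape(m, rows_per_unit, source.shape[1])
```

For a source array with 8 columns this yields an array of shape (m, 100, 8), one slice per data unit.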
step 2.2: perform statistical feature extraction on each data unit obtained in step 2.1, producing a data set named the feature data set.
Further, the step 2.2 includes:
Statistical features are extracted from the data units divided in step 2.1. The features to be extracted are the mean, variance, maximum, minimum, variation amplitude and mean crossing rate, computed as follows.

The mean reflects the average level of the data well, so the mean of each column of a data unit plays an important role in prediction and classification. Taking one data unit as the current data unit, the mean is computed as:

$$\bar{X}_j=\frac{1}{n+1}\sum_{i=0}^{n}X_{ij}$$

where $X_{ij}$ is the datum in row i, column j of the current data unit, $n+1$ is the number of rows in each data unit (since 100 rows form one data unit, here $n=99$), and $\bar{X}_j$ is the mean of column j of the current data unit.

The variance reflects the degree of dispersion of the data and is computed for each column as:

$$\sigma_j^2=\frac{1}{n+1}\sum_{i=0}^{n}\left(X_{ij}-\bar{X}_j\right)^2$$

where $\sigma_j^2$ is the variance of column j of the current data unit.

The maximum $\mathrm{Max}(X_j)$ and minimum $\mathrm{Min}(X_j)$ of each column reflect the peak changes of the vehicle's acceleration and serve as auxiliary features. The variation amplitude of the data is:

$$\Delta X_j=\mathrm{Max}(X_j)-\mathrm{Min}(X_j)$$

where $\mathrm{Max}(X_j)$ is the maximum of column j of the current data unit, $\mathrm{Min}(X_j)$ is the minimum of column j of the current data unit, and $\Delta X_j$ is the variation amplitude of column j.

The mean crossing rate of each column reflects the correlation between adjacent rows of data within that column. It is computed as:

$$MCR_j=\frac{1}{n}\sum_{i=0}^{n-1}\gamma\!\left[\left(X_{ij}-\bar{X}_j\right)\left(X_{i+1,j}-\bar{X}_j\right)<0\right]$$

where $X_{ij}$ and $X_{i+1,j}$ are the data in rows i and i+1 of column j, $\bar{X}_j$ is the mean of column j, $\gamma$ is an indicator function, and $MCR_j$ is the mean crossing rate of column j of the current data unit.

Each of the first 6 columns of every data unit yields 6 feature values: mean, variance, maximum, minimum, variation amplitude and mean crossing rate, so each data unit has 36 statistical features. The statistical features of all data units form a new data set, recorded as the feature data set, which has m rows and 36 columns, where m is the number of data units.
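The 36 statistical features described above can be sketched as follows; `unit_features` is an illustrative name, and the indicator-based mean crossing rate follows the formula in this section:

```python
import numpy as np

def unit_features(unit):
    """36 statistical features of one 100 x 8 data unit: mean, variance,
    max, min, variation amplitude and mean crossing rate of each of the
    first six (sensor) columns."""
    x = unit[:, :6]
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    mx, mn = x.max(axis=0), x.min(axis=0)
    amp = mx - mn
    # Mean crossing rate: fraction of adjacent row pairs whose values
    # lie on opposite sides of the column mean.
    sign = np.sign(x - mean)
    mcr = (sign[:-1] * sign[1:] < 0).mean(axis=0)
    return np.concatenate([mean, var, mx, mn, amp, mcr])
```

Stacking `unit_features` over all m data units gives the m × 36 feature data set.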
Further, the step 3 includes:
step 3.1: building a multi-feature convolutional neural network, and determining a network structure;
step 3.2: and selecting a network optimizer and training the multi-feature convolutional neural network by using the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model.
Further, the step 3.1 includes:
the multi-feature convolutional neural network structure is composed of 3 parts, and the specific construction method is as follows:
The first part comprises an input layer, two convolutional layers and a pooling layer, and performs convolutional feature extraction to obtain the convolution feature map of the b-th data unit, where b runs from 0 to m in turn and m is the number of data units. The input of the first part comes from the source data set: each data unit is a 100 × 8 two-dimensional array, and its first 6 columns (100 × 6) are fed to the input layer. The input layer is followed by the first convolutional layer, which uses 16 convolution kernels of size 3 × 3 with stride 1 and padding 1. The output size of a convolutional layer is computed as:

$$Z=\frac{W-F+2P}{S}+1$$

where Z is the length of the convolution output data, W the length of the convolution input data, P the padding, F the kernel length, and S the stride. By this formula the output size of the first convolutional layer of the first part is 100 × 6 × 16. A linear rectification (ReLU) function is used as the activation after the convolutional layer. The activated data are fed into the second convolutional layer of the first part, which uses 32 kernels of size 3 × 3 with stride 1 and padding 1; by the formula its output size is 100 × 6 × 32, and it is likewise followed by a ReLU activation. The activated data are fed into the pooling layer of the first part. By working principle, pooling layers fall into two types, max pooling and mean pooling; all pooling layers in this method are max pooling. The first part's pooling layer slides a 2 × 2 rectangular window with horizontal stride 2 and vertical stride 2. The output size of a pooling layer is computed as:

$$Z'=\frac{W'-F'}{S'}+1$$

where Z' is the pooling output length, W' the pooling input length, F' the filter length, and S' the stride. By this formula the output of the first part's pooling layer is 50 × 3 × 32; this output is the convolution feature map of the b-th data unit.
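The convolutional and pooling output-size formulas described above can be written directly as functions and checked against the layer sizes stated in the text:

```python
def conv_out(w, f, p, s):
    # Z = (W - F + 2P) / S + 1  (convolutional layer output length)
    return (w - f + 2 * p) // s + 1

def pool_out(w, f, s):
    # Z' = (W' - F') / S' + 1   (pooling layer output length)
    return (w - f) // s + 1
```

For the first part: a 100 × 6 input through a 3 × 3 kernel with padding 1 and stride 1 stays 100 × 6, and the 2 × 2 max pool with stride 2 reduces it to 50 × 3, matching the 100 × 6 × 32 and 50 × 3 × 32 sizes above.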
The second part consists of a convolutional layer and a pooling layer. The convolution feature map of the b-th data unit obtained from the first part is fused with that of the (b-1)-th data unit and reshaped into a 100 × 96 two-dimensional vector, named the fully integrated feature map. When b = 0, the convolution feature map of the (b-1)-th data unit is replaced by all-zero data of size 50 × 3 × 32. The fully integrated feature map is fed into the second part's convolutional layer, which uses 6 kernels of size 3 × 3 with stride 1 and padding 1; by the convolutional output-size formula its output is 100 × 96 × 6, and it is likewise followed by a ReLU activation. The activated data are fed into the second part's pooling layer, which slides a 2 × 2 rectangular window with horizontal stride 2 and vertical stride 2; by the pooling output-size formula the output of the second part is 50 × 48 × 6.
The third part consists of a three-layer fully connected network: an input layer, a hidden layer and an output layer. The input of the third part is formed from the second part's output together with the feature data set: the second part's output (50 × 48 × 6 = 14400 values) and the 36 values of the b-th row of the feature data set are flattened together into a 14436 × 1 one-dimensional vector, which is fed into the input layer. The hidden layer contains 1024 neurons, and the output layer contains 10 neurons corresponding to the 10 driving states of the automobile.
Further, the step 3.2 includes:
First, the source data set and the feature data set are divided into a training set and a test set in a 4:1 ratio. During training, each data unit of the source data set together with the corresponding row of the feature data set serves as one training sample; the loss function is the cross-entropy loss, and the Adam optimizer is used to fully train the network, yielding the trained multi-feature convolutional neural network model.
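A minimal sketch of the 4:1 split, assuming the source data units and feature rows are kept paired sample-by-sample (the shuffling and the seed are illustrative details the patent does not specify):

```python
import numpy as np

def split_4_to_1(units, features, seed=0):
    """Shuffle paired (data unit, feature row) samples and split them
    4:1 into training and test sets, keeping the pairing intact."""
    m = len(units)
    idx = np.random.default_rng(seed).permutation(m)
    cut = (4 * m) // 5
    tr, te = idx[:cut], idx[cut:]
    return (units[tr], features[tr]), (units[te], features[te])
```

Shuffling by a shared index array guarantees that the b-th data unit always stays with the b-th feature row, which the training procedure above requires.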
Further, the step 4 comprises:
step 4.1: acquire vehicle-mounted smartphone inertial sensor data in real time while the automobile is driving, and classify the real-time data with the trained model obtained in step 3 to obtain the automobile's current driving state class.
Step 4.2: judge according to the current driving state obtained in step 4.1. If the automobile's driving state is any of normal driving, parking, normal acceleration, normal deceleration, normal left turn or normal right turn, the driver's current driving state is judged to be good. If it is any of sharp left turn, sharp right turn, sharp deceleration or sharp acceleration, the current driving state is judged to be bad: a prompt tone is emitted to remind the driver to drive properly, the data are recorded, and every 24 hours the driver's bad-driving count is tallied and uploaded to the background.
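The decision rule of step 4.2 reduces to a mapping from the 10 class numbers to good/bad. Assigning 0-5 to the normal states and 6-9 to the sharp ones follows the order in which step 1.1 lists them, which is an assumption since the patent only says that the numbers 0-9 correspond to the 10 states:

```python
# Assumed label order, following the listing in step 1.1:
NORMAL = {0, 1, 2, 3, 4, 5}   # normal drive, parked, accel, decel, left, right
BAD = {6, 7, 8, 9}            # sharp left, sharp right, sharp decel, sharp accel

def is_bad(state):
    """True when the predicted class (0-9) is one of the four bad states."""
    return state in BAD

def daily_bad_count(states):
    # Count bad-driving events in a day's sequence of predictions,
    # as tallied every 24 hours before uploading to the background.
    return sum(is_bad(s) for s in states)
```

A prediction of 7 (sharp right turn) would trigger the prompt tone, while a prediction of 2 (normal acceleration) would not.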
According to the technical scheme, the invention provides a bad driving state identification method based on a multi-feature convolutional neural network, comprising: step 1, collecting and storing vehicle-mounted smartphone inertial sensor data, preprocessing and labelling it to produce a data set named the source data set; step 2, dividing the source data set into individual data units and performing statistical feature extraction on each to produce a data set named the feature data set; step 3, building a multi-feature convolutional neural network, selecting appropriate network parameters and an optimizer, and fully training the network on the source data set and the feature data set to obtain a trained model; and step 4, classifying vehicle-mounted smartphone inertial sensor data with the trained model so as to identify the automobile's current driving state, judging whether it is a bad driving state, and recording and processing the data in the background.
The multi-feature convolutional neural network is designed to fully exploit the fact that adjacent moments of an automobile's driving state are correlated: it analyses and predicts the driving state from data at adjacent moments. This addresses the low accuracy and poor stability of existing bad-driving recognition systems. The method is also highly portable, can be applied on a smartphone platform, and has broad application prospects.
Aiming at the limited precision and stability of existing driving state recognition, the invention combines a multi-feature convolutional neural network with smartphone inertial sensors and designs the corresponding network model and algorithm, improving the stability and accuracy of the driving state recognition system and reliably recognizing the various driving states.
Drawings
The foregoing and other advantages of the invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic workflow diagram of a method for identifying an adverse driving state based on a multi-feature convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a system general block diagram of an identification method of an adverse driving state based on a multi-feature convolutional neural network according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a coordinate system of a smart phone according to an embodiment of the present invention;
fig. 4 is a data acquisition field image of an inertial sensor of a vehicle-mounted smart phone according to an embodiment of the present invention;
FIG. 5 is a structural diagram of a bad driving recognition system based on a multi-feature convolutional neural network according to an embodiment of the present invention;
FIG. 6 is a diagram of a multi-feature convolutional neural network model provided by an embodiment of the present invention;
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
The embodiment of the invention discloses a method for identifying bad driving states based on a multi-feature convolutional neural network. Data are acquired with the smartphone's built-in accelerometer [see Yang bud, Huyiquan. The principle of the accelerometer in smartphones and its teaching application [J]. Physics Bulletin, 2017(01):80-81] and gyroscope [see Liuyan column, Yandong. Miniature gyroscopes [J]. Mechanics and Practice, 2017, 39(05):506-508]. The data are preprocessed into a data set, divided into data units, and statistical features are extracted. A multi-feature convolutional neural network is constructed and trained with the acquired data, and the resulting network model predicts the automobile's driving state. The method can be applied to fields such as intelligent driving.
Referring to fig. 1 and fig. 2, a schematic workflow diagram of an undesirable driving condition identification method based on a multi-feature convolutional neural network according to an embodiment of the present invention includes the following steps:
step 1: collecting and storing data of an inertial sensor of the vehicle-mounted smart phone, preprocessing the collected data of the inertial sensor of the vehicle-mounted smart phone, labeling the data to prepare a data set, and naming the data set as a source data set;
step 2: finishing data division of a source data set, dividing the source data set into individual data units, performing statistical feature extraction on each data unit, and making a data set named as a feature data set;
and step 3: building a multi-feature convolutional neural network, selecting network parameters and an optimizer, and fully training the multi-feature convolutional neural network by using a source data set and a feature data set to obtain a trained multi-feature convolutional neural network model;
step 4: classifying the vehicle-mounted smartphone inertial sensor data with the trained multi-feature convolutional neural network model so as to identify the automobile's current driving state, judging whether it is a bad driving state, and recording and processing the data in the background.
The invention is further described with reference to the following figures and specific examples.
In the embodiment of the present invention, a coordinate system of the smartphone is as shown in fig. 3, a field picture of smartphone sensor data acquisition is as shown in fig. 4, and data of the smartphone inertial sensor is acquired, coordinate converted, and preprocessed according to the example of fig. 4.
The step 1 comprises the following steps: step 1.1: acquiring and storing vehicle-mounted smartphone inertial sensor (accelerometer and gyroscope) data in each of the various automobile driving states;
step 1.2: preprocessing the data acquired in the step 1.1 by adopting a data filtering, coordinate conversion and data centralization method to obtain preprocessed data;
step 1.3: and (3) according to the driving state of the automobile during data acquisition, performing labeling operation on the preprocessed data obtained in the step (1.2) to obtain a labeled data set, and naming the labeled data set as a source data set.
In an embodiment of the present invention, step 1.1 includes:
the driving state of the automobile is divided into 10 types: normal driving, parking state, normal acceleration, normal deceleration, normal left turn, normal right turn, sharp left turn, sharp right turn, sharp deceleration, sharp acceleration. In the 10 states, data acquisition is respectively carried out on an inertial sensor (an accelerometer and a gyroscope) of the vehicle-mounted smart phone, and a triaxial acceleration acc of the smart phone is acquired by the accelerometerx、accy、acczAcquiring three-axis angular velocity gyr of mobile phone for gyroscopex、gyry、gyrzAnd recording the acquisition time t, wherein D seconds (D is not less than 10000) are acquired in each of 10 driving states, 100 times are acquired per second, a data sequence is obtained, and the data sequence is stored in a file.
The step 1.2 comprises the following steps:
The collected data are first filtered with a Kalman filter (a common recursive filter); see Zhou P, Li M, Shen G. Use It Free: Instantly Knowing Your Phone Attitude [C]// Mobile Computing & Networking. ACM, 2014: 101-112. When the phone lies flat with its front face up, the phone coordinate system coincides with the geodetic coordinate system: horizontally forward along the driving direction of the automobile is the positive y-axis, horizontally to the right of the driving direction is the positive x-axis, and perpendicular to the x-y plane, pointing upward, is the positive z-axis. Both the phone coordinate system and the geodetic coordinate system are right-handed. If the phone cannot keep a horizontal attitude during data acquisition, a matrix transformation converts the data from the phone coordinate system into the geodetic coordinate system. The coordinate rotation matrices about the x-, y-, and z-axes are respectively:
$$R_x(\theta)=\begin{pmatrix}1&0&0\\0&\cos\theta&-\sin\theta\\0&\sin\theta&\cos\theta\end{pmatrix},\quad R_y(\varphi)=\begin{pmatrix}\cos\varphi&0&\sin\varphi\\0&1&0\\-\sin\varphi&0&\cos\varphi\end{pmatrix},\quad R_z(\psi)=\begin{pmatrix}\cos\psi&-\sin\psi&0\\\sin\psi&\cos\psi&0\\0&0&1\end{pmatrix}$$

where R_x(θ) is the x-axis coordinate rotation matrix, R_y(φ) is the y-axis coordinate rotation matrix, R_z(ψ) is the z-axis coordinate rotation matrix, θ is the angle between the x-axis of the phone coordinate system and the x-axis of the geodetic coordinate system, φ is the angle between the two y-axes, and ψ is the angle between the two z-axes. The acquired acceleration and angular velocity data in the phone coordinate system are converted with the following formula:
$$A_E=R_x(\theta)\,R_y(\varphi)\,R_z(\psi)\,A,\qquad G_E=R_x(\theta)\,R_y(\varphi)\,R_z(\psi)\,G$$

where A is the acquired acceleration data in the phone coordinate system, G is the acquired angular velocity data in the phone coordinate system, R_x(θ), R_y(φ), and R_z(ψ) are the x-, y-, and z-axis coordinate rotation matrices, A_E is the acceleration data in the geodetic coordinate system after conversion, and G_E is the angular velocity data in the geodetic coordinate system after conversion. The coordinate-converted data are stored in data set A₁.
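The coordinate conversion step can be sketched in a few lines of NumPy. This is a minimal illustration, not part of the patent disclosure: the elementary rotation-matrix forms and the x·y·z multiplication order are assumptions consistent with the right-handed frames described in the text.

```python
import numpy as np

# Elementary right-handed rotation matrices about the x-, y-, and z-axes.
def rot_x(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def to_earth_frame(samples, theta, phi, psi):
    """Rotate 3-axis samples (N x 3) from the phone frame to the earth frame."""
    R = rot_x(theta) @ rot_y(phi) @ rot_z(psi)
    return samples @ R.T

acc = np.array([[0.0, 0.0, 9.81]])           # phone lying flat: gravity on z
acc_e = to_earth_frame(acc, 0.0, 0.0, 0.0)   # zero angles: frames coincide
```

With all three angles zero the phone frame coincides with the geodetic frame, so the sample passes through unchanged, matching the "front face horizontally upwards" case in the text.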
The data in data set A₁ are then centered column by column using the following formula:

$$\mathrm{mid}X_c^k=X_c^k-\frac{1}{e+1}\sum_{r=0}^{e}X_r^k$$
where X_c^k is the datum in row c and column k of data set A₁, e+1 is the number of rows of A₁, and midX_c^k is the centered datum in row c and column k. The result is the preprocessed data set A₂.
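The centering step above amounts to subtracting each column's mean; a minimal NumPy sketch (the array contents are toy values, not recorded sensor data):

```python
import numpy as np

# Column-wise centering of the filtered, coordinate-converted data set A1:
# each column has its own mean subtracted, yielding the preprocessed set A2.
def center_columns(a1):
    return a1 - a1.mean(axis=0)

a1 = np.array([[1.0, 10.0],
               [3.0, 20.0],
               [5.0, 30.0]])
a2 = center_columns(a1)   # every column of A2 now has zero mean
```

After centering, each column of A₂ sums to zero, which removes constant offsets (for example a gravity bias on one axis) before feature extraction.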
The step 1.3 comprises the following steps:
The numbers 0–9 are used to represent the 10 driving states. According to the driving state during the acquisition of each row of the preprocessed data set A₂ obtained in step 1.2, A₂ is labeled: a column is appended to A₂ whose entries are the numbers 0–9 corresponding to the driving state during acquisition of each row. The labeled data set is recorded as the source data set, whose row structure is:
V = (acc_x′  acc_y′  acc_z′  gyr_x′  gyr_y′  gyr_z′  t  S),
where V is a row of the source data set, acc_x′, acc_y′, and acc_z′ are the preprocessed x-, y-, and z-axis acceleration data of the phone, gyr_x′, gyr_y′, and gyr_z′ are the preprocessed x-, y-, and z-axis angular velocity data of the phone, t is the acquisition time of the row, and S is the data label of the row.
In the embodiment of the present invention, the step 2 includes:
step 2.1: the source data set is divided according to the acquisition time of the data. Since data were collected for D seconds in each driving state at 100 groups per second, the n₁ (= 100) rows collected within the same second are taken as one data unit;
step 2.2: and (3) performing statistical feature extraction on each data unit obtained in the step (2.1), and making a data set named as a feature data set.
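The division of step 2.1 can be sketched with a single reshape, assuming the source rows are stored in acquisition order (the 300 × 8 array here is a stand-in for three seconds of real sensor rows):

```python
import numpy as np

# Data-unit division: at 100 samples per second, the rows sharing the same
# second of acquisition form one 100-row data unit.
def split_into_units(source, rows_per_unit=100):
    n_units = len(source) // rows_per_unit
    return source[: n_units * rows_per_unit].reshape(n_units, rows_per_unit, -1)

source = np.arange(300 * 8, dtype=float).reshape(300, 8)  # 3 seconds of fake rows
units = split_into_units(source)                          # shape (3, 100, 8)
```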
The step 2.2 comprises the following steps:
and (3) extracting statistical characteristics of the data units divided in the step (2.1), wherein the statistical characteristics needing to be extracted comprise: mean, variance, maximum, minimum, amplitude of change, and average crossing rate.
The mean reflects the average level of the data, so the mean of each column of a data unit plays an important role in the classification of the data. It is calculated as:

$$\bar{X}^j=\frac{1}{n+1}\sum_{i=0}^{n}X_i^j$$

where X_i^j is the datum in row i and column j of the data unit, n+1 is the number of rows of each data unit (here n = 99, since 100 rows form one data unit), and X̄^j is the mean of column j of the current data unit.
The variance reflects the degree of dispersion of the data; it is calculated for each column of the data unit as:

$$\left(\sigma^j\right)^2=\frac{1}{n+1}\sum_{i=0}^{n}\left(X_i^j-\bar{X}^j\right)^2$$

where (σ^j)² is the variance of column j of the current data unit.
The maximum Max(X_i) and minimum Min(X_i) of each column of the data unit reflect the peaks of the change in vehicle acceleration and serve as auxiliary features. The variation amplitude of the data is calculated as:

$$R_i=\operatorname{Max}(X_i)-\operatorname{Min}(X_i)$$

where Max(X_i) is the maximum of column i of the data unit, Min(X_i) is the minimum of column i of the data unit, and R_i is the variation amplitude of column i of the data unit.
The average crossing rate of each column of the data unit reflects the correlation between adjacent rows of the same column. It is calculated as:

$$\mathrm{MCR}^j=\frac{1}{n}\sum_{i=0}^{n-1}\gamma\!\left[\left(X_i^j-\bar{X}^j\right)\left(X_{i+1}^j-\bar{X}^j\right)<0\right]$$

where X_i^j is the datum in row i and column j of the data unit, X_{i+1}^j is the datum in row i+1 and column j, X̄^j is the mean of column j of the current data unit, γ is the indicator function (see Zheng Wei, Zhang Jing, Yang Hu. Level-set active contour model with an improved boundary indicator function [J]. Laser Technology, 2016, 40(1): 126-), and MCR^j is the average crossing rate of column j of the current data unit (see Peng Zhenguo, Li Shaoping. Dynamically adjusting the crossover rate and mutation rate of a genetic algorithm based on fuzzy reasoning [J]. Pattern Recognition and Artificial Intelligence, 2002, 15(04): 413-).
Each of the first 6 columns of a data unit thus yields 6 feature values: mean, variance, maximum, minimum, variation amplitude, and average crossing rate, so each data unit has 36 statistical features. A new data set, named the feature data set, is built from the statistical features of each data unit; it has m rows and 36 columns, where m is the number of data units.
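The six per-column statistics can be computed directly with NumPy. This is a hedged sketch: the mean-crossing-rate implementation below (fraction of adjacent pairs whose mean-centered values change sign) is one standard reading of the indicator-function formula, and the random data unit is only for illustration.

```python
import numpy as np

# Fraction of adjacent sample pairs that cross the column mean.
def mean_crossing_rate(col):
    centered = col - col.mean()
    return np.mean(centered[:-1] * centered[1:] < 0)

# 36-dimensional feature vector for one data unit: mean, variance, max, min,
# variation amplitude, and mean crossing rate for each of the first 6 columns.
def unit_features(unit):
    cols = unit[:, :6]
    feats = []
    for j in range(cols.shape[1]):
        c = cols[:, j]
        feats += [c.mean(), c.var(), c.max(), c.min(),
                  c.max() - c.min(), mean_crossing_rate(c)]
    return np.array(feats)

rng = np.random.default_rng(0)
unit = rng.standard_normal((100, 8))   # one toy 100 x 8 data unit
f = unit_features(unit)                # 36 statistical features
```

Stacking `unit_features` over all m data units produces the m × 36 feature data set described above.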
Fig. 5 shows the structure of the bad driving state identification system based on the multi-feature convolutional neural network according to an embodiment of the present invention. As fig. 5 shows, the invention analyzes the driving state of the automobile jointly from two adjacent data units, sending the source-data-set data and the feature-data-set data of the two adjacent units into the multi-feature convolutional neural network for classification; fig. 6 shows the structure of the multi-feature convolutional neural network. In the embodiment of the present invention, step 3 includes:
step 3.1: building a multi-feature convolutional neural network, and determining a network structure;
step 3.2: and selecting a network optimizer and training the multi-feature convolutional neural network by using the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model.
In the embodiment of the present invention, the step 3.1 includes:
the multi-feature convolutional neural network structure is composed of 3 parts, and the specific construction method is as follows:
the first part comprises an input layer, two convolution layers and a pooling layer, is mainly used for carrying out convolution feature extraction on data and is used for obtaining a convolution feature map of the b-th data unit. And b is taken from 0 to m in sequence, wherein m is the number of the data units. The input of the part comes from the source data set, and since each data unit is a two-dimensional array of 100 × 8, the first 6 columns (100 × 6) are taken as the input of the first part, and sent to the input layer. The input layer is followed by the first convolutional layer of the first part, which uses 16 convolution kernels 3 x 3, with step size 1 and number of fills 1. The calculation formula of the output size of the convolutional layer is shown as the following formula:
$$Z=\frac{W-F+2P}{S}+1$$

where Z is the length of the convolution output data, W is the length of the convolution input data, P is the padding, F is the length of the convolution kernel, and S is the stride. By this formula, the output of the first convolutional layer of the first part has size 100 × 6 × 16. A linear rectification function is used as the activation function after the convolutional layer [see: image denoising algorithm based on deep learning [D]. Shanghai Jiao Tong University, 2015]. The activated data are fed into the second convolutional layer of the first part, which uses 32 convolution kernels of size 3 × 3 with stride 1 and padding 1; by the output-size formula, the output of the second convolutional layer of the first part is 100 × 6 × 32. The second convolutional layer of the first part is also followed by a linear rectification activation function. The activated data are then fed into the pooling layer of the first part. Pooling layers are generally of two kinds, maximum pooling and average pooling; all pooling layers in this method are maximum pooling layers. The pooling layer of the first part slides a rectangular window of size 2 × 2 with horizontal stride 2 and vertical stride 2. The output size of a pooling layer is calculated by the following formula:
$$Z'=\frac{W'-F'}{S'}+1$$

where Z′ is the length of the pooling-layer output, W′ is the length of the pooling-layer input, F′ is the length of the filter, and S′ is the stride in the horizontal direction. By this formula, the output size of the pooling layer of the first part is 50 × 3 × 32. The output of the pooling layer of the first part is the convolution feature map of the b-th data unit.
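The two output-size formulas are easy to check in code; a minimal sketch (integer division assumes the sizes divide evenly, as they do for the layers described here):

```python
# Z = (W - F + 2P)/S + 1 for a convolution layer,
# Z' = (W' - F')/S' + 1 for a pooling layer.
def conv_out(w, f=3, p=1, s=1):
    return (w - f + 2 * p) // s + 1

def pool_out(w, f=2, s=2):
    return (w - f) // s + 1

h, w = conv_out(100), conv_out(6)     # 3x3 conv, pad 1, stride 1 keeps 100 x 6
ph, pw = pool_out(100), pool_out(6)   # 2x2 max pooling: 100 x 6 -> 50 x 3
```

With kernel 3, padding 1, stride 1, the spatial size is preserved, which is why both convolutional layers of the first part keep the 100 × 6 grid while only the channel count changes.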
The second part consists of one convolutional layer and one pooling layer. The convolution feature map of the b-th data unit obtained from the first part and that of the (b−1)-th data unit are fused and reshaped into a two-dimensional vector of size 100 × 96, named the fully integrated feature map. When b = 0, the convolution feature map of the (b−1)-th data unit is replaced by all-zero data of size 50 × 3 × 32. The fully integrated feature map is fed into the convolutional layer of the second part, which uses 6 convolution kernels of size 3 × 3 with stride 1 and padding 1; by the output-size formula for convolutional layers, its output size is 100 × 96 × 6. This convolutional layer is also followed by a linear rectification activation function. The activated data are fed into the pooling layer of the second part, which slides a rectangular window of size 2 × 2 with horizontal stride 2 and vertical stride 2; by the output-size formula for pooling layers, the output size of the second part is 50 × 48 × 6.
The third part consists of a three-layer fully connected network: an input layer, a hidden layer, and an output layer. The input of the third part is formed jointly by the output of the second part and the feature data set: the output of the second part has size 50 × 48 × 6, and the b-th row of the feature data set has 36 values; together they are integrated into a one-dimensional vector of size 14436 × 1 (50 × 48 × 6 + 36 = 14436), which is fed into the input layer. The hidden layer has 1024 neurons and the output layer has 10 neurons, corresponding to the 10 driving states of the automobile.
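The dimensions quoted for the three parts can be verified with a short shape walkthrough built from the two output-size formulas given in the description (a sanity check only; it does not implement the network weights):

```python
def conv_out(w, f=3, p=1, s=1):
    return (w - f + 2 * p) // s + 1

def pool_out(w, f=2, s=2):
    return (w - f) // s + 1

# Part 1: 100 x 6 input -> two 3x3 convs (16 then 32 kernels) -> 2x2 max pool.
h = pool_out(conv_out(conv_out(100)))
w = pool_out(conv_out(conv_out(6)))
part1 = (h, w, 32)                                  # convolution feature map

# Part 2: two adjacent feature maps fused into 100 x 96 -> one 3x3 conv
# (6 kernels) -> 2x2 max pool.
part2 = (pool_out(conv_out(100)), pool_out(conv_out(96)), 6)

# Part 3: flatten the part-2 output and append the 36 statistical features.
fc_input = part2[0] * part2[1] * part2[2] + 36      # fully connected input size
```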
In the embodiment of the present invention, the step 3.2 includes:
firstly, a source data set and a feature data set are divided into a training set and a testing set according to a ratio of 4: 1. In the training process, each data unit of a source data set and each row of a feature data set are used as a training unit, the loss function uses a cross entropy loss function [ can refer to Ron, Wanling, Li Xin, Liu Peng, modified deep convolutional neural network of a Softmax classifier and application [ J ] of the deep convolutional neural network in face recognition [ natural science edition ], Shanghai university student newspaper (natural science edition), 2018,24(03): 352-.
In the embodiment of the present invention, the step 4 includes:
step 4.1: and (3) acquiring data of an inertial sensor of the vehicle-mounted smart phone in the driving process of the automobile in real time, and classifying the data acquired in real time by using the trained model obtained in the step (3) to obtain the current driving state class of the automobile.
Step 4.2: a judgment is made on the current driving state obtained in step 4.1. If the driving state is normal driving, parking, normal acceleration, normal deceleration, normal left turn, or normal right turn, the driver's current driving state is good. If the driving state is sharp left turn, sharp right turn, sharp deceleration, or sharp acceleration, a bad driving state has occurred: a prompt tone is sounded to remind the driver to drive properly, the data are recorded, the driver's number of bad driving events per 24 hours is counted, and the count is uploaded to the background.
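The step-4.2 decision rule reduces to a small lookup; a sketch, assuming the class labels follow the order of the 10 states listed in step 1.1, so that indices 6–9 are the four sharp (bad) states:

```python
# Classes 0-5: normal driving, parking, normal acceleration, normal
# deceleration, normal left turn, normal right turn (good states).
# Classes 6-9: sharp left turn, sharp right turn, sharp deceleration,
# sharp acceleration (bad states) -- ordering assumed from step 1.1.
BAD_STATES = {6, 7, 8, 9}

def judge(state):
    """Return 'bad' for a bad driving state, 'good' otherwise."""
    return "bad" if state in BAD_STATES else "good"

def count_bad(states_in_24h):
    """Count bad-driving events in one day of classified states."""
    return sum(1 for s in states_in_24h if s in BAD_STATES)

day = [0, 0, 2, 9, 5, 8, 1]   # toy sequence of classified states
n_bad = count_bad(day)        # two bad events to upload to the background
```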
Example: taking the sharp acceleration state, when the automobile is accelerating sharply, the vehicle-mounted phone sensor data are acquired for state identification, preprocessed in real time, and fed into the multi-feature convolutional neural network. The outputs of the 10 output-layer neurons are respectively: (0.011, 0.002, 0.073, 0.003, 0.013, 0.020, 0.003, 0.009, 0.866), the probability values that the automobile belongs to states 0 through 9. The probability of state 9 is the highest, and 9 corresponds to the sharp acceleration state, so the state predicted by the network is sharp acceleration, a bad driving state.
Through the implementation of the above technical scheme, the invention has the following advantages: (1) a data acquisition method and data preprocessing process for vehicle-mounted smartphone sensors are provided, comprising data filtering, coordinate transformation, and data centering; (2) a data set construction method and a multi-feature convolutional neural network construction method are provided, in which two adjacent data units jointly complete the classification of the driving state of the automobile; (3) a bad driving state identification method based on a multi-feature convolutional neural network is provided; (4) the method identifies bad driving states quickly and with high precision, and the system is stable.
In a specific implementation, the present invention further provides a computer storage medium, where the computer storage medium may store a program, and the program may include some or all of the steps in each embodiment of the method for identifying an adverse driving state based on a multi-feature convolutional neural network provided by the present invention when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The invention provides a method for identifying bad driving states based on a multi-feature convolutional neural network, and there are many ways to implement the technical scheme; the above description is only a preferred embodiment of the invention. It should be noted that a person skilled in the art can make a number of improvements and refinements without departing from the principle of the invention, and these should also be regarded as within the protection scope of the invention. All components not specified in this embodiment can be realized by the prior art.
Claims (10)
1. A bad driving state identification method based on a multi-feature convolutional neural network is characterized by comprising the following steps:
step 1: collecting and storing data of an inertial sensor of the vehicle-mounted smart phone, preprocessing the collected data of the inertial sensor of the vehicle-mounted smart phone, labeling the data to prepare a data set, and recording the data set as a source data set;
step 2: completing data division of a source data set, dividing the source data set into data units, and performing statistical feature extraction on each data unit to obtain a feature data set;
and step 3: building a multi-feature convolutional neural network, and fully training the multi-feature convolutional neural network by using a source data set and a feature data set to obtain a trained multi-feature convolutional neural network model;
step 4: classifying the data of the inertial sensor of the vehicle-mounted smartphone with the trained multi-feature convolutional neural network model, and judging from the classification result whether the current driving state of the automobile is a bad driving state.
2. The method of claim 1, wherein step 1 comprises:
step 1.1: acquiring data of an inertial sensor of the smart phone in various automobile driving states, and acquiring and storing various data of the vehicle-mounted smart phone sensor in various driving states, wherein the inertial sensor comprises an accelerometer and a gyroscope;
step 1.2: preprocessing the data acquired in the step 1.1 by adopting a data filtering, coordinate conversion and data centralization method to obtain preprocessed data;
step 1.3: and (3) according to the driving state of the automobile during data acquisition, performing labeling operation on the preprocessed data obtained in the step (1.2) to obtain a labeled data set, and recording the labeled data set as a source data set.
3. The method according to claim 2, characterized in that step 1.1 comprises:
the driving states of the automobile comprise 10 types: normal driving, parking, normal acceleration, normal deceleration, normal left turn, normal right turn, sharp left turn, sharp right turn, sharp deceleration, and sharp acceleration; in each of the 10 states, data are acquired from the inertial sensor of the vehicle-mounted smartphone: the accelerometer acquires the triaxial acceleration acc_x, acc_y, acc_z of the phone, the gyroscope acquires the triaxial angular velocity gyr_x, gyr_y, gyr_z, and the acquisition time t is recorded; each of the 10 driving states is collected for D seconds at n₁ samples per second, and the resulting data sequence is stored.
4. A method according to claim 3, characterised in that step 1.2 comprises:
step 1.2.1, performing data filtering on the obtained data sequence according to a Kalman filter;
step 1.2.2, when the phone lies flat with its front face up, the phone coordinate system coincides with the geodetic coordinate system: horizontally forward along the driving direction of the automobile is the positive y-axis, horizontally to the right of the driving direction is the positive x-axis, and perpendicular to the x-y plane, pointing upward, is the positive z-axis; both the phone coordinate system and the geodetic coordinate system are right-handed coordinate systems;
step 1.2.3, if the mobile phone cannot keep a horizontal posture in the data acquisition process, converting data under a mobile phone coordinate system into a geodetic coordinate system by using matrix transformation, wherein the following formulas are coordinate rotation matrixes of an x axis, a y axis and a z axis respectively:
$$R_x(\theta)=\begin{pmatrix}1&0&0\\0&\cos\theta&-\sin\theta\\0&\sin\theta&\cos\theta\end{pmatrix},\quad R_y(\varphi)=\begin{pmatrix}\cos\varphi&0&\sin\varphi\\0&1&0\\-\sin\varphi&0&\cos\varphi\end{pmatrix},\quad R_z(\psi)=\begin{pmatrix}\cos\psi&-\sin\psi&0\\\sin\psi&\cos\psi&0\\0&0&1\end{pmatrix}$$

where R_x(θ) is the x-axis coordinate rotation matrix, R_y(φ) is the y-axis coordinate rotation matrix, R_z(ψ) is the z-axis coordinate rotation matrix, θ is the angle between the x-axis of the phone coordinate system and the x-axis of the geodetic coordinate system, φ is the angle between the two y-axes, and ψ is the angle between the two z-axes;
and performing coordinate conversion on the acquired acceleration data and the acquired angular velocity data under the mobile phone coordinate system by using the following formula:
$$A_E=R_x(\theta)\,R_y(\varphi)\,R_z(\psi)\,A,\qquad G_E=R_x(\theta)\,R_y(\varphi)\,R_z(\psi)\,G$$

where A is the acquired acceleration data in the phone coordinate system, G is the acquired angular velocity data in the phone coordinate system, A_E is the acceleration data in the geodetic coordinate system after conversion, and G_E is the angular velocity data in the geodetic coordinate system after conversion; the coordinate-converted data are stored in data set A₁;
step 1.2.4, the data in data set A₁ are centered using the following formula:

$$\mathrm{mid}X_c^k=X_c^k-\frac{1}{e+1}\sum_{r=0}^{e}X_r^k$$

where X_c^k is the datum in row c and column k of data set A₁, e+1 is the number of rows of A₁, and midX_c^k is the centered datum in row c and column k; the result is the preprocessed data set A₂.
5. The method of claim 4, wherein step 1.3 comprises:
the numbers 0–9 are used to represent the 10 driving states; according to the driving state during the acquisition of each row of the preprocessed data set A₂ obtained in step 1.2, A₂ is labeled, that is, a column is appended to A₂ whose entries are the numbers 0–9 corresponding to the driving state during acquisition of each row; the labeled data set is recorded as the source data set, whose data structure is shown by the following formula:
V = (acc_x′  acc_y′  acc_z′  gyr_x′  gyr_y′  gyr_z′  t  S),
where V is a row of the source data set, acc_x′, acc_y′, and acc_z′ are the preprocessed x-, y-, and z-axis acceleration data of the phone, gyr_x′, gyr_y′, and gyr_z′ are the preprocessed x-, y-, and z-axis angular velocity data of the phone, t is the acquisition time of the row, and S is the data label of the row.
6. The method of claim 5, wherein the step 2 comprises:
step 2.1: dividing the source data set according to the acquisition time of the data, where the n₁ rows of data acquired within the same second are taken as one data unit;
step 2.2: and (3) performing statistical feature extraction on each data unit obtained in the step (2.1), and making a data set named as a feature data set.
7. The method of claim 6, wherein step 2.2 comprises:
and (3) extracting statistical characteristics of the data units divided in the step (2.1), wherein the statistical characteristics needing to be extracted comprise: the average value, the variance, the maximum value, the minimum value, the variation amplitude and the average crossing rate specifically comprise:
taking any one data unit as a current data unit, and carrying out average value calculation according to the following formula:
$$\bar{X}^j=\frac{1}{n+1}\sum_{i=0}^{n}X_i^j$$

where X_i^j is the datum in row i and column j of the current data unit, n+1 is the number of rows of each data unit, and X̄^j is the mean of column j of the current data unit;
the variance of each column of data cells is calculated using the following formula:
$$\left(\sigma^j\right)^2=\frac{1}{n+1}\sum_{i=0}^{n}\left(X_i^j-\bar{X}^j\right)^2$$

where (σ^j)² is the variance of column j of the current data unit;
the amplitude of the data change is calculated using the following formula:
$$R_i=\operatorname{Max}(X_i)-\operatorname{Min}(X_i)$$

where Max(X_i) is the maximum of column i of the current data unit, Min(X_i) is the minimum of column i of the current data unit, and R_i is the variation amplitude of column i of the current data unit;
calculating the average crossing rate of each column of data in the data unit by adopting the following formula:
$$\mathrm{MCR}^j=\frac{1}{n}\sum_{i=0}^{n-1}\gamma\!\left[\left(X_i^j-\bar{X}^j\right)\left(X_{i+1}^j-\bar{X}^j\right)<0\right]$$

where X_{i+1}^j is the datum in row i+1 and column j of the current data unit, γ is the indicator function, and MCR^j is the average crossing rate of column j of the current data unit;
the first 6 columns of data for each data unit have 6 eigenvalues per column: the average value, the variance, the maximum value, the minimum value, the variation range and the average crossing rate, so that each data unit has 36 statistical characteristics, a new data set is formed by using the statistical characteristics of each data unit and is marked as a characteristic data set, the characteristic data set has m rows and 36 columns, and m is the number of the data units.
8. The method of claim 7, wherein step 3 comprises:
step 3.1: building a multi-feature convolutional neural network, and determining a network structure;
step 3.2: and selecting a network optimizer and training the multi-feature convolutional neural network by using the source data set and the feature data set to obtain a trained multi-feature convolutional neural network model.
9. The method according to claim 8, characterized in that step 3.1 comprises:
the multi-feature convolutional neural network structure is composed of 3 parts, and the specific construction method is as follows:
the first part comprises an input layer, two convolutional layers, and a pooling layer; it performs convolutional feature extraction on the data and produces the convolution feature map of the b-th data unit, with b taken from 0 to m in turn; the input of the first part comes from the source data set, and since each data unit is an n₁ × 8 two-dimensional array, its first 6 columns are fed to the input layer; the input layer is followed by the first convolutional layer of the first part, which uses 16 convolution kernels of size 3 × 3 with stride 1 and padding 1; the output size of a convolutional layer is calculated by the following formula:
$$Z=\frac{W-F+2P}{S}+1$$

where Z is the length of the convolution output data, W is the length of the convolution input data, P is the padding, F is the length of the convolution kernel, and S is the stride; by this formula, the output of the first convolutional layer of the first part has size n₁ × 6 × 16; a linear rectification function is used as the activation function after the convolutional layer, and the activated data are fed into the second convolutional layer of the first part, which uses 32 convolution kernels of size 3 × 3 with stride 1 and padding 1; by the output-size formula, the output of the second convolutional layer of the first part is n₁ × 6 × 32; this layer is also followed by a linear rectification activation function, and the activated data are fed into the pooling layer of the first part, a maximum pooling layer that slides a rectangular window of size 2 × 2 with horizontal stride 2 and vertical stride 2; the output size of a pooling layer is calculated by the following formula:
$$Z'=\frac{W'-F'}{S'}+1$$

where Z′ is the length of the pooling-layer output, W′ is the length of the pooling-layer input, F′ is the length of the filter, and S′ is the stride in the horizontal direction; by this formula, the output size of the pooling layer of the first part is 50 × 3 × 32, and this output is the convolution feature map of the b-th data unit;
the second part is composed of a convolutional layer and a pooling layer; the convolution feature map of the b-th data unit obtained from the first part and the convolution feature map of the (b-1)-th data unit are fused and reshaped into a two-dimensional array of size n1 x 96, named the fully integrated feature map; when b = 0, all-zero data of size 50 x 3 x 32 is used in place of the convolution feature map of the (b-1)-th data unit; the fully integrated feature map is sent to the convolutional layer of the second part, which uses 6 convolution kernels of size 3 x 3, with step size 1 and padding number 1; according to the convolutional-layer output size formula, the output size of the convolutional layer of the second part is n1 x 96 x 6; after this convolutional layer, the linear rectification function is used as the activation function, and the activated data is sent to the pooling layer of the second part, which slides a rectangular window of size 2 x 2 with step size 2 in the horizontal direction and step size 2 in the vertical direction; according to the pooling-layer output size formula, the output size of the pooling layer of the second part is 50 x 48 x 6;
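The fusion step of the second part can be sketched as follows (an illustrative reading of the claim: the claim does not state the exact fusion order, so plain concatenation before reshaping is assumed, with n1 = 100 inferred from the stated sizes):

```python
import numpy as np

n1 = 100
fmap_b = np.zeros((50, 3, 32))       # convolution feature map of the b-th data unit
fmap_prev = np.zeros((50, 3, 32))    # (b-1)-th map; all zeros when b == 0, per the claim

# fuse the two maps and reshape into the n1 x 96 "fully integrated feature map":
# 2 * (50 * 3 * 32) = 9600 = 100 * 96 values
fused = np.concatenate([fmap_b, fmap_prev]).reshape(n1, 96)
print(fused.shape)                   # -> (100, 96)
```

The element count works out exactly, which is why the two 50 x 3 x 32 maps can be reshaped into one n1 x 96 array without padding or truncation.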
the third part consists of a three-layer fully connected network: an input layer, a hidden layer and an output layer; the input of the third part is formed jointly by the output of the second part and the feature data set; the output of the second part has size 50 x 48 x 6, and the b-th row of the feature data set has 36 data; together they are integrated into a one-dimensional vector of size 14436 x 1, which is taken as the input of the third part and sent to the input layer; 1024 neurons are placed in the hidden layer and 10 neurons are placed in the output layer, the output layer corresponding to the 10 driving states of the automobile.
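The input width of the third part follows directly from the stated sizes (a simple arithmetic check; `layer_sizes` is our own shorthand for the three fully connected layers):

```python
# flatten the 50 x 48 x 6 output of the second part and append the
# 36 values of the b-th row of the feature data set
fc_input = 50 * 48 * 6 + 36
print(fc_input)                      # -> 14436

layer_sizes = (fc_input, 1024, 10)   # input, hidden, output (one neuron per driving state)
```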
10. The method according to claim 9, characterized in that step 3.2 comprises:
dividing the source data set and the feature data set into a training set and a test set in a ratio of 4:1; during training, each data unit of the source data set together with the corresponding row of the feature data set serves as one training unit; the cross-entropy loss function is used as the loss function, and the Adam optimizer is adopted as the network optimizer; the network is fully trained to obtain the trained multi-feature convolutional neural network model;
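The 4:1 split can be sketched as follows (an illustrative sketch: the claim does not specify whether the split is random or sequential, so a sequential split over data-unit indices is assumed):

```python
def split_indices(num_units, train_parts=4, test_parts=1):
    # sequential split of data-unit indices in a train:test ratio of 4:1
    cut = num_units * train_parts // (train_parts + test_parts)
    return list(range(cut)), list(range(cut, num_units))

train_idx, test_idx = split_indices(1000)
print(len(train_idx), len(test_idx))   # -> 800 200
```

The same index lists would be applied to both the source data set and the feature data set so that each training unit keeps its data unit and feature row paired.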
step 4 comprises the following steps:
step 4.1: acquiring the inertial sensor data of the vehicle-mounted smartphone in real time while the automobile is driving, and classifying the real-time data with the trained model obtained in step 3 to obtain the current driving state class of the automobile;
step 4.2: judging according to the current driving state of the automobile obtained in step 4.1: if the driving state is any one of normal driving, parking, normal acceleration, normal deceleration, normal left turn and normal right turn, the current driving state of the driver is judged to be good; if the driving state is any one of sudden left turn, sudden right turn, sudden deceleration and sudden acceleration, the current driving state of the driver is judged to be bad, a prompt tone is emitted to remind the driver to drive normally, and the data is recorded; every 24 hours, the number of bad driving events of the driver is counted and uploaded to the background.
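The decision rule of step 4.2 can be sketched as a simple lookup (the English state labels are our own names for the ten states listed in the claim, and the event log stands in for the 24-hour counting and upload):

```python
GOOD_STATES = {"normal driving", "parking", "normal acceleration",
               "normal deceleration", "normal left turn", "normal right turn"}
BAD_STATES = {"sudden left turn", "sudden right turn",
              "sudden deceleration", "sudden acceleration"}

def judge(state, bad_log):
    """Return True for a good driving state; record bad events for later counting."""
    if state in GOOD_STATES:
        return True
    if state in BAD_STATES:
        bad_log.append(state)   # recorded, counted every 24 h, uploaded to the background
        return False
    raise ValueError(f"unknown driving state: {state}")

events = []
print(judge("normal driving", events))       # -> True
print(judge("sudden acceleration", events))  # -> False
print(len(events))                           # -> 1
```

In a real deployment the prompt tone and the background upload would hang off the `False` branch; here the log length is the per-period bad-driving count.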
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910510060.0A CN110263836B (en) | 2019-06-13 | 2019-06-13 | Bad driving state identification method based on multi-feature convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110263836A true CN110263836A (en) | 2019-09-20 |
CN110263836B CN110263836B (en) | 2021-02-19 |
Family
ID=67918021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910510060.0A Active CN110263836B (en) | 2019-06-13 | 2019-06-13 | Bad driving state identification method based on multi-feature convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110263836B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140039718A1 (en) * | 2012-07-31 | 2014-02-06 | International Business Machines Corporation | Detecting an abnormal driving condition |
CN108563891A (en) * | 2018-04-23 | 2018-09-21 | 吉林大学 | A method of traffic accident is intelligently prevented based on Inertial Measurement Unit |
CN109034134A (en) * | 2018-09-03 | 2018-12-18 | 深圳市尼欧科技有限公司 | Abnormal driving behavioral value method based on multitask depth convolutional neural networks |
- 2019-06-13: Application CN201910510060.0A filed in China; granted as CN110263836B (status: Active)
Non-Patent Citations (2)
Title |
---|
ZHOU HOUFEI et al.: "Detection method of abnormal driving behavior of vehicles based on smartphones", CAAI Transactions on Intelligent Systems (《智能系统学报》) *
ZHANG BEI: "Design and implementation of a smartphone-based vehicle abnormal driving event detection system", China Masters' Theses Full-text Database, Engineering Science and Technology II (《中国优秀硕士学位论文全文数据库工程科技Ⅱ辑》) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111231971A (en) * | 2020-03-02 | 2020-06-05 | 中汽数据(天津)有限公司 | Automobile safety performance analysis and evaluation method and system based on big data |
CN111231971B (en) * | 2020-03-02 | 2021-04-30 | 中汽数据(天津)有限公司 | Automobile safety performance analysis and evaluation method and system based on big data |
CN112732797A (en) * | 2021-01-26 | 2021-04-30 | 武汉理工大学 | Fuel cell automobile driving behavior analysis method, device and storage medium |
CN113095197A (en) * | 2021-04-06 | 2021-07-09 | 深圳市汉德网络科技有限公司 | Vehicle driving state identification method and device, electronic equipment and readable storage medium |
CN113095197B (en) * | 2021-04-06 | 2024-07-16 | 深圳市汉德网络科技有限公司 | Vehicle driving state identification method and device, electronic equipment and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110263836B (en) | 2021-02-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108875674B (en) | Driver behavior identification method based on multi-column fusion convolutional neural network | |
CN111079602B (en) | Vehicle fine granularity identification method and device based on multi-scale regional feature constraint | |
CN110329271A (en) | A kind of multisensor vehicle driving detection system and method based on machine learning | |
CN106611169B (en) | A kind of dangerous driving behavior real-time detection method based on deep learning | |
JP2020530578A (en) | Driving behavior scoring method and equipment | |
CN109978893A (en) | Training method, device, equipment and the storage medium of image, semantic segmentation network | |
CN108280415A (en) | Driving behavior recognition methods based on intelligent mobile terminal | |
CN107657237A (en) | Car crass detection method and system based on deep learning | |
CN110263836B (en) | Bad driving state identification method based on multi-feature convolutional neural network | |
CN111428558A | Vehicle detection method based on improved YOLOv3 method | |
CN108694408B (en) | Driving behavior recognition method based on deep sparse filtering convolutional neural network | |
CN109871789A (en) | Vehicle checking method under a kind of complex environment based on lightweight neural network | |
CN114926825A (en) | Vehicle driving behavior detection method based on space-time feature fusion | |
CN110852358A (en) | Vehicle type distinguishing method based on deep learning | |
CN108769104A (en) | A kind of road condition analyzing method for early warning based on onboard diagnostic system data | |
CN116186594A (en) | Method for realizing intelligent detection of environment change trend based on decision network combined with big data | |
CN116935361A (en) | Deep learning-based driver distraction behavior detection method | |
CN111753610A (en) | Weather identification method and device | |
CN113221759A (en) | Road scattering identification method and device based on anomaly detection model | |
CN114492634B (en) | Fine granularity equipment picture classification and identification method and system | |
CN113052071B (en) | Method and system for rapidly detecting distraction behavior of driver of hazardous chemical substance transport vehicle | |
CN114863170A (en) | Deep learning-based new energy vehicle battery spontaneous combustion early warning method and device | |
CN118196573A (en) | Vehicle detection method and system based on deep learning | |
CN117710841A (en) | Small target detection method and device for aerial image of unmanned aerial vehicle | |
CN113806413B (en) | Trajectory screening and classifying method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||