CN106934378B - Automobile high beam identification system and method based on video deep learning
- Publication number
- CN106934378B (application number CN201710156201.4A)
- Authority
- CN
- China
- Prior art keywords
- frame
- key frame
- deep learning
- module
- video data
- Prior art date
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/44—Event detection
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/47—Detecting features for summarising video content
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses an automobile high beam identification system and method based on video deep learning. The system comprises two parts. The foreground part realizes the identification and processing of high beam violations and comprises a road monitoring equipment module, a video processing and recognition module, a recognition result processing module and a database of violation results to be detected, connected in sequence. The background part processes video and realizes video deep learning; it comprises a key frame extraction algorithm, a labeled database and a deep learning module, where the labeled database is constructed by calling the key frame extraction algorithm to extract key frames from the original video data, the data in the labeled database are used to train the deep learning module, and the trained deep learning module and the key frame extraction algorithm are called by the video processing and recognition module. The invention automatically analyzes and identifies monitoring video, ensures the completeness of law enforcement evidence, and reaches a level of judgment comparable to manual inspection.
Description
Technical Field
The invention relates to an automobile high beam identification system, in particular to an automobile high beam identification system and method based on video deep learning, and belongs to the technical field of intelligent transportation.
Background
Since the reform and opening-up, China's economy has developed continuously, steadily and rapidly, people's living standards have improved unprecedentedly, and more and more families own private vehicles. The rapid growth in the number of private cars has made travel more convenient, but the frequency of traffic accidents has also risen.
Traffic accidents have many causes, and many accidents result from improper use of high beam lights. At present, high beam violations are supervised mainly by traffic police, and, limited by police force and time, not all violations can be effectively supervised. In addition, the high beam snapshot systems developed in recent years all recognize captured still pictures, and these methods have certain limitations: 1) the number of captured high beam pictures is small and inconsistent; a picture may be produced during a driver's normal, legitimate use of the high beam and is easily misjudged as improper use, so the pictures are insufficient as law enforcement evidence; 2) to obtain the pictures, several capture devices often have to be additionally installed at the same location, so the cost is high; 3) the video monitoring equipment already deployed cannot be fully utilized, causing a waste of resources.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an automobile high beam identification system based on video deep learning.
The invention also provides an automobile high beam identification method based on video deep learning corresponding to the system.
In order to achieve the purpose, the invention adopts the following technical scheme:
an automobile high beam identification system based on video deep learning comprises the following two parts:
the foreground part is used for realizing the identification and processing of the high beam violation behaviors and comprises a road monitoring equipment module, a video processing and identifying module, an identification result processing module and a database of the violation results to be detected, which are connected in sequence;
the background part is used for processing video and realizing video deep learning, and comprises a key frame extraction algorithm, a labeled database and a deep learning module, wherein the labeled database is constructed by calling the key frame extraction algorithm to extract key frames from the original video data, the data in the labeled database are used for training the deep learning module, and the trained deep learning module and the key frame extraction algorithm are called by the video processing and recognition module.
As one of the preferable technical solutions, the key frame extraction algorithm is a clustering-based key frame extraction algorithm.
As one of the preferable technical solutions, the deep learning module is a deep learning module based on CNN + LSE (convolutional neural network + least squares estimation).
The system corresponds to an automobile high beam identification method based on video deep learning, and the method specifically comprises the following steps:
(1) the road monitoring equipment module acquires driving video data of the automobile and transmits the driving video data to the video processing and identifying module;
(2) the video processing and recognition module calls the key frame extraction algorithm to extract key frames from the video data, then performs a graying operation; the grayed key frames are used as input to the CNN+LSE-based deep learning module trained on the labeled database, which yields an output label for each key frame, namely dipped headlight (low beam), fog lamp or high beam, and the labels are assigned to the corresponding key frame images;
(3) the video data and the labeled key frames obtained in step (2) are used as the input of the recognition result processing module, which judges whether the vehicle commits a violation; a license plate recognition system is embedded in the recognition result processing module, so that when a target vehicle exhibits high beam violation behavior, its license plate is extracted, the vehicle information is acquired, and the suspected-violation video data is imported into the database of violation results to be detected.
In the step (2), the key frame extraction algorithm is as follows:
(2-1) taking the i-th segment V_i in the original video database, extracting n frames at equal time intervals, and using F_{i,j} to denote the frame at the j-th moment of the i-th video data; the frame sequence of the corresponding video data is represented as {F_{i,1}, F_{i,2}, ..., F_{i,n}}, where F_{i,1} is the first frame and F_{i,n} is the tail frame; the similarity between two adjacent frames is defined as the similarity of their histograms (i.e., the histogram feature difference), and a predefined threshold δ controls the clustering density; here i, j and n are integers;
(2-2) selecting the first frame F_{i,1} as the initial cluster center and calculating the similarity between frame F_{i,j} and the initial cluster center; if the value is less than δ, the distance between the frame and the cluster-center frame is judged too large, so F_{i,j} cannot be added to the cluster; if the similarity between F_{i,j} and all cluster centers is less than δ, F_{i,j} forms a new cluster and F_{i,j} becomes a new cluster center; otherwise, the frame is added to the cluster with which it has the maximum similarity, so that the distance between the frame and the center of that cluster is minimal;
(2-3) repeating (2-2) until the n frames extracted from the original video data V_i are each assigned to different clusters, after which the key frames can be selected: from each cluster, the frame nearest to the cluster center is extracted as the representative frame of that cluster, and the representative frames of all clusters form the key frames of the original video data V_i (a sketch of this clustering procedure is given after the steps).
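By way of illustration, a minimal sketch of this clustering-based key frame extraction follows, assuming OpenCV histogram correlation as the similarity measure and founding frames as fixed cluster centers (both are illustrative assumptions; the patent fixes only histogram similarity and the threshold δ):

```python
import cv2
import numpy as np

def extract_key_frames(frames, delta):
    """Steps (2-1)-(2-3): cluster n equally spaced frames by histogram
    similarity; return one representative frame per cluster."""
    def histogram(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([gray], [0], None, [256], [0, 256])
        return cv2.normalize(h, h).flatten()

    def similarity(h1, h2):
        # Histogram correlation; larger values mean more similar frames.
        return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

    hists = [histogram(f) for f in frames]
    clusters = [[0]]          # F_{i,1} seeds the first cluster (2-2)
    centers = [hists[0]]      # histogram of each cluster-center frame
    for j in range(1, len(frames)):
        sims = [similarity(hists[j], c) for c in centers]
        best = int(np.argmax(sims))
        if sims[best] < delta:            # too far from every center:
            clusters.append([j])          # F_{i,j} forms a new cluster
            centers.append(hists[j])      # and becomes its center
        else:
            clusters[best].append(j)      # join the most similar cluster
    # (2-3) representative frame = member nearest to its cluster center.
    return [frames[max(members, key=lambda j: similarity(hists[j], center))]
            for members, center in zip(clusters, centers)]
```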
In the step (2), the construction method of the database with the tags comprises the following steps:
A large amount of vehicle driving video data under a big-data background is taken as the original video data; the clustering-based key frame extraction algorithm is called on the original video data to extract key frames; the light type of the vehicle in each key frame is judged manually, and a label is added to each key frame so that the original key frames become labeled data. The label types comprise three classes: dipped headlight (low beam), fog lamp and high beam, represented by -1, 0 and 1 respectively. The labeled key frame data are stored in the labeled database, whose contents are the original video data and their labeled key frames; a labeled key frame is represented as (F_{i,j}, k), where k takes the value -1, 0 or 1 (an illustrative entry is sketched below).
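For illustration only, one labeled entry (F_{i,j}, k) might be held as a simple tuple; the field layout below is an assumption, not something the patent prescribes:

```python
LOW_BEAM, FOG_LAMP, HIGH_BEAM = -1, 0, 1   # label values k

# Hypothetical labeled-database entries: (video index i, frame moment j, label k).
labeled_db = [
    (1, 3, HIGH_BEAM),   # key frame F_{1,3}, manually judged as high beam
    (1, 7, LOW_BEAM),    # key frame F_{1,7}, manually judged as low beam
]
```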
In the step (2), the CNN+LSE-based deep learning module is constructed as follows: a LeNet5 convolutional neural network structure is adopted, and the module is divided into eight layers, with the first six layers forming the feature extraction part and the last two layers the classifier part; the feature extraction layers adopt a classical convolutional neural network structure, and the classifier layers adopt a fully-connected structure. The module takes the data in the labeled database as training data; a combined CNN+LSE algorithm is adopted to train the deep learning module, the CNN method is used to train the feature extraction part, and the LSE method is used to train the classifier layers, so as to realize rapid learning of the module parameters and enhance the generalization ability of the module.
The specific method comprises the following steps:
a video key frame in the labeled database is input into the first layer of the CNN+LSE-based deep learning module; the second layer performs convolution operations on the output of the previous layer with different convolution kernels; the third layer performs pooling (down-sampling) on the output of the previous layer; the fourth and fifth layers repeat the operations of the second and third layers; the sixth layer sequentially expands the output features of the previous layer and arranges them in a line; the seventh layer is fully interconnected with the output features of the previous layer; the last layer is likewise fully interconnected with the previous layer. The output of the CNN+LSE-based deep learning module has three cases: low beam, fog lamp and high beam, denoted -1, 0 and 1 respectively (a sketch of this layout is given below).
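A minimal sketch of this eight-layer layout, assuming PyTorch, 32×32 grayscale inputs, and LeNet5-style kernel/channel sizes (the patent fixes only the LeNet5-like structure; these hyperparameters are illustrative):

```python
import torch.nn as nn

class HighBeamNet(nn.Module):
    """Layers 1-6: feature extraction (trained by back-propagation);
    layers 7-8: fully-connected classifier (trained by LSE)."""
    def __init__(self, num_classes=3):        # low beam / fog lamp / high beam
        super().__init__()
        self.features = nn.Sequential(         # layer 1 is the grayscale input
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),   # layer 2: convolution
            nn.MaxPool2d(2),                             # layer 3: pooling
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),  # layer 4: convolution
            nn.MaxPool2d(2),                             # layer 5: pooling
            nn.Flatten(),                      # layer 6: expand features in a line
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),  # layer 7: fully connected
            nn.Linear(120, num_classes),            # layer 8: output layer
        )

    def forward(self, x):                      # x: (batch, 1, 32, 32) grayscale
        return self.classifier(self.features(x))
```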
The deep learning module based on CNN + LSE is trained as follows:
Take any sample (F_{i,j}, k) from the labeled database; a graying operation is first applied to F_{i,j} to turn the key frame into a grayscale image, and the grayed key frame F'_{i,j} is then input into the module, i.e., the input data are (F'_{i,j}, k). The two parts of the deep learning module are trained with the CNN (convolutional neural network) and LSE (least squares estimation) methods respectively. The parameter training method of the feature extraction part is as follows:
(2-a1) initializing all connection weight parameters of the feature extraction part in the deep learning module;
(2-A2) calculating the actual output label O_k corresponding to the input key frame;
(2-A3) calculating the difference between the actual output label O_k and the corresponding ideal output label k;
(2-A4) weight learning: back-propagating and adjusting the connection weight parameter matrix of the feature extraction part in the deep learning module by minimizing the error;
(2-A5) repeating until all key frames of the video data have been traversed, at which point the parameter training is finished (a sketch of this loop is given below);
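Steps (2-A1)-(2-A5) amount to standard error back-propagation. Continuing the PyTorch assumption above, a hedged sketch (optimizer, loss and learning rate are illustrative choices):

```python
import torch

def train_feature_extractor(model, loader, epochs=10):
    """Forward pass gives the actual output O_k; the loss measures its
    difference from the ideal label k; back-propagation adjusts weights."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # (2-A1) weights initialized at model construction
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for gray_frames, labels in loader:   # labels assumed remapped from {-1,0,1} to {0,1,2}
            optimizer.zero_grad()
            out = model(gray_frames)         # (2-A2) actual output O_k
            loss = loss_fn(out, labels)      # (2-A3) difference from ideal label k
            loss.backward()                  # (2-A4) back-propagate the error
            optimizer.step()                 # adjust connection weights
    return model                             # (2-A5) done after traversing all key frames
```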
the parameter training method of the classifier part is as follows:
(2-B1) the connection weights and biases between the rasterized layer and the fully-connected layer are randomly generated, and the fully-connected layer output is written as the matrix

H = [ G(a_1·x_1 + b_1) ⋯ G(a_L·x_1 + b_L); ⋮ ; G(a_1·x_N + b_1) ⋯ G(a_L·x_N + b_L) ] (an N×L matrix),

where G(·) is an activation function, a_i are the connection weights, b_i the biases, L is the number of nodes of the fully-connected layer, N is the number of all key frames, x_j is the j-th key frame, i = 1, 2, …, L, j = 1, 2, …, N;
(2-B2) the network output results for the corresponding key frames are written as the output vector Y = [y_1 y_2 … y_N]^T, where y_j is the output label corresponding to the j-th key frame x_j;
(2-B3) the output weights between the fully-connected layer and the output layer are computed as β = P H^T Y, where P = (H^T H)^{-1} (a numerical sketch follows).
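A numpy sketch of this least-squares step, treating x_j as the rasterized layer-6 feature of the j-th key frame (the text calls x_j the key frame itself) and assuming G is tanh, which the patent does not fix:

```python
import numpy as np

def lse_train_classifier(X, y, L, seed=0):
    """X: (N, d) rasterized features; y: (N,) labels in {-1, 0, 1}.
    Returns (a, b, beta); a prediction is then G(X @ a + b) @ beta."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    a = rng.standard_normal((d, L))    # (2-B1) random connection weights a_i
    b = rng.standard_normal(L)         # (2-B1) random biases b_i
    H = np.tanh(X @ a + b)             # fully-connected layer output, N x L
    Y = y.reshape(-1, 1)               # (2-B2) output vector [y_1 ... y_N]^T
    P = np.linalg.inv(H.T @ H)         # (2-B3) P = (H^T H)^{-1}
    beta = P @ H.T @ Y                 # output weights beta = P H^T Y
    return a, b, beta
```

When H^T H is near-singular, `np.linalg.pinv(H) @ Y` is the numerically safer equivalent of the same least-squares solution.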
In the step (3), the data in the database of violation results to be detected are the video data judged as violations by the recognition result processing module; the violation results to be detected should be manually checked, after which the information confirmed to be correct is imported into the violation database and misjudged information is deleted.
In the step (3), the method for judging whether a high beam violation exists is as follows: for a key frame F_{i,j1} labeled as high beam and its next key frame F_{i,j2}, the time interval between them is ΔT = j2 - j1; if ΔT ≥ θ, the vehicle exhibits a high beam violation, where θ is the violation time threshold (a sketch of this rule is given below).
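A sketch of this rule, assuming each key frame is carried as a (time index j, predicted label) pair sorted by j (these names are illustrative):

```python
HIGH_BEAM = 1  # label value assigned by the deep learning module

def has_high_beam_violation(labeled_key_frames, theta):
    """labeled_key_frames: list of (j, label) pairs sorted by time index.
    Flags a violation when a high-beam key frame F_{i,j1} and the next
    key frame F_{i,j2} satisfy delta_T = j2 - j1 >= theta."""
    for (j1, label), (j2, _) in zip(labeled_key_frames, labeled_key_frames[1:]):
        if label == HIGH_BEAM and (j2 - j1) >= theta:
            return True
    return False
```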
The invention has the beneficial effects that:
the invention automatically analyzes and identifies the monitoring video, ensures the completeness of law enforcement evidence, is similar to manual judgment, has intelligence, is simple in equipment arrangement, and can fully utilize the original monitoring equipment. The method comprises the following specific steps:
(1) by mining the video data, the sufficiency of law enforcement evidence is greatly improved on the basis of ensuring the accuracy, and the loss of an evidence chain is prevented when the high beam violates the law;
(2) few devices are required at any single location, and the large number of monitoring devices already deployed can be reused directly, reducing cost and improving equipment utilization;
(3) the intelligent judgment of high beam violation is carried out by adopting a video deep learning-based mode, so that manual law enforcement is replaced, real automation is realized, and the efficiency is improved; meanwhile, after deep learning, the high beam violation identification effect is expected to reach or exceed the manual identification level, so that the real intellectualization of the identification system is realized;
(4) the deep learning module performs parameter learning on the system by adopting a CNN + LSE method, so that the parameter learning speed of the system is higher, the generalization capability of the module is stronger, and the robustness of the system is improved.
Drawings
FIG. 1 is a schematic diagram of the system architecture of the present invention;
fig. 2 is a diagram of a CNN + LSE-based deep learning module architecture.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and examples, which are provided for the purpose of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, an automobile high beam identification system based on video deep learning includes the following two parts:
the foreground part is used for realizing the identification and processing of the high beam violation behaviors and comprises a road monitoring equipment module, a video processing and identifying module, an identification result processing module and a database of the violation results to be detected, which are connected in sequence;
the background part is used for processing video and realizing video deep learning, and comprises a key frame extraction algorithm, a labeled database and a deep learning module, wherein the labeled database is constructed by calling the key frame extraction algorithm to extract key frames from the original video data, the data in the labeled database are used for training the deep learning module, and the trained deep learning module and the key frame extraction algorithm are called by the video processing and recognition module.
The key frame extraction algorithm is a key frame extraction algorithm based on clustering; the deep learning module is a CNN + LSE-based deep learning module.
The system corresponds to an automobile high beam identification method based on video deep learning, and the method specifically comprises the following steps:
(1) the road monitoring equipment module obtains the driving video data of the automobile and transmits the driving video data to the video processing and identifying module.
(2) The video processing and recognition module calls the key frame extraction algorithm to extract key frames from the original video data, then performs a graying operation; the grayed key frames are used as input to the CNN+LSE-based deep learning module trained on the labeled database, which yields an output label for each key frame, namely dipped headlight (low beam), fog lamp or high beam, and the labels are assigned to the corresponding key frame images.
The key frame extraction algorithm is as follows:
(2-1) taking the i-th segment V_i in the original video database, extracting n frames at equal time intervals, and using F_{i,j} to denote the frame at the j-th moment of the i-th video data; the frame sequence of the corresponding video data is represented as {F_{i,1}, F_{i,2}, ..., F_{i,n}}, where F_{i,1} is the first frame and F_{i,n} is the tail frame; the similarity between two adjacent frames is defined as the similarity of their histograms (i.e., the histogram feature difference), and a predefined threshold δ controls the clustering density; here i, j and n are integers;
(2-2) selecting the first frame F_{i,1} as the initial cluster center and calculating the similarity between frame F_{i,j} and the initial cluster center; if the value is less than δ, the distance between the frame and the cluster-center frame is judged too large, so F_{i,j} cannot be added to the cluster; if the similarity between F_{i,j} and all cluster centers is less than δ, F_{i,j} forms a new cluster and F_{i,j} becomes a new cluster center; otherwise, the frame is added to the cluster with which it has the maximum similarity, so that the distance between the frame and the center of that cluster is minimal;
(2-3) repeating (2-2) until the n frames extracted from the original video data V_i are each assigned to different clusters, after which the key frames can be selected: from each cluster, the frame nearest to the cluster center is extracted as the representative frame of that cluster, and the representative frames of all clusters form the key frames of the original video data V_i.
The construction method of the database with the labels comprises the following steps:
A large amount of vehicle driving video data under a big-data background is taken as the original video data; the clustering-based key frame extraction algorithm is called on the original video data to extract key frames; the light type of the vehicle in each key frame is judged manually, and a label is added to each key frame so that the original key frames become labeled data. The label types comprise three classes: dipped headlight (low beam), fog lamp and high beam, represented by -1, 0 and 1 respectively. The labeled key frame data are stored in the labeled database, whose contents are the original video data and their labeled key frames; a labeled key frame is represented as (F_{i,j}, k), where k takes the value -1, 0 or 1.
As shown in fig. 2, the CNN+LSE-based deep learning module is constructed as follows: a LeNet5 convolutional neural network structure is adopted, and the module is divided into eight layers, with the first six layers forming the feature extraction part and the last two layers the classifier part; the feature extraction layers adopt a classical convolutional neural network structure, and the classifier layers adopt a fully-connected structure. The module takes the data in the labeled database as training data; a combined CNN+LSE algorithm is adopted to train the deep learning module, the CNN method is used to train the feature extraction part, and the LSE method is used to train the classifier layers, so as to realize rapid learning of the module parameters and enhance the generalization ability of the module. The specific method is as follows: a video key frame in the labeled database is input into the first layer of the module; the second layer performs convolution operations on the output of the previous layer with different convolution kernels; the third layer performs pooling (down-sampling) on the output of the previous layer; the fourth and fifth layers repeat the operations of the second and third layers; the sixth layer sequentially expands the output features of the previous layer and arranges them in a line; the seventh layer is fully interconnected with the output features of the previous layer; the last layer is likewise fully interconnected with the previous layer. The output of the CNN+LSE-based deep learning module has three cases: low beam, fog lamp and high beam, denoted -1, 0 and 1 respectively.
The deep learning module based on CNN + LSE is trained as follows:
Take any sample (F_{i,j}, k) from the labeled database; a graying operation is first applied to F_{i,j} to turn the key frame into a grayscale image, and the grayed key frame F'_{i,j} is then input into the module, i.e., the input data are (F'_{i,j}, k). The two parts of the deep learning module are trained with the CNN (convolutional neural network) and LSE (least squares estimation) methods respectively. The parameter training method of the feature extraction part is as follows:
(2-a1) initializing all connection weight parameters of the feature extraction part in the deep learning module;
(2-A2) calculating the actual output label O_k corresponding to the input key frame;
(2-A3) calculating the difference between the actual output label O_k and the corresponding ideal output label k;
(2-A4) weight learning: back-propagating and adjusting the connection weight parameter matrix of the feature extraction part in the deep learning module by minimizing the error;
(2-A5) repeating until all key frames of the video data have been traversed, at which point the parameter training is finished;
the parameter training method of the classifier part is as follows:
(2-B1) the connection weights and biases between the rasterized layer and the fully-connected layer are randomly generated, and the fully-connected layer output is written as the matrix

H = [ G(a_1·x_1 + b_1) ⋯ G(a_L·x_1 + b_L); ⋮ ; G(a_1·x_N + b_1) ⋯ G(a_L·x_N + b_L) ] (an N×L matrix),

where G(·) is an activation function, a_i are the connection weights, b_i the biases, L is the number of nodes of the fully-connected layer, N is the number of all key frames, x_j is the j-th key frame, i = 1, 2, …, L, j = 1, 2, …, N;
(2-B2) the network output results for the corresponding key frames are written as the output vector Y = [y_1 y_2 … y_N]^T, where y_j is the output label corresponding to the j-th key frame x_j;
(2-B3) the output weights between the fully-connected layer and the output layer are computed as β = P H^T Y, where P = (H^T H)^{-1}.
(3) The original video data and the labeled key frames obtained in step (2) are used as the input of the recognition result processing module, which judges whether the vehicle commits a violation; a license plate recognition system is embedded in the recognition result processing module, so that when a target vehicle exhibits high beam violation behavior, its license plate is extracted, the vehicle information is acquired, and the suspected-violation video data is imported into the database of violation results to be detected.
The method for judging whether a high beam violation exists is as follows: for a key frame F_{i,j1} labeled as high beam and its next key frame F_{i,j2}, the time interval between them is ΔT = j2 - j1; if ΔT ≥ θ, the vehicle exhibits a high beam violation, where θ is the violation time threshold.
(4) The data in the database of violation results to be detected are the video data judged as violations by the recognition result processing module; the violation results to be detected should be manually checked, after which the information confirmed to be correct is imported into the violation database and misjudged information is deleted.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, the scope of the present invention is not limited thereto, and various modifications and variations which do not require inventive efforts and which are made by those skilled in the art are within the scope of the present invention.
Claims (2)
1. A method for recognizing a high beam of an automobile based on video deep learning is characterized by comprising the following specific steps:
(1) the road monitoring equipment module acquires driving video data of the automobile and transmits the driving video data to the video processing and identifying module;
(2) the video processing and recognition module calls a key frame extraction algorithm to extract key frames from the original video data, then performs a graying operation; the grayed key frames are used as input to a CNN+LSE-based deep learning module trained on a labeled database, which yields an output label for each key frame, namely dipped headlight (low beam), fog lamp or high beam, and the labels are assigned to the corresponding key frame images;
(3) the original video data and the labeled key frames obtained in step (2) are used as the input of a recognition result processing module, which judges whether the vehicle commits a violation; a license plate recognition system is embedded in the recognition result processing module, so that when a target vehicle exhibits high beam violation behavior, its license plate is extracted to obtain the vehicle information, and the suspected-violation video data is imported into a database of violation results to be detected;
in the step (3), the method for judging whether a high beam violation exists is as follows: for a key frame F_{i,j1} labeled as high beam and its next key frame F_{i,j2}, the time interval between them is ΔT = j2 - j1; if ΔT ≥ θ, the vehicle exhibits a high beam violation phenomenon, where θ is the violation time threshold;
in the step (2), the key frame extraction algorithm is as follows:
(2-1) taking the i-th segment V_i in the original video database, extracting n frames at equal time intervals, and using F_{i,j} to denote the frame at the j-th moment of the i-th video data; the frame sequence of the corresponding video data is represented as {F_{i,1}, F_{i,2}, ..., F_{i,n}}, where F_{i,1} is the first frame and F_{i,n} is the tail frame; the similarity between two adjacent frames is defined as the similarity of their histograms, namely the histogram feature difference, and a predefined threshold δ controls the clustering density; i, j and n are integers;
(2-2) selecting the first frame F_{i,1} as the initial cluster center and calculating the similarity between frame F_{i,j} and the initial cluster center; if the similarity is less than δ, the distance between frame F_{i,j} and the cluster-center frame is judged too large, so F_{i,j} cannot be added to the cluster; if the similarity between F_{i,j} and all cluster centers is less than δ, F_{i,j} forms a new cluster and F_{i,j} becomes a new cluster center; otherwise, frame F_{i,j} is added to the cluster with which it has the maximum similarity, so that the distance between frame F_{i,j} and the center of that cluster is minimal;
(2-3) repeating (2-2) until the n frames extracted from the original video data V_i are each assigned to different clusters, after which the key frames can be selected: from each cluster, the frame nearest to the cluster center is extracted as the representative frame of that cluster, and the representative frames of all clusters form the key frames of the original video data V_i;
in the step (2), the construction method of the database with the tags comprises the following steps:
a large amount of vehicle driving video data under a big-data background is taken as the original video data; the clustering-based key frame extraction algorithm is called on the original video data to extract key frames; the light type of the vehicle in each key frame is judged manually, and a label is added to each key frame so that the original key frames become labeled data; the label types comprise three classes: dipped headlight (low beam), fog lamp and high beam, represented by -1, 0 and 1 respectively; the labeled key frame data are stored in the labeled database, whose contents are the original video data and their labeled key frames, a labeled key frame being represented as (F_{i,j}, k), where k takes the value -1, 0 or 1;
in the step (2), the CNN+LSE-based deep learning module is constructed as follows: a LeNet5 convolutional neural network structure is adopted, and the module is divided into eight layers, with the first six layers forming the feature extraction part and the last two layers the classifier part; the feature extraction layers adopt a classical convolutional neural network structure, and the classifier layers adopt a fully-connected structure; the data in the labeled database are taken as training data, the deep learning module is trained with a combined CNN+LSE algorithm, the feature extraction part is trained with the CNN method, and the classifier layers are trained with the LSE method;
the deep learning module based on CNN + LSE is trained as follows:
take any sample (F_{i,j}, k) from the labeled database; a graying operation is first applied to F_{i,j} to turn the key frame into a grayscale image, and the grayed key frame F'_{i,j} is then input into the module, i.e., the input data are (F'_{i,j}, k); the two parts of the deep learning module are trained with the CNN (convolutional neural network) and LSE (least squares estimation) methods respectively; the parameter training method of the feature extraction part is as follows:
(2-a1) initializing all connection weight parameters of the feature extraction part in the deep learning module;
(2-A2) calculating the actual output label O_k corresponding to the input key frame;
(2-A3) calculating the difference between the actual output label O_k and the corresponding ideal output label k;
(2-A4) weight learning: back-propagating and adjusting the connection weight parameter matrix of the feature extraction part in the deep learning module by minimizing the error;
(2-A5) repeating until all key frames of the video data have been traversed, at which point the parameter training is finished;
the parameter training method of the classifier part is as follows:
(2-B1) the connection weights and biases between the rasterized layer and the fully-connected layer are randomly generated, and the fully-connected layer output is written as the matrix

H = [ G(a_1·x_1 + b_1) ⋯ G(a_L·x_1 + b_L); ⋮ ; G(a_1·x_N + b_1) ⋯ G(a_L·x_N + b_L) ] (an N×L matrix),

where G(·) is an activation function, a_i are the connection weights, b_i the biases, L is the number of nodes of the fully-connected layer, N is the number of all key frames, x_j is the j-th key frame, i = 1, 2, …, L, j = 1, 2, …, N;
(2-B2) the network output results for the corresponding key frames are written as the output vector Y = [y_1 y_2 … y_N]^T, where y_j is the output label corresponding to the j-th key frame x_j;
(2-B3) the output weights between the fully-connected layer and the output layer are computed as β = P H^T Y, where P = (H^T H)^{-1}.
2. The method as claimed in claim 1, characterized in that in step (3), the data in the database of violation results to be detected are the video data judged as violations by the identification result processing module; the violation results to be detected should be manually checked, after which the information confirmed to be correct is imported into the violation database and misjudged information is deleted.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710156201.4A | 2017-03-16 | 2017-03-16 | Automobile high beam identification system and method based on video deep learning
Publications (2)
Publication Number | Publication Date |
---|---|
CN106934378A CN106934378A (en) | 2017-07-07 |
CN106934378B (en) | 2020-04-24
Family
ID=59432614
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710156201.4A Active CN106934378B (en) | 2017-03-16 | 2017-03-16 | Automobile high beam identification system and method based on video deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106934378B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6729516B2 (en) * | 2017-07-27 | 2020-07-22 | トヨタ自動車株式会社 | Identification device |
CN108229447B (en) * | 2018-02-11 | 2021-06-11 | 陕西联森电子科技有限公司 | High beam light detection method based on video stream |
CN108921060A (en) * | 2018-06-20 | 2018-11-30 | 安徽金赛弗信息技术有限公司 | Motor vehicle based on deep learning does not use according to regulations clearance lamps intelligent identification Method |
CN108932853B (en) * | 2018-06-22 | 2021-03-30 | 安徽科力信息产业有限责任公司 | Method and device for recording illegal parking behaviors of multiple motor vehicles |
CN109191419B (en) * | 2018-06-25 | 2021-06-29 | 国网智能科技股份有限公司 | Real-time pressing plate detection and state recognition system and method based on machine learning |
CN108986476B (en) * | 2018-08-07 | 2019-12-06 | 安徽金赛弗信息技术有限公司 | method, system and storage medium for recognizing non-use of high beam by motor vehicle according to regulations |
CN109934106A (en) * | 2019-01-30 | 2019-06-25 | 长视科技股份有限公司 | A kind of user behavior analysis method based on video image deep learning |
CN110046547A (en) * | 2019-03-06 | 2019-07-23 | 深圳市麦谷科技有限公司 | Report method, system, computer equipment and storage medium violating the regulations |
CN111680638B (en) * | 2020-06-11 | 2020-12-29 | 深圳北斗应用技术研究院有限公司 | Passenger path identification method and passenger flow clearing method based on same |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103942751A (en) * | 2014-04-28 | 2014-07-23 | 中央民族大学 | Method for extracting video key frame |
CN105590102A (en) * | 2015-12-30 | 2016-05-18 | 中通服公众信息产业股份有限公司 | Front car face identification method based on deep learning |
CN106407931A (en) * | 2016-09-19 | 2017-02-15 | 杭州电子科技大学 | Novel deep convolution neural network moving vehicle detection method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9978013B2 (en) * | 2014-07-16 | 2018-05-22 | Deep Learning Analytics, LLC | Systems and methods for recognizing objects in radar imagery |
Also Published As
Publication number | Publication date |
---|---|
CN106934378A (en) | 2017-07-07 |
Legal Events
Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination
| GR01 | Patent grant