
CN110659593A - Urban haze visibility detection method based on improved DiracNet - Google Patents

Urban haze visibility detection method based on improved DiracNet

Info

Publication number
CN110659593A
CN110659593A
Authority
CN
China
Prior art keywords
diracnet
haze
improved
visibility
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910846903.4A
Other languages
Chinese (zh)
Inventor
任俊弛 (Ren Junchi)
成孝刚 (Cheng Xiaogang)
钱俊鹏 (Qian Junpeng)
耿鑫 (Geng Xin)
王宏伟 (Wang Hongwei)
李海波 (Li Haibo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201910846903.4A priority Critical patent/CN110659593A/en
Publication of CN110659593A publication Critical patent/CN110659593A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an urban haze visibility detection method based on an improved DiracNet. The method mainly comprises: collecting haze pictures of different visibility levels in different urban scenes to establish a picture library; establishing coordinates with a landmark building in each picture as a marker, so as to measure the visibility of the haze picture; improving the DiracNet network to enhance the extraction of detailed information from haze pictures; and inputting the haze pictures into the improved DiracNet network for training and saving the model, which can then be used for test validation and practical detection applications. The method improves detection precision and, being based on deep learning theory and the improved DiracNet model, handles large data sets well while maintaining high accuracy.

Description

Urban haze visibility detection method based on improved DiracNet
Technical Field
The invention relates to a method for detecting urban haze visibility, in particular to a method for detecting urban haze visibility based on an improved DiracNet network, and belongs to deep learning application in the technical field of computers.
Background
Visibility strongly affects people's daily life and traffic safety. Because haze weather seriously degrades visibility, improving visibility detection technology for haze weather has become crucial.
Current visibility detection suffers from low accuracy and low speed. Nathan Graves and Shawn Newsam (2011, [5]) applied pattern recognition to atmospheric visibility detection and proposed an estimation algorithm combining multi-region features with unlabeled observations. Sami Varjo and Jari Hannuksela (2015, [6]) proposed a feature-vector-based visibility detection method, combining high-dynamic-range imaging to improve image quality, normalizing the brightness, and mapping the scene in the picture. In China, Li Rong et al. (2014, [7]) proposed an image haze grade evaluation method and increased the visible distance of human eyes in haze weather through an image defogging algorithm; Cheng Xiaogang of Nanjing University of Posts and Telecommunications (2018, [8]) proposed a haze visibility detection method based on brightness curves and data driving, in which the extinction coefficient is obtained from differences in the brightness curve and a piecewise stationary function is then constructed for calibration.
However, the above research results all have shortcomings to some degree: the detection precision does not meet ideal requirements, and the capacity to process large data sets is limited.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an urban haze visibility detection method based on an improved DiracNet, which can handle detection over large data sets and improve precision.
In order to achieve the purpose, the technical solution of the invention is as follows: the urban haze visibility detection method based on the improved DiracNet is characterized by comprising the following steps:
S1, training stage:
establishing a database: collecting haze pictures of different scenes in a city to build a haze database, and establishing a clear-picture library from clear-weather pictures of the same scenes;
visibility extraction: taking a landmark building in the haze picture as a marker, establishing corresponding coordinates, and extracting the visibility value of the haze picture;
constructing the improved DiracNet network: parameterizing part of the ResNet convolutional neural network, adding an activation function, and replacing the fully connected output layer to obtain the improved DiracNet convolutional neural network;
improved DiracNet network training: inputting the picture data set in the constructed haze database into the improved DiracNet network for training;
S2, network testing stage: establishing a test set from haze pictures of any scene, and detecting the visibility of the input haze pictures with the trained DiracNet network model.
In the method for detecting urban haze visibility based on the improved DiracNet, in the database establishment of step S1, pictures are extracted at a rate of one per minute from video data of haze weather in different scenes of any city, and the collected pictures form the picture data set.
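The one-picture-per-minute sampling described above can be sketched as follows; the function name and the example frame rate are illustrative, not taken from the patent:

```python
def sample_frame_indices(total_frames, fps, period_s=60.0):
    """Frame indices kept when sampling one picture per `period_s`
    seconds of video (one per minute, as in the database step)."""
    step = max(1, round(fps * period_s))
    return list(range(0, total_frames, step))

# A 10-minute clip at 25 fps yields ten frames, one per minute:
indices = sample_frame_indices(total_frames=15000, fps=25.0)
```

In practice the indices would be used to pick frames out of a video decoder's stream; only the sampled frames enter the picture data set.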
In the method for detecting urban haze visibility based on the improved DiracNet, in the construction of the improved DiracNet in step S1, a Dirac weight parameterization method is used to apply Dirac parameterization to two residual modules of the ResNet convolutional neural network, a CReLU activation function is added after each Dirac convolution module, and finally a convolution layer and global average pooling replace the fully connected output layer.
According to the urban haze visibility detection method based on the improved DiracNet, in the improved DiracNet network training of step S1, the improved DiracNet network is built on TensorFlow. During network training, a picture is input into the DiracNet network, forward propagation is performed, a predicted value of the current visibility is output and compared with the true visibility value, the loss of the current iteration is calculated, backward propagation is performed, and the network parameters are updated. Training terminates after a preset number of iterations, and the best-trained model is saved. The loss function is the mean square error function:
$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2$$
where $y_i$ denotes the true visibility value and $\hat{y}_i$ the predicted value.
In the network testing stage of step S2, the constructed test set is input into the trained and saved network model to obtain visibility predictions, which are compared with the true visibility values to compute the average percentage error:
$$\mathrm{APE}=\frac{1}{n}\sum_{i=1}^{n}\frac{\left|y_i-\hat{y}_i\right|}{y_i}\times 100\%$$
where $y_i$ denotes the true value and $\hat{y}_i$ the predicted value.
In the method for detecting urban haze visibility based on the improved DiracNet, in the step S2 of network testing, the average percentage error calculated by the improved DiracNet network is compared with the average percentage error calculated by the ResNet convolutional neural network for verification.
Compared with the prior art, the invention has prominent substantive features and remarkable progress: by improving the DiracNet network, the detection method increases detection precision, and, being based on deep learning theory and the improved DiracNet model, it handles large data sets well while maintaining high accuracy.
Drawings
Fig. 1 is a schematic diagram of the improved DiracNet network of the present invention.
Fig. 2 is a schematic diagram of the algorithm flow for haze visibility detection according to the present invention.
Fig. 3 compares the predicted values obtained by applying the present invention to a test set, and those obtained by ResNet50, with the actual values.
Detailed Description
To address the shortcomings of the prior art in urban haze visibility detection, particularly in video-based detection with the ResNet convolutional neural network, the invention provides an urban haze visibility detection method based on an improved DiracNet, so as to improve both the detection precision and the processing capacity for haze images.
Relying on the network processing capability of a computer system, and as shown in Fig. 2, the detection method mainly comprises two stages, training and testing. The specific operation steps are detailed below.
First, the training stage and its preparation comprise the following. (1) Database establishment: collecting haze pictures of different scenes in any city to build a haze database, and establishing a clear-picture library from clear-weather pictures of the same scenes. (2) Visibility extraction: taking a landmark building in the haze picture as the marker, the line through the building perpendicular to the ground is taken as the vertical axis, the building's center point as the origin, and the line perpendicular to the vertical axis as the horizontal axis; with these coordinates established, the visibility value of the haze picture is computed from the relative distance between each pixel and the landmark building's position, together with the real distance between the landmark building and the camera. (3) Construction of the improved DiracNet. To address some limitations of the ResNet convolutional neural network, the two residual modules of the original ResNet are converted into Dirac1_conv2d and Dirac2_conv2d using the Dirac weight parameterization method; a CReLU activation function is added after each Dirac convolution module to accelerate network convergence; and finally, to keep the number of model parameters manageable and reduce the risk of network overfitting, a convolution layer and Global Average Pooling (GAP) replace the original fully connected output layer, as shown in Fig. 1.
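As a rough illustration of the three building blocks named in step (3) — Dirac weight parameterization, CReLU, and global average pooling — the following NumPy sketch shows each operation in its simplest form, using a 1x1 (channel-mixing) convolution; the actual Dirac1_conv2d/Dirac2_conv2d modules operate on full convolution kernels, and all names here are illustrative, not from the patent:

```python
import numpy as np

def dirac_param(W, a, b):
    """Dirac weight parameterization: W_hat = a*I + b*W_norm, folding the
    residual identity path into the convolution weights themselves.
    Shown for a 1x1 convolution, i.e. a channel-mixing matrix."""
    I = np.eye(W.shape[0])
    W_norm = W / (np.linalg.norm(W) + 1e-12)
    return a * I + b * W_norm

def crelu(x):
    """CReLU: concatenate ReLU(x) and ReLU(-x) along the channel axis,
    so no pre-activation information is discarded."""
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=-1)

def global_average_pool(feature_map):
    """GAP head replacing the fully connected layer: for an (H, W, C)
    feature map, average over H and W to get one scalar per channel."""
    return feature_map.mean(axis=(0, 1))
```

With `b = 0` the Dirac-parameterized layer reduces to the identity (scaled by `a`), which is what lets very deep stacks of such layers train without explicit skip connections.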
(4) DiracNet network training. The improved DiracNet network structure is built on TensorFlow. During training, the picture data set from the constructed haze database is input into the improved DiracNet network; forward propagation outputs the current visibility prediction, which is compared with the true visibility value; the loss of the current iteration is calculated; backward propagation is then performed and the network parameters are updated. The number of iterations is set to 50, after which training terminates and the best-trained model is saved. The loss function is the mean square error (MSE):
$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2$$
where $y_i$ denotes the true visibility value and $\hat{y}_i$ the predicted value.
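The training loop of step (4) — forward pass, MSE loss, backward pass, parameter update, 50 iterations — can be sketched with a toy linear model standing in for the network; the data and parameters below are illustrative, not from the patent:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error, the loss function given above."""
    return float(np.mean((y_true - y_pred) ** 2))

# Toy stand-in for the network: a linear regressor trained for 50
# iterations of forward pass, MSE loss, backward pass, update.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # stand-in input features
y = X @ np.array([2.0, -1.0, 0.5])     # stand-in true visibility values
w = np.zeros(3)                        # "network" parameters
for _ in range(50):                    # the description fixes 50 iterations
    y_pred = X @ w                     # forward propagation
    loss = mse(y, y_pred)              # loss of the current iteration
    grad = -2.0 * X.T @ (y - y_pred) / len(y)   # backward propagation
    w -= 0.1 * grad                    # update network parameters
```

In the real method the forward and backward passes run through the improved DiracNet on TensorFlow, with the optimizer computing the gradients; the control flow per iteration is the same.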
In the testing stage, a test set is established from the haze pictures of a given scene, and the input haze pictures are detected with the trained detection system.
In the network testing stage, the constructed test set is input into the saved network model to obtain visibility predictions, which are then compared with the true visibility values to calculate the Average Percentage Error (APE). The APE is computed as follows:
$$\mathrm{APE}=\frac{1}{n}\sum_{i=1}^{n}\frac{\left|y_i-\hat{y}_i\right|}{y_i}\times 100\%$$
where $y_i$ denotes the true value and $\hat{y}_i$ the predicted value.
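A minimal sketch of the APE computation above (the function name and the sample visibility values are illustrative):

```python
def average_percentage_error(y_true, y_pred):
    """APE: mean of |y_i - yhat_i| / y_i over the test set, in percent."""
    pairs = list(zip(y_true, y_pred))
    return 100.0 * sum(abs(t - p) / t for t, p in pairs) / len(pairs)

# True visibilities (e.g. in metres) vs. predictions:
ape = average_percentage_error([1000.0, 2000.0], [900.0, 2200.0])  # -> 10.0
```

Note the per-sample error is relative to the true value, so an APE of 10 means the prediction is off by 10% on average.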
As shown in Fig. 3, taking the APE computed in the experiment as the evaluation criterion and comparing it with the APE obtained by training the original ResNet network, the improvement is significant.
Although preferred embodiments of the present invention have been described in detail, the invention is not limited to these specific embodiments; modifications and equivalents within the scope of the claims may be made by those skilled in the art and fall within the scope of the present invention.

Claims (6)

1. The urban haze visibility detection method based on the improved DiracNet is characterized by comprising the following steps:
S1, training stage:
establishing a database: collecting haze pictures of different scenes in a city to build a haze database, and establishing a clear-picture library from clear-weather pictures of the same scenes;
visibility extraction: taking a landmark building in the haze picture as a marker, establishing corresponding coordinates, and extracting the visibility value of the haze picture;
constructing the improved DiracNet network: parameterizing part of the ResNet convolutional neural network, adding an activation function, and replacing the fully connected output layer to obtain the improved DiracNet convolutional neural network;
improved DiracNet network training: inputting the picture data set in the constructed haze database into the improved DiracNet network for training;
S2, network testing stage: establishing a test set from haze pictures of any scene, and detecting the visibility of the input haze pictures with the trained DiracNet network model.
2. The improved DiracNet-based urban haze visibility detection method as claimed in claim 1, wherein: in the database establishment of step S1, pictures are extracted at a rate of one per minute from video data of haze weather in different scenes of any city, and the collected pictures form the picture data set.
3. The improved DiracNet-based urban haze visibility detection method as claimed in claim 1, wherein: in the construction of the improved DiracNet in step S1, the Dirac weight parameterization method is used to apply Dirac parameterization to two residual modules of the ResNet convolutional neural network, a CReLU activation function is added after each Dirac convolution module, and finally a convolution layer and global average pooling replace the fully connected output layer.
4. The improved DiracNet-based urban haze visibility detection method as claimed in claim 1, wherein: in the improved DiracNet network training of step S1, the improved DiracNet network is built on TensorFlow; during network training, a picture is input into the DiracNet network, forward propagation is performed, a predicted value of the current visibility is output and compared with the true visibility value, the loss of the current iteration is calculated, backward propagation is performed, and the network parameters are updated; training terminates after a preset number of iterations and the best-trained model is saved, wherein the loss function is the mean square error function:
$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2$$
where $y_i$ denotes the true value and $\hat{y}_i$ the predicted value.
5. The improved DiracNet-based urban haze visibility detection method as claimed in claim 1, wherein: in the network testing stage of step S2, the constructed test set is input into the trained network model to obtain a visibility prediction, which is compared with the true visibility value to calculate the average percentage error:
$$\mathrm{APE}=\frac{1}{n}\sum_{i=1}^{n}\frac{\left|y_i-\hat{y}_i\right|}{y_i}\times 100\%$$
where $y_i$ denotes the true value and $\hat{y}_i$ the predicted value.
6. The improved DiracNet-based urban haze visibility detection method as claimed in claim 1, wherein: in the network testing of step S2, the average percentage error calculated by the improved DiracNet network is compared with that calculated by the ResNet convolutional neural network for verification.
CN201910846903.4A 2019-09-09 2019-09-09 Urban haze visibility detection method based on improved DiracNet Pending CN110659593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910846903.4A CN110659593A (en) 2019-09-09 2019-09-09 Urban haze visibility detection method based on improved DiracNet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910846903.4A CN110659593A (en) 2019-09-09 2019-09-09 Urban haze visibility detection method based on improved DiracNet

Publications (1)

Publication Number Publication Date
CN110659593A true CN110659593A (en) 2020-01-07

Family

ID=69036841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910846903.4A Pending CN110659593A (en) 2019-09-09 2019-09-09 Urban haze visibility detection method based on improved DiracNet

Country Status (1)

Country Link
CN (1) CN110659593A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274911A (en) * 2020-01-17 2020-06-12 河海大学 Dense fog monitoring method based on wireless microwave attenuation characteristic transfer learning
CN114563203A (en) * 2022-03-11 2022-05-31 中国煤炭科工集团太原研究院有限公司 Method for simulating underground low visibility environment
CN114880958A (en) * 2022-07-12 2022-08-09 南京气象科技创新研究院 Visibility forecasting model based on multi-meteorological-factor intelligent deep learning

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274383A (en) * 2017-05-17 2017-10-20 南京邮电大学 A kind of haze visibility detecting method based on deep learning

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274383A (en) * 2017-05-17 2017-10-20 南京邮电大学 A kind of haze visibility detecting method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wenwen Yang, et al.: "A Shallow ResNet with Layer Enhancement for Image-Based Particle Pollution Estimation", Chinese Conference on Pattern Recognition and Computer Vision (PRCV) *
机器之心V: "对比ResNet: 超深层网络DiracNet的PyTorch实现" ("Compared with ResNet: a PyTorch implementation of the ultra-deep DiracNet"), HTTPS://BLOG.CSDN.NET/UWR44UOUQCNSUQB60ZK2/ARTICLE/DETAILS/78536813 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274911A (en) * 2020-01-17 2020-06-12 河海大学 Dense fog monitoring method based on wireless microwave attenuation characteristic transfer learning
CN111274911B (en) * 2020-01-17 2020-12-01 河海大学 Dense fog monitoring method based on wireless microwave attenuation characteristic transfer learning
CN114563203A (en) * 2022-03-11 2022-05-31 中国煤炭科工集团太原研究院有限公司 Method for simulating underground low visibility environment
CN114563203B (en) * 2022-03-11 2023-08-15 中国煤炭科工集团太原研究院有限公司 Method for simulating underground low-visibility environment
CN114880958A (en) * 2022-07-12 2022-08-09 南京气象科技创新研究院 Visibility forecasting model based on multi-meteorological-factor intelligent deep learning

Similar Documents

Publication Publication Date Title
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN108171141B (en) Attention model-based cascaded multi-mode fusion video target tracking method
WO2018214195A1 (en) Remote sensing imaging bridge detection method based on convolutional neural network
CN110276264B (en) Crowd density estimation method based on foreground segmentation graph
CN113673444B (en) Intersection multi-view target detection method and system based on angular point pooling
CN107133969B (en) A kind of mobile platform moving target detecting method based on background back projection
CN111462210B (en) Monocular line feature map construction method based on epipolar constraint
CN110189294B (en) RGB-D image significance detection method based on depth reliability analysis
CN111027415B (en) Vehicle detection method based on polarization image
CN110659593A (en) Urban haze visibility detection method based on improved DiracNet
CN111882620A (en) Road drivable area segmentation method based on multi-scale information
CN111681275B (en) Double-feature-fused semi-global stereo matching method
CN103729620B (en) A kind of multi-view pedestrian detection method based on multi-view Bayesian network
CN106408596A (en) Edge-based local stereo matching method
CN105809716A (en) Superpixel and three-dimensional self-organizing background subtraction algorithm-combined foreground extraction method
CN110309765B (en) High-efficiency detection method for video moving target
CN111242026A (en) Remote sensing image target detection method based on spatial hierarchy perception module and metric learning
CN107944354A (en) A kind of vehicle checking method based on deep learning
CN112164010A (en) Multi-scale fusion convolution neural network image defogging method
CN114998251A (en) Air multi-vision platform ground anomaly detection method based on federal learning
CN111414938B (en) Target detection method for bubbles in plate heat exchanger
CN113793472B (en) Image type fire detector pose estimation method based on feature depth aggregation network
CN111047636A (en) Obstacle avoidance system and method based on active infrared binocular vision
CN112801021B (en) Method and system for detecting lane line based on multi-level semantic information
CN112069997B (en) Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200107