CN110738081B - Abnormal road condition detection method and device - Google Patents
- Publication number
- CN110738081B (application CN201810799435.5A)
- Authority
- CN
- China
- Prior art keywords
- area
- road
- determining
- image
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
Abstract
The application provides a method and a device for detecting abnormal road conditions. The method comprises: determining, from a road image acquired by a vehicle-mounted camera, a non-road surface area and a safe driving area in which the vehicle carrying the camera can drive safely, wherein a first intersection area exists between the safe driving area and the non-road surface area; determining an obstacle area from the road image; determining a second intersection area between the first intersection area and the obstacle area, and determining the area of the remaining region of the first intersection area outside the second intersection area; and determining whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area. By combining the determined non-road surface area and the known obstacle area with the safe driving area required for safe driving, abnormal road conditions are judged, potential dangers are warned of accurately, and the safety of automatic driving is improved.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting abnormal road conditions.
Background
Vehicles are now ubiquitous in daily life, and automatic driving systems that make autonomous driving decisions are widely deployed on them. These systems detect various road targets (such as vehicles, pedestrians, lane lines, road signs, and traffic signals) through image or video processing technology, and either control the vehicle directly or alert the driver, so as to ensure driving safety.
However, under complex road conditions, environmental interference (for example, an obstacle ahead obscured by rain or fog, or an obstacle with complex texture) prevents the automatic driving system from detecting the obstacle ahead accurately, leading to missed detections or false alarms. As a result, potential dangers cannot be warned of accurately, which reduces the safety of automatic driving.
Disclosure of Invention
In view of this, the present application provides a method and a device for detecting abnormal road conditions, to address the inability of the related art to accurately warn of potential dangers.
According to a first aspect of an embodiment of the present application, a method for detecting an abnormal road condition is provided, where the method includes:
determining a non-road surface area and a safe driving area for safe driving of a vehicle where a vehicle-mounted camera is located from a road image acquired by the vehicle-mounted camera, wherein a first intersection area exists between the safe driving area and the non-road surface area;
determining an obstacle area from the non-road surface area;
determining a second intersection area between the first intersection area and the obstacle area, and determining an area of the remaining region of the first intersection area outside the second intersection area;
and determining whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area.
According to a second aspect of the embodiments of the present application, there is provided an abnormal road condition detecting device, including:
a region determination module for determining, from a road image acquired by a vehicle-mounted camera, a non-road surface area and a safe driving area for safe driving of the vehicle in which the vehicle-mounted camera is located, wherein a first intersection area exists between the safe driving area and the non-road surface area;
an obstacle determination module for determining an obstacle area from the non-road surface area;
an area determination module configured to determine a second intersection area between the first intersection area and the obstacle area, and determine an area of the remaining region of the first intersection area outside the second intersection area;
an anomaly determination module configured to determine whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area.
According to a third aspect of embodiments herein, there is provided an electronic device, the device comprising a readable storage medium and a processor;
Wherein the readable storage medium is configured to store machine executable instructions;
the processor is configured to read the machine-executable instructions on the readable storage medium, and execute the instructions to implement the steps of the abnormal road condition detection method.
According to a fourth aspect of embodiments of the present application, there is provided a chip comprising a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor is configured to read the machine-executable instructions on the readable storage medium, and execute the instructions to implement the steps of the abnormal road condition detection method.
By applying the embodiments of the application, a non-road surface area and a safe driving area for safe driving of the vehicle carrying the vehicle-mounted camera are determined from a road image acquired by that camera, with a first intersection area existing between the safe driving area and the non-road surface area. An obstacle area is then determined from the non-road surface area, a second intersection area between the first intersection area and the obstacle area is determined, the area of the remaining region of the first intersection area outside the second intersection area is determined, and whether the road surface in the road image is abnormal is determined according to the area of the remaining region and the area of the safe driving area. By combining the determined non-road surface area and the known obstacle area with the safe driving area required for safe driving, abnormal road conditions are judged, potential dangers are warned of accurately, and the safety of automatic driving is improved.
Drawings
Fig. 1A is a flowchart illustrating an embodiment of a method for detecting abnormal road conditions according to an exemplary embodiment of the present application;
FIG. 1B is a diagram illustrating a network model architecture of a first neural network according to the embodiment shown in FIG. 1A;
FIG. 1C is a schematic view of a road image marked with an off-road area and a safe driving area according to the embodiment shown in FIG. 1A;
FIG. 1D is a diagram illustrating a network model architecture of a second neural network according to the embodiment of FIG. 1A;
FIG. 1E is a schematic diagram of an obstacle region in a road image according to the embodiment shown in FIG. 1A;
fig. 2 is a flowchart illustrating another abnormal road condition detection method according to an exemplary embodiment of the present application;
FIG. 3 is a diagram illustrating a hardware configuration of an electronic device according to an exemplary embodiment of the present application;
fig. 4 is a structural diagram of an embodiment of an abnormal road condition detection device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
When a driver finds it hard to accurately make out the road conditions ahead, he or she usually slows down and becomes more alert; an automatic driving system can make similar decisions when the acquired images appear abnormal, and may even hand driving control back to the driver. However, image perception in an automatic driving system is strongly affected by environmental interference, so the perception results carry a certain missed-detection rate and false-alarm rate. In particular, in severe weather such as rain or fog, the system cannot directly perceive an obstacle ahead that is obscured by the rain or fog, so potential dangers cannot be warned of accurately and the safety of automatic driving is reduced.
Accordingly, in this application, after the non-road surface area and the safe driving area required by the vehicle are determined from the image, the known obstacle area (the detectable obstacles) is determined from the non-road surface area. The first intersection region between the safe driving area and the non-road surface area is then determined, along with the remaining region outside the known obstacle area (the region where no obstacle can be detected), and the ratio of the area of that remaining region to the area of the whole safe driving area is computed. If the ratio exceeds a set threshold, the unknown-obstacle region occupies a relatively large share of the non-road surface area contained in the safe driving area, and the road surface in the image is determined to be abnormal.
In other words, after the first intersection region (the region where the vehicle cannot travel) between the non-road surface area and the safe driving area is determined, the abnormal road condition is judged from the area of the remaining region of the first intersection region outside the known obstacle area. The remaining region is the unknown-obstacle region, i.e., the region where obstacles cannot be detected directly; obstacles that are hard to detect may be present there, such as obstacles obscured by rain or fog or obstacles with complex textures. If this unknown-obstacle region takes too large a share of the safe driving area, the road surface in the current road image is abnormal. Potential dangers can thus be warned of accurately, improving the safety of automatic driving.
The technical solution of the present application is described in detail by specific examples below.
Fig. 1A is a flowchart of an embodiment of an abnormal road condition detection method according to an exemplary embodiment of the present application. The method may be applied to an electronic device on a vehicle that runs an automatic driving system, and determines abnormal road conditions by analyzing road images collected by a vehicle-mounted camera installed on the vehicle. As shown in fig. 1A, the method includes the following steps:
step 101: the method comprises the steps of determining a non-road surface area and a safe driving area where a vehicle where an on-vehicle camera is located safely drives from a road image acquired by the on-vehicle camera, wherein a first intersection area exists between the safe driving area and the non-road surface area.
In an embodiment, the road image may be input into a first neural network trained in advance, and the non-road surface area may be determined from the road image through a convolution network, a deconvolution network and an object detection network included in the first neural network.
The road image is divided into a road surface area and a non-road surface area. The road surface area is the drivable region and may include lane lines, the road surface, road edges, and the like; the non-road surface area is the region where the vehicle cannot drive and may include obstacles, sky, and the like. Since the input of the first neural network is an image and its output needs to be a semantically segmented image, partitioned into road surface and non-road surface, the first neural network may adopt a fully convolutional network (FCN). Compared with traditional detection approaches, a deep neural network produces more stable and accurate results, especially under the complex road conditions of severe weather.
Those skilled in the art will understand that the application does not limit the architecture of the fully convolutional network; for example, it may be based on an encoder-decoder architecture.
As shown in fig. 1B, the first neural network includes a convolution network, a deconvolution network, and a target detection network. The convolution network performs convolution and pooling operations on the input road image to gradually reduce the spatial dimension of the input data; the deconvolution network performs deconvolution operations on the convolution network's output to gradually recover target details and the corresponding spatial dimension; and the target detection network performs obstacle frame regression on the convolution network's output, helping the deconvolution network recover target details better and thereby achieving a good semantic segmentation effect.
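For illustration, a minimal PyTorch sketch of such a three-branch network follows. The layer sizes, channel counts, and the num_classes / num_boxes parameters are assumptions made for this sketch, not the patent's exact architecture.

```python
import torch
import torch.nn as nn

class RoadSegNet(nn.Module):
    """Sketch of the first neural network: a shared convolution network
    (encoder), a deconvolution network (decoder) for road / non-road
    segmentation, and an auxiliary target detection head for obstacle
    frames. All layer sizes are illustrative assumptions."""

    def __init__(self, num_classes=2, num_boxes=8):
        super().__init__()
        # Convolution network: convolution + pooling gradually shrink the spatial dimension
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Deconvolution network: recovers target details and the spatial dimension
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )
        # Target detection network: regresses obstacle frames from the shared features
        self.det_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_boxes * 4),  # (x, y, w, h) for each frame
        )

    def forward(self, x):
        feat = self.encoder(x)
        seg_logits = self.decoder(feat)  # per-pixel road / non-road scores
        boxes = self.det_head(feat)      # auxiliary obstacle-frame predictions
        return seg_logits, boxes
```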
In one embodiment, the braking distance may be determined using the current running speed of the vehicle, and the turning radius may be determined using the current front wheel steering angle of the vehicle and the wheel base of the front and rear wheels, and then the safe running area of the vehicle may be determined according to the braking distance and the turning radius.
The safe driving area is the driving region the vehicle needs in order to drive safely. Its determination is derived in detail below through the transformation between the image coordinate system and the road surface coordinate system.
(1) Assuming the vehicle moves along a circular arc, the turning radius of the vehicle is obtained from the current front wheel steering angle θ and the wheelbase b of the front and rear wheels:

R = b / sin θ

(2) From the current running speed v of the vehicle, the braking distance of the vehicle is:

S = v² / (2 × g × μ)

where g = 9.8 m/s² and μ is the friction coefficient (0.8 in fine weather, 0.2 in rainy weather). Those skilled in the art will appreciate that the current weather conditions can be determined by related techniques, which are not detailed here.
The braking distance and turning radius obtained above describe the vehicle's trajectory in the road surface coordinate system. Two points (one on each side of the trajectory) can be taken every L meters within the braking distance S; for example, if S is 100 meters and L is 10 meters, 20 points are taken. These points are then mapped to the image coordinate system through the transformation from the road surface coordinate system to the image coordinate system.
(3) Let M(x_g, y_g) be a point in the road surface coordinate system; it maps to a point P(u, v) in the image coordinate system through the calibrated camera projection (the explicit formula is omitted here).

This transformation requires the mounting height and angles of the vehicle-mounted camera to be calibrated manually in advance. (u_0, v_0) and (f_x, f_y) denote the optical center and the equivalent focal length of the vehicle-mounted camera, which are its intrinsic parameters; the calibrated extrinsic parameters comprise the camera height h, the pitch angle α (with s1 = sin α, c1 = cos α), the yaw angle β (with s2 = sin β, c2 = cos β), and the rotation (roll) angle γ (with s3 = sin γ, c3 = cos γ).
(4) In the road image, the safe travel area is obtained from the points mapped to the image coordinate system.
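A minimal Python sketch of steps (1) through (4) follows. Since the patent's explicit projection formula is not reproduced above, the flat-ground, zero-yaw, zero-roll projection used here is an assumed simplification of the full calibrated transform, and half_width is a hypothetical vehicle half-width.

```python
import math

def safe_driving_region_points(v, theta, b, mu, h, alpha,
                               fx, fy, u0, v0, L=10.0, half_width=1.0):
    """Sample the predicted trajectory within the braking distance and
    project the samples into the image. Assumes a flat road, circular
    motion, and zero yaw/roll (an illustrative simplification)."""
    R = b / math.sin(theta) if abs(math.sin(theta)) > 1e-9 else math.inf
    S = v ** 2 / (2 * 9.8 * mu)                  # braking distance, step (2)
    c1, s1 = math.cos(alpha), math.sin(alpha)    # pitch terms

    def project(x_g, y_g):
        # Flat-ground pinhole projection (assumed simplified form)
        z = x_g * c1 + h * s1                    # depth along the optical axis
        return (u0 + fx * y_g / z,               # image column u
                v0 + fy * (h * c1 - x_g * s1) / z)  # image row v

    points = []
    for i in range(1, int(S // L) + 1):
        s = i * L
        # Ground point s meters along the circular trajectory, step (1)
        x = s if math.isinf(R) else R * math.sin(s / R)
        y = 0.0 if math.isinf(R) else R * (1.0 - math.cos(s / R))
        # Two points per L meters, one on each side of the track, step (3)
        points.append(project(x, y - half_width))
        points.append(project(x, y + half_width))
    return points                                # outline of the safe area, step (4)
```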
In one embodiment, since the non-road surface area is a region where the vehicle cannot travel, the first intersection region between the safe driving area and the non-road surface area is the part of the image where the vehicle cannot drive. The safe driving area covers only the region needed for safe driving and contains no sky, while the non-road surface area includes obstacles, sky, and the like; so in the normal case the first intersection region contains the detectable obstacles. Of course, besides detectable moving obstacles, it may also contain unknown obstacles that are hard to detect.
It should be noted that if there is no intersection between the safe driving area and the non-road surface area, i.e., no first intersection region, then no abnormal road condition detection needs to be performed on this road image.
In an exemplary scenario, as shown in fig. 1C, the region framed by the white line is the safe driving area required by the vehicle. Above the white line is the non-road surface area, i.e., the region where the vehicle cannot drive (obstacles, sky, and so on); below it is the road surface area (marked by the dotted line), i.e., the drivable region (lane lines, road surface, road edges). The intersection of the non-road surface area and the safe driving area is the first intersection region.
Step 102: an obstacle area is determined from the road image.
In an embodiment, the road image may be input into a second neural network, from which the obstacle regions are determined.
In an embodiment, a sub-image corresponding to the non-road surface area may also be cut from the road image, the sub-image is input to the third neural network, and the third neural network determines the obstacle area from the sub-image.
The obstacles determined from the road image are moving obstacles, such as motor vehicles, non-motor vehicles, and pedestrians. The second and third neural networks may employ convolutional neural networks, for example region-proposal-based detectors of the Faster R-CNN family.
The second neural network shown in fig. 1D includes a convolution network, an RPN (Region Proposal Network), and a position-and-category regression network. The convolution network performs convolution and pooling operations on the input road image to obtain a feature map; the RPN generates candidate frames from that feature map; and the regression network determines the position and category of each obstacle using the feature map output by the convolution network and the candidate frames output by the RPN.
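As a concrete illustration of this step, a hedged sketch using an off-the-shelf detector of the same family (torchvision's Faster R-CNN, standing in here for the patent's trained second neural network) might look like this:

```python
import torch
import torchvision

# Pre-trained Faster R-CNN as a stand-in for the trained second neural network
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_obstacles(road_image, score_thresh=0.5):
    """road_image: float tensor of shape (3, H, W) with values in [0, 1].
    Returns obstacle boxes (x1, y1, x2, y2), class labels, and scores."""
    with torch.no_grad():
        out = model([road_image])[0]
    keep = out["scores"] >= score_thresh     # drop low-confidence candidates
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```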
It should be noted that the obstacle detection of step 102 may also be implemented by the first neural network of step 101, i.e., the output of the target detection network in the first neural network can serve directly as the obstacle area detection result.
Regarding the process described in steps 101 to 102, in an exemplary scenario, the road image shown in fig. 1E contains one detected obstacle inside the first intersection region between the non-road surface area and the safe driving area: an obstacle ahead whose category is motor vehicle.
Step 103: a second intersection region between the first intersection region and the obstacle region is determined, and the area of the remaining region from the first intersection region except the second intersection region is determined.
In one embodiment, the non-road surface region is the region where the vehicle cannot travel (obstacles, sky, and so on), while the road surface region is the drivable region (lane lines, road surface, road edges). To judge whether the road condition is abnormal, the area of the remaining region outside the second intersection region must be determined from the first intersection region between the safe driving area and the non-road surface area. The second intersection region is the part of that intersection containing detected obstacles; the remaining region is the part where no obstacle is detected, i.e., the unknown-obstacle region, which may hide obstacles obscured by rain or fog or obstacles with complex texture.
In the scenario of step 102, as shown in fig. 1E, one obstacle region, i.e., the second intersection region, exists inside the first intersection region where the vehicle cannot travel. Everything else in the first intersection region is the region where no obstacle can be detected, i.e., the unknown-obstacle region.
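If the segmentation and detection outputs are represented as boolean pixel masks, the bookkeeping of step 103 reduces to a few mask operations. The numpy sketch below (the mask names are assumptions) computes the unknown-obstacle area and the safe-area size consumed in step 104:

```python
import numpy as np

def remaining_and_safe_area(safe_mask, nonroad_mask, obstacle_mask):
    """safe_mask, nonroad_mask, obstacle_mask: boolean arrays of shape (H, W).
    Returns the pixel area of the unknown-obstacle (remaining) region and
    the pixel area of the safe driving region."""
    first_inter = safe_mask & nonroad_mask       # region the vehicle cannot travel
    second_inter = first_inter & obstacle_mask   # detected (known) obstacles
    remaining = first_inter & ~second_inter      # unknown-obstacle region
    return int(remaining.sum()), int(safe_mask.sum())
```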
Step 104: and determining whether the road surface in the road image is abnormal or not according to the area of the residual area and the area of the safe driving area.
In one embodiment, the ratio of the area of the remaining region to the area of the safe driving area may be determined. If the ratio exceeds a preset ratio threshold, the road surface in the road image is determined to be abnormal; otherwise, it is determined to be normal.
The preset ratio threshold can be set according to test results on a large number of samples. If the ratio exceeds the threshold, the unknown-obstacle region occupies too large a share of the non-road surface area contained in the safe driving area, and the road surface in the road image is determined to be abnormal; if not, that share is small, and the road condition is determined to be normal.
In an embodiment, the number of abnormal road image frames may be counted, and an abnormality alarm is output only when the abnormality persists for a preset number of consecutive frames, ensuring the stability of the abnormality determination result.
The preset number can likewise be set according to test results on a large number of samples. The abnormality alarm can be output through the automatic driving system, informing the driver that the road ahead is potentially dangerous and should be driven carefully.
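Putting the ratio check of step 104 together with the consecutive-frame counter, a hedged sketch follows; the threshold values are placeholders that would be tuned on sample tests as described above:

```python
class AbnormalRoadMonitor:
    """Flags a frame as abnormal when the unknown-obstacle region takes too
    large a share of the safe driving area, and raises an alarm only after
    a preset number of consecutive abnormal frames. Both thresholds are
    illustrative placeholders."""

    def __init__(self, ratio_thresh=0.3, frames_needed=5):
        self.ratio_thresh = ratio_thresh
        self.frames_needed = frames_needed
        self.abnormal_streak = 0

    def update(self, remaining_area, safe_area):
        ratio = remaining_area / safe_area if safe_area > 0 else 0.0
        if ratio > self.ratio_thresh:        # road surface abnormal in this frame
            self.abnormal_streak += 1
        else:                                # road surface normal; reset the streak
            self.abnormal_streak = 0
        # Alarm once the abnormality persists for the preset number of frames
        return self.abnormal_streak >= self.frames_needed
```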
It should be noted that after determining that the road condition is normal, the automatic driving system may further make driving decisions by calculating the relative distance and the relative speed between the vehicle and the obstacle ahead.
In the embodiments of the application, a non-road surface area and a safe driving area for safe driving of the vehicle carrying the vehicle-mounted camera are determined from a road image acquired by that camera, with a first intersection region existing between them; an obstacle area is determined from the non-road surface area; a second intersection region between the first intersection region and the obstacle area is determined; the area of the remaining region of the first intersection region outside the second intersection region is determined; and whether the road surface in the road image is abnormal is determined according to the area of the remaining region and the area of the safe driving area. By combining the determined non-road surface area and the known obstacle area with the safe driving area required for safe driving, abnormal road conditions are judged, potential dangers are warned of accurately, and the safety of automatic driving is improved.
Fig. 2 is a flowchart of another abnormal road condition detection method according to an exemplary embodiment of the present application. Building on the embodiment of fig. 1A and the network model structure of the first neural network shown in fig. 1B, it illustrates how the first neural network is trained. As shown in fig. 2, the training process may include the following steps:
step 201: the method comprises the steps of respectively obtaining a first type image, a second type image and a third type image, marking a target frame in the first type image, marking a road surface and a non-road surface for each pixel of the second type image, marking the road surface and the non-road surface for each pixel of the third type image, and marking the target frame in the third type image.
In one embodiment, the first neural network is a multi-task supervised model: one task is semantic segmentation and the other is target detection, and the two tasks share the feature map output by the convolution network. The first type of image trains the target detection task (the convolution network plus the target detection network), so target frames need to be marked in it. The second type of image trains the semantic segmentation task (the convolution network plus the deconvolution network), so each of its pixels needs a road surface / non-road surface label. The third type of image trains the segmentation effect of the first neural network's final output, so it needs both per-pixel labels and target frames.
Because the second type of image requires a class label for every pixel, its annotation workload is large, whereas the first type of image only needs target frames and is cheap to annotate. The number of second-type images can therefore be reduced appropriately and the number of first-type images increased, cutting down the manual annotation workload.
Step 202: and training the convolution network and the target detection network by using the marked first class image, and continuously training the convolution network and the deconvolution network by using the marked second class image.
In one embodiment, since the convolution network is shared by the semantic segmentation task and the target detection task, its matrix coefficients are adjusted in both training passes. The numbers of training iterations for the two tasks can be set according to practical experience.
step 203: and determining the weight according to the number of the first type images and the number of the second type images.
In an embodiment, the weight can be determined from the ratio of the number of first-type images to the number of second-type images. The weight represents the proportion of the target detection network's loss within the total loss of the first neural network, and takes a value between 0 and 1.
The smaller the number of second-type images (used for the semantic segmentation task) relative to the number of first-type images, the greater the weight should be. Table 1 shows an exemplary mapping from the ratio to the weight.
Ratio | Weight
---|---
0.5 | 0.3
1 | 0.5
2 | 0.7
3 | 0.9

Table 1
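As an illustration of Table 1, a small helper might map the image-count ratio to the weight; resolving intermediate ratios to the nearest tabulated entry is an assumption of this sketch:

```python
def detection_loss_weight(num_first, num_second):
    """Look up the detection-loss weight from the ratio of first-type
    (detection) images to second-type (segmentation) images, following
    the exemplary mapping of Table 1. Intermediate ratios snap to the
    nearest tabulated entry (an assumption)."""
    table = [(0.5, 0.3), (1.0, 0.5), (2.0, 0.7), (3.0, 0.9)]
    ratio = num_first / num_second
    return min(table, key=lambda rw: abs(rw[0] - ratio))[1]
```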
Step 204: and continuously training the convolutional network, the deconvolution network and the target detection network by using the marked third type of image until the loss value of the first neural network is lower than a preset threshold value or the training times reach preset times, and stopping training, wherein the loss value of the first neural network is determined by the loss value of the deconvolution network, the loss value of the target detection network and the weight.
In an embodiment, since the marked third type of image trains the segmentation effect of the first neural network's output, the matrix coefficients of all three networks (convolution, deconvolution, and target detection) are adjusted.
The output segmentation effect can be judged from the loss value of the first neural network: training stops once the loss falls below the preset threshold, at which point the segmentation effect is considered ideal, or once the total number of training iterations reaches the preset count. The loss value of the first neural network is derived as follows.
(1) The deconvolution network typically uses the Softmax (cross-entropy) loss as its cost function. Its loss value is:

L_S = -(1/N) × Σ_{n=1}^{N} log p_n(l_n)

where N denotes the total number of pixels per image in the second type of image, l_n denotes the class truth value of the nth pixel's label (e.g., road surface labeled 0, non-road surface labeled 1), and p_n(l_n) denotes the predicted probability for the correct class of the nth pixel.
(2) The target detection network is an auxiliary network and can use the MSE (Mean Squared Error) loss as its cost function:

L_D = (1/M) × Σ_{i=1}^{M} (g(x)_i − Y_i)²

where x denotes the input image, g(x) the predicted frame results, Y the marked target frame results, and M the number of target frames marked in the input image.
(3) The total loss value of the first neural network is L = L_S + α·L_D, where α is the weight determined in step 203.
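In PyTorch terms, this total loss can be sketched as follows, assuming the segmentation logits and predicted frames come from a network like the one sketched after fig. 1B:

```python
import torch.nn.functional as F

def first_network_loss(seg_logits, seg_labels, pred_boxes, gt_boxes, alpha):
    """Total loss of the first neural network: the Softmax (cross-entropy)
    loss of the deconvolution branch plus alpha times the MSE loss of the
    target detection branch, i.e. L = L_S + alpha * L_D."""
    loss_s = F.cross_entropy(seg_logits, seg_labels)  # L_S over all pixels
    loss_d = F.mse_loss(pred_boxes, gt_boxes)         # L_D over target frames
    return loss_s + alpha * loss_d
```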
This completes the flow shown in fig. 2, through which the training of the first neural network is realized.
Fig. 3 is a diagram of the hardware architecture of an electronic device according to an exemplary embodiment of the present application. The electronic device includes a communication interface 301, a processor 302, a machine-readable storage medium 303, and a bus 304, where the communication interface 301, the processor 302, and the machine-readable storage medium 303 communicate with each other via the bus 304. The processor 302 can execute the abnormal road condition detection method described above by reading and executing the machine-executable instructions in the machine-readable storage medium 303 that correspond to the control logic of the method; for details, refer to the embodiments above, which are not repeated here.
The machine-readable storage medium 303 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
Fig. 4 is a structural diagram of an embodiment of an abnormal road condition detection device according to an exemplary embodiment of the present application, where the abnormal road condition detection device may be applied to an electronic device of a vehicle, as shown in fig. 4, the abnormal road condition detection device includes:
the region determination module 410 is configured to determine, from a road image acquired by a vehicle-mounted camera, a non-road surface area and a safe driving area for safe driving of the vehicle carrying the camera, where a first intersection region exists between the safe driving area and the non-road surface area;
an obstacle determination module 420, configured to determine an obstacle area from the non-road surface area;
an area determination module 430, configured to determine a second intersection region between the first intersection region and the obstacle area, and determine an area of the remaining region of the first intersection region outside the second intersection region;
an anomaly determination module 440, configured to determine whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area.
In an optional implementation, the anomaly determination module 440 is specifically configured to determine the ratio of the area of the remaining region to the area of the safe driving area, determine that the road surface in the road image is abnormal if the ratio exceeds a preset ratio threshold, and determine that it is normal otherwise.
In an optional implementation, the region determination module 410 is specifically configured to, in the process of determining the non-road surface area in the road image acquired by the vehicle-mounted camera, input the road image into a first neural network obtained through pre-training, and determine the non-road surface area from the road image through the convolution network, deconvolution network, and target detection network included in the first neural network.
In an alternative implementation, the apparatus further comprises (not shown in fig. 4):
the training module is used for respectively acquiring a first type of image, a second type of image and a third type of image, marking target frames in the first type of image, marking each pixel of the second type of image as road surface or non-road surface, and marking each pixel of the third type of image as road surface or non-road surface while also marking target frames in it; training the convolution network and the target detection network with the marked first type of image, and continuing to train the convolution network and the deconvolution network with the marked second type of image; determining the weight according to the number of first-type images and the number of second-type images; and continuing to train the convolution network, the deconvolution network and the target detection network with the marked third type of image, stopping when the loss value of the first neural network falls below a preset threshold or the number of training iterations reaches a preset count; wherein the loss value of the first neural network is determined by the loss value of the deconvolution network, the loss value of the target detection network, and the weight.
In an optional implementation, the obstacle determination module 420 is specifically configured to input the road image into a second neural network and determine the obstacle area from the road image by the second neural network; or to intercept a sub-image corresponding to the non-road surface area from the road image, input the sub-image into a third neural network, and determine the obstacle area from the sub-image by the third neural network.
In an optional implementation, the region determination module 410 is specifically configured to, in the process of determining from the road image a safe driving area for safe driving of the vehicle carrying the vehicle-mounted camera, determine a braking distance using the current running speed of the vehicle; determine a turning radius using the current front wheel steering angle of the vehicle and the wheelbase of the front and rear wheels; and determine the safe driving area according to the braking distance and the turning radius.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The application also provides a chip, which includes a readable storage medium and a processor, where the readable storage medium is used for storing machine executable instructions, and the processor is used for reading the machine executable instructions and executing the instructions to implement the steps of the abnormal road condition detection method in the above embodiment.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.
Claims (14)
1. A method for detecting abnormal road conditions is characterized by comprising the following steps:
determining a non-road surface area and a safe driving area for safe driving of a vehicle where a vehicle-mounted camera is located from a road image acquired by the vehicle-mounted camera, wherein a first intersection area exists between the safe driving area and the non-road surface area;
determining an obstacle region from a non-road surface region in the road image based on the trained neural network;
determining a second intersection region between the first intersection region and the obstacle region, the second intersection region representing the detected obstacles, and determining an area of a remaining region of the first intersection region other than the second intersection region, the remaining region representing unknown obstacles;
and determining whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving area.
2. The method according to claim 1, wherein determining whether there is an abnormality in the road surface in the road image based on the area of the remaining region and the area of the safe driving region includes:
determining a ratio of an area of the remaining region to an area of the safe driving region;
if the ratio exceeds a preset ratio threshold, determining that the road surface in the road image is abnormal;
and if the ratio does not exceed a preset ratio threshold, determining that the road surface in the road image is normal.
3. The method of claim 1, wherein determining the non-road surface region from the road image captured by the vehicle-mounted camera comprises:
and inputting the road image into a first neural network obtained by pre-training, and determining a non-road surface area from the road image through a convolution network, a deconvolution network and a target detection network contained in the first neural network.
4. The method of claim 3, wherein the first neural network is pre-trained by:
respectively acquiring a first type of image, a second type of image and a third type of image, marking target frames in the first type of image, marking each pixel of the second type of image as road surface or non-road surface, and marking each pixel of the third type of image as road surface or non-road surface while also marking target frames in the third type of image;
training the convolution network and the target detection network by using the marked first class images, and continuing training the convolution network and the deconvolution network by using the marked second class images;
Determining weight according to the number of the first class images and the number of the second class images;
continuing training the convolution network, the deconvolution network and the target detection network by using the marked third type of image until the loss value of the first neural network is lower than a preset threshold value or the training times reach preset times, and stopping training;
wherein the loss value of the first neural network is determined by the loss value of the deconvolution network, the loss value of the target detection network, and the weight.
5. The method of claim 1, wherein determining an obstacle region from the road image based on the trained neural network comprises:
inputting the road image into a second neural network, determining an obstacle region from the road image by the second neural network; or,
intercepting a sub-image corresponding to the non-road surface region from the road image, inputting the sub-image into a third neural network, and determining the obstacle region from the sub-image by the third neural network.
6. The method of claim 1, wherein determining a safe driving area for safe driving of a vehicle in which the vehicle-mounted camera is located from the road image comprises:
determining a braking distance by using the current running speed of the vehicle;
determining a turning radius by using the current front wheel turning angle of the vehicle and the wheelbases of the front wheel and the rear wheel;
and determining a safe driving area for the vehicle to safely drive according to the braking distance and the turning radius.
7. An abnormal road condition detecting device, comprising:
a region determination module for determining, from a road image acquired by a vehicle-mounted camera, a non-road surface area and a safe driving area for safe driving of the vehicle in which the vehicle-mounted camera is located, wherein a first intersection area exists between the safe driving area and the non-road surface area;
the obstacle determining module is used for determining an obstacle area from a non-road surface area in the road image based on the trained neural network;
an area determination module for determining a second intersection region between the first intersection region and the obstacle region, the second intersection region representing the detected obstacles, and determining an area of a remaining region of the first intersection region other than the second intersection region, the remaining region representing unknown obstacles;
an anomaly determination module for determining whether the road surface in the road image is abnormal according to the area of the remaining region and the area of the safe driving region.
8. The device according to claim 7, characterized in that the anomaly determination module is specifically configured to determine a ratio of an area of the remaining region to an area of the safe driving region; if the ratio exceeds a preset ratio threshold, determining that the road surface in the road image is abnormal; and if the ratio does not exceed a preset ratio threshold, determining that the road surface in the road image is normal.
9. The device according to claim 7, wherein the region determination module is specifically configured to, in the process of determining the non-road surface area from the road image captured by the vehicle-mounted camera, input the road image into a first neural network trained in advance, and determine the non-road surface area from the road image through a convolution network, a deconvolution network, and a target detection network included in the first neural network.
10. The apparatus of claim 9, further comprising:
the training module is used for respectively acquiring a first type of image, a second type of image and a third type of image, marking target frames in the first type of image, marking each pixel of the second type of image as road surface or non-road surface, and marking each pixel of the third type of image as road surface or non-road surface while also marking target frames in it; training the convolution network and the target detection network with the marked first type of image, and continuing to train the convolution network and the deconvolution network with the marked second type of image; determining the weight according to the number of the first type of images and the number of the second type of images; continuing to train the convolution network, the deconvolution network and the target detection network with the marked third type of image until the loss value of the first neural network is lower than a preset threshold or the training times reach a preset number, and stopping training; wherein the loss value of the first neural network is determined by the loss value of the deconvolution network, the loss value of the target detection network, and the weight.
11. The apparatus according to claim 7, wherein the obstacle determination module is configured to input the road image into a second neural network and determine an obstacle region from the road image by the second neural network; or to intercept a sub-image corresponding to the non-road surface region from the road image, input the sub-image into a third neural network, and determine the obstacle region from the sub-image by the third neural network.
12. The device according to claim 7, wherein the region determination module is specifically configured to, in the process of determining from the road image a safe driving area for safe driving of the vehicle carrying the vehicle-mounted camera, determine a braking distance using the current running speed of the vehicle; determine a turning radius using the current front wheel steering angle of the vehicle and the wheelbase of the front and rear wheels; and determine the safe driving area according to the braking distance and the turning radius.
13. An electronic device, characterized in that the device comprises a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor configured to read the machine executable instructions on the readable storage medium and execute the instructions to implement the steps of the method of any one of claims 1-6.
14. A chip comprising a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor configured to read the machine executable instructions on the readable storage medium and execute the instructions to implement the steps of the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810799435.5A CN110738081B (en) | 2018-07-19 | 2018-07-19 | Abnormal road condition detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810799435.5A CN110738081B (en) | 2018-07-19 | 2018-07-19 | Abnormal road condition detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110738081A CN110738081A (en) | 2020-01-31 |
CN110738081B true CN110738081B (en) | 2022-07-29 |
Family
ID=69235582
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810799435.5A Active CN110738081B (en) | 2018-07-19 | 2018-07-19 | Abnormal road condition detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110738081B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283273B (en) * | 2020-04-17 | 2024-05-24 | 上海锐明轨交设备有限公司 | Method and system for detecting front obstacle in real time based on vision technology |
CN112639821B (en) * | 2020-05-11 | 2021-12-28 | 华为技术有限公司 | Method and system for detecting vehicle travelable area and automatic driving vehicle adopting system |
CN111767831B (en) * | 2020-06-28 | 2024-01-12 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for processing image |
US12008762B2 (en) * | 2022-02-19 | 2024-06-11 | Huawei Technologies Co., Ltd. | Systems and methods for generating a road surface semantic segmentation map from a sequence of point clouds |
CN114721404B (en) * | 2022-06-08 | 2022-09-13 | 超节点创新科技(深圳)有限公司 | Obstacle avoidance method, robot and storage medium |
CN116343176B (en) * | 2023-05-30 | 2023-08-11 | 济南城市建设集团有限公司 | Pavement abnormality monitoring system and monitoring method thereof |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106558058A (en) * | 2016-11-29 | 2017-04-05 | 北京图森未来科技有限公司 | Parted pattern training method, lane segmentation method, control method for vehicle and device |
CN107454969A (en) * | 2016-12-19 | 2017-12-08 | 深圳前海达闼云端智能科技有限公司 | Obstacle detection method and device |
KR20180058624A (en) * | 2016-11-24 | 2018-06-01 | 고려대학교 산학협력단 | Method and apparatus for detecting sudden moving object appearance at vehicle
CN108227712A (en) * | 2017-12-29 | 2018-06-29 | 北京臻迪科技股份有限公司 | The avoidance running method and device of a kind of unmanned boat |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ATE301483T1 (en) * | 2001-11-30 | 2005-08-15 | Brainlab Ag | DEVICE FOR PLANNING AN INFUSION |
JP5757900B2 (en) * | 2012-03-07 | 2015-08-05 | 日立オートモティブシステムズ株式会社 | Vehicle travel control device |
CN106428000B (en) * | 2016-09-07 | 2018-12-21 | 清华大学 | A kind of vehicle speed control device and method |
- 2018-07-19: application CN201810799435.5A filed in China; granted as patent CN110738081B (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180058624A (en) * | 2016-11-24 | 2018-06-01 | 고려대학교 산학협력단 | Method and apparatus for detecting sudden moving object appearance at vehicle
CN106558058A (en) * | 2016-11-29 | 2017-04-05 | 北京图森未来科技有限公司 | Parted pattern training method, lane segmentation method, control method for vehicle and device |
CN107454969A (en) * | 2016-12-19 | 2017-12-08 | 深圳前海达闼云端智能科技有限公司 | Obstacle detection method and device |
CN108227712A (en) * | 2017-12-29 | 2018-06-29 | 北京臻迪科技股份有限公司 | The avoidance running method and device of a kind of unmanned boat |
Also Published As
Publication number | Publication date |
---|---|
CN110738081A (en) | 2020-01-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110738081B (en) | Abnormal road condition detection method and device | |
JP7499256B2 (en) | System and method for classifying driver behavior |
CN110517521B (en) | Lane departure early warning method based on road-vehicle fusion perception | |
CN106652468B (en) | The detection and from vehicle violation early warning alarm set and method in violation of rules and regulations of road vehicle front truck | |
CN112700470B (en) | Target detection and track extraction method based on traffic video stream | |
US10922817B2 (en) | Perception device for obstacle detection and tracking and a perception method for obstacle detection and tracking | |
CN109919074B (en) | Vehicle sensing method and device based on visual sensing technology | |
CN111382768A (en) | Multi-sensor data fusion method and device | |
CN112349144B (en) | Monocular vision-based vehicle collision early warning method and system | |
CN114898296B (en) | Bus lane occupation detection method based on millimeter wave radar and vision fusion | |
US20200371524A1 (en) | Eccentricity image fusion | |
US10635915B1 (en) | Method and device for warning blind spot cooperatively based on V2V communication with fault tolerance and fluctuation robustness in extreme situation | |
US10836356B2 (en) | Sensor dirtiness detection | |
DE112016006962B4 (en) | Detection region estimating device, detection region estimation method and detection region estimation program | |
CN110837800A (en) | Port severe weather-oriented target detection and identification method | |
CN115273023A (en) | Vehicle-mounted road pothole identification method and system and automobile | |
CN115240471B (en) | Intelligent factory collision avoidance early warning method and system based on image acquisition | |
CN109753841B (en) | Lane line identification method and device | |
JP7226368B2 (en) | Object state identification device | |
CN113942503A (en) | Lane keeping method and device | |
CN110705495A (en) | Detection method and device for vehicle, electronic equipment and computer storage medium | |
CN116524454A (en) | Object tracking device, object tracking method, and storage medium | |
JP7402753B2 (en) | Safety support system and in-vehicle camera image analysis method | |
CN112070839A (en) | Method and equipment for positioning and ranging rear vehicle transversely and longitudinally | |
Wang | Automated disconnected towing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |