CN110619651B - Driving road segmentation method based on monitoring video - Google Patents
- Publication number
- CN110619651B (application CN201910846973.XA)
- Authority
- CN
- China
- Prior art keywords
- road
- image
- moving target
- monitoring video
- video
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T7/00—Image analysis > G06T7/10—Segmentation; Edge detection:
  - G06T7/136—involving thresholding
  - G06T7/155—involving morphological operators
  - G06T7/187—involving region growing; involving region merging; involving connected component labelling
  - G06T7/194—involving foreground-background segmentation
- G06T7/00—Image analysis > G06T7/20—Analysis of motion > G06T7/215—Motion-based segmentation
- G06T2207/00—Indexing scheme for image analysis or image enhancement > G06T2207/10—Image acquisition modality > G06T2207/10016—Video; Image sequence
- G06T2207/00—Indexing scheme for image analysis or image enhancement > G06T2207/30—Subject of image; Context of image processing > G06T2207/30248—Vehicle exterior or interior > G06T2207/30252—Vehicle exterior; Vicinity of vehicle > G06T2207/30256—Lane; Road marking
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses a driving road segmentation method based on a monitoring video, comprising the following steps: S1, acquiring a road monitoring video through a monitoring camera; S2, detecting the moving targets in the video by a moving-target detection method to obtain per-frame moving-target mask images; S3, dividing the video into multiple segments by a time-period sequence length, and accumulating the moving-target mask images within each time period into a segmentation-result mask image; S4, detecting the outermost contour boundary of each segmentation-result mask image, counting the number of pixels inside each boundary, and setting a threshold to eliminate connected domains with a small pixel count, obtaining a morphologically processed video image; and S5, applying a long-time-period road voting mechanism to the video images and outputting a driving-road mask map after removing low-frequency moving targets.
Description
Technical Field
The invention relates to the field of image processing, in particular to a driving road segmentation method based on a monitoring video.
Background
Driving-road segmentation from surveillance video is an important technology in image processing and computer vision, with wide application value in intelligent traffic and intelligent monitoring. Traditional road segmentation usually adopts two types of methods: methods based on binocular-vision depth maps and methods based on motion. In recent years deep learning has been widely applied to road segmentation; most current methods are based on deep pixel-level semantic segmentation networks, whose structure can generally be regarded as an encoder-decoder network.
The above road segmentation methods are all image segmentation methods based on pixel classification. Deep semantic segmentation models depend heavily on training data, yet most current road segmentation datasets are automatic-driving datasets captured from the view in front of a vehicle; datasets for urban high-point and low-point road surveillance scenes are very scarce. In practice, urban high-point surveillance is very complex because of the high viewing angle, while low-point surveillance is widely deployed, so many roads are inconspicuous and the scenes atypical. The complexity and variety of road conditions make the task challenging, and current pixel-level semantic segmentation models, with their limited practical segmentation precision, cannot be applied directly to urban surveillance road scenes, especially non-ideal scenes such as old roads and the top-down roads seen from urban high points. In video surveillance, factors such as local occlusion, illumination, video image quality and camera shake also affect the precision of moving-target detection. A driving road segmentation method with improved accuracy is therefore urgently needed to meet market demand.
Disclosure of Invention
The invention aims to solve the above problems by providing a driving road segmentation method based on a monitoring video with improved accuracy.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
a driving road segmentation method based on a monitoring video comprises the following steps:
S1, acquiring a road monitoring video through a monitoring camera;
S2, detecting the moving targets in the road monitoring video by a moving-target detection method to obtain per-frame moving-target mask images;
S3, dividing the road monitoring video into multiple segments by a time-period sequence length, and accumulating the moving-target mask images within each time period into a segmentation-result mask image;
S4, detecting the outermost contour boundary of each segmentation-result mask image, counting the number of pixels inside each contour boundary, and setting a threshold to eliminate connected domains with a small pixel count, obtaining a morphologically processed video image;
and S5, applying a long-time-period road voting mechanism to the video images, removing low-frequency moving targets, and outputting a driving-road mask map.
Further, the moving-target detection method in step S2 is the inter-frame difference method: a difference operation is performed on two or three consecutive frame images of the road monitoring video, the absolute value of the gray difference at corresponding pixel points of the different frames is obtained, and a pixel is judged to belong to a moving target when the absolute value exceeds a threshold.
Further, in step S2, let the n-th and (n-1)-th frame images of the road monitoring video be fn and fn-1, and denote the gray values of their corresponding pixel points by fn(x, y) and fn-1(x, y); subtracting the gray values of corresponding pixel points and taking the absolute value gives the difference image Dn:
Dn(x,y) = |fn(x,y) - fn-1(x,y)|;
setting a threshold T, differential calculation is carried out pixel by pixel over the road monitoring video to obtain the binarized image R'n, computed as
R'n(x,y) = 255 if Dn(x,y) > T, and R'n(x,y) = 0 otherwise;
connectivity analysis is then performed on R'n to obtain the mask image Rn containing the complete moving target.
Further, the moving-target detection method in step S2 is a moving-target detection method based on the ViBe algorithm, comprising the following steps:
S201, firstly, establishing a sample set for each pixel point in the road monitoring video, wherein the gray values in the sample set are past gray values of that pixel point or gray values from its neighborhood;
S202, comparing the gray value of the current pixel point with the gray values in its sample set, and judging whether the current pixel point belongs to the background according to whether the gray-value differences exceed a threshold;
and S203, screening out the pixel points that do not belong to the background to obtain a mask image containing the moving targets.
Compared with the prior art, the invention has the following advantages and positive effects:
The invention provides a method for segmenting driving roads using surveillance video. It detects moving targets by analyzing the continuous image sequence of the video, removes noise such as video-quality artifacts, camera shake and leaf disturbance with morphological processing, and finally reduces interference from pedestrians and off-road vehicles by adding a voting mechanism, obtaining a high-quality driving-road mask map. The method effectively avoids inaccurate road segmentation caused by interference from pedestrians and off-road vehicles and greatly improves segmentation accuracy. Moreover, it needs no trained model, has a small computational load and a markedly improved detection rate, and, compared with traditional methods, offers stronger generalization and higher accuracy in complex scenes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of the inter-frame difference method.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived from the embodiments of the present invention by a person skilled in the art without any creative effort, should be included in the protection scope of the present invention.
The invention provides a method for segmenting driving roads using surveillance video, comprising two parts: moving-target detection and driving-road formation. Moving-target detection methods include inter-frame difference, background subtraction, ViBe, ViBe+ and the like. The method detects moving objects by analyzing the continuous image sequence of the video, then applies morphological processing to the detected targets, such as erosion-dilation operations, removal of small connected domains and hole filling, both to remove noise in the video (video-quality artifacts, camera shake, leaf disturbance) and to make the moving targets better behaved in the following steps. After processing, the moving targets mainly comprise moving vehicles on the driving road, pedestrians moving beside the road, and moving vehicles outside the road. Accumulating the moving-target areas can roughly depict the driving road, but interference from pedestrians and off-road vehicles makes this segmentation insufficiently accurate. Therefore erosion-dilation and connected-domain screening within time periods are required, and a voting mechanism is added to reduce the interference of pedestrians and off-road vehicles.
Embodiment 1: road segmentation with moving-target detection based on inter-frame difference
The driving-road segmentation process of this embodiment can be divided into two parts: the first is moving-target detection based on inter-frame difference, and the second reasonably uses the moving targets of the continuous image sequence to estimate the driving road and generate the mask map.
A video sequence collected by a camera is continuous in time. If there is no moving object in the scene, consecutive frames change only weakly; if there is a moving object, there are significant changes from frame to frame. The inter-frame difference method (temporal difference) exploits this idea. As objects move through the scene, their images occupy different positions in different frames. The algorithm performs a difference operation on two or three temporally consecutive frames, subtracts the pixel values at corresponding locations, and judges the absolute value of the gray difference: when it exceeds a certain threshold, a moving target is declared, realizing the target detection function.
The operation of the two-frame difference method is shown in fig. 1. Denote the n-th and (n-1)-th frames of the video sequence by fn and fn-1 and the gray values of their corresponding pixel points by fn(x, y) and fn-1(x, y). Subtracting the gray values of corresponding pixel points according to the following formula and taking the absolute value gives the difference image Dn:
Dn(x,y)=|fn(x,y)-fn-1(x,y)|;
setting a threshold value T, and carrying out binarization processing on the pixel points one by one according to a formula 2.14 to obtain a binarized image Rn'. Wherein, the point with the gray value of 255 is the foreground (moving object) point, and the point with the gray value of 0 is the background point; and performing connectivity analysis on the image Rn', and finally obtaining the image Rn containing the complete moving target.
The inter-frame difference method above yields a moving-target mask image for each frame. In an actual scene the moving targets include not only moving vehicles on the driving road but also pedestrians beside it, vehicles outside it, leaf disturbance and image noise. When turning the moving-target masks of an image sequence into a road segmentation image, these interferences are handled by small-connected-domain elimination and morphological processing over short time periods, plus a road voting mechanism over a long time period. The specific steps are as follows:
For small-connected-domain elimination and morphological processing over a short time period, a time-period sequence length of 200 frames is first selected; within the first 200-frame sequence, the moving-target mask of every frame contributes to the segmentation-result mask. The outermost contour boundaries of the segmentation-result mask obtained from this first sequence are then detected, finding the large and small connected domains in the mask; the number of pixels inside each boundary is counted, and a threshold is set to eliminate the small connected domains. This removes the minor noise and disturbance accumulated within the 200 frames; if more frames were accumulated, the noise would build up further and become hard to remove. Because the moving targets within a short period are limited and travel only short distances, they cannot depict the road information completely, so the 200-frame operation is repeated several times until the road details are fully described.
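The short-period accumulation, contour detection and small-connected-domain elimination might be sketched as follows, again with OpenCV; min_area and the kernel size are assumed parameters, since the text only states that a threshold is set:

```python
import cv2
import numpy as np

def accumulate_period(frame_masks):
    """OR-accumulate the per-frame moving-target masks of one time period
    into a segmentation-result mask."""
    acc = np.zeros_like(frame_masks[0])
    for m in frame_masks:
        acc = cv2.bitwise_or(acc, m)
    return acc

def clean_period_mask(period_mask, min_area=500, ksize=5):
    """Morphological processing and small-connected-domain elimination."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    # Erosion-dilation (opening) removes speckle; closing fills small holes.
    mask = cv2.morphologyEx(period_mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Detect the outermost contour boundaries and count the pixels inside each.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = np.zeros_like(mask)
    for c in contours:
        if cv2.contourArea(c) >= min_area:  # keep only the large connected domains
            cv2.drawContours(out, [c], -1, 255, thickness=cv2.FILLED)
    return out
```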
As for the long-time-period road voting mechanism: low-frequency but strongly interfering pedestrians and moving vehicles outside the driving road appear in the video; the operations above cannot eliminate these moving targets, and they directly affect the accuracy of the result. Exploiting their low frequency of occurrence, a long-time-sequence voting mechanism is set: the driving-road area is retained in the vote owing to the high-frequency motion of vehicles, while low-frequency off-road moving objects, such as pedestrians and vehicles that appear only within some short interval, are voted out.
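One way the vote could be realized is sketched below; the vote threshold min_votes is an assumption (here half the number of periods), as the text does not fix its value:

```python
import numpy as np

def vote_road_mask(period_masks, min_votes=None):
    """Keep pixels that are foreground in at least min_votes of the
    per-period masks, discarding low-frequency moving targets."""
    votes = np.sum([(m > 0).astype(np.int32) for m in period_masks], axis=0)
    if min_votes is None:
        min_votes = max(1, len(period_masks) // 2)  # assumed default
    return ((votes >= min_votes).astype(np.uint8)) * 255
```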
Through the moving-target detection based on inter-frame difference, the image morphological processing and the voting mechanism above, a high-quality driving-road mask map is finally output.
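Composing the three stages, a hypothetical end-to-end driver built from the illustrative functions sketched above could look like this:

```python
import cv2

def segment_driving_road(video_path, period_len=200):
    """Embodiment 1 pipeline: frame difference -> per-period accumulation
    and morphology -> long-period voting."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("cannot read video: " + video_path)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    period_masks, frame_masks = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame_masks.append(frame_difference_mask(prev_gray, gray))
        prev_gray = gray
        if len(frame_masks) == period_len:  # one 200-frame time period is full
            acc = accumulate_period(frame_masks)
            period_masks.append(clean_period_mask(acc))
            frame_masks = []
    cap.release()
    return vote_road_mask(period_masks)
```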
Embodiment 2: road segmentation with moving-target detection based on the ViBe algorithm
In this embodiment the driving-road segmentation process again divides into two parts: the first is moving-target detection based on the ViBe algorithm, and the second reasonably uses the moving targets of the continuous image sequence to estimate the driving road and generate the mask map.
The Visual Background Extractor (ViBe) is a pixel-level video background modeling algorithm. Each pixel stores a sample set whose values are past values of that pixel or of its neighborhood; the value of the current pixel is compared against the sample set to judge whether it belongs to the background, and if so, a value in the background model is selected for replacement. The method differs from many others in that the replaced background value is not simply the oldest one but is chosen at random, which has the effect of a longer time window.
The background modeling principle of ViBe can be viewed as a classification problem: whether each current pixel point belongs to the background is judged from its sample set, so the sample set is crucial. The method keeps one sample set per pixel. Let v(x) be the value of the pixel at point x in the image color space, let the sample values be v_i, and let the number of samples be N; the sample set can then be expressed as M(x) = {v_1, v_2, ..., v_N}. To compare a pixel point with the values in its sample set, define S_R(v(x)) as the sphere of radius R centered on v(x); if the number of sample values falling within this sphere is at least a preset threshold min, the pixel point is classified as a background point:
#{S_R(v(x)) ∩ {v_1, v_2, ..., v_N}} ≥ min;
min is typically set to 20.
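Written out as a sketch, the classification test looks as follows; here R=20 and min_matches=2 follow the values common in the ViBe literature and are assumptions, not parameters fixed by this text:

```python
import numpy as np

def vibe_classify(gray, samples, R=20, min_matches=2):
    """Classify each pixel of a gray frame against its ViBe sample set.

    samples has shape (N, H, W): N historical gray values per pixel.
    A pixel is background when #{S_R(v(x)) ∩ M(x)} >= min_matches.
    """
    # |v(x) - v_i| < R, evaluated for all N sample planes at once
    close = np.abs(samples.astype(np.int16) - gray.astype(np.int16)) < R
    matches = close.sum(axis=0)
    # Foreground mask: 255 where too few samples match, 0 for background
    return np.where(matches >= min_matches, 0, 255).astype(np.uint8)
```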
Initializing the background model means filling the sample set. A single frame carries no temporal information, so the property that neighboring pixel points share a similar temporal distribution is exploited: for each pixel point, pixel values from its neighborhood are randomly selected as the model samples of that pixel. Writing N_G(x) for the neighborhood of point x, the initial model is M_0(x) = {v_0(y) | y ∈ N_G(x)}. The choice of neighborhood is important because the statistical correlation between values at different locations decreases as the neighborhood grows. For 640 × 480 video frames, the eight-connected neighborhood can be selected.
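A sketch of this initialization; for simplicity each sample plane uses one shared neighborhood offset instead of a per-pixel random choice, and N=20 (the usual ViBe sample count) is assumed:

```python
import numpy as np

def vibe_init(first_gray, N=20, rng=None):
    """Fill each pixel's sample set from its 8-neighborhood in the first
    frame: M0(x) = {v0(y) | y in NG(x)}."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = first_gray.shape
    padded = np.pad(first_gray, 1, mode='edge')  # give border pixels 8 neighbors
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)]
    samples = np.empty((N, h, w), dtype=first_gray.dtype)
    for i in range(N):
        dy, dx = offsets[rng.integers(len(offsets))]
        # A shifted copy of the frame supplies "a neighbor's value" per pixel.
        samples[i] = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return samples
```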
This initialization has the advantages of a small computational load and high speed, and can detect moving targets quickly; its drawbacks are sensitivity to noise and that it easily introduces Ghost regions, which can nevertheless be eliminated quickly when a suitable background update strategy is chosen.
Background model update is the key part of the algorithm: it lets the background model adapt to changes of the background, such as sudden illumination changes and changes of objects in the scene. Traditional update schemes follow either a conservative update strategy or a Blind update strategy.
Under the conservative update strategy, a pixel is added to the background model only when it is judged to be a background point; foreground points are never used to fill the background model. Its advantage is very sensitive detection of moving objects; its drawback is that it easily causes deadlock and persistent Ghost regions, for example when background is misjudged as foreground or a stationary object suddenly moves. Corresponding remedies are to introduce spatial information or to adopt foreground-point counting.
Under the Blind update strategy, a pixel point is added to the background model whether it is judged foreground or background. Its advantage is immunity to deadlock; its drawback is that slowly moving targets are easily absorbed into the background model and become undetectable. The remedy of enlarging the background model sample set increases the memory and computation burden. The ViBe algorithm therefore combines the conservative update strategy with spatial information, and a foreground-point counting policy can be added as well. This combined update strategy has three further characteristics. First, no-memory update: when choosing which sample in the sample set to replace, one sample is picked at random, so the sample set keeps both the newest historical pixel values and older ones, i.e. the sample set has a long time window. Second, time sampling: in many practical situations it is unnecessary to update every background pixel model in every frame, so when a pixel is classified as a background point it is used to update the background model only with a certain probability 1/Φ (Φ is the time sampling factor, which can be 5); this again lets the sample set cover a longer time window. Third, spatial propagation: as mentioned above, adopting the conservative update strategy requires remedying its drawbacks by introducing spatial information. Concretely, when a pixel is selected to update a sample value of its background model, a background model sample of a randomly chosen pixel in its neighborhood is updated at the same time.
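The three characteristics can be sketched together as below; the plain per-pixel loop, and the fact that the "random neighbor" may coincide with the pixel itself, are simplifications for illustration:

```python
import numpy as np

def vibe_update(gray, samples, background, phi=5, rng=None):
    """Conservative update with time sampling and spatial propagation.

    background is a boolean (H, W) map of pixels classified as background;
    phi = 5 is the time sampling factor mentioned in the text.
    """
    if rng is None:
        rng = np.random.default_rng()
    N, h, w = samples.shape
    ys, xs = np.nonzero(background)
    # Time sampling: each background pixel updates with probability 1/phi.
    keep = rng.random(len(ys)) < 1.0 / phi
    for y, x in zip(ys[keep], xs[keep]):
        # No-memory update: replace a randomly chosen sample, not the oldest.
        samples[rng.integers(N), y, x] = gray[y, x]
        # Spatial propagation: also refresh a sample of a random neighbor.
        ny = int(np.clip(y + rng.integers(-1, 2), 0, h - 1))
        nx = int(np.clip(x + rng.integers(-1, 2), 0, w - 1))
        samples[rng.integers(N), ny, nx] = gray[y, x]
    return samples
```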
The ViBe method above yields a moving-target mask image for each frame. As in Embodiment 1, the moving targets of an actual scene include not only moving vehicles on the driving road but also pedestrians beside it, vehicles outside it, leaf disturbance and image noise, so small-connected-domain elimination and morphological processing over short time periods plus a road voting mechanism over a long time period are again adopted. The specific steps are as follows:
For small-connected-domain elimination and morphological processing over a short time period, a time-period sequence length of 200 frames is again selected; within the first 200-frame sequence, the moving-target mask of every frame contributes to the segmentation-result mask. The outermost contour boundaries of the segmentation-result mask obtained from the first sequence are detected, finding the large and small connected domains in the mask; the number of pixels inside each boundary is counted, and a threshold is set to eliminate the small connected domains. This removes the minor noise and disturbance accumulated within the 200 frames, which would otherwise build up with increasing frame count and become difficult to remove. Because the moving targets within a short period are limited and travel only short distances, they cannot depict the road information completely, so the 200-frame operation is repeated several times until the road details are fully described.
For the long-time-period road voting mechanism: low-frequency but strongly interfering pedestrians and moving vehicles outside the driving road appear in the video; the operations above cannot eliminate these moving targets, which directly affect the accuracy of the result. Exploiting their low frequency of occurrence, a long-time-sequence voting mechanism is set: the driving-road area is retained in the vote owing to the high-frequency motion of vehicles, while low-frequency off-road moving objects, such as pedestrians and vehicles that appear only within some short interval, are voted out.
Through the ViBe-based moving-target detection, the image morphological processing, the image-sequence segmentation processing and the voting mechanism, a higher-quality driving-road mask map is finally output.
Claims (4)
1. A driving road segmentation method based on a monitoring video, characterized in that it comprises the following steps:
S1, acquiring a road monitoring video through a monitoring camera;
S2, detecting the moving targets in the road monitoring video by a moving-target detection method to obtain per-frame moving-target mask images;
S3, dividing the road monitoring video into multiple segments by a time-period sequence length, and accumulating the moving-target mask images within each time period into a segmentation-result mask image;
S4, detecting the outermost contour boundary of each segmentation-result mask image, counting the number of pixels inside each contour boundary, and setting a threshold to eliminate connected domains with a small pixel count, obtaining a morphologically processed video image;
and S5, applying a long-time-period road voting mechanism to the video images, removing low-frequency moving targets, and outputting a driving-road mask map.
2. The driving road segmentation method based on a monitoring video as claimed in claim 1, characterized in that: the moving-target detection method in step S2 is the inter-frame difference method, in which a difference operation is performed on two or three consecutive frame images of the road monitoring video, the absolute value of the gray difference at corresponding pixel points of the different frames is obtained, and a pixel is judged to belong to a moving target when the absolute value exceeds a threshold.
3. The driving road segmentation method based on a monitoring video as claimed in claim 2, characterized in that: in step S2, the n-th and (n-1)-th frame images of the road monitoring video are set as fn and fn-1, the gray values of their corresponding pixel points are recorded as fn(x, y) and fn-1(x, y), and the gray values of corresponding pixel points are subtracted and the absolute value taken to obtain the difference image Dn:
Dn(x,y) = |fn(x,y) - fn-1(x,y)|;
a threshold T is set and differential calculation is carried out pixel by pixel over the road monitoring video to obtain the image R'n, computed as
R'n(x,y) = 255 if Dn(x,y) > T, and R'n(x,y) = 0 otherwise;
connectivity analysis is performed on the image R'n to obtain the mask image Rn containing the complete moving target.
4. The driving road segmentation method based on a monitoring video as claimed in claim 1, characterized in that: the moving-target detection method in step S2 is a moving-target detection method based on the ViBe algorithm, comprising the following steps:
S201, firstly, establishing a sample set for each pixel point in the road monitoring video, wherein the gray values in the sample set are past gray values of that pixel point or gray values from its neighborhood;
S202, comparing the gray value of the current pixel point with the gray values in its sample set, and judging whether the current pixel point belongs to the background according to whether the gray-value differences exceed a threshold;
and S203, screening out the pixel points that do not belong to the background to obtain a mask image containing the moving targets.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910846973.XA CN110619651B (en) | 2019-09-09 | 2019-09-09 | Driving road segmentation method based on monitoring video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110619651A CN110619651A (en) | 2019-12-27 |
CN110619651B true CN110619651B (en) | 2023-01-17 |
Family
ID=68923064
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910846973.XA (CN110619651B, active) | Driving road segmentation method based on monitoring video | 2019-09-09 | 2019-09-09
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110619651B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111353452A (en) * | 2020-03-06 | 2020-06-30 | 国网湖南省电力有限公司 | Behavior recognition method, behavior recognition device, behavior recognition medium and behavior recognition equipment based on RGB (red, green and blue) images |
CN113743151B (en) * | 2020-05-27 | 2024-08-02 | 顺丰科技有限公司 | Method, device and storage medium for detecting road surface casting object |
CN113222999B (en) * | 2021-04-14 | 2024-09-03 | 江苏省基础地理信息中心 | Remote sensing target segmentation and automatic stitching method based on voting mechanism |
CN115457381B (en) * | 2022-08-18 | 2023-09-05 | 广州从埔高速有限公司 | Method, system, device and storage medium for detecting illegal land of expressway |
CN115529462A (en) * | 2022-09-30 | 2022-12-27 | 中国电信股份有限公司 | Video frame processing method and device, electronic equipment and readable storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015117072A1 (en) * | 2014-01-31 | 2015-08-06 | The Charles Stark Draper Laboratory, Inc. | Systems and methods for detecting and tracking objects in a video stream |
- 2019-09-09: CN application CN201910846973.XA filed; patent CN110619651B (en), status active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101876705A (en) * | 2009-11-03 | 2010-11-03 | 清华大学 | Frequency domain vehicle detecting method based on single-frequency continuous wave radar |
CN103262139A (en) * | 2010-12-15 | 2013-08-21 | 本田技研工业株式会社 | Lane recognition device |
CN103034863A (en) * | 2012-12-24 | 2013-04-10 | 重庆市勘测院 | Remote-sensing image road acquisition method combined with kernel Fisher and multi-scale extraction |
CN104077757A (en) * | 2014-06-09 | 2014-10-01 | 中山大学 | Road background extraction and updating method with fusion of real-time traffic state information |
CN105184240A (en) * | 2015-08-27 | 2015-12-23 | 广西师范学院 | Scan line clustering-based security video road automatic identification algorithm |
CN107301776A (en) * | 2016-10-09 | 2017-10-27 | 上海炬宏信息技术有限公司 | Track road conditions processing and dissemination method based on video detection technology |
CN108198207A (en) * | 2017-12-22 | 2018-06-22 | 湖南源信光电科技股份有限公司 | Multiple mobile object tracking based on improved Vibe models and BP neural network |
Non-Patent Citations (2)
Title |
---|
Accurate Urban Road Centerline Extraction from VHR Imagery via Multiscale Segmentation and Tensor Voting; Guangliang Cheng et al.; arXiv.org; 2016-02-25; pp. 1-14 *
A traffic state detection method for panoramic video; Wang Guolin et al.; Journal of Tsinghua University (Science and Technology); 2011-01-15 (No. 01); pp. 30-35 *
Also Published As
Publication number | Publication date |
---|---|
CN110619651A (en) | 2019-12-27 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |