CN104978735B - Background modeling method adaptable to random noise and illumination variation - Google Patents
Background modeling method adaptable to random noise and illumination variation
- Publication number: CN104978735B (application CN201410147830.7A)
- Authority
- CN
- China
- Prior art keywords
- background
- image
- mask
- frame image
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the invention provide a background modeling method adaptable to random noise and illumination variation, comprising the following steps. Step A: filter the training images of a video with a Gaussian filter bank and initialize multiple single Gaussian models. Step B: according to the single Gaussian model parameters, calculate reference background masks by combining the multiple filtered images. Step C: calculate difference-degree masks according to the illumination change probability, perform illumination compensation on the reference background masks, and calculate corrected background masks. Step D: calculate a robust background mask, segment the training image into foreground and background using the robust background mask, and update the single Gaussian model parameters. The background modeling method improves the model's ability to adapt to background changes and improves the accuracy of foreground-background segmentation of video images under interference conditions such as illumination changes.
Description
Technical Field
The invention relates to the technical field of media communication, in particular to a background modeling method capable of adapting to random noise and illumination change.
Background
In recent years, video acquisition equipment for security and other purposes has become widespread. The video data it generates is characterized by large volume, diverse types, low value density and demanding processing-speed requirements, which makes manual processing impractical; intelligent video analysis technology has therefore attracted wide attention and application. In modern object recognition, the shape, size, proportion and other attributes of an extracted foreground object are mostly analyzed with heuristic rules to determine the object's type.
Foreground extraction under complex conditions is a difficult task, mainly because of the high complexity of the background and its changes over time. Background subtraction is a widely used method that assumes the background is known and static: the foreground is extracted by comparing a video frame with a background image pixel by pixel, so the method fails when the background changes. Background modeling is one of the key technologies of intelligent video analysis for handling local background changes. The mainstream approach, based on the general rule that a background region remains relatively unchanged over time, models the pixel values of the video image with a Gaussian mixture model and segments foreground from background according to how well each pixel value matches the model.
The background modeling methods in the prior art have the following disadvantages: they cannot adapt to rapid illumination changes and perform poorly when random noise is severe. For example, dust and lighting in a warehouse environment can seriously degrade the performance of a video analysis system, causing false and missed detections of foreground objects and hindering intelligent video analysis.
Disclosure of Invention
The embodiments of the invention provide a background modeling method capable of adapting to random noise and illumination changes, improving the accuracy of foreground-background segmentation and the adaptability of the model when the illumination changes.
A background modeling method capable of adapting to random noise and illumination change comprises the following steps:
step A: selecting a plurality of frames of images in a video image as training images, filtering the training images based on a set Gaussian filter group to obtain a plurality of groups of filtering images, and initializing a plurality of groups of single Gaussian models according to each group of filtering images and the training images;
step B: calculating a plurality of groups of reference background masks by combining the plurality of groups of filtered images and the current frame image according to each group of single Gaussian model parameters;
step C: calculating a difference mask according to the illumination change probability, performing illumination compensation on each group of reference background masks, and calculating a plurality of groups of correction background masks;
step D: averaging all the corrected background masks to obtain a robust background mask, segmenting the foreground and background of the current frame image by means of the robust background mask, and updating each group of single Gaussian model parameters.
The step A comprises the following steps:
selecting a plurality of frame images of a starting part in a video image as training images, setting a plurality of Gaussian filters with different variances to form a Gaussian filter group, and filtering each training image by using each Gaussian filter in the Gaussian filter group to obtain a plurality of groups of filtering images;
training a single Gaussian model by adopting an expectation-maximization (EM) algorithm for pixel values of corresponding pixel points at the same position in each training image and each group of filtering images;
and setting the last frame of training image as the current frame of image.
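The filter-bank construction described in step A can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the function name, the example sigma values, and the use of `scipy.ndimage.gaussian_filter` are all our choices; group 0 holds the unfiltered training images, matching the "no filter" convention used later in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_bank(images, sigmas=(1.0, 2.0, 4.0)):
    """Apply a bank of Gaussian filters with distinct variances to each
    training image.  Group 0 is the unfiltered images themselves (the
    i = 0 case); sigmas are illustrative values, not from the patent.

    images: (d, n, m) array of d grayscale frames.
    Returns a list of (d, n, m) arrays, one per filter (plus group 0).
    """
    groups = [np.asarray(images, dtype=np.float64)]
    for s in sigmas:
        groups.append(np.stack([gaussian_filter(im, s) for im in images]))
    return groups
```

Each group then serves, together with the original training images, as one training sample set for a group of single Gaussian models.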
The step B comprises the following steps:
when the current frame image is any frame image behind the training image, filtering the current frame image through the Gaussian filter bank to obtain a plurality of filtered images;
calculating a probability value that pixel values of corresponding pixel points at the same position in the current frame image and the multiple filtering images belong to the corresponding single Gaussian model, and taking the probability value as a reference background mask;
when the current frame image is the last frame training image, taking the reference background mask as a correction background mask of the current frame image, and executing the step D; otherwise, executing step C.
The step C comprises the following steps:
generating a new image according to the degree of difference between each reference background mask and the corrected background mask of the previous frame image of the current frame image, wherein the value of each pixel point in the new image increases exponentially with the difference degree at the corresponding pixel point, and using the new image as the difference-degree mask corresponding to that reference background mask;
and calculating the probability of illumination change of each pixel point according to the difference between the current frame image and the previous frame image, and obtaining the correction background mask corresponding to each reference background mask through illumination compensation.
The step D comprises the following steps:
averaging all the corrected background masks obtained in step C; the obtained value is taken as the robust background mask of the current frame image and used as the segmentation threshold for the foreground and background of the current frame image: if the pixel value of a pixel point in the current frame image is greater than the foreground-background segmentation threshold, that pixel point is background; otherwise, that pixel point is foreground;
if the current frame image is the last frame image of the video needing foreground and background segmentation, the foreground and background segmentation process is finished; otherwise, updating all single Gaussian model parameters according to the difference degree mask and the robust background mask; and taking the next frame image as the current frame image, and executing the step B.
The calculation formula of the reference background mask in the step B is as follows:
Let the serial numbers of the filters be i, with i = 0, ..., N, and let k be a parameter determined by experience, chosen so that more than 90% of the pixel points in the reference background mask take values between 0 and 1; the t-th frame image and its filtered output from the i-th filter enter the calculation, and for i = 0, ..., N the trained Gaussian model is represented by the mean image μi and the variance image σi.
The calculation formula of the difference-degree mask in step C is:
wherein a is a response coefficient related to the background change speed, with a > 1, and the i-th corrected background mask of frame t-1 enters the calculation.
The calculation formula of the corrected background mask is:
wherein the pixel values of corresponding pixel points in the two images are multiplied to generate a new image; b is a response coefficient related to the probability of illumination change, with 0 < b < 1; the illumination change coefficient is calculated as:
If i = 0, the (t-1)-th frame image among all the training images and the filtered images generated by the single Gaussian filters is used; otherwise, the filtered image obtained by passing the (t-1)-th frame through the i-th filter is used. R_P, G_P and B_P respectively denote the red, green and blue single-channel images of an image P, where P is either of the two images being compared.
The calculation formula of the robust background mask R_t in step D is:
where N is the number of all gaussian filters.
The calculation formula for updating the parameters of the single Gaussian model in the step D is as follows:
wherein the indicated term denotes the image obtained by squaring the pixel value of each pixel point; after setting t = t + 1, step B is executed again.
It can be seen from the technical solutions provided by the embodiments of the present invention that a video image to be segmented into foreground and background is filtered through a plurality of Gaussian filters, reference background masks are calculated from the trained Gaussian models, illumination compensation is performed on the basis of the reference background masks to obtain corrected background masks, and a robust background mask is finally calculated and used to segment the video image into foreground and background. Background modeling proceeds by estimating the speed of background change and the model's lag behind that change; illumination compensation can be applied to the image according to the illumination change, and the update speed of each single Gaussian model is adjusted adaptively. The accuracy of foreground-background segmentation of images under changing illumination is thereby improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a processing flow chart of a background modeling method capable of adapting to random noise and illumination change according to an embodiment of the present invention.
Detailed Description
To facilitate understanding of the embodiments of the present invention, several specific embodiments are described in detail below with reference to the accompanying drawings, by way of example; the described embodiments do not limit the present invention.
For a video whose foreground and background need to be segmented, several frames are taken as training images. A single Gaussian model is trained with the EM (Expectation Maximization) algorithm on the pixel values of corresponding pixel points of the training images and the filtered images; a reference background mask is then calculated for each training image and each filtered image from the single Gaussian model; illumination compensation is applied on the basis of the reference background masks to obtain corrected background masks; and finally a robust background mask is calculated, with which the video image is segmented into foreground and background. The background modeling method thereby improves the accuracy of foreground-background segmentation of images under interference conditions such as illumination changes.
The embodiment of the invention provides a processing flow chart of a background modeling method capable of adapting to random noise and illumination change, which is shown in figure 1 and comprises the following processing steps:
Step S100: select d frames from the beginning of the video as training images, and denote the resolution of the video image as n rows and m columns. Set up a Gaussian filter bank consisting of N > 0 Gaussian filters, each with a different variance; filter the training images with this filter bank to obtain multiple groups of filtered images, and initialize multiple groups of single Gaussian models from each group of filtered images together with the training images. Let the filter number be i, with i = 0, ..., N; i = 0 denotes the case with no filter, i.e., the image being processed is the original, unfiltered training image.
Then the following two steps are respectively carried out:
the first step is as follows: if i =0, namely no filter is used for filtering, all training images are used as a training sample set of a single Gaussian model; otherwise, filtering all training images one by using an ith Gaussian filter, and taking d frames of filtered images generated after filtering by the ith filter and all original training images as a training sample set, wherein the training sample set comprises i training sample sets, and the resolution of each frame of filtered image is n rows and m columns;
the second step is that: calculating the pixel value of each corresponding pixel point at the corresponding position of all the images in the training sample set by using an EM (effective noise) algorithm, training a plurality of groups of single Gaussian models, namely i groups of single Gaussian models in total, wherein the trained single Gaussian models are all formed by a mean image mu i Sum variance image σ i Representation in which the mean image μ i The resolution of the image is n rows and m columns, and the pixel value of each pixel point represents the mean value of the pixel values of the corresponding pixel points at the same position in all the images; sigma i The resolution of the image processing system is also n rows and m columns, the pixel value of each pixel point represents the variance of the pixel value of the corresponding pixel point at the same position in all the images, the sequence number of the currently processed image is marked as t, and the value is set as d.
Step S110: calculating a plurality of groups of reference background masks by combining a plurality of groups of filtered images and the current frame image according to the parameters of each group of single Gaussian models in the step S100;
first, the t frame image is recorded asFor i =0,.. And N, the following two operations are performed, respectively:
the first step is as follows: using the ith Gaussian filter pair in step S100Filtering, when the current frame image is any frame image after the training image, filtering the current frame image through a Gaussian filter bank to correspondingly obtain a plurality of filtered images, and marking the generated filtered images as the filtered imagesWhereinThe resolution of the optical fiber is n rows and m columns;
the second step is that: calculate the ith reference background mask of the t frame, noted Is an image with resolution of n rows and m columns, the reference background maskIs that the pixel value of each pixel point inThe pixel value of the corresponding pixel point in the image is corresponding to the mean value image mu i The sum of the pixel values of the corresponding pixels i The probability value of the single Gaussian model represented by the pixel value of the corresponding pixel point is calculated as follows:
equation 1
where k is a parameter determined empirically; it controls the values of more than 90% of the pixel points in the reference background mask to lie between 0 and 1.
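The reference-mask step can be illustrated as follows. Equation 1 itself is not reproduced in the source text, so the unnormalized Gaussian membership used here, exp(-(I - μ)² / (k·σ²)), is only a plausible stand-in consistent with the description (a probability-like value, pushed into the 0-1 range by the empirical parameter k):

```python
import numpy as np

def reference_background_mask(frame, mu, sigma, k=2.0):
    """Per-pixel score in (0, 1] that each pixel of `frame` belongs to
    the background Gaussian (mu, sigma).  The exact expression in the
    patent is unavailable; this unnormalized Gaussian membership is an
    illustrative guess.  k is the empirical scale parameter from the text.
    """
    sigma2 = np.maximum(sigma, 1e-6) ** 2   # guard against zero variance
    return np.exp(-((frame - mu) ** 2) / (k * sigma2))
```

Pixels matching the model score near 1; pixels far from the per-pixel mean score near 0.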
Step S120: if t = d, that is, the current processed image is the last frame image of the training sample set, then step S130 is executed; otherwise, step S140 is executed.
Step S130: for i = 0, ..., N, take the reference background masks calculated above, in order, as the i-th corrected background masks of frame d, then execute step S160; for the t-th frame image to be processed, set t = t + 1 and continue with step S120.
Step S140: calculating a difference mask according to the illumination change probability, and performing illumination compensation on each reference background mask;
generating a new image according to the degree of difference between each reference background mask and the corrected background mask of the previous frame image of the current frame image, wherein the value of each pixel point in the new image increases exponentially with the difference degree at the corresponding pixel point; this image is used as the difference-degree mask corresponding to the reference background mask;
calculate the ith disparity mask for the tth frame image, denoted as i =0 The image with the resolution of n rows and m columns is calculated as follows:
equation 2
In the above calculation formula, a is a response coefficient related to the background change speed, with a > 1, and the i-th corrected background mask of the (t-1)-th frame image enters the calculation;
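Equation 2 is likewise not reproduced in the source, so the form below is only an illustrative guess that matches the stated properties: the mask value grows exponentially with the per-pixel disparity, and a > 1 is the response coefficient tied to the background change speed.

```python
import numpy as np

def difference_mask(ref_mask, prev_corrected_mask, a=1.5):
    """Difference-degree mask: grows exponentially with the per-pixel
    disparity between the current reference mask and the previous
    frame's corrected mask.  The exact formula in the patent is an
    image in the source; a ** |diff| here is our illustrative stand-in.
    """
    diff = np.abs(ref_mask - prev_corrected_mask)
    return a ** diff   # equals 1 where masks agree, > 1 where they differ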
step S150: according to the difference degree mask calculated in the step S140 to the reference background maskPerforming illumination compensation calculation to calculate the ith correction background mask of the t frame image, and recording the ith correction background mask The image with the resolution of n rows and m columns is obtained by the following calculation method:
equation 3
In the above calculation formula, the pixel values of corresponding pixel points in the two images are multiplied to generate a new image; b is a response coefficient related to the probability of illumination change, with 0 < b < 1; the illumination change coefficient is calculated as follows:
Equation 4
If i = 0, i.e., no filter is applied to the training image, the (t-1)-th frame training image is used; otherwise, the filtered image generated by passing frame t-1 through the i-th filter is used. R_P, G_P and B_P respectively denote the red, green and blue single-channel images of an image P, where P is either of the two images being compared.
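Equation 4 compares the red, green and blue single-channel images of consecutive frames, but the formula itself is an image in the source. The sketch below is therefore only one plausible indicator with the right behavior: it is large under a global brightness change, where all three channels shift by roughly the same ratio, and near zero for a static scene. The function name and the log-ratio form are ours.

```python
import numpy as np

def illumination_change_coeff(curr, prev, eps=1e-6):
    """Rough per-pixel illumination-change indicator for RGB frames.

    curr, prev: arrays of shape (n, m, 3).
    Returns an (n, m) map: the mean absolute log-ratio of the three
    channels, which responds to multiplicative brightness changes.
    This is an illustrative stand-in for the patent's Equation 4.
    """
    ratio = np.log((curr + eps) / (prev + eps))
    return np.abs(ratio).mean(axis=2)
```

A pixel whose three channels all double between frames yields roughly log 2, while an unchanged pixel yields roughly 0.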
Step S160: and averaging all the corrected background masks to obtain a robust background mask, segmenting the foreground background of the current frame image through the robust background mask, and updating each group of single Gaussian model parameters.
All the corrected background masks are averaged, and the resulting value is the robust background mask of the current frame image: from the corrected background masks calculated in steps S130 and S150, the robust background mask of the t-th frame image is computed and denoted R_t, an image of n rows and m columns; it is used as the segmentation criterion for the foreground and background of the current frame image, calculated as shown in the following formula:
equation 5
The video image is divided into foreground and background according to the above calculation formula: if the value at a pixel point exceeds the foreground-background segmentation threshold (which can be determined by an algorithm such as the Otsu method, i.e., the maximum between-class variance method), the corresponding pixel point is background; otherwise, the corresponding pixel point is foreground. Step S170 is then executed.
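The averaging-and-thresholding step can be sketched as follows. This is a simplified illustration: the text suggests the threshold may come from Otsu's method, but a fixed value is used here, and the function name is ours.

```python
import numpy as np

def segment_foreground(corrected_masks, threshold=0.5):
    """Average the corrected background masks into a robust mask and
    threshold it: pixels whose robust-mask value exceeds `threshold`
    are labelled background (True), the rest foreground (False).

    corrected_masks: array of shape (num_masks, n, m).
    Returns (robust_mask, background_bool_map).
    """
    robust = np.mean(corrected_masks, axis=0)
    background = robust > threshold
    return robust, background
```

In a full pipeline the threshold would be recomputed per frame (e.g., with Otsu's method) rather than fixed.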
Step S170: if the t frame image is the last frame image requiring foreground and background segmentation, executing step S180; otherwise, step S190 is executed.
Step S180: the foreground background segmentation process ends.
Step S190: update all single Gaussian model parameters according to the difference-degree mask and the robust background mask, take the next frame image as the current frame image, and execute step S110. For i = 0, ..., N, μi and σi are updated as follows:
equation 6
Equation 7
In the above formulas, the indicated term denotes the image obtained by squaring the pixel value of each pixel point; then t is set to t = t + 1 and step S110 is executed again.
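Equations 6 and 7 are not reproduced in the source either, so the running-average update below, with a learning rate scaled by the robust and difference-degree masks, is our illustrative stand-in for the adaptive update the text describes: background-confident pixels and fast-changing regions update faster.

```python
import numpy as np

def update_gaussian(mu, sigma, frame, robust_mask, diff_mask, base_rate=0.05):
    """Adaptive running update of the per-pixel Gaussian model.

    The per-pixel learning rate is base_rate scaled by the robust
    background mask and the difference-degree mask (all our choices,
    not the patent's exact Equations 6-7).  Returns (new_mu, new_sigma).
    """
    rate = np.clip(base_rate * robust_mask * diff_mask, 0.0, 1.0)
    new_mu = (1 - rate) * mu + rate * frame
    new_var = (1 - rate) * sigma ** 2 + rate * (frame - new_mu) ** 2
    return new_mu, np.sqrt(new_var)
```

With a zero robust mask (confident foreground) the model is left untouched, which prevents foreground objects from being absorbed into the background.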
In summary, the embodiments of the present invention adaptively adjust the update speed by estimating the speed of background change and the model's lag behind it, so that the background model adapts to background changes more quickly. Illumination compensation is applied to pixel points that are prone to misclassification and judged to have undergone illumination change, which improves the accuracy of foreground-background segmentation and the adaptability of the model under interference such as illumination changes. Several Gaussian filters are used to filter and denoise the video image, and the results of the different filters are combined to model the background separately, further improving the robustness of the background modeling to multi-granularity random noise.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (8)
1. A background modeling method capable of adapting to random noise and illumination change is characterized by comprising the following steps:
step A: selecting a plurality of frame images at the beginning of a video as training images; setting a plurality of Gaussian filters with different variances to form a Gaussian filter bank; filtering each training image with each Gaussian filter in the bank to obtain a plurality of groups of filtered images; forming each group of filtered images together with all training images into a training sample set, thereby obtaining a plurality of training sample sets; for the pixel values of corresponding pixel points at the same position across all images in each training sample set, training a plurality of groups of single Gaussian models by means of the expectation-maximization (EM) algorithm; and setting the last frame of the training images as the current frame image;
step B: calculating a plurality of reference background masks from the plurality of groups of single Gaussian model parameters of the current frame image in combination with the plurality of filtered images;
step C: generating a new image according to the degree of difference between each reference background mask of the current frame image and the corrected background mask corresponding to the previous frame image, wherein the value of each pixel point in the new image increases exponentially with the difference degree at the corresponding pixel point, and using this image as the difference-degree mask corresponding to that reference background mask;
step D: calculating the probability of illumination change of each pixel point in the difference degree mask corresponding to each reference background mask, and obtaining the corrected background mask corresponding to each reference background mask through illumination compensation;
step E: and averaging all the corrected background masks of the current frame image to obtain a robust background mask, segmenting the foreground background of the current frame image through the robust background mask, and updating each group of single Gaussian model parameters.
2. The background modeling method adaptable to random noise and illumination variation according to claim 1, wherein said step B comprises:
when the current frame image is any frame image behind the training image, filtering the current frame image through the Gaussian filter bank to obtain a plurality of filtered images;
calculating a probability value that pixel values of corresponding pixel points at the same position in the current frame image and the multiple filtered images belong to the corresponding single Gaussian model, and using the probability value as a reference background mask;
when the current frame image is the last frame training image, taking the reference background mask as a correction background mask of the current frame image, and executing the step E; otherwise, executing step C.
3. The background modeling method adaptable to random noise and illumination variation according to claim 1, wherein said step E comprises:
averaging all the corrected background masks obtained in step D; the obtained value is taken as the robust background mask of the current frame image and used as the segmentation threshold for the foreground and background of the current frame image: if the pixel value of a pixel point in the current frame image is greater than the foreground-background segmentation threshold, that pixel point is background; otherwise, that pixel point is foreground;
if the current frame image is the last frame image of the video needing foreground and background segmentation, the foreground and background segmentation process is finished; otherwise, updating all single Gaussian model parameters according to the difference degree mask and the robust background mask; and taking the next frame image as the current frame image, and executing the step B.
4. A background modeling method adaptable to random noise and illumination variation according to any one of claims 1 to 3, characterized in that the calculation formula of the reference background mask in the step B is:
Let the serial numbers of the filters be i, with i = 0, ..., N, and let k be a parameter determined by experience, chosen so that more than 90% of the pixel points in the reference background mask take values between 0 and 1; the t-th frame image and its filtered output from the i-th filter enter the calculation, and for i = 0, ..., N the trained Gaussian model is represented by the mean image μi and the variance image σi.
5. The background modeling method capable of adapting to random noise and illumination variation as claimed in claim 4, wherein the calculation formula of the difference-degree mask in step C is as follows:
wherein a is a response coefficient related to the background change speed, with a > 1, and the i-th corrected background mask of frame t-1 enters the calculation.
6. The background modeling method capable of adapting to random noise and illumination variation as claimed in claim 5, wherein the calculation formula of the corrected background mask is:
wherein the pixel values of corresponding pixel points in the two images are multiplied to generate a new image; b is a response coefficient related to the probability of illumination change, with 0 < b < 1; the illumination change coefficient is calculated as:
if i = 0, the (t-1)-th frame image among all the training images and the filtered images generated by the single Gaussian filters is used; otherwise, the filtered image obtained by passing the (t-1)-th frame through the i-th filter is used; R_P, G_P and B_P respectively denote the red, green and blue single-channel images of an image P, where P is either of the two images being compared.
7. The background modeling method capable of adapting to random noise and illumination variation as claimed in claim 6, wherein the calculation formula of the robust background mask R_t in step E is:
where N is the number of all gaussian filters.
8. The background modeling method capable of adapting to random noise and illumination variation according to claim 7, wherein the calculation formula for updating the single-Gaussian model parameters in the step E is as follows:
wherein the indicated term denotes the image obtained by squaring the pixel value of each pixel point; after t = t + 1, step B is executed again.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410147830.7A CN104978735B (en) | 2014-04-14 | 2014-04-14 | It is suitable for the background modeling method of random noise and illumination variation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104978735A CN104978735A (en) | 2015-10-14 |
CN104978735B true CN104978735B (en) | 2018-02-13 |
Family
ID=54275216
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410147830.7A Active CN104978735B (en) | 2014-04-14 | 2014-04-14 | It is suitable for the background modeling method of random noise and illumination variation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104978735B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112150499B (en) * | 2019-06-28 | 2024-08-27 | 华为技术有限公司 | Image processing method and related device |
CN114820684A (en) * | 2022-04-07 | 2022-07-29 | 广州方硅信息技术有限公司 | Image segmentation mask correction method, device, equipment and medium thereof |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1941850A (en) * | 2005-09-29 | 2007-04-04 | 中国科学院自动化研究所 | Pedestrian tracting method based on principal axis marriage under multiple vedio cameras |
CN101420536A (en) * | 2008-11-28 | 2009-04-29 | 江苏科海智能系统有限公司 | Background image modeling method for video stream |
CN101719216A (en) * | 2009-12-21 | 2010-06-02 | 西安电子科技大学 | Movement human abnormal behavior identification method based on template matching |
CN103679704A (en) * | 2013-11-22 | 2014-03-26 | 中国人民解放军第二炮兵工程大学 | Video motion shadow detecting method based on lighting compensation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130294669A1 (en) * | 2012-05-02 | 2013-11-07 | University Of Louisville Research Foundation, Inc. | Spatial-spectral analysis by augmented modeling of 3d image appearance characteristics with application to radio frequency tagged cardiovascular magnetic resonance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||