CN116977228B - Image noise reduction method, electronic device and storage medium - Google Patents
- Publication number: CN116977228B (application number CN202311240519.2A)
- Authority: CN (China)
- Prior art keywords: image, filtering, value, detected, preset
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 2207/20024 — Filtering details (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T 2207/00: Indexing scheme for image analysis or image enhancement; G06T 2207/20: Special algorithmic details)
- G06T 2207/20221 — Image fusion; image merging (under G06T 2207/20212: Image combination)
- Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D: Climate change mitigation technologies in information and communication technologies, i.e. ICT aiming at the reduction of their own energy use)
Abstract
The invention discloses an image noise reduction method, an electronic device and a storage medium. The method comprises the following steps: detecting the noise type of an acquired image to be detected; when it is determined that non-impulse noise exists in the image to be detected, performing low-pass filtering and non-local mean filtering on the image to be detected to obtain a first filtering result and a second filtering result; fusing the first filtering result and the second filtering result to obtain a mixed filtering value; determining the sample filtering value of the frame before the current frame corresponding to the mixed filtering value, and calculating a brightness difference value; comparing the brightness difference value with a preset dynamic and static threshold value to determine the motion information of the image to be detected; and performing a time domain filtering operation on the motion information based on a preset delay control parameter and a preset noise reduction intensity gain parameter to obtain a target image. Embodiments of the invention reduce system overhead while enabling beneficial interaction between spatial and temporal filtering, so that the image is denoised cleanly and motion blur is avoided.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image noise reduction method, an electronic device, and a storage medium.
Background
In image display devices such as televisions, cameras and monitors, improving image quality is an important topic in the development of image and video technology. During actual video or image generation, mixed noise such as white noise, interference noise, photon shot noise and quantization noise is inevitably introduced in the acquisition process owing to environmental and other influences. A good video image sequence denoising technique can be embedded as a pre-processor in a video coding system to save transmission bandwidth and storage space, and can also be applied in the post-processing of video images to improve subjective image quality.
To remove noise such as white noise, interference noise and photon shot noise effectively, the related art generally relies on two classes of techniques: spatial-domain and transform-domain methods. Spatial denoising exploits the spatial correlation of the image and the characteristics of the noise, but it cannot reliably distinguish noise from texture, so it tends to leave residual noise or blur detail. Temporal denoising removes noise through temporal correlation and falls into two modes: motion-detection-based and motion-compensation-based denoising. Motion compensation can avoid motion blur, but its computational cost is enormous and most current devices cannot run it in real time, so designing a temporal noise reduction algorithm that balances effect and complexity is particularly critical.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein.
The embodiments of the invention provide an image noise reduction method, an electronic device and a storage medium, which can reduce the system overhead required for noise reduction and realize beneficial interaction between spatial and temporal filtering, so as to denoise images cleanly and avoid motion blur.
In a first aspect, an embodiment of the present invention provides an image noise reduction method, including:
detecting the noise type of the acquired image to be detected;
when it is determined that non-impulse noise exists in the image to be detected, performing low-pass filtering on the image to be detected and performing non-local mean filtering on the image to be detected to obtain a first filtering result and a second filtering result;
fusing the first filtering result and the second filtering result to obtain a mixed filtering value;
determining a sample filtering value of a frame before the current frame corresponding to the mixed filtering value, and carrying out weighting operation on the mixed filtering value and the sample filtering value to obtain a brightness difference value;
comparing the brightness difference value with a preset dynamic and static threshold value to determine the motion information of the image to be detected;
and performing a time domain filtering operation on the motion information based on a preset delay control parameter and a preset noise reduction intensity gain parameter to obtain a target image.
The image noise reduction method provided by the embodiments of the invention has at least the following beneficial effects. First, noise type detection is performed on the acquired image to be detected to determine the type of noise present, so that a noise reduction algorithm can be selected in a targeted manner. When it is determined that non-impulse noise exists in the image to be detected, low-pass filtering is performed to reduce high-frequency noise and smooth details and textures, and non-local mean filtering is performed to obtain a first filtering result and a second filtering result, preserving image detail while reducing noise and providing a higher-quality noise reduction effect. The first filtering result and the second filtering result are then fused to obtain a mixed filtering value, which achieves a balance between filtering strength and edge protection and improves the flexibility of spatial noise reduction. Next, the sample filtering value of the frame before the current frame corresponding to the mixed filtering value is determined, and the mixed filtering value and the sample filtering value are weighted to obtain a brightness difference value, which adjusts the brightness of the image to be detected and effectively reduces the noise level, random noise points and texture distortion. The brightness difference value is compared with a preset dynamic and static threshold value to determine the motion information of the image to be detected, so that the motion state is judged accurately while edge definition is maintained. Finally, a time domain filtering operation is performed on the motion information based on a preset delay control parameter and a preset noise reduction intensity gain parameter to obtain the target image, which improves the accuracy of motion judgment, reduces trailing and blurring, and improves the noise-reduced image quality.
In some embodiments, the performing noise type detection on the acquired image to be detected includes:
selecting a current pixel point from the image to be detected, and acquiring a pixel coordinate of the current pixel point, wherein the pixel coordinate is used for representing the position information of the current pixel point in the image to be detected;
determining a plurality of adjacent pixel points around the current pixel point according to the pixel coordinates;
combining the adjacent pixel points, and carrying out average value calculation on the combined adjacent pixel points to obtain a plurality of azimuth average values;
performing difference comparison on the pixel value of the current pixel point and the azimuth mean values to obtain a plurality of neighborhood difference values;
and detecting the noise type of the image to be detected according to the neighborhood difference values and a preset pixel threshold value.
In some embodiments, the determining that non-impulse noise exists in the image to be detected includes:
when all the neighborhood difference values are smaller than or equal to the pixel threshold value, determining that non-impulse noise exists in the image to be detected;
after the noise type detection is performed on the acquired image to be detected, the method further comprises the following steps:
when all the neighborhood difference values are larger than the pixel threshold value, determining that impulse noise exists in the image to be detected;
calculating the average value of all the azimuth average values;
and updating the pixel value of the current pixel point according to the average value.
In some embodiments, performing low-pass filtering on the image to be detected and performing non-local mean filtering on the image to be detected to obtain a first filtering result and a second filtering result includes:
performing convolution operation on the image to be detected based on a preset Gaussian template and a preset search window to obtain a brightness filtering result and a chromaticity filtering result;
generating a first filtering result according to the brightness filtering result and the chromaticity filtering result;
non-local mean filtering is carried out on the image to be detected in the search window according to a preset matching window, and an absolute difference weighted mean of pixels in the search window is obtained;
determining a plurality of weight values in the search window according to the absolute difference weighted average value, and determining a target pixel point corresponding to the weight value;
and obtaining the brightness values of all the target pixel points, and carrying out weighted average on the weight values and the brightness values to obtain a second filtering result.
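As an illustration of the steps above, the following sketch (Python with NumPy) implements a Gaussian low-pass pass, a toy non-local mean pass whose weights come from an absolute-difference patch comparison, and the fusion into a mixed filtering value. All kernel sizes, window sizes and the parameter `h` are assumptions, since the patent does not specify the Gaussian template, search window or matching window.

```python
import numpy as np

def gaussian_lowpass(img):
    """First filtering result: 3x3 Gaussian low-pass (illustrative kernel)."""
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float64) / 16.0
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def nlm_filter(img, search=5, patch=3, h=10.0):
    """Second filtering result: toy non-local mean over a small search window,
    weighting candidates by the mean absolute difference of matching patches."""
    H, W = img.shape
    ps, ss = patch // 2, search // 2
    pad = np.pad(img, ps + ss, mode="edge").astype(np.float64)
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y + ps + ss, x + ps + ss
            ref = pad[cy - ps:cy + ps + 1, cx - ps:cx + ps + 1]
            wsum, acc = 0.0, 0.0
            for sy in range(-ss, ss + 1):
                for sx in range(-ss, ss + 1):
                    cand = pad[cy + sy - ps:cy + sy + ps + 1,
                               cx + sx - ps:cx + sx + ps + 1]
                    # weight from the absolute-difference patch mean
                    w = np.exp(-np.mean(np.abs(ref - cand)) / h)
                    wsum += w
                    acc += w * pad[cy + sy, cx + sx]
            out[y, x] = acc / wsum
    return out

def blend(y_lp, y_nlm, alpha=0.5):
    """Mixed filtering value: fuse the two results with fusion weight alpha;
    smaller alpha gives more weight to the edge-preserving NLM result."""
    return (1.0 - alpha) * y_nlm + alpha * y_lp
```

On a flat image both passes leave the values unchanged, which is a quick sanity check that the kernels are normalised.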
In some embodiments, before the fusing the first filtering result and the second filtering result to obtain the mixed filtering value, the method further includes:
In the process of carrying out low-pass filtering on the image to be detected, calculating a horizontal gradient value and a vertical gradient value of the image to be detected;
setting a gradient index value according to the horizontal gradient value and the vertical gradient value;
and determining the texture weight value of the image to be detected based on the gradient index value, a preset weight threshold value, a preset gradient gain value and a preset offset value.
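A minimal sketch of such a texture weight follows. The gradient index (sum of absolute horizontal and vertical gradients) and the affine gain/offset mapping are illustrative assumptions; the patent only states that the weight is derived from the gradients, a gradient gain value, an offset value and a weight threshold.

```python
import numpy as np

def texture_weight(img, gain=0.05, offset=0.1, w_max=1.0):
    """Per-pixel texture weight derived from horizontal/vertical gradients
    (gain, offset and the cap w_max are illustrative parameter values)."""
    f = img.astype(np.float64)
    gh = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))   # horizontal gradient value
    gv = np.abs(np.diff(f, axis=0, prepend=f[:1, :]))   # vertical gradient value
    grad_index = gh + gv                                # gradient index value
    w = gain * grad_index + offset                      # gradient gain plus offset
    return np.clip(w, 0.0, w_max)                       # weight threshold as a cap
```

Flat regions receive the offset as their weight, while strong edges saturate at the cap, so the weight can later damp the motion decision on textured areas.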
In some embodiments, the comparing of the brightness difference value with a preset dynamic and static threshold value to determine the motion information of the image to be detected includes:
comparing the brightness difference value with a preset dynamic and static threshold value;
when the brightness difference value is larger than or equal to the dynamic and static threshold value, determining that the image to be detected is in a motion state;
determining a moving value of the image to be detected based on a preset dynamic and static transition slope, the brightness difference value and the dynamic and static threshold value;
and determining the motion information of the image to be detected according to the motion value and the texture weight value.
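The dynamic and static comparison above can be sketched as follows. The soft ramp below the threshold and the product combination with the texture weight are assumed realisations of the "dynamic and static transition slope" and the final combination, which the patent does not spell out.

```python
def motion_value(lum_diff, thresh=12.0, slope=0.25):
    """Map a brightness difference to a [0, 1] motion value.

    At or above the dynamic/static threshold the pixel counts as motion (1.0);
    below it, the transition slope gives a gradual static-to-motion ramp
    instead of a hard switch (the exact ramp shape is an assumption)."""
    if lum_diff >= thresh:
        return 1.0
    return max(0.0, 1.0 - slope * (thresh - lum_diff))

def motion_info(lum_diff, tex_weight, thresh=12.0, slope=0.25):
    """Combine the motion value with the texture weight; treating the
    combination as a product is an illustrative choice."""
    return motion_value(lum_diff, thresh, slope) * tex_weight
```

The ramp avoids abrupt switching between dynamic and static regions, matching the smooth-transition goal described above.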
In some embodiments, the performing a temporal filtering operation on the motion information based on a preset delay control parameter and a preset noise reduction intensity gain parameter to obtain a target image includes:
determining sample motion information of a frame before a current frame corresponding to the motion information, and calculating motion information weight of the sample motion information;
calculating a motion time domain filtering result of the image to be detected according to the sample motion information, the motion information and the motion information weight;
performing delay processing on the motion time domain filtering result based on a preset delay control parameter to obtain a delay parameter;
performing horizontal low-pass filtering on the delay parameter based on a preset low-pass filter kernel to obtain a target motion parameter;
and performing time domain filtering on the image to be detected through a preset noise reduction intensity gain parameter and the target motion parameter to obtain a target image.
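A hedged sketch of the temporal stages above: blending previous-frame motion information, a decaying hold standing in for the delay control parameter, and a one-dimensional horizontal low-pass over the motion map. All coefficients are illustrative assumptions, not values from the patent.

```python
import numpy as np

def temporal_motion_filter(m_cur, m_prev, w_prev=0.5):
    """Motion time domain filtering result: blend current and
    previous-frame motion information with weight w_prev."""
    return (1.0 - w_prev) * m_cur + w_prev * m_prev

def delay_hold(m_filt, m_state, decay=0.9):
    """Delay processing: let a detected-motion value fade out gradually so
    temporal filtering is re-enabled slowly after motion stops
    (the max/decay hold is an assumed realisation of the delay parameter)."""
    return np.maximum(m_filt, m_state * decay)

def horizontal_lowpass(m, kernel=(0.25, 0.5, 0.25)):
    """Target motion parameter: horizontal low-pass over the motion map
    with a small 1-D kernel (kernel taps are illustrative)."""
    pad = np.pad(m, ((0, 0), (1, 1)), mode="edge")
    return kernel[0] * pad[:, :-2] + kernel[1] * pad[:, 1:-1] + kernel[2] * pad[:, 2:]
```

Holding and smoothing the motion map suppresses single-pixel flicker in the dynamic/static decision, which is the stated goal of the delay and low-pass stages.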
In some embodiments, the noise reduction intensity gain parameters include a first gain parameter and a second gain parameter; the time domain filtering is performed on the image to be detected through a preset noise reduction intensity gain parameter and the target motion parameter to obtain a target image, and the method comprises the following steps:
calculating the target motion parameter through the first gain parameter to obtain brightness filtering weight;
calculating the target motion parameter through the second gain parameter to obtain a chromaticity filtering weight;
performing time domain filtering on the image to be detected according to the brightness filtering weight, the mixed filtering value and the sample filtering value to obtain a third filtering result;
performing time domain filtering on the image to be detected according to the chromaticity filtering weight and the target motion parameter to obtain a fourth filtering result;
and denoising the image to be detected according to the third filtering result and the fourth filtering result to obtain a target image.
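The luma/chroma gain steps above might look like the following sketch. The linear gain-to-weight mapping is an assumption; the patent only says the first and second gain parameters produce the brightness and chromaticity filtering weights from the target motion parameter.

```python
import numpy as np

def filter_weight(motion, gain):
    """More motion -> smaller temporal weight, i.e. less of the previous
    frame is blended in (the linear mapping is an illustrative assumption)."""
    return np.clip(gain * (1.0 - motion), 0.0, 1.0)

def temporal_blend(cur, prev, weight):
    """Time domain filtering: weight selects how much of the previous
    frame's filtered sample is mixed into the current value."""
    return (1.0 - weight) * cur + weight * prev

def denoise_pixel(y_mix, y_prev, c_cur, c_prev, motion,
                  gain_luma=0.8, gain_chroma=0.6):
    w_y = filter_weight(motion, gain_luma)     # brightness filtering weight
    w_c = filter_weight(motion, gain_chroma)   # chromaticity filtering weight
    y_out = temporal_blend(y_mix, y_prev, w_y)  # third filtering result
    c_out = temporal_blend(c_cur, c_prev, w_c)  # fourth filtering result
    return y_out, c_out
```

At full motion the weights drop to zero and the current (spatially filtered) values pass through unchanged, which is how motion blur is avoided; in static areas the history dominates and temporal averaging is strongest.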
In a second aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes a memory and a processor, where the memory stores a computer program, and the processor implements the image noise reduction method according to the first aspect when executing the computer program.
In a third aspect, embodiments of the present invention further provide a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the image denoising method according to the first aspect.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate and do not limit the invention.
FIG. 1 is an overall flowchart of an image denoising method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a specific method of step S101 in FIG. 1;
FIG. 3 is a flowchart of a specific method of step S102 in FIG. 1;
FIG. 4 is a general flow chart of an image denoising method according to another embodiment of the present invention;
FIG. 5 is another flowchart of a specific method of step S102 in FIG. 1;
FIG. 6 is a general flow chart of an image denoising method according to another embodiment of the present invention;
FIG. 7 is a flowchart of a specific method of step S105 in FIG. 1;
FIG. 8 is a flowchart of a specific method of step S106 in FIG. 1;
FIG. 9 is a flowchart of a specific method of step S805 in FIG. 8;
FIG. 10 is a system diagram of an image noise reduction system according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In image display devices such as televisions, cameras and monitors, improving image quality is an important topic in the development of image and video technology. During actual video or image generation, mixed noise such as white noise, interference noise, photon shot noise and quantization noise is inevitably introduced in the acquisition process owing to environmental and other influences. A good video image sequence denoising technique can be embedded as a pre-processor in a video coding system to save transmission bandwidth and storage space, and can also be applied in the post-processing of video images to improve subjective image quality.
To remove noise such as white noise, interference noise and photon shot noise effectively, the related art generally relies on two classes of techniques: spatial-domain and transform-domain methods. Spatial denoising exploits the spatial correlation of the image and the characteristics of the noise, but it cannot reliably distinguish noise from texture, so it tends to leave residual noise or blur detail. Temporal denoising removes noise through temporal correlation and falls into two modes: motion-detection-based and motion-compensation-based denoising. Motion compensation can avoid motion blur, but its computational cost is enormous and most current devices cannot run it in real time, so designing a temporal noise reduction algorithm that balances effect and complexity is particularly critical.
In order to solve the above problems, embodiments of the present invention provide an image noise reduction method, an electronic device and a storage medium. First, noise type detection is performed on the acquired image to be detected to determine the type of noise present, so that a noise reduction algorithm can be selected in a targeted manner to remove the noise. When it is determined that non-impulse noise exists in the image to be detected, low-pass filtering is performed to reduce high-frequency noise and smooth details and textures, and non-local mean filtering is performed to obtain a first filtering result and a second filtering result, preserving image detail while reducing noise and providing a higher-quality noise reduction effect. The first filtering result and the second filtering result are then fused to obtain a mixed filtering value, which achieves a balance between filtering strength and edge protection and improves the flexibility of spatial noise reduction. Next, the sample filtering value of the frame before the current frame corresponding to the mixed filtering value is determined, and the mixed filtering value and the sample filtering value are weighted to obtain a brightness difference value, which adjusts the brightness of the image to be detected and effectively reduces the noise level, random noise points and texture distortion. The brightness difference value is compared with a preset dynamic and static threshold value to determine the motion information of the image to be detected, so that the motion state is judged while edge definition is maintained. Finally, a time domain filtering operation is performed on the motion information based on a preset delay control parameter and a preset noise reduction intensity gain parameter to obtain a target image, which improves the accuracy of motion judgment, reduces trailing and blurring, and improves the noise-reduced image quality.
Referring to fig. 1, an embodiment of the present invention provides an image noise reduction method, which includes, but is not limited to, the following steps S101 to S106.
Step S101, detecting the noise type of the acquired image to be detected;
in some embodiments, noise type detection is performed on the acquired image to be detected, so that it can be determined whether the image is contaminated by impulse noise, achieving accurate recognition of the noise type and making the filtering more targeted.
Step S102, when it is determined that non-impulse noise exists in the image to be detected, performing low-pass filtering on the image to be detected and performing non-local mean filtering on the image to be detected to obtain a first filtering result and a second filtering result;
in some embodiments, when it is determined that non-impulse noise exists in the image to be detected, low-pass filtering is performed on the image to be detected, achieving multi-scale analysis and processing so that the filtering is more comprehensive and thorough and image noise is further reduced; non-local mean filtering is performed on the image to be detected to obtain a first filtering result and a second filtering result, which effectively reduces the noise level, reduces random noise points and texture distortion in the image, better balances noise suppression against detail preservation, and produces a clearer and more natural image.
Step S103, fusing the first filtering result and the second filtering result to obtain a mixed filtering value;
in some embodiments, the first filtering result and the second filtering result are fused based on a preset fusion weight, so as to balance the noise reduction effect and the smoothing effect, obtain a mixed filtering value, and improve the follow-up adjustable flexibility.
The specific procedure for obtaining the mixed filter value is as follows:
Y_mix = (1 − α) · Y_NLM + α · Y_LP
wherein Y_NLM is the second filtering result, α is the fusion weight, and Y_LP is the first filtering result (the notation is reconstructed from the surrounding description, as the original formula is not legible).
It will be appreciated that the smaller α is, the more the edge-preserving non-local mean result is favored; otherwise, the Gaussian result with higher filtering strength is favored.
Step S104, determining a sample filtering value of a frame before the current frame corresponding to the mixed filtering value, and carrying out weighting operation on the mixed filtering value and the sample filtering value to obtain a brightness difference value;
in some embodiments, a sample filtering value of a frame before a current frame corresponding to the mixed filtering value is determined, so that a motion state of an image to be detected can be obtained, and the mixed filtering value and the sample filtering value are weighted to obtain a brightness difference value, so that dynamic and static judgment of the image to be detected can be conveniently realized through the brightness difference value, and accuracy of motion information judgment is guaranteed.
Step S105, comparing the brightness difference value with a preset dynamic and static threshold value, and determining motion information of an image to be detected;
in some embodiments, the brightness difference value is compared with a preset dynamic and static threshold value, and the motion information of the image to be detected is determined, so that abrupt switching or flaws between dynamic and static areas are reduced, a smooth transition effect is realized, and visual continuity and smoothness are improved.
Step S106, performing a time domain filtering operation on the motion information based on the preset delay control parameter and the preset noise reduction intensity gain parameter to obtain a target image.
In some embodiments, the time domain filtering operation is performed on the motion information based on the preset delay control parameter and the preset noise reduction intensity gain parameter to obtain the target image, so that noise immunity, edge protection, delay, motion information of preceding and following frames, and smoothing are combined in multiple dimensions during the dynamic and static judgment. This makes the dynamic and static judgment accurate, improves the positioning accuracy of edges, makes the detection result more reliable, removes artifacts, and improves the visual perception effect.
Referring to fig. 2, in some embodiments, step S101 may include, but is not limited to including, steps S201 through S205:
Step S201, selecting a current pixel point from an image to be detected, and acquiring pixel coordinates of the current pixel point;
it should be noted that, the pixel coordinates are used to represent the position information of the current pixel point in the image to be measured.
In some embodiments, a current pixel point to be tested is selected from the image to be tested, and a pixel coordinate of the current pixel point in the image to be tested is obtained, so that subsequent judgment of the noise type is facilitated.
It should be noted that, the current pixel point is any pixel point randomly selected on the image to be measured, and the embodiment is not limited specifically.
Step S202, determining a plurality of adjacent pixel points around the current pixel point according to the pixel coordinates;
in some embodiments, the coordinates of all the pixel points in the image to be detected are obtained, the distances between these coordinates and the pixel coordinates are calculated, and a plurality of adjacent pixel points around the current pixel point are determined, so as to analyze the high-frequency components in the image and judge whether an abnormal amplitude change occurs at the pixel point.
Step S203, combining adjacent pixel points, and carrying out mean value calculation on the combined adjacent pixel points to obtain a plurality of azimuth mean values;
in some embodiments, adjacent pixel points are combined according to coordinates of the adjacent pixel points, and average value calculation is performed on the combined adjacent pixel points, so that average values of all directions around the current pixel point can be accurately calculated, a plurality of azimuth average values are obtained, and subsequent judgment of noise types is facilitated.
It should be noted that, in the process of combining the adjacent pixel points, the positional relationship between each adjacent pixel point and the current pixel point may be determined according to the coordinates, and the adjacent pixel points in the same direction are then combined to obtain the mean values of a plurality of directions. The directions in this embodiment may comprise the four directions east, south, west and north; may be subdivided into the eight directions east, south, west, north, northeast, southeast, northwest and southwest; or may be divided according to the angles between the adjacent pixel points and the current pixel point. This embodiment is not particularly limited.
Step S204, comparing the pixel value of the current pixel point with a plurality of azimuth mean values to obtain a plurality of neighborhood difference values;
in some embodiments, the pixel value of the current pixel point is compared with the azimuth mean values in a difference manner, so that the difference value between the current pixel point and the azimuth mean values is determined, a plurality of neighborhood difference values are obtained, and whether the current pixel point is polluted by impact noise or not is conveniently determined later.
It should be noted that, in this embodiment, the neighborhood differences are obtained by calculating the difference between the current pixel point and the mean values of the four surrounding directions, as shown in the following formula:

D_k = |P - M_k|, k in {E, S, W, N};

where P is the pixel value of the current pixel point, and M_E, M_S, M_W and M_N are the azimuth mean values of the four directions.
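Steps S203 and S204 can be sketched as follows. This is a hedged illustration: the function names, the 3×3 neighbourhood (one adjacent pixel per cardinal direction, so each "mean" is that neighbour's value), and the sample values are assumptions, not taken from the patent.

```python
def azimuth_means(img, r, c):
    # With a 3x3 neighbourhood there is exactly one adjacent pixel per
    # cardinal direction, so each azimuth mean equals that neighbour's value.
    return {
        "north": img[r - 1][c],
        "south": img[r + 1][c],
        "west": img[r][c - 1],
        "east": img[r][c + 1],
    }

def neighborhood_diffs(img, r, c):
    # Absolute difference between the current pixel and each azimuth mean.
    p = img[r][c]
    return {d: abs(p - m) for d, m in azimuth_means(img, r, c).items()}

img = [
    [10, 12, 11],
    [13, 200, 12],  # the centre value 200 is a suspected impulse
    [11, 10, 13],
]
diffs = neighborhood_diffs(img, 1, 1)
```

With a larger neighbourhood, several pixels per direction would be averaged before differencing.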
Step S205, detecting the noise type of the image to be detected according to the plurality of neighborhood difference values and the preset pixel threshold value.
In some embodiments, the noise type of the image to be detected is detected according to the plurality of neighborhood difference values and the preset pixel threshold value, so that the noise type in the image to be detected is determined, the accurate identification of the noise type is realized, and the filtering is more targeted.
It should be noted that, the pixel threshold in this embodiment may be set according to the requirement of the user.
Referring to fig. 3, in some embodiments, step S102 may include, but is not limited to including step S301:
step S301, when all neighborhood difference values are smaller than or equal to a pixel threshold value, determining that non-impact noise exists in the image to be detected;
in some embodiments, when all the neighborhood difference values are smaller than or equal to the pixel threshold value, it is determined that non-impact noise exists in the image to be detected; the judgment can be written as:

max(D_E, D_S, D_W, D_N) <= T => non-impact noise, and the target filtered value F of the current pixel point is taken as the mixed filter value F_mix;

where T is the pixel threshold, F_mix is the mixed filter value, and F is the target filtered value of the current pixel point.
Referring to fig. 4, fig. 4 is an overall flowchart of an image denoising method according to another embodiment of the present invention, including, but not limited to, steps S401 to S403:
It should be noted that, steps S401 to S403 occur after the noise type detection is performed on the image to be detected.
Step S401, when all neighborhood difference values are larger than a pixel threshold value, determining that impact noise exists in the image to be detected;
step S402, calculating an average value of all azimuth average values;
step S403, updating the pixel value of the current pixel point according to the average value.
In steps S401 to S403 of some embodiments, when all the neighborhood differences are greater than the pixel threshold, the current pixel point differs greatly from its neighborhood, so it is determined that impact noise exists in the image to be measured. The average value of all the azimuth mean values is then calculated, and the pixel value of the current pixel point is updated to this average, replacing the noisy value with the surrounding mean so as to avoid interference from the impact noise.
It will be appreciated that impulse noise typically appears as a high frequency component in the frequency domain, as its waveform changes suddenly, containing rapid amplitude changes. The non-impact noise contains more low-frequency components, and the waveform change is relatively gentle, so the embodiment can judge the noise type through the difference of the mean value of the current pixel point and the surrounding pixel points.
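The decision of steps S401 to S403 can be sketched in a few lines. This is a minimal illustration; the function name, the threshold value and the one-neighbour-per-direction means are assumptions.

```python
def denoise_impulse_pixel(img, r, c, threshold):
    # Azimuth means of the four cardinal neighbours (one pixel each here).
    means = [img[r - 1][c], img[r + 1][c], img[r][c - 1], img[r][c + 1]]
    diffs = [abs(img[r][c] - m) for m in means]
    if all(d > threshold for d in diffs):
        # All differences exceed the pixel threshold: impulse noise.
        # Replace the pixel by the average of the azimuth means (S402-S403).
        return sum(means) / len(means)
    # Otherwise leave the pixel for the mixed (spatial) filtering path.
    return img[r][c]

img = [
    [10, 12, 11],
    [13, 200, 12],
    [11, 10, 13],
]
fixed = denoise_impulse_pixel(img, 1, 1, threshold=30)
```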
Referring to fig. 5, in some embodiments, step S102 may include, but is not limited to including, step S501 to step S505:
Step S501, carrying out convolution operation on an image to be detected based on a preset Gaussian template and a preset search window to obtain a brightness filtering result and a chromaticity filtering result;
in some embodiments, convolution operation is performed on an image to be detected based on a preset Gaussian template and a preset search window, so that spatial filtering of brightness and chromaticity of a current frame is achieved, a brightness filtering result and a chromaticity filtering result are obtained, multi-scale analysis and processing are achieved, filtering is more comprehensive and thorough, and noise of the image is further reduced.
It should be noted that, in this embodiment, a low-pass filter is used to perform low-pass filtering on the image to be detected, and the window size of the search window may be adjusted according to the needs of the user, for example 5×5, 8×8 or 7×7, which is not limited in this embodiment.
It can be understood that, taking the convolution of the brightness of the image to be measured as an example, the specific convolution process is:

F_1(x, y) = sum over (i, j) of Y(x - i, y - j) * G(i, j);

where F_1 is the first filtering result, Y is the brightness value of the image to be measured, and G is the Gaussian template.
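A minimal sketch of the luminance convolution of step S501. The normalised 3×3 Gaussian template and the border-clipping behaviour are assumptions; the patent leaves the template and window sizes open.

```python
GAUSS3 = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # unnormalised 3x3 Gaussian template

def gaussian_filter(lum):
    """Convolve a 2-D luminance plane with GAUSS3, renormalising at borders."""
    h, w = len(lum), len(lum[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:  # clip at the image border
                        k = GAUSS3[dy + 1][dx + 1]
                        acc += k * lum[yy][xx]
                        norm += k
            out[y][x] = acc / norm
    return out

flat = gaussian_filter([[5] * 3 for _ in range(3)])  # a flat patch is unchanged
```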
Step S502, generating a first filtering result according to the brightness filtering result and the chromaticity filtering result;
in some embodiments, the first filtering result is generated according to the luminance filtering result and the chrominance filtering result, so that spatial filtering on luminance and chrominance of the current frame is achieved, spatial correlation in an image can be captured, and detailed information such as edges and textures can be reserved.
Step S503, non-local average filtering is carried out on the image to be detected in the search window according to a preset matching window, and an absolute difference weighted average of pixels in the search window is obtained;
in some embodiments, a matching window with a preset size is set, non-local average filtering is performed on an image to be detected in the search window according to the size of the matching window, and an absolute difference weighted average of pixels in the search window is obtained, so that noise level can be effectively reduced, random noise points and texture distortion in the image can be reduced, noise suppression and detail reservation can be balanced better, and a clearer and more natural image result can be generated.
It should be noted that the area of the matching window is smaller than the area of the search window; for example, when the search window is 5×5 the matching window may be 3×3, and when the search window is 6×6 the matching window may be 2×2; this embodiment is not particularly limited.
It will be appreciated that the specific non-local mean filtering process is as follows:

D(i) = mean(|B_i - B_c|);

where D(i) is the absolute-difference weighted average, B_i is an image block in the search window, B_c is the image block at the center of the image to be measured, and mean(|·|) denotes averaging the absolute value differences.
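Step S503 can be sketched as textbook non-local means at a single pixel. The window sizes, the exponential weighting of block distances and the filtering parameter `h` are assumptions consistent with standard NLM, not taken verbatim from the patent.

```python
import math

def nlm_pixel(img, r, c, search=1, patch=1, h=10.0):
    """Non-local means at (r, c): weight each candidate pixel in the search
    window by exp(-mean absolute patch difference / h)."""
    H, W = len(img), len(img[0])

    def patch_at(pr, pc):
        # Flattened patch around (pr, pc), replicating edge pixels.
        return [img[min(max(pr + dy, 0), H - 1)][min(max(pc + dx, 0), W - 1)]
                for dy in range(-patch, patch + 1)
                for dx in range(-patch, patch + 1)]

    centre = patch_at(r, c)
    num = den = 0.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            pr, pc = r + dy, c + dx
            if not (0 <= pr < H and 0 <= pc < W):
                continue
            cand = patch_at(pr, pc)
            dist = sum(abs(a - b) for a, b in zip(centre, cand)) / len(centre)
            wgt = math.exp(-dist / h)       # similar patches get high weight
            num += wgt * img[pr][pc]
            den += wgt
    return num / den

val = nlm_pixel([[7] * 5 for _ in range(5)], 2, 2)  # uniform image is unchanged
```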
Step S504, determining a plurality of weight values in the search window according to the absolute difference weighted average value, and determining a target pixel point corresponding to the weight value;
Step S505, obtaining the brightness values of all the target pixel points, and carrying out weighted average on the weight values and the brightness values to obtain a second filtering result.
In some embodiments, a plurality of weight values in a search window are determined according to an absolute difference weighted average value, target pixel points corresponding to the weight values are determined, then brightness values of all the target pixel points are obtained, and weighted average is carried out on all the weight values and the brightness values to obtain a second filtering result, so that noise and discontinuity in an image can be reduced, isolated noise points or abnormal values in the image to be detected can be eliminated, filtering is more accurate, and noise reduction quality is improved.
Referring to fig. 6, fig. 6 is an overall flowchart of an image denoising method according to another embodiment of the present invention, including but not limited to steps S601 to S603:
it should be noted that steps S601 to S603 occur before the mixed filter value is obtained.
Step S601, calculating a horizontal gradient value and a vertical gradient value of an image to be detected in the process of carrying out low-pass filtering on the image to be detected;
in some embodiments, considering that the image edge may be offset in the motion determination, in this embodiment, in the process of performing low-pass filtering on the image to be detected, a horizontal gradient value and a vertical gradient value of the image to be detected are calculated to detect the edge of the image to be detected, and the direction of the edge is determined, so that structure, texture, shape information and the like in the image to be detected can be obtained.
It should be noted that the specific process of calculating the horizontal gradient value and the vertical gradient value of the image to be measured is as follows:

G_x = Y * K_x;

G_y = Y * K_y;

where K_x and K_y are the horizontal and vertical Gaussian gradient kernels respectively (* denotes convolution), G_x is the horizontal gradient value, and G_y is the vertical gradient value.
Step S602, setting a gradient index value according to the horizontal gradient value and the vertical gradient value;
in some embodiments, the gradient index value is set according to the horizontal gradient value and the vertical gradient value, so that misjudgment of edge variation is avoided. The gradient index value represents the local gradient intensity in the image to be measured and is used for calculating gradient gain or weighting other texture characteristics; the specific process is as follows:

E = sqrt(G_x^2 + G_y^2);
step S603, determining a texture weight value of the image to be detected based on the gradient index value, the preset weight threshold value, the preset gradient gain value and the preset offset value.
In some embodiments, the texture weight value of the image to be measured is determined based on the gradient index value, the preset weight threshold value, the preset gradient gain value and the preset offset value, so that the motion judgment is corrected, the influence of redundant information is reduced, different scenes can be adapted, and the situation that the edge is counteracted in the motion process is avoided.
The process of calculating the texture weight value specifically comprises the following steps:

W = max(0, T_w - g * max(0, E - b));

where W is the texture weight value, T_w is the weight threshold, E is the gradient index value, g is the gradient gain value, and b is the offset value.
It can be understood that the smaller the Gaussian gradient value, the larger the texture weight value and the closer the pixel is to a flat region; conversely, a larger gradient indicates an edge region, so accurate detection of the image edges of the image to be detected can be realized.
It should be noted that the gradient gain value in this embodiment is used to adjust the weight of the texture feature, and is also used to determine the importance of the texture feature in a certain region based on the local gradient intensity in the image to be measured; the weight threshold is used for limiting the weight value range of the texture features, and by setting the weight threshold, which texture features are reserved or removed can be controlled; the offset value is used for calculating texture features in different directions, detecting the texture directions in the image, adjusting the perception range of the texture features by setting different offset values, and calculating the texture weight value by the parameters so as to adapt to different texture features and task requirements, thereby improving the performance and adaptability of the algorithm.
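One hedged sketch of steps S601 to S603: central differences stand in for the Gaussian gradient kernels, the gradient index value is taken as the gradient magnitude, and the weight decreases with gradient strength so that flat regions keep the full weight. All of these specific choices, and the default parameter values, are assumptions.

```python
def texture_weight(img, r, c, w_thr=1.0, gain=0.05, offset=2.0):
    """Gradient-based texture weight: flat regions (small gradient) keep the
    full weight w_thr; strong edges are damped toward zero."""
    gx = (img[r][c + 1] - img[r][c - 1]) / 2.0  # horizontal gradient (stand-in kernel)
    gy = (img[r + 1][c] - img[r - 1][c]) / 2.0  # vertical gradient
    e = (gx * gx + gy * gy) ** 0.5              # gradient index value (magnitude)
    # Gradients below the offset are treated as flat; above it, the gain
    # controls how quickly the weight falls off.
    return max(0.0, w_thr - gain * max(0.0, e - offset))

w = texture_weight([[5] * 3 for _ in range(3)], 1, 1)  # flat patch -> full weight
```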
Referring to fig. 7, in some embodiments, step S105 may include, but is not limited to including, steps S701 through S704:
Step S701, comparing the brightness difference value with a preset dynamic and static threshold value;
in some embodiments, the brightness difference value is compared with a preset dynamic and static threshold value, so that the motion state of the image to be detected is detected, the dynamic and static judgment of the image to be detected is realized, and the accuracy of the motion information judgment is ensured.
The specific brightness difference calculation process is as follows:

d = |F - F_s|;

where F is the target filtered value of the current pixel point, F_s is the sample filtering value of the frame preceding the current frame, and d is the luminance difference.
Step S702, when the brightness difference value is greater than or equal to the dynamic and static threshold value, determining that the image to be detected is in a motion state;
in some embodiments, when the brightness difference value is greater than or equal to the dynamic and static threshold value, the image to be detected is determined to be in a motion state, and when the brightness difference value is less than the dynamic and static threshold value, the image to be detected is determined to be in a static state, so that the motion state of the image to be detected is determined.
Step S703, determining a moving value of the image to be detected based on a preset dynamic and static transition slope, a brightness difference value and a dynamic and static threshold value;
In some embodiments, the movement value of the image to be measured is determined based on a preset dynamic and static transition slope, a brightness difference value and a dynamic and static threshold value, so that abrupt switching or flaws between dynamic and static areas are reduced, a smooth transition effect is realized, and visual continuity and smoothness are improved.
Wherein the movement value is expressed as follows:

m = min(1, max(0, k * (d - T_ds)));

where m is the movement value, d is the brightness difference, T_ds is the dynamic and static threshold, and k is the dynamic and static transition slope.
It should be noted that, in this embodiment, the dynamic and static threshold and the dynamic and static transition slope are set according to the required sensitivity; the larger the dynamic and static transition slope, the steeper the transition.
Step S704, determining motion information of the image to be detected according to the motion value and the texture weight value.
In some embodiments, since abrupt switching or drastic changes between dynamic and static areas may cause visual discontinuity and affect viewing experience, the present embodiment determines motion information of an image to be measured according to a motion value and a texture weight value, so as to avoid possible risk of cancellation of edges in a motion determination process, reduce flaws caused by abrupt switching between dynamic and static areas, and gradually fade out dynamically changed areas while gradually fading in static areas by using dynamic and static transition slopes, thereby improving visual continuity and smoothness and avoiding introducing additional blurring or distortion in the transition process.
The process of calculating the motion information is as follows:

M = m * W;

where M is the motion information, m is the movement value, and W is the texture weight value.
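A sketch of steps S701 to S704, assuming the movement value ramps linearly from 0 to 1 past the dynamic and static threshold with the transition slope (a plausible reading of the description, not a verbatim formula), then is scaled by the texture weight.

```python
def motion_value(diff, thr, slope):
    """Movement value m: 0 below the dynamic/static threshold, then a linear
    ramp with the transition slope, clipped to [0, 1]."""
    return max(0.0, min(1.0, slope * (diff - thr)))

def motion_info(diff, thr, slope, texture_w):
    # Motion information = movement value scaled by the texture weight,
    # damping motion detection on strong edges to avoid edge cancellation.
    return motion_value(diff, thr, slope) * texture_w

m_static = motion_value(8.0, 10.0, 0.1)   # below threshold: static
m_mid = motion_value(15.0, 10.0, 0.1)     # partway up the transition ramp
info = motion_info(15.0, 10.0, 0.1, 0.8)  # damped by the texture weight
```

A steeper slope makes the static-to-moving transition more abrupt, matching the note about the transition slope above.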
Referring to fig. 8, in some embodiments, step S106 may include, but is not limited to including, step S801 to step S805:
step S801, determining sample motion information of a frame previous to a current frame corresponding to the motion information, and calculating motion information weight of the sample motion information;
in some embodiments, sample motion information of a frame before a current frame corresponding to the motion information is determined to determine frame changes of an image to be detected, capture of details is achieved, and motion information weights of the sample motion information are calculated, so that the motion information of the frame before and after is associated, motion is smoother, tailing risk is reduced, and convergence is faster.
Step S802, calculating a motion time domain filtering result of an image to be detected according to sample motion information, motion information and motion information weight;
in some embodiments, a motion time domain filtering result of the image to be detected is calculated according to the sample motion information, the motion information and the motion information weight, so that the noise reduction strength and edge protection are fused adaptively. Noise immunity, edge protection, delay, front-and-back-frame motion information and smoothing are combined in the dynamic and static judgment process, making the dynamic and static judgment accurate.
It should be noted that the motion temporal filtering result is a temporal filtering result of the motion information of the sample and the motion information.
The process of calculating the motion time domain filtering result is as follows:

M_t = w * M + (1 - w) * M_s;

where M is the motion information, M_s is the sample motion information, w is the motion information weight, and M_t is the motion time domain filtering result.
Step S803, delay processing is carried out on the motion time domain filtering result based on a preset delay control parameter, and a delay parameter is obtained;
in some embodiments, the motion time domain filtering result is delayed based on a preset delay control parameter, so that artifacts are removed, the motion time domain filtering result is displayed more clearly and accurately, the delay parameter is obtained, and the image tailing condition is reduced.
The calculation process for obtaining the delay parameter is as follows:

P = max(M_t, M_s - c);

where c is the delay control parameter, M_t is the motion time domain filtering result, M_s is the sample motion information, and P is the delay parameter.
It should be noted that, by calculating the delay parameter, the motion information of an area that has become stationary decays gradually rather than vanishing at once, so the information of an area that has just finished moving persists for several frames; the smaller the delay control parameter, the longer the delay.
It will be appreciated that the delay parameter is introduced so that the transition from motion to stationary is slowed down, the motion information is delayed longer, the motion information is present in more frames, and the tailing phenomenon is reduced more advantageously so that the transition is more natural.
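Steps S801 to S803 can be sketched as a blend of current and previous-frame motion followed by a decaying hold. The subtractive decay form (smaller delay control parameter, slower decay, longer delay) is an assumption consistent with the description, not a verbatim formula.

```python
def temporal_motion(m_cur, m_prev, w):
    # S802: blend current motion information with the previous frame's,
    # weighted by the motion information weight w.
    return w * m_cur + (1.0 - w) * m_prev

def delayed_motion(m_filt, m_prev, delay_ctrl):
    # S803 (hedged form): keep previous motion alive; a smaller delay
    # control parameter subtracts less per frame, i.e. a longer delay.
    return max(m_filt, m_prev - delay_ctrl)

mt = temporal_motion(1.0, 0.0, 0.7)       # new motion partially adopted
held = delayed_motion(0.0, 0.9, 0.2)      # motion has stopped, hold decays
```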
Step S804, performing horizontal low-pass filtering on the delay parameter with a preset low-pass filter kernel to obtain a target motion parameter;
in some embodiments, considering that tailing occurs more easily during large movements, and in order to further reduce the smear phenomenon while keeping implementation cost in mind, this embodiment performs horizontal low-pass filtering on the delay parameter with a preset low-pass filter kernel to obtain the target motion parameter, thereby reducing the influence of motion blur, improving the positioning accuracy of edges, making the detection result more accurate and reliable, removing artifacts and improving the visual perception effect.
The specific process for obtaining the target motion parameter is as follows:

P* = K * P;

where K is the low-pass filter kernel (* denotes horizontal convolution), P is the delay parameter, and P* is the target motion parameter.
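The horizontal low-pass of step S804 sketched with an assumed 3-tap [0.25, 0.5, 0.25] kernel and edge replication (both assumptions; the patent does not fix the kernel).

```python
def horizontal_lowpass(row, kernel=(0.25, 0.5, 0.25)):
    """1-D horizontal low-pass over a row of delay parameters,
    replicating the edge samples at the borders."""
    n = len(row)
    out = []
    for i in range(n):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, n - 1)]
        out.append(kernel[0] * left + kernel[1] * row[i] + kernel[2] * right)
    return out

smoothed = horizontal_lowpass([0, 0, 1, 0, 0])  # an isolated spike is spread out
```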
Step S805, performing time domain filtering on the image to be detected through a preset noise reduction intensity gain parameter and a target motion parameter to obtain a target image.
In some embodiments, the time domain filtering is performed on the image to be detected through the preset noise reduction intensity gain parameter and the target motion parameter to obtain the target image, so that noise reduction of different areas in the image to be detected is realized, and the noise reduction quality is improved.
In practical applications, an image has both bright and dark areas, and in general the noise in dark areas is stronger than in bright areas, so a single noise reduction strength is unlikely to give an ideal result: highlight detail may be blurred while low-light noise is not removed cleanly. According to Weber's law, human vision is less sensitive to noise in bright areas than in dark areas, so the bright-area layer can use weaker noise reduction to protect detail, while the dark areas use stronger noise reduction, keeping large edges smooth without obvious noise.
Referring to fig. 9, in some embodiments, step S805 may include, but is not limited to including, steps S901 through S905:
the noise reduction intensity gain parameter includes a first gain parameter for performing temporal filtering of luminance and a second gain parameter for performing temporal filtering of chrominance.
Step S901, calculating a target motion parameter through a first gain parameter to obtain brightness filtering weight;
in some embodiments, the target motion parameter is calculated through the first gain parameter to obtain the brightness filtering weight, so that the noise reduction effect can be enhanced, and the visual quality is improved, wherein the process of specifically obtaining the brightness filtering weight is as follows:
w_Y = g_1 * P*;

where P* is the target motion parameter, g_1 is the first gain parameter, and w_Y is the luminance filtering weight.
Step S902, calculating a target motion parameter through a second gain parameter to obtain a chromaticity filtering weight;
in some embodiments, the target motion parameter is calculated through the second gain parameter, so that noise is reduced while the detail information of the image is kept to the maximum extent, avoiding excessive smoothing or blurring, and the chroma filtering weight is obtained, improving the quality and stability of the image to be measured. The process of obtaining the chroma filtering weight is as follows:

w_C = g_2 * P*;

where P* is the target motion parameter, g_2 is the second gain parameter, and w_C is the chroma filtering weight.
Step S903, performing time domain filtering on the image to be detected according to the brightness filtering weight, the mixed filtering value and the sample filtering value to obtain a third filtering result;
in some embodiments, the temporal filtering is performed on the image to be detected according to the brightness filtering weight, the mixed filtering value and the sample filtering value to obtain a third filtering result, and the temporal filtering of the motion information of the previous and subsequent frames is adopted to obtain more refined motion information, so that the temporal filtering of the brightness or chromaticity of the image is facilitated, and the noise reduction image quality is improved.
It should be noted that, the specific process of obtaining the third filtering result is as follows:
F_3 = w_Y * F + (1 - w_Y) * F_s;

where F_3 is the third filtering result, w_Y is the luminance filtering weight, F is the target filtered value of the current pixel point, and F_s is the sample filtering value.
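A sketch of steps S901 and S903: the luminance filtering weight is the first gain parameter times the target motion parameter (clamped to [0, 1], an added safety assumption), and the temporal blend leans on the previous frame in static areas and on the current frame in moving areas.

```python
def luma_temporal(cur, prev, target_motion, gain):
    """Temporal luminance filtering: weight grows with motion so that moving
    areas trust the current frame and static areas average over time."""
    w = max(0.0, min(1.0, gain * target_motion))  # luminance filtering weight
    return w * cur + (1.0 - w) * prev

static_out = luma_temporal(100.0, 80.0, target_motion=0.0, gain=2.0)  # fully previous
moving_out = luma_temporal(100.0, 80.0, target_motion=1.0, gain=2.0)  # fully current
```

Leaning on the previous frame in static areas is what suppresses temporal noise; trusting the current frame in motion is what avoids ghosting.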
Step S904, performing time domain filtering on the image to be detected according to the chromaticity filtering weight and the target motion parameter to obtain a fourth filtering result;
in some embodiments, in the process of performing temporal filtering on an image to be detected according to the chrominance filtering weight, firstly, determining a chrominance motion parameter in the image to be detected through a target motion parameter, determining a chrominance value of a frame before a current frame corresponding to the chrominance motion parameter, and then performing temporal filtering through the chrominance motion parameter and the chrominance value to obtain a fourth filtering result, so that image quality and color information can be better maintained, and luminance motion information is used as guidance of chrominance motion, and noise reduction consistency is good.
It should be noted that, during the chroma time domain filtering process, cb and Cr are filtered respectively, and Cb and Cr represent the chroma components of blue and red respectively, and the specific process is as follows:
F_Cb = m_Cb * Cb + (1 - m_Cb) * Cb_prev;

F_Cr = m_Cr * Cr + (1 - m_Cr) * Cr_prev;

where F_Cb is the filtering result of Cb, F_Cr is the filtering result of Cr, w_C is the chroma filtering weight from which the chrominance motion parameters m_Cb and m_Cr are derived, and Cb_prev and Cr_prev are the chrominance values of the frame preceding the current frame.
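Steps S902 and S904 sketched per channel. The per-channel motion parameters `motion_cb` and `motion_cr` are assumed to be derived from the chroma filtering weight and the luminance motion guidance, and the blend form mirroring the luminance case is an assumption.

```python
def chroma_temporal(cb, cr, cb_prev, cr_prev, motion_cb, motion_cr):
    """Filter Cb and Cr separately, each guided by its own motion parameter:
    high motion keeps the current chroma, low motion averages with the
    previous frame."""
    fcb = motion_cb * cb + (1.0 - motion_cb) * cb_prev
    fcr = motion_cr * cr + (1.0 - motion_cr) * cr_prev
    return fcb, fcr

fcb, fcr = chroma_temporal(130.0, 120.0, 128.0, 128.0, 0.5, 0.0)
```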
And step S905, denoising the image to be detected according to the third filtering result and the fourth filtering result to obtain a target image.
In some embodiments, the image to be detected is denoised according to the third filtering result and the fourth filtering result; controls with different intensity curves for bright and dark areas are combined in the temporal filtering, and chrominance is guided by the luminance motion information, so that noise reduction can be performed accurately and finely on the image to be detected to obtain the target image, better balancing the processing of image details and color information.
In some embodiments, referring to fig. 10, an image noise reduction system is further provided in the embodiments of the present application, where the image noise reduction method may be implemented, and the system includes:
the noise detection module 901 is used for detecting the noise type of the acquired image to be detected;
The low-pass filtering module 902 is used for carrying out low-pass filtering on the image to be detected when determining that the image to be detected has non-impact noise, and carrying out non-local mean filtering on the image to be detected to obtain a first filtering result and a second filtering result;
the result fusion module 903 fuses the first filtering result and the second filtering result to obtain a mixed filtering value;
the filtering weighting module 904 determines a sample filtering value of a frame before the current frame corresponding to the mixed filtering value, and performs weighting operation on the mixed filtering value and the sample filtering value to obtain a brightness difference value;
the threshold comparison module 905 compares the brightness difference value with a preset dynamic and static threshold value to determine the motion information of the image to be detected;
the temporal filtering module 906 performs temporal filtering operation on the motion information based on a preset delay control parameter and a preset noise reduction intensity gain parameter to obtain a target image.
The image denoising system of the embodiment of the present application is used for executing the image denoising method of the embodiment, and the specific processing procedure is the same as that of the image denoising method of the embodiment, and is not described here again.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor is used for executing the image noise reduction method in the embodiment of the application when the computer program is executed by the processor.
Referring to fig. 11, fig. 11 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
the processor 1001 may be implemented by using a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. to execute related programs to implement the technical solutions provided by the embodiments of the present application;
the Memory 1002 may be implemented in the form of a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a random access Memory (Random Access Memory, RAM). The memory 1002 may store an operating system and other application programs, and when the technical solutions provided in the embodiments of the present application are implemented by software or firmware, relevant program codes are stored in the memory 1002, and the processor 1001 invokes an image noise reduction method to perform the embodiments of the present application;
an input/output interface 1003 for implementing information input and output;
the communication interface 1004 is configured to implement communication interaction between the present device and other devices, and may implement communication in a wired manner (e.g. USB, network cable, etc.), or may implement communication in a wireless manner (e.g. mobile network, WIFI, bluetooth, etc.);
A bus 1005 for transferring information between the various components of the device (e.g., the processor 1001, memory 1002, input/output interface 1003, and communication interface 1004);
wherein the processor 1001, the memory 1002, the input/output interface 1003, and the communication interface 1004 realize communication connection between each other inside the device through the bus 1005.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the image noise reduction method when being executed by a processor.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and as those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the solutions shown in fig. 1-9 are not limiting to embodiments of the present application and may include more or fewer steps than shown, or may combine certain steps, or different steps.
The system embodiments described above are merely illustrative, in that the units illustrated as separate components may or may not be physically separate, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the system embodiments described above are merely illustrative; the division of the above units is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection through some interfaces, systems, or units, and may be in electrical, mechanical, or other form.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including multiple instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing a program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Preferred embodiments of the present application are described above with reference to the accompanying drawings; this description does not limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.
Claims (9)
1. An image noise reduction method, the method comprising:
detecting the noise type of the acquired image to be detected;
when it is determined that non-impulse noise exists in the image to be detected, performing low-pass filtering on the image to be detected and performing non-local mean filtering on the image to be detected, to obtain a first filtering result and a second filtering result;
fusing the first filtering result and the second filtering result to obtain a mixed filtering value;
determining a sample filtering value of a frame before the current frame corresponding to the mixed filtering value, and carrying out a weighting operation on the mixed filtering value and the sample filtering value to obtain a brightness difference value;
comparing the brightness difference value with a preset dynamic and static threshold value to determine the motion information of the image to be detected;
performing time domain filtering operation on the motion information based on a preset delay control parameter and a preset noise reduction intensity gain parameter to obtain a target image, wherein the delay control parameter is used for controlling the delay degree of the motion information of the image to be detected, and the noise reduction intensity gain parameter is used for performing time domain filtering of brightness and time domain filtering of chromaticity on the image to be detected;
wherein the performing the time domain filtering operation on the motion information based on the preset delay control parameter and the preset noise reduction intensity gain parameter to obtain the target image comprises the following steps:
determining sample motion information of a frame before a current frame corresponding to the motion information, and calculating motion information weight of the sample motion information;
calculating a motion time domain filtering result of the image to be detected according to the sample motion information, the motion information and the motion information weight;
performing delay processing on the motion time domain filtering result based on a preset delay control parameter to obtain a delay parameter;
performing horizontal low-pass filtering on the delay parameter based on a preset low-pass filter kernel to obtain a target motion parameter;
and performing time domain filtering on the image to be detected through a preset noise reduction intensity gain parameter and the target motion parameter to obtain a target image.
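For illustration only, the temporal-filtering steps recited in claim 1 might be sketched as below. The function name, the parameters `delay_ctrl` and `lp_kernel`, and the specific weighting and delay formulas are assumptions made for this sketch, not limitations taken from the claim:

```python
import numpy as np

def temporal_motion_filter(motion, prev_motion, delay_ctrl=0.5,
                           lp_kernel=np.array([1.0, 2.0, 1.0]) / 4.0):
    """Illustrative sketch: temporal filtering of per-pixel motion information."""
    # Weight the previous frame's motion information against the current one
    # (an assumed IIR-style blend; the claim does not fix the formula).
    weight = prev_motion / (prev_motion + motion + 1e-6)
    motion_tf = weight * prev_motion + (1.0 - weight) * motion
    # Delay processing: hold past motion for a while to suppress trailing
    delayed = np.maximum(motion_tf, delay_ctrl * prev_motion)
    # Horizontal low-pass filtering of the delay parameter with a 1-D kernel
    target = np.apply_along_axis(
        lambda row: np.convolve(row, lp_kernel, mode="same"), 1, delayed)
    return np.clip(target, 0.0, 1.0)
```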
2. The method for image noise reduction according to claim 1, wherein the performing noise type detection on the acquired image to be detected includes:
selecting a current pixel point from the image to be detected, and acquiring a pixel coordinate of the current pixel point, wherein the pixel coordinate is used for representing the position information of the current pixel point in the image to be detected;
determining a plurality of adjacent pixel points around the current pixel point according to the pixel coordinates;
combining the adjacent pixel points, and carrying out average value calculation on the combined adjacent pixel points to obtain a plurality of azimuth average values;
performing difference comparison on the pixel value of the current pixel point and the azimuth mean values to obtain a plurality of neighborhood difference values;
and detecting the noise type of the image to be detected according to the neighborhood difference values and a preset pixel threshold value.
3. The method for image noise reduction according to claim 2, wherein the determining that non-impulse noise exists in the image to be detected comprises:
when all the neighborhood difference values are smaller than or equal to the pixel threshold value, determining that non-impulse noise exists in the image to be detected;
after the noise type detection is performed on the acquired image to be detected, the method further comprises:
when all the neighborhood difference values are larger than the pixel threshold value, determining that impulse noise exists in the image to be detected;
calculating the average value of all the azimuth average values;
and updating the pixel value of the current pixel point according to the average value.
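The neighborhood detection of claims 2 and 3 can be sketched as follows. The choice of four directional (azimuth) pixel pairs, the threshold value, and the behavior at image borders are illustrative assumptions, not details fixed by the claims:

```python
import numpy as np

def detect_and_fix_impulse(img, y, x, pixel_thresh=30.0):
    """Illustrative sketch: classify one interior pixel and fix impulse noise."""
    p = float(img[y, x])
    # Four directional (azimuth) means from the 8-neighborhood (assumed grouping)
    means = [
        (float(img[y - 1, x]) + float(img[y + 1, x])) / 2.0,      # vertical
        (float(img[y, x - 1]) + float(img[y, x + 1])) / 2.0,      # horizontal
        (float(img[y - 1, x - 1]) + float(img[y + 1, x + 1])) / 2.0,  # diagonal
        (float(img[y - 1, x + 1]) + float(img[y + 1, x - 1])) / 2.0,  # anti-diag
    ]
    diffs = [abs(p - m) for m in means]
    if all(d > pixel_thresh for d in diffs):
        # Impulse noise: replace the pixel with the average of all azimuth means
        img[y, x] = sum(means) / len(means)
        return "impulse"
    # Otherwise treat the pixel as non-impulse (mixed cases lumped here)
    return "non-impulse"
```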
4. The method for image noise reduction according to claim 1, wherein the performing low-pass filtering on the image to be detected and performing non-local mean filtering on the image to be detected to obtain a first filtering result and a second filtering result comprises:
performing a convolution operation on the image to be detected based on a preset Gaussian template and a preset search window to obtain a brightness filtering result and a chromaticity filtering result;
generating a first filtering result according to the brightness filtering result and the chromaticity filtering result;
carrying out non-local mean filtering on the image to be detected in the search window according to a preset matching window, and obtaining an absolute difference weighted mean of pixels in the search window;
determining a plurality of weight values in the search window according to the absolute difference weighted average value, and determining a target pixel point corresponding to the weight value;
and obtaining the brightness values of all the target pixel points, and carrying out weighted average on the weight values and the brightness values to obtain a second filtering result.
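A minimal sketch of the non-local mean filtering of claim 4 for a single pixel, assuming a square search window, a square matching window, and an exponential mapping from the mean absolute patch difference to a weight (the claim does not fix these details):

```python
import numpy as np

def nlm_pixel(img, y, x, search=5, patch=3, h=10.0):
    """Illustrative sketch: non-local means value for one pixel (y, x)."""
    pr, sr = patch // 2, search // 2
    # Reflect-pad so every candidate patch in the search window exists
    pad = np.pad(img.astype(np.float64), pr + sr, mode="reflect")
    yc, xc = y + pr + sr, x + pr + sr
    ref = pad[yc - pr:yc + pr + 1, xc - pr:xc + pr + 1]  # matching window
    num, den = 0.0, 0.0
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            cand = pad[yc + dy - pr:yc + dy + pr + 1,
                       xc + dx - pr:xc + dx + pr + 1]
            # Absolute-difference weighted mean over the matching window
            sad = np.mean(np.abs(ref - cand))
            w = np.exp(-sad / h)          # assumed weight mapping
            num += w * pad[yc + dy, xc + dx]
            den += w
    # Weighted average of brightness values over the search window
    return num / den
```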
5. The method for image noise reduction according to claim 1, further comprising, before the fusing the first filtering result and the second filtering result to obtain a mixed filtering value:
in the process of carrying out low-pass filtering on the image to be detected, calculating a horizontal gradient value and a vertical gradient value of the image to be detected;
setting a gradient index value according to the horizontal gradient value and the vertical gradient value;
determining a texture weight value of the image to be detected based on the gradient index value, a preset weight threshold, a preset gradient gain value and a preset offset value, wherein the offset value is used for calculating texture features of the image to be detected in different directions, and the gradient gain value is used for adjusting weights of the texture features of the image to be detected.
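The gradient-based texture weight of claim 5 might be computed roughly as follows. The exact roles of the gain, threshold and offset are assumptions made for this sketch:

```python
import numpy as np

def texture_weight(img, w_thresh=0.8, grad_gain=0.02, offset=1.0):
    """Illustrative sketch: per-pixel texture weight from image gradients."""
    img = img.astype(np.float64)
    # Horizontal and vertical gradient magnitudes
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    grad_index = gx + gy  # combined gradient index value
    # Map the gradient index to a weight, clipped by the preset threshold
    # (the linear form with gain and offset is an assumption)
    return np.clip(grad_gain * grad_index + offset - 1.0, 0.0, w_thresh)
```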
6. The method for image noise reduction according to claim 5, wherein comparing the brightness difference value with a preset dynamic and static threshold value to determine motion information of the image to be detected comprises:
comparing the brightness difference value with a preset dynamic and static threshold value;
when the brightness difference value is larger than or equal to the dynamic and static threshold value, determining that the image to be detected is in a motion state;
determining a moving value of the image to be detected based on a preset dynamic and static transition slope, the brightness difference value and the dynamic and static threshold value, wherein the dynamic and static transition slope is related to a sensitivity value;
and determining the motion information of the image to be detected according to the motion value and the texture weight value.
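Claim 6's comparison against the dynamic/static threshold with a transition slope can be sketched as a soft threshold; all numeric defaults below are illustrative assumptions:

```python
def motion_value(luma_diff, ds_thresh=16.0, slope=0.125, texture_w=1.0):
    """Illustrative sketch: motion value from a brightness difference."""
    if luma_diff < ds_thresh:
        return 0.0  # below the dynamic/static threshold: static region
    # Linear transition above the threshold, clipped to [0, 1]
    m = min(1.0, (luma_diff - ds_thresh) * slope)
    # Modulate by the texture weight to form the motion information
    return m * texture_w
```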
7. The method for image noise reduction according to claim 1, wherein the noise reduction intensity gain parameter comprises a first gain parameter and a second gain parameter; and the performing time domain filtering on the image to be detected through the preset noise reduction intensity gain parameter and the target motion parameter to obtain a target image comprises the following steps:
calculating the target motion parameter through the first gain parameter to obtain a brightness filtering weight;
calculating the target motion parameter through the second gain parameter to obtain a chromaticity filtering weight;
performing time domain filtering on the image to be detected according to the brightness filtering weight, the mixed filtering value and the sample filtering value to obtain a third filtering result;
performing time domain filtering on the image to be detected according to the chromaticity filtering weight and the target motion parameter to obtain a fourth filtering result;
and denoising the image to be detected according to the third filtering result and the fourth filtering result to obtain a target image.
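The separate brightness/chromaticity temporal filtering of claim 7 can be sketched as a motion-adaptive blend of the current and previous frames. The linear mapping from the target motion parameter to each filter weight is an assumption of this sketch:

```python
import numpy as np

def temporal_denoise(cur_y, cur_c, prev_y, prev_c, target_motion,
                     gain_y=1.0, gain_c=0.5):
    """Illustrative sketch: luma/chroma temporal filtering with gain params."""
    # More motion -> larger weight on the current frame (weaker filtering)
    w_y = np.clip(gain_y * target_motion, 0.0, 1.0)  # brightness filter weight
    w_c = np.clip(gain_c * target_motion, 0.0, 1.0)  # chromaticity filter weight
    out_y = w_y * cur_y + (1.0 - w_y) * prev_y       # third filtering result
    out_c = w_c * cur_c + (1.0 - w_c) * prev_c       # fourth filtering result
    return out_y, out_c
```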
8. An electronic device comprising a memory storing a computer program and a processor that when executing the computer program implements the image noise reduction method of any of claims 1 to 7.
9. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the image noise reduction method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311240519.2A CN116977228B (en) | 2023-09-25 | 2023-09-25 | Image noise reduction method, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311240519.2A CN116977228B (en) | 2023-09-25 | 2023-09-25 | Image noise reduction method, electronic device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116977228A CN116977228A (en) | 2023-10-31 |
CN116977228B true CN116977228B (en) | 2024-02-09 |
Family
ID=88473470
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311240519.2A Active CN116977228B (en) | 2023-09-25 | 2023-09-25 | Image noise reduction method, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116977228B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118279177B (en) * | 2024-05-29 | 2024-08-16 | 西安易诺敬业电子科技有限责任公司 | Video image enhancement method for production monitoring |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1599446A (en) * | 2004-07-26 | 2005-03-23 | 西安交通大学 | Adaptive motion detecting method for digital TV post-processing de-interlacing technology
CN1929552A (en) * | 2006-09-30 | 2007-03-14 | 四川长虹电器股份有限公司 | Spatial domain pixel data processing method |
CN101098462A (en) * | 2007-07-12 | 2008-01-02 | 上海交通大学 | Chroma deviation and brightness deviation combined video moving object detection method |
CN105208376A (en) * | 2015-08-28 | 2015-12-30 | 青岛中星微电子有限公司 | Digital noise reduction method and device |
CN108122204A (en) * | 2016-11-29 | 2018-06-05 | 深圳市中兴微电子技术有限公司 | A kind of method and apparatus of image denoising |
CN109146816A (en) * | 2018-08-22 | 2019-01-04 | 浙江大华技术股份有限公司 | A kind of image filtering method, device, electronic equipment and storage medium |
CN115965537A (en) * | 2021-10-11 | 2023-04-14 | 深圳开阳电子股份有限公司 | Video image denoising method and device and computer storage medium |
CN116310889A (en) * | 2023-02-03 | 2023-06-23 | 深圳市城市公共安全技术研究院有限公司 | Unmanned aerial vehicle environment perception data processing method, control terminal and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170084007A1 (en) * | 2014-05-15 | 2017-03-23 | Wrnch Inc. | Time-space methods and systems for the reduction of video noise |
US9489720B2 (en) * | 2014-09-23 | 2016-11-08 | Intel Corporation | Non-local means image denoising with detail preservation using self-similarity driven blending |
CN108550158B (en) * | 2018-04-16 | 2021-12-17 | Tcl华星光电技术有限公司 | Image edge processing method, electronic device and computer readable storage medium |
2023
- 2023-09-25 CN CN202311240519.2A patent/CN116977228B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN116977228A (en) | 2023-10-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6225256B2 (en) | Image processing system and computer-readable recording medium | |
TWI767985B (en) | Method and apparatus for processing an image property map | |
CN112311962B (en) | Video denoising method and device and computer readable storage medium | |
CN103369209A (en) | Video noise reduction device and video noise reduction method | |
CN109982012B (en) | Image processing method and device, storage medium and terminal | |
JP2009534902A (en) | Image improvement to increase accuracy smoothing characteristics | |
CN116977228B (en) | Image noise reduction method, electronic device and storage medium | |
EP2372638B1 (en) | A black and white stretch method for dynamic range extension | |
CN115345774A (en) | Method and system for fusing infrared image and visible light image | |
EP3616399B1 (en) | Apparatus and method for processing a depth map | |
CN111539895B (en) | Video denoising method and device, mobile terminal and storage medium | |
CN110023957B (en) | Method and apparatus for estimating drop shadow region and/or highlight region in image | |
JP2015095779A (en) | Image processing apparatus, image processing method, and electronic equipment | |
WO2023215371A1 (en) | System and method for perceptually optimized image denoising and restoration | |
KR101907451B1 (en) | Filter based high resolution color image restoration and image quality enhancement apparatus and method | |
CN112752064A (en) | Processing method and system for power communication optical cable monitoring video | |
Bai et al. | Video quality temporal pooling using a visibility measure | |
Okarma et al. | Nonlinear background estimation methods for video vehicle tracking systems | |
CN113538255A (en) | Motion fusion noise reduction method and device and computer readable storage medium | |
Kumar et al. | Reducing image distortions and measuring the excellence in multi-camera images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||