CN111402172A - Image denoising method, system and device and computer readable storage medium - Google Patents
- Publication number
- CN111402172A CN111402172A CN202010213216.1A CN202010213216A CN111402172A CN 111402172 A CN111402172 A CN 111402172A CN 202010213216 A CN202010213216 A CN 202010213216A CN 111402172 A CN111402172 A CN 111402172A
- Authority
- CN
- China
- Prior art keywords
- gradient
- value
- image
- map
- weight
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
- G06T5/20—Image enhancement or restoration using local operators
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
Abstract
The application discloses an image denoising method, system, device and computer readable storage medium. The method acquires an image to be processed; filters the image to be processed by a non-local mean filtering method to obtain a filter map; takes the absolute value of the difference between the image to be processed and the filter map as a residual map; calculates the gradient values of the image to be processed to obtain a gradient map; calculates the weights of the residual map and the filter map by the non-local mean filtering method; calculates a weight fusion value of the weights of the residual map and the filter map based on the gradient map; and denoises the image to be processed based on the weight fusion value to obtain a denoised image. Because the weights of the residual map and the filter map are recalculated by the non-local mean filtering method after the filter map and the residual map are obtained, and their weight fusion value is computed based on the gradient map, the resulting weight fusion value distinguishes detail regions well, so a denoised image retaining more detail can be obtained based on the weight fusion value.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image denoising method, system, device, and computer-readable storage medium.
Background
When images are put to use, a non-local mean filtering process may be applied to reduce the influence of noise. One existing approach is as follows: decompose the original image into a filter map and a residual map, fuse the residual map and the filter map using gradient information to obtain a fused image, recalculate the non-local mean filtering weights on the fused image, and apply those weights to the original image for denoising to obtain the final processed image.
However, while this method suppresses the influence of noise on inaccurate patch matching, it loses a large amount of image detail.
In summary, how to reduce the loss of image detail under the non-local mean filtering method is a problem to be urgently solved by those skilled in the art.
Disclosure of Invention
The application aims to provide an image denoising method that can, to a certain extent, solve the technical problem of reducing the loss of image detail under the non-local mean filtering method. The application also provides a corresponding image denoising system, device and computer readable storage medium.
In order to achieve the above purpose, the present application provides the following technical solutions:
an image denoising method, comprising:
acquiring an image to be processed;
filtering the image to be processed by a non-local mean filtering method to obtain a filter map;
taking the absolute value of the difference between the image to be processed and the filter map as a residual map;
calculating the gradient values of the image to be processed to obtain a gradient map;
calculating the weights of the residual map and the filter map by the non-local mean filtering method;
calculating a weight fusion value of the weights of the residual map and the filter map based on the gradient map;
and performing denoising processing on the image to be processed based on the weight fusion value to obtain a denoised image.
Preferably, the calculating the gradient value of the image to be processed to obtain a gradient map includes:
and calculating the gradient value of the image to be processed based on a gradient operator template to obtain the gradient map.
Preferably, the calculating a gradient value of the image to be processed based on the gradient operator template to obtain the gradient map includes:
calculating gradient values of the image to be processed based on the gradient operator template;
and processing the gradient value based on a median filtering method to obtain the gradient map.
Preferably, the processing the gradient values based on the median filtering method to obtain the gradient map includes:
determining a first matching template;
determining a first gradient value of a center point of the first matching template from the gradient values selected by the first matching template;
determining a gradient threshold range based on the gradient value selected by the first matching template;
judging whether the first gradient value falls outside the gradient threshold range and whether the absolute differences between the first gradient value and both boundary values of the gradient threshold range are greater than a first critical value;
if yes, performing median filtering on the first gradient value;
if not, keeping the first gradient value unchanged;
and processing all the gradient values based on the first matching template to obtain the gradient map.
Preferably, the calculating a weight fusion value of weights of the residual map and the filter map based on the gradient map includes:
acquiring a second critical value for distinguishing flat regions from detail regions of the image;
acquiring a fusion adjustment coefficient;
determining a second matching template and a search window;
and calculating the weight fusion value based on the second critical value, the second matching template, the search window, the fusion adjusting coefficient, the gradient map, the weight of the residual map and the weight of the filter map.
Preferably, the calculating the weight fusion value based on the second critical value, the second matching template, the search window, the fusion adjustment coefficient, the gradient map, the weight of the residual map, and the weight of the filter map includes:
determining a second gradient value of the center point of the second matching template in the gradient map;
for any gradient value to be fused selected by the second matching template, calculating an initial fusion value corresponding to the gradient value to be fused based on a difference value between the gradient value to be fused and the second gradient value, a difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, a weight of the residual map and a weight of the filter map;
and summing all the initial fusion values in the search window and carrying out normalization processing to obtain the corresponding weight fusion value.
Preferably, the calculating an initial fusion value corresponding to the gradient value to be fused based on the difference between the gradient value to be fused and the second gradient value, the difference between the gradient value to be fused and the second critical value, the fusion adjustment coefficient, the weight of the residual map, and the weight of the filter map includes:
calculating an initial fusion value corresponding to the gradient value to be fused based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, the weight of the residual map and the weight of the filter map by using a weight fusion operation formula;
the weight fusion operation formula comprises:
w1 = 1/(1 + e^(c1(|g4| - T2) + c2|g4 - g5|));
w2 = 1 - w1; w5 = w1·w3 + w2·w4;
wherein c1 and c2 represent the fusion adjustment coefficients; g4 represents the gradient value to be fused; T2 represents the second critical value; g5 represents the second gradient value; w3 represents the weight value corresponding to the position of the gradient value to be fused in the filter map; w4 represents the weight value corresponding to the position of the gradient value to be fused in the residual map; and w5 represents the initial fusion value.
An image noise reduction system comprising:
the first acquisition module is used for acquiring an image to be processed;
the first filtering module is used for filtering the image to be processed by a non-local mean filtering method to obtain a filter map;
the first residual module is used for taking the absolute value of the difference between the image to be processed and the filter map as a residual map;
the first gradient module is used for calculating the gradient values of the image to be processed to obtain a gradient map;
the first weight module is used for calculating the weights of the residual map and the filter map by the non-local mean filtering method;
the first fusion module is used for calculating a weight fusion value of the weights of the residual map and the filter map based on the gradient map;
and the first denoising module is used for denoising the image to be processed based on the weight fusion value to obtain a denoised image.
An image noise reduction apparatus comprising:
a memory for storing a computer program;
a processor for implementing the steps of the image noise reduction method as described in any one of the above when executing the computer program.
A computer readable storage medium having stored thereon a computer program for execution by a processor for carrying out the steps of the method of image noise reduction as set out in any of the preceding claims.
The image denoising method provided by the application acquires an image to be processed; filters it by a non-local mean filtering method to obtain a filter map; takes the absolute value of the difference between the image to be processed and the filter map as a residual map; calculates the gradient values of the image to be processed to obtain a gradient map; calculates the weights of the residual map and the filter map by the non-local mean filtering method; calculates a weight fusion value of those weights based on the gradient map; and denoises the image to be processed based on the weight fusion value to obtain a denoised image. In the application, after the filter map and the residual map are obtained, the weights of the residual map and the filter map are recalculated by the non-local mean filtering method, and their weight fusion value is calculated based on the gradient map; the resulting weight fusion value distinguishes detail regions well, so a denoised image retaining more detail can be obtained. The image denoising system, device and computer readable storage medium provided by the application solve the corresponding technical problems.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image denoising method according to an embodiment of the present application;
Fig. 2 is a flowchart of obtaining a gradient map based on a median filtering method;
fig. 3 is a schematic structural diagram of an image noise reduction system according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an image noise reduction device according to an embodiment of the present application;
fig. 5 is another schematic structural diagram of an image noise reduction apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of an image denoising method according to an embodiment of the present disclosure.
The image noise reduction method provided by the embodiment of the application can comprise the following steps:
step S101: and acquiring an image to be processed.
In practical applications, the image to be processed may be obtained first, and its type may be determined according to actual needs; for example, it may be a photo taken with a user's mobile phone, a frame captured by a surveillance camera, or the like.
Step S102: and filtering the image to be processed by a non-local mean filtering method to obtain a filtering image.
In practical applications, after the image to be processed is obtained, it can be filtered by a non-local mean filtering method to obtain the corresponding filter map. In a specific application scenario, a 5×5 matching template and a 21×21 search window may be used to perform the non-local mean filtering on the image to be processed; of course, matching templates and search windows of other sizes may also be used, and the application is not specifically limited here.
Step S103: and taking the absolute value of the difference between the image to be processed and the filtering image as a residual image.
In practical application, after the filter map is obtained, the difference between the image to be processed and the filter map can be calculated, and the absolute value of the difference is used as a residual map of the image to be processed.
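Steps S102 and S103 can be sketched in Python with NumPy. The sketch below is a minimal, illustrative non-local mean filter, not the patent's exact procedure: it uses a small 3×3 template and 7×7 search window rather than the 5×5/21×21 example above, and the smoothing parameter h is an assumed value.

```python
import numpy as np

def nlm_filter(img, patch=1, search=3, h=10.0):
    """Minimal non-local mean filter: patch radius 1 gives a 3x3
    template, search radius 3 gives a 7x7 window (small sizes keep
    the sketch readable; the patent's example uses 5x5 and 21x21)."""
    pad = patch + search
    padded = np.pad(img.astype(np.float64), pad, mode="reflect")
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.float64)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            num = den = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1,
                                  nj - patch:nj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    w = np.exp(-d2 / (h * h))         # similarity weight
                    num += w * padded[ni, nj]
                    den += w
            out[i, j] = num / den
    return out

rng = np.random.default_rng(0)
img = np.full((12, 12), 100.0) + rng.normal(0, 5, (12, 12))  # flat + noise
filtered = nlm_filter(img)
residual = np.abs(img - filtered)   # step S103: |image - filter map|
```

On a flat noisy patch like this, the averaging reduces the pixel variance, and the residual map holds the removed high-frequency content.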
Step S104: and calculating the gradient value of the image to be processed to obtain a gradient map.
In practical applications, the gradient values of a flat region and a detail region of an image differ, so a gradient map is introduced for the subsequent weight fusion calculation: after the residual map is obtained, the gradient values of the image to be processed can be calculated to obtain the corresponding gradient map.
Step S105: and calculating weights of the residual image and the filtering image by a non-local mean filtering method.
In practical applications, the weights of the residual map and the filter map can be calculated by the non-local mean filtering method; in this process, the matching template and search window applied may be the same size as those used when computing the filter map.
Step S106: and calculating a weight fusion value of the weights of the residual image and the filtering image based on the gradient image.
In practical applications, after the weights of the residual map and the filter map are obtained, the weight fusion value of these weights can be calculated based on the gradient map.
Step S107: and performing noise reduction processing on the image to be processed based on the weight fusion value to obtain a noise reduction image.
In practical applications, because the weight fusion value can distinguish detail regions from flat regions of the image, denoising the image to be processed based on the weight fusion value yields a denoised image that better preserves image detail.
The image denoising method provided by the application acquires an image to be processed; filters it by a non-local mean filtering method to obtain a filter map; takes the absolute value of the difference between the image to be processed and the filter map as a residual map; calculates the gradient values of the image to be processed to obtain a gradient map; calculates the weights of the residual map and the filter map by the non-local mean filtering method; calculates a weight fusion value of those weights based on the gradient map; and denoises the image to be processed based on the weight fusion value to obtain a denoised image. After the filter map and the residual map are obtained, the weights of the residual map and the filter map are recalculated by the non-local mean filtering method, and their weight fusion value is calculated based on the gradient map; the resulting weight fusion value distinguishes detail regions well, so a denoised image retaining more detail can be obtained based on it.
In the image denoising method provided by the embodiment of the application, in order to obtain the gradient map quickly and conveniently, the gradient values of the image to be processed may be calculated based on a gradient operator template, such as a Laplacian template. In a specific application scenario, a 3×3 Laplacian template may be selected to calculate the gradient values of the image to be processed; of course, Laplacian templates of other sizes may also be adopted, which is not specifically limited here.
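As a sketch of this step, one common 3×3 Laplacian kernel can be convolved over the image; the specific kernel below is an assumption, since the text only names the operator class:

```python
import numpy as np

# One common 3x3 Laplacian template (an assumed choice for this sketch).
LAPLACE = np.array([[0,  1, 0],
                    [1, -4, 1],
                    [0,  1, 0]], dtype=np.float64)

def gradient_map(img):
    """Convolve the image with the Laplacian template; reflect padding
    keeps the output the same size as the input."""
    padded = np.pad(img.astype(np.float64), 1, mode="reflect")
    H, W = img.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * LAPLACE)
    return out

img = np.zeros((6, 6))
img[2:4, 2:4] = 10.0        # a small bright block: its edges carry gradient
g = gradient_map(img)       # zero in flat regions, nonzero on the block edges
```

Flat regions produce gradient values of zero, while the block boundary produces large magnitudes, which is exactly the flat-versus-detail distinction the fusion step relies on.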
In practical applications, to further reduce the influence of noise, after the gradient values of the image to be processed are calculated based on the gradient operator template, they can be processed by a median filtering method to obtain the gradient map.
Referring to fig. 2, fig. 2 is a flow chart of a gradient map obtained based on a median filtering method.
Referring to fig. 2, in practical application, the process of processing the gradient values based on the median filtering method to obtain the gradient map may specifically be:
step S201: a first matching template is determined.
In practical applications, the first matching template is used for median filtering the gradient values. Its size may be the same as that of the Laplacian template: for example, if the Laplacian template is 3×3, the first matching template may also be 3×3. Of course, the first matching template may also have other sizes, which the application does not specifically limit.
Step S202: and determining a first gradient value of the center point of the first matching template from the gradient values selected by the first matching template.
In practical applications, after the first matching template is determined, the gradient values covered by the first matching template are selected when applying the median filtering, and the first gradient value at the center point of the first matching template is then determined.
Step S203: based on the gradient values selected by the first matching template, a gradient threshold range is determined.
In practical applications, after the first gradient value is determined, a gradient threshold range for deciding whether to perform median filtering may be determined from the gradient values selected by the first matching template. In a specific application scenario, the second largest and second smallest of the gradient values selected by the first matching template may be used as the two boundary values, the gradient threshold range being those two values and every value between them.
Step S204: judging whether the first gradient value falls outside the gradient threshold range and whether the absolute differences between the first gradient value and both boundary values of the range are greater than the first critical value; if yes, go to step S205; otherwise, go to step S206.
Step S205: median filtering is performed on the first gradient values.
Step S206: keeping the first gradient value unchanged; and performing the above processing on all gradient values based on the first matching template to obtain a gradient map.
In practical applications, the first critical value can be determined according to actual needs. The first gradient value is median filtered only when it falls outside the gradient threshold range and the absolute differences between it and both boundary values of the range exceed the first critical value; otherwise, the first gradient value is kept unchanged.
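Steps S201 to S206 can be sketched as follows; the 3×3 template and the value chosen for the first critical value t1 are illustrative assumptions:

```python
import numpy as np

def conditional_median(grad, t1=5.0):
    """Median-filter a gradient value only when it is an outlier:
    outside [second smallest, second largest] of its 3x3 window AND
    farther than t1 from both of those boundary values; otherwise
    the value is kept unchanged (steps S201-S206)."""
    padded = np.pad(grad.astype(np.float64), 1, mode="reflect")
    out = grad.astype(np.float64).copy()
    H, W = grad.shape
    for i in range(H):
        for j in range(W):
            win = np.sort(padded[i:i + 3, j:j + 3].ravel())
            lo, hi = win[1], win[-2]       # second smallest / second largest
            g = float(grad[i, j])          # first gradient value (centre)
            outside = g < lo or g > hi
            far = abs(g - lo) > t1 and abs(g - hi) > t1
            if outside and far:
                out[i, j] = np.median(win)  # S205: median filtering
            # else S206: keep unchanged
    return out

grad = np.zeros((5, 5))
grad[2, 2] = 100.0                 # an isolated spike in a flat gradient map
clean = conditional_median(grad)   # the spike is suppressed, the rest kept
```

The condition is deliberately conservative: a genuine edge pixel whose neighbours share similar gradients stays inside the threshold range and is left untouched, so only impulse-like outliers are replaced.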
In the image denoising method provided in the embodiment of the application, to allow the user to control the detail of the denoised image explicitly, the process of calculating the weight fusion value of the weights of the residual map and the filter map based on the gradient map may specifically be: acquiring a second critical value for distinguishing flat regions from detail regions of the image; acquiring a fusion adjustment coefficient; determining a second matching template and a search window; and calculating the weight fusion value based on the second critical value, the second matching template, the search window, the fusion adjustment coefficient, the gradient map, the weight of the residual map and the weight of the filter map.
In practical application, the process of calculating the weight fusion value based on the second critical value, the second matching template, the search window, the fusion adjustment coefficient, the gradient map, the weight of the residual map, and the weight of the filter map may specifically be: determining a second gradient value of the center point of a second matching template in the gradient map; for any gradient value to be fused selected by the second matching template, calculating an initial fusion value corresponding to the gradient value to be fused based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, the weight of the residual image and the weight of the filter image; and summing all the initial fusion values in the search window and carrying out normalization processing to obtain corresponding weight fusion values.
In practical application, the process of calculating the initial fusion value corresponding to the gradient value to be fused based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjustment coefficient, the weight of the residual map, and the weight of the filter map may specifically be: calculating an initial fusion value corresponding to the gradient value to be fused based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, the weight of the residual image and the weight of the filter image through a weight fusion operation formula; the weight fusion operation formula may include:
w1 = 1/(1 + e^(c1(|g4| - T2) + c2|g4 - g5|));
w2 = 1 - w1; w5 = w1·w3 + w2·w4;
wherein c1 and c2 represent the fusion adjustment coefficients; g4 represents the gradient value to be fused; T2 represents the second critical value; g5 represents the second gradient value; w3 represents the weight value corresponding to the position of the gradient value to be fused in the filter map; w4 represents the weight value corresponding to the position of the gradient value to be fused in the residual map; and w5 represents the initial fusion value.
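A minimal sketch of the weight fusion formula follows; the values chosen for T2, c1 and c2 are illustrative assumptions, since the text does not fix them:

```python
import math

def fuse_weights(g4, g5, w3, w4, t2=10.0, c1=0.1, c2=0.1):
    """One initial fusion value w5.
    g4: gradient value to be fused; g5: gradient at the template centre;
    w3/w4: weights at that position in the filter map / residual map;
    t2: second critical value (flat vs detail); c1, c2: adjustment
    coefficients. t2, c1, c2 are illustrative, not from the source."""
    w1 = 1.0 / (1.0 + math.exp(c1 * (abs(g4) - t2) + c2 * abs(g4 - g5)))
    w2 = 1.0 - w1
    return w1 * w3 + w2 * w4            # w5 = w1*w3 + w2*w4

flat   = fuse_weights(g4=0.0,  g5=0.0,  w3=0.8, w4=0.2)   # flat region
detail = fuse_weights(g4=50.0, g5=50.0, w3=0.8, w4=0.2)   # strong detail
```

In flat regions (|g4| below T2) the exponent is negative, so w1 is large and the filter-map weight w3 dominates; in detail regions w1 shrinks and the residual-map weight w4 dominates, which is what lets the fused weight preserve detail.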
Referring to fig. 3, fig. 3 is a schematic structural diagram of an image denoising system according to an embodiment of the present disclosure.
An image noise reduction system provided by an embodiment of the present application may include:
a first obtaining module 101, configured to obtain an image to be processed;
the first filtering module 102 is configured to perform filtering processing on an image to be processed by using a non-local mean filtering method to obtain a filtering map;
a first residual error module 103, configured to use an absolute value of a difference between the image to be processed and the filter map as a residual error map;
a first gradient module 104, configured to calculate a gradient value of the image to be processed to obtain a gradient map;
a first weight module 105, configured to calculate weights of the residual error map and the filter map by a non-local mean filtering method;
the first fusion module 106 is configured to calculate a weight fusion value of weights of the residual map and the filter map based on the gradient map;
and the first noise reduction module 107 is configured to perform noise reduction processing on the image to be processed based on the weight fusion value to obtain a noise-reduced image.
In an image denoising system provided by an embodiment of the present application, the first gradient module may include:
and the first gradient submodule is used for calculating the gradient value of the image to be processed based on the gradient operator template to obtain a gradient map.
In an image denoising system provided in an embodiment of the present application, the first gradient sub-module may include:
the first calculation submodule is used for calculating the gradient value of the image to be processed based on the gradient operator template;
and the first processing submodule is used for processing the gradient values based on a median filtering method to obtain a gradient map.
In an image denoising system provided in an embodiment of the present application, the first processing sub-module may include:
a first determining unit for determining a first matching template;
the second determining unit is used for determining a first gradient value of the center point of the first matching template from the gradient values selected by the first matching template;
a third determining unit, configured to determine a gradient threshold range based on the gradient value selected by the first matching template;
the first judgment unit is used for judging whether the first gradient value is out of the gradient threshold range or not, and the absolute values of the difference values of the first gradient value and the boundary value of the gradient threshold range are all larger than a first critical value; if so, performing median filtering on the gradient value of the central point of the first matching template; if not, keeping the gradient value of the central point of the first matching template unchanged; and performing the above processing on all gradient values based on the first matching template to obtain a gradient map.
In an image denoising system provided in an embodiment of the present application, the first fusion module may include:
the first acquisition submodule is used for acquiring a second critical value for distinguishing image flatness and image detail;
the first determining submodule is used for acquiring a fusion adjusting coefficient;
the second determining submodule is used for determining a second matching template and a searching window;
and the second calculation submodule is used for calculating the weight fusion value based on the second critical value, the second matching template, the search window, the fusion adjustment coefficient, the gradient map, the weight of the residual map and the weight of the filter map.
In an image denoising system provided in an embodiment of the present application, the second calculating sub-module may include:
the third determining submodule is used for determining a second gradient value of the central point of the second matching template in the gradient map;
the third computation submodule is used for computing an initial fusion value corresponding to any gradient value to be fused selected by the second matching template based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, the weight of the residual image and the weight of the filter image;
and the second processing submodule is used for summing all the initial fusion values in the search window and carrying out normalization processing to obtain corresponding weight fusion values.
In an image denoising system provided in an embodiment of the present application, the third computation submodule may include:
the first calculation unit is used for calculating an initial fusion value corresponding to the gradient value to be fused based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, the weight of the residual image and the weight of the filter image through a weight fusion operation formula;
the weight fusion operation formula comprises:
w1=1/(1+e^(c1(|g4|-T2)+c2|g4-g5|));
w2=1-w1;w5=w1w3+w2w4;
wherein c1 and c2 represent the fusion adjustment coefficients; g4 represents the gradient value to be fused; T2 represents the second critical value; g5 represents the second gradient value; w3 represents the weight value at the position of the gradient value to be fused in the filter map; w4 represents the weight value at the position of the gradient value to be fused in the residual map; and w5 represents the initial fusion value.
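As a concrete illustration, the formula can be evaluated directly. The sketch below follows the document's symbol names; it is an illustrative evaluation only, and treats T2, c1 and c2 as externally supplied tuning parameters rather than values fixed by the document:

```python
import numpy as np

def fuse_weight(g4, g5, w3, w4, T2, c1, c2):
    # Sigmoid gate: w1 approaches 1 in flat areas (|g4| well below T2
    # and g4 close to the centre gradient g5), so the filter-map
    # weight w3 dominates; in detail areas w2 = 1 - w1 shifts the
    # result toward the residual-map weight w4.
    w1 = 1.0 / (1.0 + np.exp(c1 * (abs(g4) - T2) + c2 * abs(g4 - g5)))
    w2 = 1.0 - w1
    w5 = w1 * w3 + w2 * w4  # initial fusion value
    return w5
```

With c1 = c2 = 0 the gate is neutral (w1 = 0.5), so w5 is the plain average of w3 and w4; large gradient magnitudes drive w1 toward 0 and shift the fused weight toward the residual map.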
The application also provides an image noise reduction device and a computer readable storage medium, which have the corresponding effects of the image noise reduction method provided by the embodiment of the application. Referring to fig. 4, fig. 4 is a schematic structural diagram of an image noise reduction apparatus according to an embodiment of the present disclosure.
The image noise reduction device provided by the embodiment of the application comprises a memory 201 and a processor 202, wherein a computer program is stored in the memory 201, and the processor 202 implements the following steps when executing the computer program:
acquiring an image to be processed;
filtering the image to be processed by a non-local mean filtering method to obtain a filter map;
taking the absolute value of the difference between the image to be processed and the filter map as a residual map;
calculating gradient values of the image to be processed to obtain a gradient map;
calculating weights of the residual map and the filter map by the non-local mean filtering method;
calculating a weight fusion value of the weights of the residual map and the filter map based on the gradient map;
and performing noise reduction processing on the image to be processed based on the weight fusion value to obtain a noise-reduced image.
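The steps above can be sketched end to end. The snippet below is a simplified illustration rather than the claimed method: a 3x3 mean filter stands in for the non-local mean filter, a Laplacian stands in for the gradient operator, and the final blend is a toy stand-in for the weight-fusion step.

```python
import numpy as np

def denoise_sketch(img):
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    # 1. filter map (3x3 mean filter standing in for non-local means)
    filt = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    # 2. residual map = |image - filter map|
    resid = np.abs(img - filt)
    # 3. gradient map (Laplacian used as a simple gradient operator)
    grad = np.abs(4 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                  - np.roll(img, 1, 1) - np.roll(img, -1, 1))
    # 4. toy fusion: flat areas (small gradient) keep the filter map,
    #    detail areas add the residual back to preserve texture
    w1 = 1.0 / (1.0 + np.exp(grad - grad.mean()))
    return w1 * filt + (1.0 - w1) * (filt + resid)
```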
The image noise reduction device provided by the embodiment of the application comprises a memory 201 and a processor 202, wherein a computer program is stored in the memory 201, and the processor 202 implements the following steps when executing the computer program: and calculating the gradient value of the image to be processed based on the gradient operator template to obtain a gradient map.
The image noise reduction device provided by the embodiment of the application comprises a memory 201 and a processor 202, wherein a computer program is stored in the memory 201, and the processor 202 implements the following steps when executing the computer program: calculating a gradient value of the image to be processed based on the gradient operator template; and processing the gradient value based on a median filtering method to obtain a gradient map.
The image noise reduction device provided by the embodiment of the application comprises a memory 201 and a processor 202, wherein a computer program is stored in the memory 201, and the processor 202 implements the following steps when executing the computer program: determining a first matching template; determining a first gradient value of the center point of the first matching template from the gradient values selected by the first matching template; determining a gradient threshold range based on the gradient value selected by the first matching template; judging whether the first gradient value is out of the gradient threshold range or not, wherein the absolute values of the difference values of the first gradient value and the boundary value of the gradient threshold range are all larger than a first critical value; if so, performing median filtering on the gradient value of the central point of the first matching template; if not, keeping the gradient value of the central point of the first matching template unchanged; and performing the above processing on all gradient values based on the first matching template to obtain a gradient map.
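The conditional median filtering just described (replace the template's centre gradient only when it lies outside the window's value range by more than the first critical value T1) can be sketched as follows; the 3x3 template and edge padding are illustrative assumptions, since the document does not fix the template size:

```python
import numpy as np

def clean_gradient_map(grad, T1):
    h, w = grad.shape
    out = grad.copy()
    pad = np.pad(grad, 1, mode='edge')
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3]
            centre = grad[y, x]
            # gradient threshold range from the template's
            # non-centre values
            others = np.delete(win.ravel(), 4)
            lo, hi = others.min(), others.max()
            # median-filter the centre only when it is a clear
            # outlier (beyond the range by more than T1)
            if centre < lo - T1 or centre > hi + T1:
                out[y, x] = np.median(win)
    return out
```

Isolated gradient spikes (likely noise) are suppressed, while genuine edges, whose neighbours carry similarly large gradients, survive unchanged.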
The image noise reduction device provided by the embodiment of the application comprises a memory 201 and a processor 202, wherein a computer program is stored in the memory 201, and the processor 202 implements the following steps when executing the computer program: acquiring a second critical value for distinguishing image flatness and image detail; acquiring a fusion adjustment coefficient; determining a second matching template and a search window; and calculating a weight fusion value based on the second critical value, the second matching template, the search window, the fusion adjusting coefficient, the gradient map, the weight of the residual map and the weight of the filter map.
The image noise reduction device provided by the embodiment of the application comprises a memory 201 and a processor 202, wherein a computer program is stored in the memory 201, and the processor 202 implements the following steps when executing the computer program: determining a second gradient value of the center point of a second matching template in the gradient map; for any gradient value to be fused selected by the second matching template, calculating an initial fusion value corresponding to the gradient value to be fused based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, the weight of the residual map and the weight of the filter map; and summing all the initial fusion values in the search window and carrying out normalization processing to obtain corresponding weight fusion values.
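The closing summation-and-normalisation step can be illustrated as follows; `initial_vals` stands for the initial fusion values (w5) collected at each position inside the search window, and the guard for an all-zero window is an added assumption, not part of the document:

```python
import numpy as np

def normalise_fusion_values(initial_vals):
    vals = np.asarray(initial_vals, dtype=float)
    total = vals.sum()
    if total == 0:
        # degenerate window: fall back to uniform weights (assumption)
        return np.full(vals.shape, 1.0 / vals.size)
    # the resulting weight fusion values sum to one over the window
    return vals / total
```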
The image noise reduction device provided by the embodiment of the application comprises a memory 201 and a processor 202, wherein a computer program is stored in the memory 201, and the processor 202 implements the following steps when executing the computer program: calculating an initial fusion value corresponding to the gradient value to be fused based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, the weight of the residual map and the weight of the filter map through a weight fusion operation formula; the weight fusion operation formula comprises:
w1=1/(1+e^(c1(|g4|-T2)+c2|g4-g5|));
w2=1-w1;w5=w1w3+w2w4;
wherein c1 and c2 represent the fusion adjustment coefficients; g4 represents the gradient value to be fused; T2 represents the second critical value; g5 represents the second gradient value; w3 represents the weight value at the position of the gradient value to be fused in the filter map; w4 represents the weight value at the position of the gradient value to be fused in the residual map; and w5 represents the initial fusion value.
Referring to fig. 5, another image noise reduction device provided in an embodiment of the present application may further include: an input port 203 connected to the processor 202 for transmitting externally input commands to the processor 202; a display unit 204 connected to the processor 202 for displaying processing results of the processor 202 to the outside; and a communication module 205 connected to the processor 202 for enabling communication between the image noise reduction device and the outside. The display unit 204 may be a display panel, a laser scanning display, or the like. The communication modes adopted by the communication module 205 include, but are not limited to, Mobile High-Definition Link (MHL) technology, Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), wireless connection, Wireless Fidelity (WiFi) technology, Bluetooth, Bluetooth Low Energy, and IEEE 802.11s-based communication technologies.
A computer-readable storage medium is provided in an embodiment of the present application, in which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the following steps:
acquiring an image to be processed;
filtering the image to be processed by a non-local mean filtering method to obtain a filter map;
taking the absolute value of the difference between the image to be processed and the filter map as a residual map;
calculating gradient values of the image to be processed to obtain a gradient map;
calculating weights of the residual map and the filter map by the non-local mean filtering method;
calculating a weight fusion value of the weights of the residual map and the filter map based on the gradient map;
and performing noise reduction processing on the image to be processed based on the weight fusion value to obtain a noise-reduced image.
A computer-readable storage medium is provided in an embodiment of the present application, in which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the following steps: calculating gradient values of the image to be processed based on a gradient operator template to obtain a gradient map.
A computer-readable storage medium is provided in an embodiment of the present application, in which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the following steps: calculating a gradient value of the image to be processed based on the gradient operator template; and processing the gradient value based on a median filtering method to obtain a gradient map.
A computer-readable storage medium is provided in an embodiment of the present application, in which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the following steps: determining a first matching template; determining a first gradient value of the center point of the first matching template from the gradient values selected by the first matching template; determining a gradient threshold range based on the gradient value selected by the first matching template; judging whether the first gradient value is out of the gradient threshold range or not, wherein the absolute values of the difference values of the first gradient value and the boundary value of the gradient threshold range are all larger than a first critical value; if so, performing median filtering on the gradient value of the central point of the first matching template; if not, keeping the gradient value of the central point of the first matching template unchanged; and performing the above processing on all gradient values based on the first matching template to obtain a gradient map.
A computer-readable storage medium is provided in an embodiment of the present application, in which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the following steps: acquiring a second critical value for distinguishing image flatness and image detail; acquiring a fusion adjustment coefficient; determining a second matching template and a search window; and calculating a weight fusion value based on the second critical value, the second matching template, the search window, the fusion adjusting coefficient, the gradient map, the weight of the residual map and the weight of the filter map.
A computer-readable storage medium is provided in an embodiment of the present application, in which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the following steps: determining a second gradient value of the center point of a second matching template in the gradient map; for any gradient value to be fused selected by the second matching template, calculating an initial fusion value corresponding to the gradient value to be fused based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, the weight of the residual map and the weight of the filter map; and summing all the initial fusion values in the search window and carrying out normalization processing to obtain corresponding weight fusion values.
A computer-readable storage medium is provided in an embodiment of the present application, in which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the following steps: calculating an initial fusion value corresponding to the gradient value to be fused based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, the weight of the residual map and the weight of the filter map through a weight fusion operation formula; the weight fusion operation formula comprises:
w1=1/(1+e^(c1(|g4|-T2)+c2|g4-g5|));
w2=1-w1;w5=w1w3+w2w4;
wherein c1 and c2 represent the fusion adjustment coefficients; g4 represents the gradient value to be fused; T2 represents the second critical value; g5 represents the second gradient value; w3 represents the weight value at the position of the gradient value to be fused in the filter map; w4 represents the weight value at the position of the gradient value to be fused in the residual map; and w5 represents the initial fusion value.
The computer-readable storage media to which this application relates include random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
For a description of a relevant part in an image denoising system, an image denoising device, and a computer readable storage medium provided in the embodiments of the present application, please refer to a detailed description of a corresponding part in an image denoising method provided in the embodiments of the present application, which is not described herein again. In addition, parts of the above technical solutions provided in the embodiments of the present application, which are consistent with the implementation principles of corresponding technical solutions in the prior art, are not described in detail so as to avoid redundant description.
It is further noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. An image noise reduction method, comprising:
acquiring an image to be processed;
filtering the image to be processed by a non-local mean filtering method to obtain a filter map;
taking the absolute value of the difference between the image to be processed and the filter map as a residual map;
calculating the gradient value of the image to be processed to obtain a gradient map;
calculating the weights of the residual map and the filter map by the non-local mean filtering method;
calculating a weight fusion value of the weights of the residual map and the filter map based on the gradient map;
and performing noise reduction processing on the image to be processed based on the weight fusion value to obtain a noise-reduced image.
2. The method according to claim 1, wherein the calculating gradient values of the image to be processed to obtain a gradient map comprises:
and calculating the gradient value of the image to be processed based on a gradient operator template to obtain the gradient map.
3. The method according to claim 2, wherein the calculating gradient values of the image to be processed based on the gradient operator template to obtain the gradient map comprises:
calculating gradient values of the image to be processed based on the gradient operator template;
and processing the gradient value based on a median filtering method to obtain the gradient map.
4. The method of claim 3, wherein the processing the gradient values based on the median filtering method to obtain the gradient map comprises:
determining a first matching template;
determining a first gradient value of a center point of the first matching template from the gradient values selected by the first matching template;
determining a gradient threshold range based on the gradient value selected by the first matching template;
judging whether the first gradient value is out of the gradient threshold range or not, wherein the absolute values of the difference values of the first gradient value and the boundary value of the gradient threshold range are all larger than a first critical value;
if yes, performing median filtering on the first gradient value;
if not, keeping the first gradient value unchanged;
and processing all the gradient values based on the first matching template to obtain the gradient map.
5. The method according to claim 4, wherein the calculating a weight fusion value of the weights of the residual map and the filter map based on the gradient map comprises:
acquiring a second critical value for distinguishing image flatness and image detail;
acquiring a fusion adjustment coefficient;
determining a second matching template and a search window;
and calculating the weight fusion value based on the second critical value, the second matching template, the search window, the fusion adjusting coefficient, the gradient map, the weight of the residual map and the weight of the filter map.
6. The method of claim 5, wherein the calculating the weight fusion value based on the second critical value, the second matching template, the search window, the fusion adjustment coefficient, the gradient map, the weight of the residual map, and the weight of the filter map comprises:
determining a second gradient value of the center point of the second matching template in the gradient map;
for any gradient value to be fused selected by the second matching template, calculating an initial fusion value corresponding to the gradient value to be fused based on a difference value between the gradient value to be fused and the second gradient value, a difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, a weight of the residual map and a weight of the filter map;
and summing all the initial fusion values in the search window and carrying out normalization processing to obtain the corresponding weight fusion value.
7. The method according to claim 6, wherein the calculating an initial blending value corresponding to the gradient value to be blended based on the difference between the gradient value to be blended and the second gradient value, the difference between the gradient value to be blended and the second critical value, the blending adjustment coefficient, the weight of the residual map, and the weight of the filter map comprises:
calculating an initial fusion value corresponding to the gradient value to be fused based on the difference value between the gradient value to be fused and the second gradient value, the difference value between the gradient value to be fused and the second critical value, the fusion adjusting coefficient, the weight of the residual map and the weight of the filter map by using a weight fusion operation formula;
the weight fusion operation formula comprises:
w1=1/(1+e^(c1(|g4|-T2)+c2|g4-g5|));
w2=1-w1;w5=w1w3+w2w4;
wherein c1, c2 represent the fusion adjustment coefficient; g4 represents the gradient value to be fused; t2 represents the second critical value; g5 represents the second gradient value; w3 represents the corresponding weight value of the position of the gradient value to be fused in the filter map; w4 represents the corresponding weight value of the position of the gradient value to be fused in the residual map; w5 represents the initial fusion value.
8. An image noise reduction system, comprising:
the first acquisition module is used for acquiring an image to be processed;
the first filtering module is used for filtering the image to be processed by a non-local mean filtering method to obtain a filter map;
the first residual module is used for taking the absolute value of the difference between the image to be processed and the filter map as a residual map;
the first gradient module is used for calculating gradient values of the image to be processed to obtain a gradient map;
the first weight module is used for calculating weights of the residual map and the filter map by the non-local mean filtering method;
the first fusion module is used for calculating a weight fusion value of the weights of the residual map and the filter map based on the gradient map;
and the first noise reduction module is used for performing noise reduction processing on the image to be processed based on the weight fusion value to obtain a noise-reduced image.
9. An image noise reduction device characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the image noise reduction method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which is executed by a processor for implementing the image noise reduction method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010213216.1A CN111402172B (en) | 2020-03-24 | 2020-03-24 | Image noise reduction method, system, equipment and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010213216.1A CN111402172B (en) | 2020-03-24 | 2020-03-24 | Image noise reduction method, system, equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111402172A true CN111402172A (en) | 2020-07-10 |
CN111402172B CN111402172B (en) | 2023-08-22 |
Family
ID=71432914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010213216.1A Active CN111402172B (en) | 2020-03-24 | 2020-03-24 | Image noise reduction method, system, equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111402172B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112508823A (en) * | 2021-02-02 | 2021-03-16 | 数坤(北京)网络科技有限公司 | Image denoising method and device and computer readable storage medium |
CN113225590A (en) * | 2021-05-06 | 2021-08-06 | 深圳思谋信息科技有限公司 | Video super-resolution enhancement method and device, computer equipment and storage medium |
CN114693543A (en) * | 2021-12-07 | 2022-07-01 | 珠海市杰理科技股份有限公司 | Image noise reduction method and device, image processing chip and image acquisition equipment |
WO2022257759A1 (en) * | 2021-06-08 | 2022-12-15 | 百果园技术(新加坡)有限公司 | Image banding artifact removal method and apparatus, and device and medium |
CN116385280A (en) * | 2023-01-09 | 2023-07-04 | 爱芯元智半导体(上海)有限公司 | Image noise reduction system and method and noise reduction neural network training method |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103208104A (en) * | 2013-04-16 | 2013-07-17 | 浙江工业大学 | Non-local theory-based image denoising method |
US20130321678A1 (en) * | 2012-05-31 | 2013-12-05 | Apple Inc. | Systems and methods for lens shading correction |
KR20150080865A (en) * | 2014-01-02 | 2015-07-10 | 삼성테크윈 주식회사 | Apparatus for interpolating color |
US20150371108A1 (en) * | 2013-02-18 | 2015-12-24 | Nec Corporation | Image processing method, image processing device, and recording medium |
CN107610072A (en) * | 2017-10-10 | 2018-01-19 | 北京理工大学 | A kind of low-light video frequency image self adaption noise-reduction method based on gradient guiding filtering |
US20180293496A1 (en) * | 2017-04-06 | 2018-10-11 | Pixar | Denoising monte carlo renderings using progressive neural networks |
CN108921800A (en) * | 2018-06-26 | 2018-11-30 | 成都信息工程大学 | Non-local mean denoising method based on form adaptive search window |
CN110796615A (en) * | 2019-10-18 | 2020-02-14 | 浙江大华技术股份有限公司 | Image denoising method and device and storage medium |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130321678A1 (en) * | 2012-05-31 | 2013-12-05 | Apple Inc. | Systems and methods for lens shading correction |
US20150371108A1 (en) * | 2013-02-18 | 2015-12-24 | Nec Corporation | Image processing method, image processing device, and recording medium |
CN103208104A (en) * | 2013-04-16 | 2013-07-17 | 浙江工业大学 | Non-local theory-based image denoising method |
KR20150080865A (en) * | 2014-01-02 | 2015-07-10 | 삼성테크윈 주식회사 | Apparatus for interpolating color |
US20180293496A1 (en) * | 2017-04-06 | 2018-10-11 | Pixar | Denoising monte carlo renderings using progressive neural networks |
CN107610072A (en) * | 2017-10-10 | 2018-01-19 | 北京理工大学 | A kind of low-light video frequency image self adaption noise-reduction method based on gradient guiding filtering |
CN108921800A (en) * | 2018-06-26 | 2018-11-30 | 成都信息工程大学 | Non-local mean denoising method based on form adaptive search window |
CN110796615A (en) * | 2019-10-18 | 2020-02-14 | 浙江大华技术股份有限公司 | Image denoising method and device and storage medium |
Non-Patent Citations (2)
Title |
---|
GENG CHEN ET AL.: "Denoising of Diffusion MRI Data via Graph Framelet Matching in x-q Space" * |
HUANG ZHI ET AL.: "Non-local means denoising algorithm with mixed similarity weights" *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112508823A (en) * | 2021-02-02 | 2021-03-16 | 数坤(北京)网络科技有限公司 | Image denoising method and device and computer readable storage medium |
CN112508823B (en) * | 2021-02-02 | 2021-09-07 | 数坤(北京)网络科技股份有限公司 | Image denoising method and device and computer readable storage medium |
CN113225590A (en) * | 2021-05-06 | 2021-08-06 | 深圳思谋信息科技有限公司 | Video super-resolution enhancement method and device, computer equipment and storage medium |
WO2022257759A1 (en) * | 2021-06-08 | 2022-12-15 | 百果园技术(新加坡)有限公司 | Image banding artifact removal method and apparatus, and device and medium |
CN114693543A (en) * | 2021-12-07 | 2022-07-01 | 珠海市杰理科技股份有限公司 | Image noise reduction method and device, image processing chip and image acquisition equipment |
CN114693543B (en) * | 2021-12-07 | 2024-04-05 | 珠海市杰理科技股份有限公司 | Image noise reduction method and device, image processing chip and image acquisition equipment |
CN116385280A (en) * | 2023-01-09 | 2023-07-04 | 爱芯元智半导体(上海)有限公司 | Image noise reduction system and method and noise reduction neural network training method |
CN116385280B (en) * | 2023-01-09 | 2024-01-23 | 爱芯元智半导体(上海)有限公司 | Image noise reduction system and method and noise reduction neural network training method |
Also Published As
Publication number | Publication date |
---|---|
CN111402172B (en) | 2023-08-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111402172B (en) | Image noise reduction method, system, equipment and computer readable storage medium | |
CN108681743B (en) | Image object recognition method and device and storage medium | |
US8860756B2 (en) | Automatic image cropping | |
US9852353B2 (en) | Structure aware image denoising and noise variance estimation | |
CN105631913B (en) | Cloud-based content-aware population for images | |
CN109741287B (en) | Image-oriented filtering method and device | |
EP3264032B1 (en) | Measurement method and terminal | |
CN108932702B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
JPWO2010024402A1 (en) | Image processing apparatus and method, and image display apparatus | |
CN112464921B (en) | Obstacle detection information generation method, apparatus, device and computer readable medium | |
KR101646138B1 (en) | Method of determing layout, layout determination server performing the same and storage medium storing the same | |
CN112801882B (en) | Image processing method and device, storage medium and electronic equipment | |
CN113177898A (en) | Image defogging method and device, electronic equipment and storage medium | |
CN112215776A (en) | Portrait buffing method, electronic device and computer readable storage medium | |
CN111523531A (en) | Word processing method and device, electronic equipment and computer readable storage medium | |
CN113780190A (en) | Method, equipment and storage medium for constructing space contour recognition and space detection model | |
CN104536566A (en) | Page content processing method | |
WO2024139756A1 (en) | Method and apparatus for interpolation of feature map, device, and storage medium | |
WO2022027191A1 (en) | Method and device for plane correction, computer-readable medium, and electronic device | |
CN114187502A (en) | Vehicle loading rate identification method and device, electronic equipment and storage medium | |
CN110516680B (en) | Image processing method and device | |
CN106911934A (en) | Blocking effect minimizing technology and block elimination effect filter | |
CN115690364A (en) | AR model acquisition method, electronic device and readable storage medium | |
CN117237669B (en) | Structural member feature extraction method, device, equipment and storage medium | |
CN114529481B (en) | Pneumatic optical thermal radiation effect correction method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||