CN111833259A - Image definition determining method, device, equipment and storage medium
- Publication number: CN111833259A
- Application number: CN202010470387.2A
- Authority
- CN
- China
- Prior art keywords
- image
- noise
- edge
- detail data
- definition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/00 Image enhancement or restoration
- G06T5/70 Denoising; Smoothing
- G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/20212 Image combination
- G06T2207/20221 Image fusion; Image merging
- G06T2207/30168 Image quality inspection
Abstract
The embodiments of this application relate to an image sharpness determining method, device, equipment and storage medium. The method comprises: performing edge-preserving noise reduction on a first original image and a second original image, respectively, to obtain an edge-preserved, noise-reduced first image and an edge-preserved, noise-reduced second image; extracting first image detail data from the edge-preserved, noise-reduced first image, and extracting second image detail data from the edge-preserved, noise-reduced second image; and determining a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data. With this scheme, when two original images are compared for sharpness, both are first subjected to edge-preserving noise reduction and the comparison is then performed on the denoised images, which reduces the interference of noise in the original images with the sharpness comparison and improves its accuracy.
Description
Technical Field
The embodiments of this application relate to the technical field of image processing, and in particular to a method, device, equipment and storage medium for determining image sharpness.
Background
When multi-focus image fusion is performed, two original images are generally fused block by block. Specifically, the relatively sharper pixel points (or pixel blocks) of the two original images are combined to obtain an image that is sharp overall.
However, in a digital image a sharper pixel usually also carries stronger noise, so the finally synthesized image ends up noisier and contains more noise points, giving it a pronounced grainy appearance.
Disclosure of Invention
The embodiments of this application provide a method, device, equipment and storage medium for determining image sharpness, aiming to reduce the interference of image noise with sharpness comparison.
A first aspect of the embodiments of this application provides a method for determining image sharpness, the method comprising:
performing edge-preserving noise reduction on a first original image and a second original image, respectively, to obtain an edge-preserved, noise-reduced first image and an edge-preserved, noise-reduced second image, where the first original image and the second original image are two images captured of the same photographic subject;
extracting first image detail data from the edge-preserved, noise-reduced first image, and extracting second image detail data from the edge-preserved, noise-reduced second image;
and determining a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data.
Optionally, performing edge-preserving noise reduction on the first original image to obtain the edge-preserved, noise-reduced first image comprises:
taking each pixel point of the first original image in turn as the pixel point to be processed, and filtering the pixel point to be processed with a plurality of filter kernels to obtain a plurality of filter values;
adjusting the pixel value of the pixel point to be processed according to the differences between each of the filter values and the pixel value of the pixel point to be processed;
wherein the adjusted pixel values of the pixel points in the first original image form the edge-preserved, noise-reduced first image.
Optionally, extracting the first image detail data from the edge-preserved, noise-reduced first image comprises:
performing low-pass filtering on the edge-preserved, noise-reduced first image to extract the first image detail data.
Optionally, performing low-pass filtering on the edge-preserved, noise-reduced first image to extract the first image detail data comprises:
performing a first Gaussian filtering on the edge-preserved, noise-reduced first image;
downsampling the first image after the first Gaussian filtering;
upsampling the downsampled first image, where the upsampling factor matches the downsampling factor;
and differencing the upsampled first image with the first image after the first Gaussian filtering to obtain the first image detail data.
Optionally, after upsampling the downsampled first image, the method further comprises:
performing a second Gaussian filtering on the upsampled first image to obtain a first image after the second Gaussian filtering;
in this case, differencing the upsampled first image with the first image after the first Gaussian filtering to obtain the first image detail data comprises:
differencing the first image after the second Gaussian filtering with the first image after the first Gaussian filtering to obtain the first image detail data.
Optionally, determining the sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data comprises:
comparing the two pieces of image detail data corresponding to the same pixel coordinate position in the first image detail data and the second image detail data to generate a sharpness mask map.
Optionally, after determining the sharpness comparison result of the first original image and the second original image, the method further comprises:
according to the sharpness comparison result, extracting, for each pixel coordinate position, the sharper pixel point from either the edge-preserved, noise-reduced first image or the edge-preserved, noise-reduced second image;
and synthesizing a target image from the extracted sharper pixel points.
Optionally, after determining the sharpness comparison result of the first original image and the second original image, the method further comprises:
determining, according to the sharpness comparison result, which of the edge-preserved, noise-reduced first image and the edge-preserved, noise-reduced second image is sharper overall;
and generating image editing prompt information according to the overall sharper image, where the prompt information prompts the user to save the overall sharper image, or to delete the other images or adjust their sharpness.
A second aspect of the embodiments of this application provides an image sharpness determining apparatus, the apparatus comprising:
a noise reduction module, configured to perform edge-preserving noise reduction on a first original image and a second original image, respectively, to obtain an edge-preserved, noise-reduced first image and an edge-preserved, noise-reduced second image, where the first original image and the second original image are two images captured of the same photographic subject;
a detail extraction module, configured to extract first image detail data from the edge-preserved, noise-reduced first image, and to extract second image detail data from the edge-preserved, noise-reduced second image;
and a sharpness determining module, configured to determine a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data.
A third aspect of embodiments of the present application provides a readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps in the method according to the first aspect of the present application.
A fourth aspect of the embodiments of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the method according to the first aspect of the present application.
With the image sharpness determining method provided by this application, when the sharpness of pixel points in two original images (or their overall sharpness) is compared, edge-preserving noise reduction is first applied to both original images, which reduces the noise in the images while preserving their edge details. The subsequent sharpness comparison is therefore not influenced by image noise and yields a more accurate result. Furthermore, when image fusion is performed on the basis of this more accurate comparison result, the synthesized image carries less noise and contains fewer noise points, so it looks smoother to the human eye and avoids a pronounced grainy appearance.
Drawings
In order to explain the technical solutions of the embodiments of this application more clearly, the drawings used in the description of the embodiments are briefly introduced below. Evidently the drawings described below show only some embodiments of this application, and a person skilled in the art can derive further drawings from them without inventive effort.
Fig. 1 is a schematic diagram illustrating an effect of an image fusion method proposed in the related art;
FIG. 2 is a schematic illustration of a noisy image;
fig. 3 is a flowchart of an image sharpness determining method according to an embodiment of the present application;
fig. 4 is a schematic diagram of two images to be compared for sharpness, after edge-preserving noise reduction, according to an embodiment of the present application;
fig. 5 is a schematic diagram of a filter kernel selection window in an image sharpness determining method according to another embodiment of the present application;
fig. 6 is a schematic diagram illustrating selection of a filter kernel in an image sharpness determining method according to another embodiment of the present application;
fig. 7 is a schematic diagram of down-sampling and up-sampling an image in an image sharpness determining method according to yet another embodiment of the present application;
fig. 8 is a schematic diagram of a sharpness mask map generated in an image sharpness determining method according to yet another embodiment of the present application;
fig. 9 is a schematic diagram of image fusion performed in an image sharpness determination method according to yet another embodiment of the present application;
fig. 10 is a schematic diagram of an image sharpness determining apparatus according to still another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
When an image acquisition device captures an image, some of the objects within the camera's field of view are inevitably closer to the camera while others are farther away. The camera therefore cannot focus on all objects in the field of view at the same time; in the captured image, the in-focus objects are noticeably sharper than the out-of-focus objects, i.e., the out-of-focus objects appear blurred. Fig. 1A shows an image captured with the camera focused on an alarm clock; fig. 1B shows an image captured with the camera focused on a person.
To make all objects sharp in the image finally output by the image acquisition device, the multiple images captured by the camera over the same field of view, each focused on a different object, need to be fused. Specifically, at each position the pixel point (or pixel block) that is sharper in one original image than in the others is selected and placed into the fused image, finally yielding an image that is sharp throughout; the fused result is shown in fig. 1C.
However, in a digital image acquired by an image acquisition device, the processor judges the sharpness of a pixel point from its pixel value: the larger the pixel value, the sharper the pixel is taken to be. Since noise also produces large pixel values, the processor may mistake a noisy pixel for a sharp one; that is, noise in the image interferes with the sharpness comparison. Moreover, a sharper pixel usually also carries stronger noise (including but not limited to Gaussian noise, Poisson noise, multiplicative noise, salt-and-pepper noise, etc.). This is especially true in night-vision mode, where exposure is usually increased to improve the sharpness of the captured image, which in turn increases its noise; fig. 2 shows an image with clearly visible noise. An existing multi-focus sharpness selection process therefore inevitably prefers noisier pixel points for image fusion, so the finally synthesized image is noisier and contains more noise points, giving the human eye a strong grainy impression and leaving the image looking unclear. In addition, existing multi-focus sharpness selection algorithms place high demands on the processor's computing performance, which low-end processors struggle to meet.
For example, suppose two images A and B were taken of an object. In image A the object is sharper than the background; in image B the background is sharper than the object. During image fusion, the object from image A and the background from image B are therefore combined into one fully sharp image. However, when comparing the sharpness of pixel points, existing algorithms cannot filter out the noise in the two images, so the finally synthesized target image AB also contains considerable noise; to the human eye it has a strong grainy texture and appears unclear.
Therefore, when multi-focus image fusion selects the relatively sharper pixel points from the multiple images acquired under different focus settings and uses them to synthesize a target image, it is very important to reduce the noise of the selected pixel points.
In view of this, the present application proposes a series of technical solutions to solve the technical problem that the pixel points selected to synthesize the target image during multi-focus image fusion carry too much noise; the same solutions can also be used to compare the overall sharpness of two images A and B.
To explain the proposed technical solution more clearly, the schematic diagram of fig. 4 is used as the running example, so it is briefly described first. In fig. 4, each square represents a pixel, and the diagonal stroke in the upper-left corner of a square represents noise at that pixel (a square without the stroke means the noise of that pixel has been filtered out). Fig. 4A is the image captured with the camera focused on the background, and fig. 4B is the image captured with the camera focused on the face. With the prior-art approach, the image fused from fig. 4A and fig. 4B would contain considerable noise. In the figures referred to below, like elements carry the same meaning.
On the basis of this, the technical solution proposed by the present application is described below.
Referring to fig. 3, fig. 3 is a flowchart of an image sharpness determining method according to an embodiment of the present application. As shown in fig. 3, the method comprises the following steps:
S401: performing edge-preserving noise reduction on a first original image and a second original image, respectively, to obtain an edge-preserved, noise-reduced first image and an edge-preserved, noise-reduced second image, where the first original image and the second original image are two images captured of the same photographic subject.
The first original image and the second original image are two different images acquired by the image acquisition device over the same field of view, each captured with the camera focused on a different object; in each image the in-focus object is sharper than the out-of-focus objects.
For example, fig. 1A, acquired with the focus on the alarm clock, shows the alarm clock sharper than the person, while fig. 1B, acquired with the focus on the person, shows the person sharper than the alarm clock; likewise, fig. 4A, acquired with the focus on the background, shows the background sharper than the face, while fig. 4B, acquired with the focus on the face, shows the face sharper than the background.
Performing edge-preserving noise reduction on the first and second original images means smoothing out the noise in the two original images while preserving the image edge details (such as the face contour shown in fig. 4) during the smoothing.
Applying edge-preserving noise reduction to fig. 4A and fig. 4B yields the edge-preserved, noise-reduced first image shown in fig. 4A' and the edge-preserved, noise-reduced second image shown in fig. 4B' (the noise in the images is filtered out while the edge details, i.e., the face contour, are retained).
Once the edge-preserving noise reduction is finished, the two resulting images ensure that the subsequent sharpness judgment is not influenced by noise points in the images.
S402: extracting first image detail data from the edge-preserved, noise-reduced first image, and extracting second image detail data from the edge-preserved, noise-reduced second image.
The detail data of an image may be extracted with the pixel point as the basic unit, or with the pixel block (i.e., a block of several adjacent pixel points, whose size can be chosen according to actual requirements) as the basic unit, stepping through the image with a certain stride. It must be ensured, however, that the positions at which the first image detail data is extracted from the edge-preserved, noise-reduced first image correspond one-to-one to the positions at which the second image detail data is extracted from the edge-preserved, noise-reduced second image, and that the same basic unit is used in both images, so that the sharpness comparison of the subsequent step S403 can be carried out.
Taking the pixel point as the basic unit, suppose the first image detail data is extracted at coordinates (1, 1) of the edge-preserved, noise-reduced first image shown in fig. 4A' (the abscissa denotes the row number and the ordinate the column number, so (1, 1) denotes the first row, first column); then the second image detail data must likewise be extracted at coordinates (1, 1) of the edge-preserved, noise-reduced second image shown in fig. 4B'. The first and second image detail data at all other positions are extracted in the same way. Extraction by pixel blocks works analogously and is not repeated here, except to note that blocks of the same size and the same stride must be used in both images.
S403: determining the sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data.
If in step S402 the detail data was extracted with the pixel point as the basic unit, then during the sharpness comparison the first image detail data and the second image detail data extracted at the same position of the edge-preserved, noise-reduced first and second images are compared, to decide which of the two pixel points at that position is the sharper one. If the detail data was extracted with the pixel block as the basic unit, the comparison likewise decides which of the two pixel blocks at the same position is the sharper one. All coordinate positions of the edge-preserved, noise-reduced first and second images are traversed and compared in this way.
Taking pixel points as an example, the first image detail data extracted at coordinates (1, 1) of fig. 4A' is compared with the second image detail data extracted at coordinates (1, 1) of fig. 4B', then the data at coordinates (1, 2) of fig. 4A' with the data at coordinates (1, 2) of fig. 4B', and so on, until all pixel points of fig. 4A' and fig. 4B' have been compared. In this example the comparison proceeds in the arrangement order of the pixel points, one pixel at a time; the pixel points may, however, also be picked in any order until all of them have been compared. This is not a limitation of the application and may be chosen according to the actual situation.
The edge-preserved, noise-reduced first and second images were obtained from the first and second original images by edge-preserving noise reduction, respectively. Therefore, once the sharpness comparison result of the edge-preserved, noise-reduced first and second images has been determined, it can be mapped back onto the original images, determining the sharpness comparison result of the first original image and the second original image.
For example, if the comparison finds that the pixel point at coordinates (1, 1) of fig. 4A' is sharper than the pixel point at coordinates (1, 1) of fig. 4B', then in the original images the pixel point at coordinates (1, 1) of fig. 4A is likewise sharper than the pixel point at coordinates (1, 1) of fig. 4B.
With this scheme, when the sharpness of pixel points in two original images (or their overall sharpness) is compared, edge-preserving noise reduction is applied to both original images first, which reduces the noise in the images while preserving their edge details. The subsequent sharpness comparison step is therefore not influenced by noise in the images.
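As an orientation aid, the three steps can be strung together in a few lines of Python. This is a minimal sketch, not anything prescribed by the patent; the helper functions it calls (edge_preserving_denoise, extract_detail, sharpness_mask) are illustrative names that are sketched one by one in the sections below.

```python
def sharpness_compare(orig_a, orig_b, thr=1.0):
    # S401: edge-preserving noise reduction on both originals
    a1 = edge_preserving_denoise(orig_a)
    b1 = edge_preserving_denoise(orig_b)
    # S402: extract the image detail data from the denoised images
    lap_a = extract_detail(a1)
    lap_b = extract_detail(b1)
    # S403: per-pixel comparison; 1 where the first image is sharper
    return sharpness_mask(lap_a, lap_b, thr)
```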
In an alternative embodiment, step S401, performing edge-preserving noise reduction on the first original image and the second original image, is realized through the following steps S4011 to S4014.
That is, step S401 comprises:
S4011: taking each pixel point of the first original image in turn as the pixel point to be processed, and filtering the pixel point to be processed with a plurality of filter kernels to obtain a plurality of filter values.
In this step, edge-preserving noise reduction must be applied to every pixel point of the first original image, so each pixel point is in turn taken as the pixel point to be processed and filtered, finally yielding a plurality of filter values per pixel.
Specifically, an N × N window is selected centered on the current pixel point, where N is a positive odd number, N ≥ 3, and usually N = 3 or N = 5. The value of N may be chosen freely according to the computing capability of the image acquisition device's processor: a larger N yields a sharper final image but requires more computing resources, i.e., the processor's performance must support it. Within this window, (N² - 1) filter kernels are then selected for filtering the current pixel point (8 kernels for N = 3), and each filter kernel must include the current pixel point.
The filter kernels may be box filters (boxFilter), Gaussian filters, or other types, which a person skilled in the art may choose according to actual needs; this is not a limitation of the application. When choosing a filter kernel, however, it must be ensured that the product of the sum of the per-pixel weights inside the kernel and the kernel coefficient equals 1. For example, in the first kernel selected in the first direction (fig. 6A), the weights of the four covered pixels are all 1 and the kernel coefficient is 1/4, so the product is (1 + 1 + 1 + 1) × 1/4 = 1.
Take the edge-preserving noise reduction of fig. 4A as an example: each pixel point in it is taken in turn as the pixel point to be processed and filtered. Take the pixel point P at coordinates (4, 6), with pixel value P0, as the current pixel point to be processed, and select a 3 × 3 window around it, as shown in fig. 5. Filter kernels in 8 directions are then selected, covering in turn the upper-left corner, the upper-right corner, the lower-left corner, the lower-right corner, the upper half, the lower half, the left half and the right half of the window.
The 8 selected filter kernels are shown in fig. 6A to 6H, respectively.
The filter kernels may of course also be selected from a 5 × 5 window; the selection rule is then analogous to the 3 × 3 case, with the upper-left corner, upper-right corner, lower-left corner, lower-right corner, upper half, lower half, left half and right half of the window selected in turn as filter kernels for filtering the current pixel point.
The current pixel point P is then filtered with each of the 8 selected kernels, finally yielding 8 filter values P1, P2, ..., P8.
Note that when the pixel point to be processed lies at the edge of the image, the filter kernels can no longer be selected as above; in this case the image edge must be zero-padded. As shown in fig. 5, extra pixel points are padded around the image edge with their pixel values set to zero, so that the filter kernels can be selected.
S4012: adjusting the pixel value of the pixel point to be processed according to the differences between each of the filter values and the pixel value of the pixel point to be processed; the adjusted pixel values of all pixel points in the first original image form the edge-preserved, noise-reduced first image.
In one possible implementation, after the plurality of filter values of the current pixel point have been obtained, each filter value is subtracted from the pixel value of the current pixel point to obtain a plurality of differences, and the filter value whose difference has the smallest absolute value is taken as the filtered value of the current pixel point. This is repeated for every pixel point until all pixel points of the image have been traversed.
Continuing the example above, once the 8 filter values P1, P2, ..., P8 have been obtained, each is subtracted from the pixel value of the current pixel point, giving the differences:
P0 - P1; P0 - P2; P0 - P3; P0 - P4; P0 - P5; P0 - P6; P0 - P7; P0 - P8.
The filter value whose difference has the smallest absolute value then determines the new pixel value of the current pixel point.
Suppose that among the 8 differences, |P0 - P8| is the smallest; then the filter value P8 becomes the new pixel value of the current pixel point P. The other pixel points are adjusted in the same way, which a person skilled in the art can derive directly and unambiguously from the above description, so this is not repeated here.
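The kernel selection of step S4011 and the minimum-difference rule of step S4012 can be sketched in Python as follows, assuming a float grayscale image and a 3 × 3 window; the function names are illustrative. scipy.ndimage.correlate is used so the kernels keep their stated orientation, and mode='constant' zero-pads the border as described above.

```python
import numpy as np
from scipy.ndimage import correlate

def directional_kernels_3x3():
    """Eight directional mean kernels for a 3x3 window: the four corner
    quadrants and the four halves, each covering the centre pixel.
    Normalising each kernel to sum to 1 enforces the rule that
    (sum of weights) x (kernel coefficient) = 1."""
    regions = [
        (slice(0, 2), slice(0, 2)),  # upper-left quadrant
        (slice(0, 2), slice(1, 3)),  # upper-right quadrant
        (slice(1, 3), slice(0, 2)),  # lower-left quadrant
        (slice(1, 3), slice(1, 3)),  # lower-right quadrant
        (slice(0, 2), slice(0, 3)),  # upper half
        (slice(1, 3), slice(0, 3)),  # lower half
        (slice(0, 3), slice(0, 2)),  # left half
        (slice(0, 3), slice(1, 3)),  # right half
    ]
    kernels = []
    for rows, cols in regions:
        k = np.zeros((3, 3))
        k[rows, cols] = 1.0
        kernels.append(k / k.sum())  # e.g. four ones x 1/4 = 1 for a quadrant
    return kernels

def edge_preserving_denoise(img):
    """Filter every pixel with the 8 kernels and keep the filter value
    whose difference from the original value is smallest, min |P0 - Pi|."""
    img = np.asarray(img, dtype=np.float64)
    responses = np.stack([
        correlate(img, k, mode='constant', cval=0.0)   # zero-padded border
        for k in directional_kernels_3x3()
    ])                                                 # shape (8, H, W)
    diffs = np.abs(responses - img[None, :, :])        # |P0 - Pi| per direction
    best = diffs.argmin(axis=0)                        # index of closest value
    rows, cols = np.indices(img.shape)
    return responses[best, rows, cols]
```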
The second original image is processed with the same edge-preserving noise reduction steps as the first original image, namely steps S4013 and S4014:
S4013: taking each pixel point of the second original image in turn as the pixel point to be processed, and filtering the pixel point to be processed with a plurality of filter kernels to obtain a plurality of filter values.
S4014: adjusting the pixel value of the pixel point to be processed according to the differences between each of the filter values and the pixel value of the pixel point to be processed; the adjusted pixel values of all pixel points in the second original image form the edge-preserved, noise-reduced second image.
The details follow directly and unambiguously from the description of the edge-preserving noise reduction of the first original image above and are not repeated here. After fig. 4A and fig. 4B have been processed as above, fig. 4A' and fig. 4B' are obtained.
In an optional implementation, the edge-preserved, noise-reduced first and second images obtained from the edge-preserving noise reduction still contain the full frequency content, including the high-frequency detail. Low-pass filtering is applied to each of them to obtain a low-pass-filtered first image and a low-pass-filtered second image, i.e., low-frequency images, and differencing each image with its low-frequency version then yields the high-frequency information, i.e., the first image detail data and the second image detail data used for the sharpness comparison. That is, step S402, extracting first image detail data from the edge-preserved, noise-reduced first image and extracting second image detail data from the edge-preserved, noise-reduced second image, is realized through the following steps S4021 to S4028. Step S402 thus comprises:
S4021: performing the first Gaussian filtering on the edge-preserved, noise-reduced first image (denoted A1).
The edge-preserved, noise-reduced first image still carries high-frequency content; the first Gaussian filtering mainly serves to filter out the Gaussian noise in the image.
For example, the first Gaussian filtering is applied to fig. 4A' to filter out the Gaussian noise left over from the edge-preserving noise reduction.
S4022: downsampling the first image A1 after the first Gaussian filtering to obtain smallA1.
The first image after the first Gaussian filtering is downsampled to a given size. In a preferred embodiment, the downsampling is interlaced by rows and columns: whenever a pixel point O is sampled, one row or one column is skipped before the next pixel point P is sampled. All sampled pixel points are then spliced together according to their relative positions (the unsampled pixel points are discarded, so that every two nearest sampled pixel points become adjacent; i.e., in the spliced image the nearest sampled points O(x, y) and P satisfy P(x ± 1, y) or P(x, y ± 1)). This yields a small image one quarter the size of the first image after the first Gaussian filtering.
For example, when fig. 4A' (or fig. 7A') is downsampled in this interlaced manner, the pixels whose row and column indices are both odd (or both even) are taken: say columns 2, 4, 6, 8, 10 and 12 and rows 2, 4, 6 and 8 are sampled, and the pixel points at the intersections of the sampled rows and columns are the sampled pixel points, shown inside the dashed boxes in fig. 7A'. Splicing the sampled pixel points in their original arrangement finally yields the image shown in fig. 7A'' (smallA1).
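The interlaced sampling itself reduces to a strided slice; a minimal sketch, assuming even image height and width (index 1, 3, 5, ... from 0 corresponds to the 2nd, 4th, 6th, ... rows and columns counted from 1, as in the example above):

```python
def downsample_interlaced(img):
    # keep every second row and column: a quarter-size image (smallA1)
    return img[1::2, 1::2]
```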
S4023: and upsampling the downsampled first image, wherein the multiple of upsampling is consistent with the multiple of downsampling.
And then upsampling the downsampled smallA1 to the size of A1.
I.e., the image shown in fig. 7A "(smallA 1) is back sampled to the size of fig. 7A ' (or 4A ') to yield gA1 (i.e., fig. 7A '").
S4024: and carrying out difference on the first image subjected to the up-sampling and the first image subjected to the first Gaussian filtering processing to obtain the first image detail data.
Specifically, the first image after upsampling and the first image after the first gaussian filtering are differentiated, and the absolute value of each pixel after differentiation is used as the first image detail data of each pixel in the first image after edge preservation and noise reduction.
For example, taking the absolute value of a1 and gA1 as the difference (i.e., taking the difference), the first image detail data lapA1 (i.e., laplacian detail layer of a1) of the a1 image, i.e., lapA1 ═ a1-gA1|, is obtained. lapA1 describes high frequency information, i.e., edge detail information, of the edge-capped and noise-reduced first image.
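Steps S4021 to S4024 can be sketched end to end as follows, again assuming a float grayscale image with even height and width. The Gaussian sigma and the bilinear upsampling (order=1) are illustrative choices the patent leaves open; per step S4024, the difference is taken against the image after the first Gaussian filtering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def extract_detail(denoised, sigma=1.0):
    a1 = gaussian_filter(np.asarray(denoised, dtype=np.float64), sigma)  # S4021
    small = a1[1::2, 1::2]           # S4022: interlaced downsampling
    g = zoom(small, 2, order=1)      # S4023: upsample by the matching factor
    return np.abs(a1 - g)            # S4024: lapA1 = |A1 - gA1|
```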
In some other embodiments, step S40231 may be executed after step S4023: performing a second Gaussian filtering on the upsampled first image to obtain a first image after the second Gaussian filtering (denoted gA1').
Upsampling the small image back to the original size may introduce Gaussian noise, so after the upsampling a second Gaussian filtering is applied to filter out the noise introduced by the upsampling. This further reduces the influence of noise on the subsequent sharpness comparison: the images used for the comparison carry less noise, and the finally synthesized image has fewer noise grains and is sharper. Step S40241 is then executed: differencing the first image after the second Gaussian filtering with the first image after the first Gaussian filtering to obtain the first image detail data.
The differencing proceeds as described above, except that A1 and gA1' are used; the other steps are the same and are not repeated here.
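The variant changes only the last step; a sketch reusing the imports of the previous sketch, again with illustrative parameters:

```python
def extract_detail_v2(denoised, sigma=1.0):
    a1 = gaussian_filter(np.asarray(denoised, dtype=np.float64), sigma)
    g = zoom(a1[1::2, 1::2], 2, order=1)
    g2 = gaussian_filter(g, sigma)   # S40231: second Gaussian pass, gA1'
    return np.abs(a1 - g2)           # S40241: lapA1 = |A1 - gA1'|
```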
The edge-preserved, noise-reduced second image is low-pass filtered with the same steps as the edge-preserved, noise-reduced first image, namely steps S4025 to S4028:
S4025: performing the first Gaussian filtering on the edge-preserved, noise-reduced second image.
S4026: downsampling the second image after the first Gaussian filtering.
S4027: upsampling the downsampled second image, where the upsampling factor matches the downsampling factor.
S4028: differencing the upsampled second image with the second image after the first Gaussian filtering to obtain the second image detail data.
Likewise, upsampling the downsampled second image back to its original size may introduce Gaussian noise, so after the upsampling a second Gaussian filtering can again be applied to filter out the noise introduced by the upsampling, further reducing the influence of noise on the subsequent sharpness comparison; the images used for the comparison then carry less noise, and the finally synthesized image has fewer noise grains and is sharper.
Therefore, in some other embodiments, the low-pass filtering of the edge-preserved, noise-reduced second image comprises, besides steps S4025 to S4027, steps S40271 and S40281. After step S4027, step S40271 is executed: performing a second Gaussian filtering on the upsampled second image to obtain a second image after the second Gaussian filtering. Then step S40281 is executed: differencing the second image after the second Gaussian filtering with the second image after the first Gaussian filtering to obtain the second image detail data.
The details follow directly and unambiguously from the description of the low-pass filtering of the first image above and are not repeated here. After the above low-pass filtering has been applied to fig. 4A' (or fig. 7A') and fig. 4B' (or fig. 7B'), lapA1 and lapB1, i.e., the first image detail data and the second image detail data, are obtained.
In an alternative embodiment, after the first image detail data and the second image detail data have been extracted, step S403, determining the sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data, is realized through the following steps S4031 to S4032. Step S403 thus comprises:
S4031: setting a sharpness threshold Thr.
S4032: determining the sharpness comparison result of the first original image and the second original image according to the first image detail data lapA1, the second image detail data lapB1 and the sharpness threshold Thr.
For example, the determination follows these rules:
For the pixel points at the same coordinate position, when lapA1 > lapB1 × Thr holds, it is determined that at this position the pixel point of the edge-preserved, noise-reduced first image is sharper than the pixel point of the edge-preserved, noise-reduced second image; that is, at this coordinate the pixel point of the first original image is sharper than that of the second original image.
For the pixel points at the same coordinate position, when lapA1 < lapB1 × Thr holds, it is determined that at this position the pixel point of the edge-preserved, noise-reduced first image is less sharp than the pixel point of the edge-preserved, noise-reduced second image; that is, at this coordinate the pixel point of the first original image is less sharp than that of the second original image.
Thr is generally set to 1.0 (i.e., the detail data of the pixel points at the same coordinate position in the edge-preserved, noise-reduced first and second images are compared directly), but may be set according to the actual situation. The larger Thr is, the more clearly a pixel point of A1 must exceed the corresponding pixel point of B1 before it is judged the sharper one.
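The comparison rule vectorises into a single expression; a minimal sketch whose 1/0 convention matches the mask map described next:

```python
import numpy as np

def sharpness_mask(lap_a, lap_b, thr=1.0):
    # 1 where the first image is sharper (lapA1 > lapB1 * Thr), else 0
    return (lap_a > lap_b * thr).astype(np.uint8)
```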
In an optional implementation, the method further includes generating a sharpness mask map from the comparison results, i.e., comparing the two pieces of image detail data corresponding to the same pixel coordinate position in the first image detail data and the second image detail data and generating a sharpness mask map. Specifically:
An empty mask map (Mask) is defined to indicate the relative sharpness of the pixel points of the edge-preserved, noise-reduced first and second images. The Mask is pixel-aligned with both A1 and B1.
For any coordinate, when the pixel point of the edge-preserved, noise-reduced first image (equivalently, of the first original image) is sharper than that of the edge-preserved, noise-reduced second image (equivalently, of the second original image), the same coordinate of the Mask is set to 1; when it is less sharp, the same coordinate of the Mask is set to 0.
Continuing the example in which lapA1 and lapB1 were obtained from fig. 4A' (or fig. 7A') and fig. 4B' (or fig. 7B'), the mask map shown in fig. 8 is finally obtained from the comparison of lapA1 and lapB1. (The dashed face contour is not part of the mask map itself; it is drawn only to aid understanding. The mask map marks the relatively sharp pixel points of the two images A1 and B1 (or A and B), and the position distribution of its entries also traces a rough outline of the face.)
In an optional implementation, after the sharpness mask map has been obtained, the edge-preserved, noise-reduced first image and the edge-preserved, noise-reduced second image are fused into a target image according to the mask map through steps S501 to S502:
S501: according to the sharpness comparison result, extracting, for each pixel coordinate position, the sharper pixel point from either the edge-preserved, noise-reduced first image or the edge-preserved, noise-reduced second image.
As shown in fig. 9, the sharper pixel points of fig. 4A' (or fig. 7A', or fig. 9A') and fig. 4B' (or fig. 7B', or fig. 9B') are extracted, i.e., the pixel points of the background portion of fig. 4A' and the pixel points of the face portion of fig. 4B'.
S502: synthesizing a target image from the extracted sharper pixel points.
The extracted pixel points are then assembled into the target image.
As shown in fig. 9, an image in which both the background and the face are sharp, as in fig. 9C', is finally synthesized. Compared with the prior-art image fusion approach, the noise in the image of fig. 9C' is reduced.
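The fusion itself is then a per-pixel selection; a sketch assuming grayscale images pixel-aligned with the mask (for a colour image the mask would be broadcast over the channel axis):

```python
import numpy as np

def fuse(a1, b1, mask):
    # take the pixel marked as sharper: A1 where mask == 1, B1 where mask == 0
    return np.where(mask == 1, a1, b1)
```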
In another optional embodiment, after the sharpness comparison result of the first original image and the second original image has been determined, the method further comprises:
S601: determining, according to the sharpness comparison result, which of the edge-preserved, noise-reduced first image and the edge-preserved, noise-reduced second image is sharper overall.
Taking fig. 4 (or fig. 7) as an example, the comparison shows that fig. 4A' contains more sharp pixel points than fig. 4B'; that is, the overall sharpness of fig. 4A' is higher than that of fig. 4B'.
S602: generating image editing prompt information according to the overall sharper image, where the prompt information prompts the user to save the overall sharper image, or to delete the other images or adjust their sharpness.
Taking fig. 4 (or fig. 7) as an example, once fig. 4A' is determined to be sharper overall than fig. 4B', image editing prompt information is output for fig. 4A': for example, the user is prompted to choose whether to save the image of fig. 4A', or to delete the image of fig. 4B', or to adjust the sharpness of the image of fig. 4B'.
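A sketch of the overall-sharpness decision and the prompt, using a simple majority vote over the sharpness mask; the tie-breaking and the prompt wording are illustrative, not prescribed by the patent:

```python
def overall_sharper(mask):
    a_votes = int(mask.sum())        # pixels where the first image is sharper
    if 2 * a_votes >= mask.size:     # majority vote; ties go to the first image
        return 'A', 'Save image A? Image B may be deleted or have its sharpness adjusted.'
    return 'B', 'Save image B? Image A may be deleted or have its sharpness adjusted.'
```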
With this scheme, when the sharpness of pixel points in two original images (or their overall sharpness) is compared, edge-preserving noise reduction is applied to both original images first, which reduces the noise in the images while preserving their edge details. The subsequent sharpness comparison and image fusion steps are therefore not influenced by noise in the images. Moreover, the amount of computation is noticeably smaller than that of existing multi-focus sharpness selection algorithms; the demands on the processor's computing performance are low, making the method easy to run even on low-end processors.
The embodiments above mainly take two images, obtained by focusing on two different objects in the same field of view, as the subject of the sharpness comparison, image fusion and so on, but this does not limit the application. A person skilled in the art will appreciate that when several objects occupy different spatial positions within the same field of view, the camera of the image acquisition device can focus on each object in turn to obtain several images; two of them are compared for sharpness and fused, the result is then compared and fused with the next image, and so on, until a single image is synthesized in which every pixel point is sharp (i.e., every object in the image is sharp).
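One possible reading of this iterative scheme is a left fold over the image stack, reusing the illustrative helpers sketched above; the patent does not prescribe this particular order of operations:

```python
from functools import reduce

def fuse_stack(images, thr=1.0):
    def fuse_pair(acc, nxt):
        a1 = edge_preserving_denoise(acc)
        b1 = edge_preserving_denoise(nxt)
        mask = sharpness_mask(extract_detail(a1), extract_detail(b1), thr)
        return fuse(a1, b1, mask)
    return reduce(fuse_pair, images)
```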
Based on the same inventive concept, an embodiment of the present application provides an image sharpness determining apparatus. Referring to fig. 10, fig. 10 is a schematic diagram of an image sharpness determining apparatus according to an embodiment of the present application. As shown in fig. 10, the apparatus comprises:
a noise reduction module 1001, configured to perform edge-preserving noise reduction on a first original image and a second original image, respectively, to obtain an edge-preserved, noise-reduced first image and an edge-preserved, noise-reduced second image, where the first original image and the second original image are two images captured of the same photographic subject;
a detail extraction module 1002, configured to extract first image detail data from the edge-preserved, noise-reduced first image, and to extract second image detail data from the edge-preserved, noise-reduced second image;
and a sharpness determining module 1003, configured to determine a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data.
Optionally, the noise reduction module 1001 comprises:
a filter kernel selection submodule, configured to take each pixel point of the first original image in turn as the pixel point to be processed and to filter the pixel point to be processed with a plurality of filter kernels to obtain a plurality of filter values;
and a pixel value adjustment submodule, configured to adjust the pixel value of the pixel point to be processed according to the differences between each of the filter values and the pixel value of the pixel point to be processed;
wherein the adjusted pixel values of the pixel points in the first original image form the edge-preserved, noise-reduced first image.
Optionally, the detail extraction module 1002 comprises:
a low-pass filtering unit, configured to perform low-pass filtering on the edge-preserved, noise-reduced first image to extract the first image detail data.
Optionally, the low-pass filtering unit includes:
a first Gaussian filtering subunit, configured to perform first Gaussian filtering on the edge-preserved and noise-reduced first image;
a down-sampling subunit, configured to down-sample the first image after the first Gaussian filtering;
an up-sampling subunit, configured to up-sample the down-sampled first image, where the up-sampling factor is the same as the down-sampling factor;
a differencing subunit, configured to difference the up-sampled first image against the first image after the first Gaussian filtering, to obtain the first image detail data.
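A minimal sketch of this unit, assuming grayscale input; the sigma and sampling factor are illustrative choices, not values fixed by this document:

```python
import numpy as np
import cv2

def extract_detail(img, sigma=1.0, factor=2):
    """Gaussian blur, down-sample, up-sample by the same factor, then
    difference against the blurred image (sigma/factor are assumed)."""
    img = img.astype(np.float32)
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)   # first Gaussian filtering
    h, w = blurred.shape[:2]
    small = cv2.resize(blurred, (w // factor, h // factor),
                       interpolation=cv2.INTER_AREA)  # down-sample
    up = cv2.resize(small, (w, h),
                    interpolation=cv2.INTER_LINEAR)   # up-sample back
    # The down/up round trip discards fine structure; differencing
    # against the blurred image keeps exactly that structure.
    return blurred - up
```

The round trip through the lower resolution acts as a stronger low-pass filter, so the difference isolates the high-frequency detail that the subsequent sharpness comparison needs.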
Optionally, the low-pass filtering unit further includes:
a second Gaussian filtering subunit, configured to perform second Gaussian filtering on the up-sampled first image to obtain a first image after the second Gaussian filtering;
the differencing subunit is further configured to difference the first image after the second Gaussian filtering against the first image after the first Gaussian filtering, to obtain the first image detail data.
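This variant differs from the sketch above only by one extra blur of the up-sampled image before differencing (reusing the imports above; `sigma2` is an illustrative assumption):

```python
def extract_detail_v2(img, sigma1=1.0, sigma2=1.0, factor=2):
    """Like extract_detail(), but with a second Gaussian filtering of
    the up-sampled image before differencing (parameters assumed)."""
    img = img.astype(np.float32)
    blurred = cv2.GaussianBlur(img, (0, 0), sigma1)
    h, w = blurred.shape[:2]
    small = cv2.resize(blurred, (w // factor, h // factor),
                       interpolation=cv2.INTER_AREA)
    up = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
    up = cv2.GaussianBlur(up, (0, 0), sigma2)  # second Gaussian filtering
    return blurred - up
```

The second blur suppresses resampling artifacts from the up-sampling so that they are not mistaken for image detail.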
Optionally, the sharpness determining module 1003 includes:
a mask image generation submodule, configured to compare the two items of image detail data corresponding to the same pixel coordinate position in the first image detail data and the second image detail data, to generate a sharpness mask image.
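A minimal sketch, assuming the comparison is on absolute detail magnitude (the document does not fix the comparison rule at this point):

```python
import numpy as np

def sharpness_mask(detail1, detail2):
    """Per pixel coordinate: 1 where the first image's detail response
    is at least as strong as the second's, else 0 (rule assumed)."""
    return (np.abs(detail1) >= np.abs(detail2)).astype(np.uint8)
```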
Optionally, the apparatus further comprises:
a sharper-pixel extraction module, configured to extract, for each pixel coordinate position according to the sharpness comparison result, the sharper pixel from the edge-preserved and noise-reduced first image or the edge-preserved and noise-reduced second image;
an image synthesis module, configured to synthesize a target image from the extracted sharper pixels.
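Together with the mask above, the extraction and synthesis modules reduce to a per-pixel select; a grayscale sketch:

```python
import numpy as np

def fuse_by_mask(img1, img2, mask):
    """Per pixel, take the value from whichever denoised image the
    sharpness mask flags as sharper (grayscale assumed; a color image
    would need the mask broadcast across channels)."""
    return np.where(mask.astype(bool), img1, img2)
```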
Optionally, the apparatus further comprises:
a sharper-image determining module, configured to determine, from the edge-preserved and noise-reduced first image and the edge-preserved and noise-reduced second image, the image with the higher overall sharpness according to the sharpness comparison result;
a prompt information generation module, configured to generate image editing prompt information based on the image with the higher overall sharpness, where the image editing prompt information prompts a user to save the image with the higher overall sharpness, or to delete, or adjust the sharpness of, the images other than it.
Based on the same inventive concept, another embodiment of the present application provides a readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the image sharpness determining method according to any of the above-mentioned embodiments of the present application.
Based on the same inventive concept, another embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the image sharpness determining method according to any of the above embodiments of the present application is implemented.
Since the apparatus embodiments are substantially similar to the method embodiments, their description is kept brief; for the relevant points, refer to the corresponding parts of the method embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar, the embodiments may be referred to one another.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The image sharpness determining method, apparatus, device, and storage medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, following the ideas of the present application, make changes to the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (11)
1. An image sharpness determination method, characterized by comprising:
performing edge-preserving noise reduction on a first original image and a second original image respectively, to obtain an edge-preserved and noise-reduced first image and an edge-preserved and noise-reduced second image, wherein the first original image and the second original image are two images acquired of the same shooting subject;
extracting first image detail data from the edge-preserved and noise-reduced first image, and extracting second image detail data from the edge-preserved and noise-reduced second image;
and determining a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data.
2. The method according to claim 1, wherein performing edge-preserving noise reduction on the first original image to obtain the edge-preserved and noise-reduced first image comprises:
taking each pixel of the first original image as a pixel to be processed, and filtering the pixel to be processed with a plurality of filtering kernels to obtain a plurality of filtering values;
adjusting the pixel value of the pixel to be processed according to the differences between each of the plurality of filtering values and the pixel value of the pixel to be processed;
the adjusted pixel values of all pixels of the first original image forming the edge-preserved and noise-reduced first image.
3. The method of claim 1, wherein extracting the first image detail data from the edge-preserved and noise-reduced first image comprises:
performing low-pass filtering on the edge-preserved and noise-reduced first image to extract the first image detail data.
4. The method of claim 3, wherein performing low-pass filtering on the edge-preserved and noise-reduced first image to extract the first image detail data comprises:
performing first Gaussian filtering on the edge-preserved and noise-reduced first image;
down-sampling the first image after the first Gaussian filtering;
up-sampling the down-sampled first image, wherein the up-sampling factor is the same as the down-sampling factor;
and differencing the up-sampled first image and the first image after the first Gaussian filtering, to obtain the first image detail data.
5. The method of claim 4, wherein after up-sampling the down-sampled first image, the method further comprises:
performing second Gaussian filtering on the up-sampled first image to obtain a first image after the second Gaussian filtering;
and wherein differencing the up-sampled first image and the first image after the first Gaussian filtering to obtain the first image detail data comprises:
differencing the first image after the second Gaussian filtering and the first image after the first Gaussian filtering, to obtain the first image detail data.
6. The method of claim 1, wherein determining the sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data comprises:
comparing the two items of image detail data corresponding to the same pixel coordinate position in the first image detail data and the second image detail data, to generate a sharpness mask image.
7. The method of any one of claims 1 to 6, wherein after determining the sharpness comparison result of the first original image and the second original image, the method further comprises:
extracting, for each pixel coordinate position according to the sharpness comparison result, the sharper pixel from the edge-preserved and noise-reduced first image or the edge-preserved and noise-reduced second image;
and synthesizing a target image from the extracted sharper pixels.
8. The method of any one of claims 1 to 6, wherein after determining the sharpness comparison result of the first original image and the second original image, the method further comprises:
determining, from the edge-preserved and noise-reduced first image and the edge-preserved and noise-reduced second image, the image with the higher overall sharpness according to the sharpness comparison result;
and generating image editing prompt information based on the image with the higher overall sharpness, wherein the image editing prompt information prompts a user to save the image with the higher overall sharpness, or to delete, or adjust the sharpness of, the images other than the image with the higher overall sharpness.
9. An image sharpness determining apparatus, characterized by comprising:
a noise reduction module, configured to perform edge-preserving noise reduction on a first original image and a second original image respectively, to obtain an edge-preserved and noise-reduced first image and an edge-preserved and noise-reduced second image, wherein the first original image and the second original image are two images acquired of the same shooting subject;
a detail extraction module, configured to extract first image detail data from the edge-preserved and noise-reduced first image, and extract second image detail data from the edge-preserved and noise-reduced second image;
and a sharpness determining module, configured to determine a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 8 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010470387.2A CN111833259B (en) | 2020-05-28 | 2020-05-28 | Image definition determining method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111833259A true CN111833259A (en) | 2020-10-27 |
CN111833259B CN111833259B (en) | 2024-05-07 |
Family
ID=72913919
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010470387.2A Active CN111833259B (en) | 2020-05-28 | 2020-05-28 | Image definition determining method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111833259B (en) |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |