
CN111833259B - Image definition determining method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111833259B
Authority
CN
China
Prior art keywords: image, noise reduction, detail data, definition, noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010470387.2A
Other languages
Chinese (zh)
Other versions
CN111833259A (en)
Inventor
唐金伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd
Priority to CN202010470387.2A
Publication of CN111833259A
Application granted
Publication of CN111833259B
Legal status: Active


Classifications

    • G PHYSICS / G06 COMPUTING; CALCULATING OR COUNTING / G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
        • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
        • G06T 5/70 Denoising; Smoothing
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
        • G06T 2207/20 Special algorithmic details
            • G06T 2207/20212 Image combination
                • G06T 2207/20221 Image fusion; Image merging
        • G06T 2207/30 Subject of image; Context of image processing
            • G06T 2207/30168 Image quality inspection

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application relate to an image definition determining method, apparatus, device, and storage medium. The method comprises the following steps: performing edge protection and noise reduction processing on a first original image and a second original image respectively, to obtain a first image after edge protection and noise reduction and a second image after edge protection and noise reduction; extracting first image detail data from the first image after edge protection and noise reduction, and extracting second image detail data from the second image after edge protection and noise reduction; and determining a definition comparison result of the first original image and the second original image according to the first image detail data and the second image detail data. With this technical scheme, when the definition of two original images is compared, both are first subjected to edge protection and noise reduction, and the comparison is then performed on the two processed images; this reduces the interference of noise in the original images with the definition comparison and improves the accuracy of the comparison.

Description

Image definition determining method, device, equipment and storage medium
Technical Field
The embodiments of the present application relate to the technical field of image processing, and in particular to an image definition determining method, apparatus, device, and storage medium.
Background
In multi-focus image fusion, two original images are typically fused block by block: the pixels (or pixel blocks) with relatively higher definition in the two original images are combined to obtain an image that is sharp overall.
However, in a digital image, pixels that appear sharper generally also carry more noise. The fused image therefore ends up noisier and contains more noise points, giving it a grainy appearance to the human eye.
Disclosure of Invention
The embodiments of the present application provide an image definition determining method, apparatus, device, and storage medium, aiming to reduce the interference of image noise with definition comparison.
A first aspect of the embodiments of the present application provides an image definition determining method, including:
performing edge protection and noise reduction processing on a first original image and a second original image respectively, to obtain a first image after edge protection and noise reduction and a second image after edge protection and noise reduction, wherein the first original image and the second original image are two images obtained by image acquisition of the same photographed subject;
extracting first image detail data from the first image after edge protection and noise reduction, and extracting second image detail data from the second image after edge protection and noise reduction;
and determining a definition comparison result of the first original image and the second original image according to the first image detail data and the second image detail data.
Optionally, performing edge-preserving and noise-reducing processing on the first original image to obtain an edge-preserving and noise-reducing first image, including:
taking each pixel point of the first original image as a pixel point to be processed, and filtering the pixel point to be processed with a plurality of filter kernels to obtain a plurality of filter values;
adjusting the pixel value of the pixel point to be processed according to the differences between the plurality of filter values and the pixel value of the pixel point to be processed;
And after the pixel value of each pixel point in the first original image is adjusted, forming the first image after edge protection and noise reduction.
Optionally, extracting the first image detail data from the first image after edge-preserving and noise-reducing includes:
And carrying out low-pass filtering processing on the first image after edge protection and noise reduction so as to extract the detail data of the first image.
Optionally, performing low-pass filtering processing on the edge-preserving and noise-reducing first image to extract the first image detail data, including:
performing a first Gaussian filtering treatment on the first image after edge protection and noise reduction;
downsampling a first image after the first Gaussian filtering process;
Upsampling the downsampled first image, wherein a multiple of the upsampling is identical to a multiple of the downsampling;
and differentiating the first image after up-sampling and the first image after the first Gaussian filtering treatment to obtain the first image detail data.
Optionally, after upsampling the downsampled first image, the method further comprises:
Performing a second Gaussian filtering process on the first image after the up-sampling to obtain a first image after the second Gaussian filtering process;
differentiating the first image after up-sampling and the first image after the first Gaussian filtering to obtain the first image detail data, wherein the method comprises the following steps:
And differentiating the first image after the second Gaussian filtering treatment and the first image after the first Gaussian filtering treatment to obtain the first image detail data.
Optionally, determining a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data includes:
and comparing the two image detail data corresponding to the same pixel coordinate position in the first image detail data and the second image detail data to generate a definition mask diagram.
Optionally, after determining the sharpness comparison result of the first original image and the second original image, the method further comprises:
according to the definition comparison result, extracting, for each pixel coordinate position, the pixel point with higher definition from the first image after edge protection and noise reduction or the second image after edge protection and noise reduction;
and synthesizing the target image according to the extracted pixel points with higher definition.
Optionally, after determining the sharpness comparison result of the first original image and the second original image, the method further comprises:
Determining an image with higher overall definition from the first image after edge protection and noise reduction and the second image after edge protection and noise reduction according to the definition comparison result;
generating image editing prompt information according to the image with higher overall definition, wherein the image editing prompt information is used to prompt a user to save the image with higher overall definition, or to delete, or adjust the definition of, the images other than the image with higher overall definition.
A second aspect of an embodiment of the present application provides an image sharpness determining apparatus, including:
a noise reduction module, configured to perform edge protection and noise reduction processing on a first original image and a second original image respectively, to obtain a first image after edge protection and noise reduction and a second image after edge protection and noise reduction, wherein the first original image and the second original image are two images obtained by image acquisition of the same photographed subject;
The detail extraction module is used for extracting first image detail data from the first image after edge protection and noise reduction and extracting second image detail data from the second image after edge protection and noise reduction;
and the definition determining module is used for determining a definition comparison result of the first original image and the second original image according to the first image detail data and the second image detail data.
A third aspect of the embodiments of the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method according to the first aspect of the present application.
A fourth aspect of the embodiments of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to the first aspect of the present application.
With the image definition determining method provided by the present application, the two original images are first subjected to edge protection and noise reduction processing before the definition of their pixel points (or their overall definition) is compared. This reduces the noise in the images (i.e., removes noise points) while preserving edges (i.e., retaining the edge details in the images). The subsequent definition comparison is therefore no longer affected by noise in the images, and a more accurate definition comparison result is obtained. Further, when image fusion is performed on the basis of this accurate comparison result, the synthesized image has low noise and contains few noise points, so its overall visual impression is smoother to the human eye, without a strong grainy feel.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the effect of an image fusion method proposed in the related art;
FIG. 2 is a schematic illustration of a noisy image;
FIG. 3 is a flow chart of an image definition determining method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of two images, whose definition is to be compared, after edge protection and noise reduction according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a filter kernel selection window in an image definition determining method according to another embodiment of the present application;
FIG. 6 is a schematic diagram illustrating the selection of filter kernels in an image definition determining method according to another embodiment of the present application;
FIG. 7 is a schematic diagram showing the downsampling and upsampling of an image in an image definition determining method according to another embodiment of the present application;
FIG. 8 is a schematic diagram of the definition mask map generated in an image definition determining method according to still another embodiment of the present application;
FIG. 9 is a schematic diagram showing image fusion in an image definition determining method according to another embodiment of the present application;
FIG. 10 is a schematic diagram of an image definition determining apparatus according to still another embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
When an image acquisition device captures images, some of the objects within the camera's field of view are inevitably closer to the camera and others farther away. The camera therefore cannot focus on all objects in the field of view at the same time, so in the captured image the in-focus objects are noticeably sharper than the out-of-focus ones; that is, the unfocused objects appear blurred. FIG. 1A shows an image captured with the camera focused on the alarm clock; FIG. 1B shows an image captured with the camera focused on the person.
To make all objects in the finally output image sharp, the camera must capture, within the same field of view, a plurality of images, each focused on a different object in that view. Specifically, for each position, the pixel point (or pixel block) that is relatively sharper in one original image than in the others is selected and placed into the fused image, finally yielding an image that is sharp overall; the fused result is shown in FIG. 1C.
However, in a digital image acquired by an image acquisition device, the processor judges the definition of a pixel point from its pixel value: the larger the pixel value, the higher the definition the processor considers the pixel to have. Noise in the image therefore interferes with the definition comparison. Moreover, a pixel with higher apparent definition generally carries more noise (including but not limited to Gaussian noise, Poisson noise, multiplicative noise, salt-and-pepper noise, etc.). This is especially true in night-vision mode, where exposure is usually increased to improve the definition of the captured image, which further raises the image noise; FIG. 2 shows a pair of images with clearly visible noise. In the existing multi-focus definition selection process, pixels with higher noise are inevitably selected preferentially for image fusion, so the finally synthesized image is noisier and contains more noise points, giving the human eye a strong grainy impression; the image is still perceived as not clearly discernible. In addition, the conventional multi-focus definition selection algorithm places high demands on processor performance, which low-end processors struggle to meet.
For example, assume two images A and B are taken of an object. In image A, the object is sharper than the background; in image B, the background is sharper than the object. During image fusion, the object from A and the background from B are therefore combined into a complete sharp image. However, when comparing the definition of pixel points, existing algorithms cannot filter out the noise (noise points) in the two images A and B, so the finally synthesized target image AB also contains many noise points; to the human eye it looks strongly grainy rather than clear.
Therefore, when fusing multi-focus images, after the multiple differently focused shots have been acquired, it is important to select the relatively sharper pixels from the acquired images in a way that keeps the noise of the selected pixels low when they are used to synthesize the target image.
In view of this, the present application proposes a series of technical solutions to address the problem that, when multi-focus images are fused, the pixel points selected for synthesizing the target image are noisy; the proposed solutions can also be used to compare the overall definition of two images A and B.
To illustrate the proposed technical solution more clearly, the description is based on the schematic diagram of FIG. 4, which is briefly introduced first. Referring to FIG. 4, each square represents a pixel, and the diagonal line in the upper-left corner of a square represents that pixel's noise (a square without such a diagonal line is a pixel whose noise has been filtered out). FIG. 4A is the image captured with the camera focused on the background, and FIG. 4B is the image captured with the camera focused on the face. With the prior-art technical solution, the image obtained by fusing FIG. 4A and FIG. 4B would contain considerable noise. In the figures referred to hereinafter, the same graphic elements have the same meaning.
On this basis, the technical solution provided by the present application is described below.
Referring to fig. 3, fig. 3 is a flowchart illustrating an image sharpness determination method according to an embodiment of the present application. As shown in fig. 3, the method comprises the steps of:
S401: and performing edge-preserving and noise-reducing treatment on the first original image and the second original image respectively to obtain an edge-preserving and noise-reducing first image and an edge-preserving and noise-reducing second image, wherein the first original image and the second original image are two images obtained by image acquisition of the same shooting subject.
When the image acquisition device captures images, the camera's focus is placed on different objects within the same field of view, and two different images are acquired; in each image the focused object is relatively sharper than the unfocused ones.
For example, FIG. 1A is the image acquired with the focus on the alarm clock, in which the alarm clock is sharper than the person; FIG. 1B is the image acquired with the focus on the person, in which the person is sharper than the alarm clock. Likewise, FIG. 4A is the image acquired with the focus on the background, in which the background is sharper than the face, and FIG. 4B is the image acquired with the focus on the face, in which the face is sharper than the background.
Performing edge protection and noise reduction processing on the first original image and the second original image respectively means smoothing the noise (noise points) in the two original images while retaining the image edge details (such as the face contour shown in FIG. 4) during the smoothing.
Edge protection and noise reduction processing is performed on the images in FIG. 4A and FIG. 4B respectively, yielding the first image after edge protection and noise reduction shown in FIG. 4A' and the second image after edge protection and noise reduction shown in FIG. 4B' (the noise in the images is filtered out, while the edge detail, namely the face contour, is retained).
After the edge protection and noise reduction processing is finished, two images are obtained, which ensures that the subsequent definition judgment steps are not affected by noise points in the images.
S402: extracting first image detail data from the first image after edge protection and noise reduction, and extracting second image detail data from the second image after edge protection and noise reduction.
When extracting the detail data of an image, the extraction may use the pixel point as the basic unit, or may use a pixel block (i.e., a block of several adjacent pixel points; how many pixel points each block contains can be determined according to actual requirements) as the basic unit, extracted at a certain step size. In either case it must be ensured that the extraction positions of the first image detail data extracted from the first image after edge protection and noise reduction correspond one-to-one with the extraction positions of the second image detail data extracted from the second image after edge protection and noise reduction, and that the same basic unit is used for both images, so that the definition comparison in the subsequent step S403 can be carried out.
Taking the pixel point as the basic unit as an example: if first image detail data is extracted at coordinates (1, 1) in the first image after edge protection and noise reduction shown in FIG. 4A' [with the abscissa indicating the row number and the ordinate the column number, (1, 1) denotes the first row, first column], then second image detail data must be extracted at coordinates (1, 1) in the second image after edge protection and noise reduction shown in FIG. 4B'. The first image detail data and second image detail data at other positions are extracted in the same way. Extraction by pixel block is similar and is not detailed here, except that both images must use pixel blocks of the same size and the same step size.
S403: and determining a definition comparison result of the first original image and the second original image according to the first image detail data and the second image detail data.
If, in step S402, the detail data was extracted with the pixel point as the basic unit, then during the definition comparison the first image detail data and the second image detail data extracted at the same position in the first image after edge protection and noise reduction and the second image after edge protection and noise reduction are compared, to judge which of the two pixel points at that position has the higher definition. If the detail data was extracted with the pixel block as the basic unit, the comparison is made likewise between the first image detail data and the second image detail data extracted at the same position, to judge which of the two pixel blocks at that position has the higher definition. All pixel points at all coordinate positions in the two images are traversed and compared.
Taking the pixel point as an example: the first image detail data extracted at coordinates (1, 1) in FIG. 4A' is compared with the second image detail data extracted at coordinates (1, 1) in FIG. 4B', then the data at coordinates (1, 2) are compared, and so on, until all pixel points in FIG. 4A' and FIG. 4B' have been traversed and compared. The example above compares sequentially, in the arrangement order of the pixel points, with the pixel point as the unit step; however, pixel points may also be selected in any order for comparison until all have been traversed. This is not a limitation of the present application and may be chosen according to actual practice.
Because the first image after edge protection and noise reduction and the second image after edge protection and noise reduction are obtained by performing edge protection and noise reduction on the first original image and the second original image respectively, once the definition comparison result of the two processed images is determined, it can be mapped back to the original images, thereby determining the definition comparison result of the first original image and the second original image.
For example, if after comparison the pixel point at coordinates (1, 1) in FIG. 4A' is determined to be sharper than the pixel point at coordinates (1, 1) in FIG. 4B', then correspondingly, in the original images, the pixel point at coordinates (1, 1) in FIG. 4A is also sharper than the pixel point at coordinates (1, 1) in FIG. 4B.
With this technical solution, the two original images are first subjected to edge protection and noise reduction before the definition (or overall definition) of their pixel points is compared, reducing the noise in the images (i.e., removing noise points) while preserving the edges (i.e., retaining the edge details). The subsequent definition comparison step is then no longer affected by noise in the images.
In an alternative embodiment, the edge protection and noise reduction processing of step S401 on the first original image and the second original image is implemented through the following steps S4011 to S4014.
That is, step S401 includes:
S4011: and taking each pixel point of the first original image as a pixel point to be processed, and filtering the pixel point to be processed through a plurality of filtering cores to obtain a plurality of filtering values.
In this step, the edge-preserving and noise-reducing process is required to be performed on each pixel in the first original image, so that each pixel is taken as a pixel to be processed, and each pixel is filtered, and finally a plurality of filtering values are obtained.
Specifically, with the current pixel point as the center, an N×N window is selected (N is a positive odd number, N ≥ 3; typically N = 3 or N = 5. N can be chosen freely according to the computing capability of the image acquisition device's processor: a larger N yields a sharper final image but requires more computing resources, i.e., stronger processor support). Then a plurality of filter kernels, for example (N²−1) of them, are selected within the N×N window and used to filter the current pixel point; each filter kernel must contain the current pixel point.
The filter kernel may be of type boxFilter, gaussFilter, or any other type, which those skilled in the art may select according to actual needs; this is not a limitation of the present application. However, when selecting the filter kernels, it must be ensured that the product of the sum of the pixel weights in each filter kernel and the kernel coefficient is 1. For example, in the first filter kernel selected in the example below (FIG. 6A), the weights of the four pixels are all 1 and the kernel coefficient is 1/4, so the product of the weight sum and the coefficient is (1+1+1+1) × 1/4 = 1.
Take the edge protection and noise reduction of FIG. 4A as an example: each pixel point is taken as the pixel point to be processed and filtered. Consider the pixel point P (with pixel value P0) at coordinates (4, 6) as the current pixel point to be processed, and select a 3×3 window, as shown in FIG. 5. Filter kernels are then selected in 8 directions, as follows:
The 8 filter kernels finally selected are shown in fig. 6A to 6H, respectively.
Of course, the filter kernels may also be selected within a 5×5 window; the selection rule is then similar to that for the 3×3 window, taking in turn the upper-left corner, upper-right corner, lower-left corner, lower-right corner, upper half, lower half, left half, and right half of the window as filter kernels to filter the current pixel point.
The current pixel point P is then filtered with each of the 8 selected filter kernels, finally yielding 8 filter values P1, P2, …, P8.
Note that when the pixel point to be processed lies at the image edge, the filter kernels cannot be selected as above; the image edge must first be zero-padded. As shown in FIG. 5, extra pixel points are appended around the image edge, with their pixel values set to zero, for the purpose of kernel selection.
S4012: according to the difference values between the filter values and the pixel values of the pixel points to be processed, the pixel values of the pixel points to be processed are adjusted; and after the pixel value of each pixel point in the first original image is adjusted, forming the first image after edge protection and noise reduction.
In one possible implementation, after the plurality of filter values of the current pixel point are obtained, each filter value is subtracted from the pixel value of the current pixel point to obtain a plurality of differences, and the filter value whose difference has the smallest absolute value is taken as the filtered value of the current pixel point. This is repeated for each pixel point until all pixel points in the image have been traversed.
As in the example above, after the 8 filter values P1, P2, …, P8 are obtained, each filter value is subtracted from the pixel value of the current pixel point to obtain:
P0−P1; P0−P2; P0−P3; P0−P4; P0−P5; P0−P6; P0−P7; P0−P8.
The filter value whose difference has the smallest absolute value determines the new pixel value of the current pixel point.
Assuming |P0−P8| is the smallest of the 8 absolute differences, the filter value P8 is taken as the new pixel value of the current pixel point P. Other pixel points are adjusted in the same way, which follows directly and unambiguously from the description above, so the details are not repeated here.
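For illustration, the following is a minimal Python/NumPy sketch of steps S4011–S4012, not the patent's reference implementation. The exact kernel layouts are defined by FIG. 6A to FIG. 6H; here, four 2×2 corner kernels and four half-window kernels of the 3×3 window, all containing the center pixel and weighted as box filters (so that weight sum × coefficient = 1), are an assumption based on the corner/half selection rule described above.

```python
import numpy as np

def edge_preserving_denoise(img: np.ndarray) -> np.ndarray:
    """For each pixel, evaluate 8 directional box-filter kernels inside a
    3x3 window and keep the filter value closest to the original pixel
    value (smallest absolute difference), as in steps S4011-S4012."""
    img = img.astype(np.float32)
    # Zero-pad the border so edge pixels can also select full kernels.
    p = np.pad(img, 1, mode="constant", constant_values=0)
    h, w = img.shape
    out = np.empty_like(img)
    # Offsets (dy, dx) of the 8 candidate kernels relative to the center:
    # four 2x2 corners and four 3x2 / 2x3 halves, each containing (0, 0).
    kernels = [
        [(-1, -1), (-1, 0), (0, -1), (0, 0)],                 # upper-left corner
        [(-1, 0), (-1, 1), (0, 0), (0, 1)],                   # upper-right corner
        [(0, -1), (0, 0), (1, -1), (1, 0)],                   # lower-left corner
        [(0, 0), (0, 1), (1, 0), (1, 1)],                     # lower-right corner
        [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 0), (0, 1)],  # upper half
        [(0, -1), (0, 0), (0, 1), (1, -1), (1, 0), (1, 1)],     # lower half
        [(-1, -1), (0, -1), (1, -1), (-1, 0), (0, 0), (1, 0)],  # left half
        [(-1, 0), (0, 0), (1, 0), (-1, 1), (0, 1), (1, 1)],     # right half
    ]
    for y in range(h):
        for x in range(w):
            p0 = img[y, x]
            best, best_diff = p0, np.inf
            for k in kernels:
                # Box-filter value: mean over the kernel (all weights 1,
                # coefficient 1/len(k), so their product sums to 1).
                v = np.mean([p[y + 1 + dy, x + 1 + dx] for dy, dx in k])
                if abs(p0 - v) < best_diff:
                    best_diff, best = abs(p0 - v), v
            out[y, x] = best
    return out
```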
The steps and manner of performing edge protection and noise reduction on the second original image are the same as those for the first original image, comprising step S4013 and step S4014, specifically as follows:
s4013: and taking each pixel point of the second original image as a pixel point to be processed, and filtering the pixel point to be processed through a plurality of filtering cores to obtain a plurality of filtering values.
S4014: according to the difference values between the filter values and the pixel values of the pixel points to be processed, the pixel values of the pixel points to be processed are adjusted; and after the pixel value of each pixel point in the second original image is adjusted, forming the second image after edge protection and noise reduction.
Further details follow directly and unambiguously from the foregoing description of the edge protection and noise reduction processing of the first original image and are not repeated here. After the above edge protection and noise reduction processing is applied to FIG. 4A and FIG. 4B, FIG. 4A' and FIG. 4B' are obtained.
In an alternative embodiment, after the edge protection and noise reduction processing, the obtained first image after edge protection and noise reduction and second image after edge protection and noise reduction are both high-frequency images. Low-pass filtering is applied to each of them, yielding a low-pass-filtered first image and a low-pass-filtered second image, i.e., low-frequency images, from which the detail data of the first and second images is extracted. The high-frequency image and the low-frequency image are then differenced to obtain the high-frequency information, i.e., the first image detail data and second image detail data used for the definition comparison. That is, step S402, extracting first image detail data from the first image after edge protection and noise reduction and second image detail data from the second image after edge protection and noise reduction, is implemented through steps S4021 to S4028. Step S402 thus includes:
s4021: and performing first Gaussian filtering processing on the first image (A1) subjected to edge protection and noise reduction.
The first image after edge protection and noise reduction is a high-frequency image, and the first Gaussian filtering treatment is mainly used for filtering Gaussian noise in the first image.
For example, the first gaussian filtering process is performed on fig. 4A', and gaussian noise generated in the edge-preserving noise-reducing process is filtered.
S4022: downsampling the first image (A1) after the first gaussian filtering process, to obtain smallA1.
The first image after the first gaussian filtering is then downsampled to a certain size. In a preferred embodiment, downsampling is performed in an interlaced manner, that is, when sampling is performed, every sampling the first pixel point O, one row or one column is skipped, and the pixel point P is sampled, so that the coordinates of every two nearest pixel points O and P in the sampled image in the sampled pixel points are ensured to satisfy the following relationship: and finally, splicing all the pixel points obtained by sampling in the mode according to the relative position relation in the sampled image (namely, wiping the pixel points which are not sampled when sampling, and enabling every two nearest sampled pixel points to be adjacent to each other), so as to obtain a quarter-sized small image of the first image after the first Gaussian filtering treatment.
For example, when FIG. 4A' (or FIG. 7A') is downsampled in this interlaced manner, the pixels whose horizontal and vertical coordinates are both odd, or both even, are sampled. For instance, columns 2, 4, 6, 8, 10, and 12 and rows 2, 4, 6, and 8 are sampled; the pixel points at the intersections of the sampled rows and columns are the sampled pixel points, shown inside the dashed boxes in FIG. 7A'. The sampled pixel points are stitched together in their original arrangement, finally yielding the image (smallA1) shown in FIG. 7A''.
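As an illustration of the interlaced downsampling just described, a one-line sketch assuming a 2-D NumPy array with even dimensions:

```python
import numpy as np

def interlaced_downsample(img: np.ndarray) -> np.ndarray:
    # Keep every second row and column (rows/columns 2, 4, 6, ... in the
    # 1-based numbering used above), producing a quarter-sized image.
    return img[1::2, 1::2]
```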
S4023: and upsampling the downsampled first image, wherein the upsampling multiple is consistent with the downsampling multiple.
Then, smallA s1 obtained by downsampling are upsampled to the size of A1.
I.e., the image (smallA 1) shown in fig. 7A "is back sampled to the size of fig. 7A ' (or 4A '), resulting in gA1 (i.e., fig. 7A '").
S4024: and differentiating the first image after up-sampling and the first image after the first Gaussian filtering treatment to obtain the first image detail data.
Specifically, the up-sampled first image and the first image after the first Gaussian filtering process are differentiated, and the absolute value of each pixel point after the differentiation is used as first image detail data of each pixel point in the first image after edge protection and noise reduction.
For example, taking the absolute value of A1 and gA1 (i.e., differencing) yields first image detail data lapA1 of the A1 image (i.e., the laplace detail layer of A1), i.e., lapA 1= |a1-gA1|. lapA1 describes high-frequency information of the first image after edge-preserving and noise-reducing, namely edge detail information.
In other embodiments, step S40231 may be performed after step S4023: performing second Gaussian filtering processing on the upsampled first image, to obtain a first image after the second Gaussian filtering processing (denoted gA1').
Upsampling the small image back to the original size may introduce Gaussian noise, so after the upsampling is completed a second Gaussian filtering is applied to filter out the noise introduced during upsampling. This further reduces the influence of noise in the subsequent definition comparison, lowers the noise of the images used for the comparison, and leaves fewer noise grains in the finally synthesized image, making it clearer. Step S40241 is then executed: differencing the first image after the second Gaussian filtering processing and the first image after the first Gaussian filtering processing, to obtain the first image detail data.
The differencing is the same as described above, except that A1 and gA1' are differenced; the other steps and processes are identical and are not repeated here.
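Pulling steps S4021 through S40241 together, here is a compact sketch of the detail extraction for one image. The use of SciPy's gaussian_filter and zoom, the sigma value, and the assumption of even image dimensions are illustrative choices; the patent only specifies Gaussian filtering, ×2 interlaced down/upsampling, and an absolute difference.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_detail(a1: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    # S4021: first Gaussian filtering removes Gaussian noise.
    g1 = gaussian_filter(a1.astype(np.float32), sigma)
    # S4022: interlaced downsampling to a quarter-sized image.
    small = g1[1::2, 1::2]
    # S4023: upsample by the same multiple (even dimensions assumed).
    g_up = zoom(small, 2.0, order=1)
    # S40231: optional second Gaussian filtering removes noise
    # introduced by the upsampling.
    g2 = gaussian_filter(g_up, sigma)
    # S40241: absolute difference keeps only the high-frequency
    # (edge detail) information, i.e. lapA1 = |A1 - gA1'|.
    return np.abs(g1 - g2)
```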
The steps and manner of low-pass filtering the second image after edge protection and noise reduction are the same as those for the first image, comprising steps S4025 to S4028, as follows:
S4025: performing first Gaussian filtering processing on the second image after edge protection and noise reduction.
S4026: downsampling the second image after the first Gaussian filtering processing.
S4027: upsampling the downsampled second image, where the upsampling multiple equals the downsampling multiple.
S4028: differencing the upsampled second image and the second image after the first Gaussian filtering processing, to obtain the second image detail data.
Similarly, upsampling the downsampled second image back to the original size may introduce Gaussian noise, so after the upsampling a second Gaussian filtering is applied to filter out the noise introduced during upsampling. This further reduces the influence of noise in the subsequent definition comparison, lowers the noise of the images used for the comparison, and leaves fewer noise grains in the finally synthesized image, making it clearer.
Therefore, in other embodiments, the low-pass filtering of the second image after edge protection and noise reduction comprises steps S4025 to S4027 and further comprises steps S40271 and S40281. That is, after step S4027, step S40271 is performed: performing second Gaussian filtering processing on the upsampled second image, to obtain a second image after the second Gaussian filtering processing. Then step S40281 is performed: differencing the second image after the second Gaussian filtering processing and the second image after the first Gaussian filtering processing, to obtain the second image detail data.
Further details follow directly and unambiguously from the description of the low-pass filtering of the first image and are not repeated here. After the above low-pass filtering is applied to FIG. 4A' (or FIG. 7A') and FIG. 4B' (or FIG. 7B'), lapA1 and lapB1, i.e., the first image detail data and the second image detail data, are obtained.
In an alternative embodiment, after the first image detail data and the second image detail data are extracted, step S403, determining the definition comparison result of the first original image and the second original image according to the first image detail data and the second image detail data, performs the comparison through the following steps S4031 to S4032. That is, step S403 includes:
S4031: setting a definition threshold Thr.
S4032: determining the definition comparison result of the first original image and the second original image according to the first image detail data lapA1, the second image detail data lapB1, and the definition threshold Thr.
For example, the determination is made by the following rule:
For the pixel points at the same coordinate position, when lapA1 > lapB1 × Thr is satisfied, it is determined that at that position the pixel point in the first image after edge protection and noise reduction is sharper than the pixel point in the second image after edge protection and noise reduction; that is, at those coordinates the pixel point in the first original image is sharper than the pixel point in the second original image.
For the pixel points at the same coordinate position, when lapA1 < lapB1 × Thr is satisfied, it is determined that at that position the pixel point in the first image after edge protection and noise reduction is less sharp than the pixel point in the second image after edge protection and noise reduction; that is, at those coordinates the pixel point in the first original image is less sharp than the pixel point in the second original image.
Thr is generally set to 1.0 (in which case the detail data of the two images are directly compared at each coordinate position), and may be set according to the actual situation. The larger Thr is, the sharper a pixel point in A1 must be, relative to the corresponding pixel point in B1, to be judged the sharper one.
In an optional embodiment, the method further includes generating a definition mask map according to the definition comparison result; that is, the two items of image detail data corresponding to the same pixel coordinate position in the first image detail data and the second image detail data are compared to generate the definition mask map. Specifically:
An initially empty mask map (Mask) is defined to indicate which of the first image after edge protection and noise reduction and the second image after edge protection and noise reduction is sharper at each pixel; Mask is pixel-aligned with A1 (and B1).
For any coordinates, when the pixel point in the first image after edge protection and noise reduction (equivalently, in the first original image) is sharper than the pixel point in the second image after edge protection and noise reduction (equivalently, in the second original image), the same coordinates in Mask are set to 1; when it is less sharp, the same coordinates in Mask are set to 0.
Continuing the example in which low-pass filtering FIG. 4A' (or FIG. 7A') and FIG. 4B' (or FIG. 7B') yields lapA1 and lapB1, i.e., the first image detail data and the second image detail data: from the comparison of lapA1 and lapB1, the mask map shown in FIG. 8 is finally obtained. Note that the dashed face outline in FIG. 8 is not part of the mask map; it is only an aid to understanding. The mask map marks the relatively sharper pixels of the two images A1 and B1 (or A and B), and the distribution of 1s in it also traces the rough outline of the face.
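A minimal sketch of the threshold rule of S4031–S4032 and the mask construction just described, assuming lapA1 and lapB1 are NumPy arrays of the same shape:

```python
import numpy as np

def definition_mask(lapA1: np.ndarray, lapB1: np.ndarray, thr: float = 1.0) -> np.ndarray:
    # Mask is 1 where the first image is judged sharper
    # (lapA1 > lapB1 * Thr), and 0 otherwise. How a tie
    # (lapA1 == lapB1 * Thr) is resolved is an assumption;
    # the text does not specify it.
    return (lapA1 > lapB1 * thr).astype(np.uint8)
```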
In an alternative embodiment, after the definition mask map is obtained, the first image after edge protection and noise reduction and the second image after edge protection and noise reduction are fused into the target image according to the mask map, through steps S501 to S502, specifically:
S501: and extracting pixel points with higher definition from the first image after edge protection and noise reduction or the second image after edge protection and noise reduction according to the definition comparison result and aiming at each pixel coordinate position.
As shown in fig. 9, the pixel points with higher definition in fig. 4A '(or fig. 7A', or fig. 9A ') and fig. 4B' (or fig. 7B ', or fig. 9B'), that is, the pixel points of the background portion in fig. 4A '(or fig. 7A', or fig. 9A '), and the pixel points of the face portion in fig. 4B' (or fig. 7B ', or fig. 9B'), are extracted.
S502: and synthesizing the target image according to the extracted pixel points with higher definition.
And then synthesizing the target image by using the extracted pixel points.
As shown in fig. 9, an image in which both the background and the face are clear as shown in fig. 9C' is finally synthesized. Compared with the image fusion technical means in the prior art, the noise in the image shown in the figure 9C' is obviously reduced.
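The fusion of S501–S502 amounts to a per-pixel selection driven by the mask; a sketch, assuming A1, B1, and the mask are aligned NumPy arrays:

```python
import numpy as np

def fuse(mask: np.ndarray, a1: np.ndarray, b1: np.ndarray) -> np.ndarray:
    # Take the pixel of A1 where Mask is 1, and the pixel of B1 where it is 0.
    return np.where(mask.astype(bool), a1, b1)
```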
In another alternative embodiment, after the definition comparison result of the first original image and the second original image is determined, the method further comprises:
S601: determining, according to the definition comparison result, which of the first image after edge protection and noise reduction and the second image after edge protection and noise reduction has the higher overall definition.
Taking FIG. 4 (or FIG. 7) as an example, FIG. 4A' contains more relatively sharp pixels than FIG. 4B'; that is, the overall definition of FIG. 4A' is higher than that of FIG. 4B'.
S602: generating image editing prompt information according to the image with higher overall definition, the prompt information being used to prompt the user to save the image with higher overall definition, or to delete, or adjust the definition of, the images other than it.
Taking FIG. 4 (or FIG. 7) as an example, if the overall definition of FIG. 4A' is determined to be higher than that of FIG. 4B', image editing prompt information is output for FIG. 4A', e.g., prompting the user to choose whether to save the image of FIG. 4A', delete the image of FIG. 4B', or adjust the definition of the image of FIG. 4B'.
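One plausible reading of S601 as code (an assumption; the text does not give an explicit rule for aggregating the per-pixel comparison into an overall judgment):

```python
import numpy as np

def first_is_overall_sharper(mask: np.ndarray) -> bool:
    # The image holding the majority of the sharper pixels in the
    # definition mask is taken to have the higher overall definition.
    return int(mask.sum()) > mask.size // 2
```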
With this technical solution, the two original images are first subjected to edge protection and noise reduction before the definition (or overall definition) of their pixel points is compared, reducing the noise in the images (i.e., removing noise points) while preserving the edges (i.e., retaining the edge details). The subsequent definition comparison and image fusion steps are then no longer affected by noise in the images. The computational load is also markedly lower than that of existing multi-focus definition selection algorithms, so the demands on processor performance are low and the method can easily run on low-end processors.
The above embodiments of the present application compare and fuse two images obtained by focusing twice on two different objects in the same field of view, but this is not a limitation of the present application. As those skilled in the art will appreciate, when a plurality of objects at different spatial positions are present in the same field of view, the camera of the image acquisition device may focus on each object in turn to obtain a plurality of images; definition comparison and fusion are then performed on two of the images, and the result is compared and fused with the next image, and so on, until all images have been traversed, finally synthesizing an image in which all pixels are relatively sharp (i.e., all objects in the image are relatively sharp), as sketched below.
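This pairwise traversal can be sketched by composing the illustrative helpers from the earlier sections (edge_preserving_denoise, laplacian_detail, definition_mask, fuse); the composition itself is an assumption about how the iteration is arranged:

```python
import numpy as np

def fuse_stack(images: list[np.ndarray]) -> np.ndarray:
    # Denoise the first shot, then repeatedly compare its detail layer
    # with the next denoised shot and carry the fused result forward.
    result = edge_preserving_denoise(images[0])
    for img in images[1:]:
        b1 = edge_preserving_denoise(img)
        mask = definition_mask(laplacian_detail(result), laplacian_detail(b1))
        result = fuse(mask, result, b1)
    return result
```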
Based on the same inventive concept, an embodiment of the present application provides an image sharpness determining apparatus. Referring to fig. 10, fig. 10 is a schematic diagram of an image sharpness determining apparatus according to an embodiment of the present application. As shown in fig. 10, the apparatus includes:
The noise reduction module 1001 is configured to perform edge protection noise reduction processing on a first original image and a second original image, to obtain an edge protection noise reduced first image and an edge protection noise reduced second image, where the first original image and the second original image are two images obtained by performing image acquisition on the same shooting subject;
The detail extraction module 1002 is configured to extract first image detail data from the first image after edge-preserving and noise-reducing, and extract second image detail data from the second image after edge-preserving and noise-reducing;
a sharpness determining module 1003, configured to determine a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data.
Optionally, the noise reduction module 1001 includes:
a filter kernel selection submodule, configured to take each pixel point of the first original image as a pixel point to be processed, and filter the pixel point to be processed with a plurality of filter kernels to obtain a plurality of filter values;
a pixel value adjustment submodule, configured to adjust the pixel value of the pixel point to be processed according to the differences between the plurality of filter values and the pixel value of the pixel point to be processed;
And after the pixel value of each pixel point in the first original image is adjusted, forming the first image after edge protection and noise reduction.
Optionally, the detail extraction module 1002 includes:
and the low-pass filtering unit is used for carrying out low-pass filtering processing on the first image after edge protection and noise reduction so as to extract the first image detail data.
Optionally, the low-pass filtering unit includes:
The first Gaussian filter subunit is used for carrying out first Gaussian filter processing on the first image after edge protection and noise reduction;
a downsampling subunit, configured to downsample the first image after the first gaussian filtering process;
an up-sampling subunit, configured to up-sample the down-sampled first image, where a multiple of the up-sampling is consistent with a multiple of the down-sampling;
and the difference subunit is used for carrying out difference on the first image after up-sampling and the first image after the first Gaussian filtering processing to obtain the first image detail data.
Optionally, the low-pass filtering unit further includes:
the second Gaussian filter subunit is used for carrying out second Gaussian filter processing on the first image after the up-sampling to obtain a first image after the second Gaussian filter processing;
the difference subunit is further configured to perform difference on the first image after the second gaussian filtering process and the first image after the first gaussian filtering process, so as to obtain the first image detail data.
Optionally, the sharpness determination module 1003 includes:
and the mask map generation sub-module is used for comparing the two image detail data corresponding to the same pixel coordinate position in the first image detail data and the second image detail data to generate a definition mask map.
Optionally, the apparatus further comprises:
The high-sharpness pixel extraction module is configured to extract, for each pixel coordinate position and according to the sharpness comparison result, the pixel with higher sharpness from the first image after edge protection and noise reduction or the second image after edge protection and noise reduction;
The image synthesis module is configured to synthesize a target image from the extracted pixels with higher sharpness.
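Given such a mask, the synthesis step can be sketched as a per-pixel selection between the two edge-preserved, noise-reduced images; fuse_sharpest is a hypothetical helper, not the patent's own code.

import numpy as np

def fuse_sharpest(img1, img2, mask):
    # Take the pixel from img1 wherever the mask marks it as sharper,
    # otherwise from img2; broadcasting handles multi-channel images.
    cond = mask.astype(bool)
    if img1.ndim == 3:          # expand the 2-D mask over color channels
        cond = cond[..., None]
    return np.where(cond, img1, img2)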
Optionally, the apparatus further comprises:
The high-sharpness image determining module is configured to determine, according to the sharpness comparison result, the image with higher overall sharpness from the first image after edge protection and noise reduction and the second image after edge protection and noise reduction;
The prompt information generation module is configured to generate image editing prompt information according to the image with higher overall sharpness, wherein the image editing prompt information is used to prompt a user to save the image with higher overall sharpness, or to delete or adjust the sharpness of images other than the image with higher overall sharpness.
Based on the same inventive concept, another embodiment of the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image sharpness determination method according to any of the above embodiments of the present application.
Based on the same inventive concept, another embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the image sharpness determination method according to any of the above embodiments of the present application.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to one another.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the application.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or terminal device that comprises the element.
The method, apparatus, device, and storage medium for determining image sharpness provided by the present application have been described in detail above, and specific examples have been used herein to illustrate the principles and embodiments of the present application; the above examples are only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make variations to the specific embodiments and application scope according to the ideas of the present application, and therefore the content of this description should not be construed as limiting the present application.

Claims (8)

1. A method of determining sharpness of an image, the method comprising:
performing edge protection and noise reduction processing on a first original image and a second original image respectively, to obtain a first image after edge protection and noise reduction and a second image after edge protection and noise reduction, wherein the first original image and the second original image are two images acquired of the same shooting subject;
extracting first image detail data from the first image after edge protection and noise reduction, and extracting second image detail data from the second image after edge protection and noise reduction;
determining a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data;
wherein extracting the first image detail data from the first image after edge protection and noise reduction comprises:
performing low-pass filtering processing on the first image after edge protection and noise reduction to extract the first image detail data, wherein the low-pass filtering processing comprises:
performing first Gaussian filtering processing on the first image after edge protection and noise reduction;
downsampling the first image after the first Gaussian filtering processing;
upsampling the downsampled first image, wherein the multiple of the upsampling is consistent with the multiple of the downsampling;
taking the difference between the upsampled first image and the first image after the first Gaussian filtering processing to obtain the first image detail data;
performing second Gaussian filtering processing on the upsampled first image to obtain a first image after the second Gaussian filtering processing;
wherein taking the difference between the upsampled first image and the first image after the first Gaussian filtering processing to obtain the first image detail data comprises:
taking the difference between the first image after the second Gaussian filtering processing and the first image after the first Gaussian filtering processing to obtain the first image detail data.
2. The method of claim 1, wherein performing edge protection and noise reduction processing on the first original image to obtain the first image after edge protection and noise reduction comprises:
taking each pixel of the first original image as a pixel to be processed, and filtering the pixel to be processed with a plurality of filter kernels to obtain a plurality of filtered values;
adjusting the pixel value of the pixel to be processed according to the differences between the plurality of filtered values and the pixel value of the pixel to be processed;
wherein, after the pixel value of each pixel in the first original image has been adjusted, the adjusted pixels form the first image after edge protection and noise reduction.
3. The method of claim 1, wherein determining a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data comprises:
comparing the two pieces of image detail data corresponding to the same pixel coordinate position in the first image detail data and the second image detail data to generate a sharpness mask map.
4. The method according to any one of claims 1 to 3, wherein after determining the sharpness comparison result of the first original image and the second original image, the method further comprises:
extracting, for each pixel coordinate position and according to the sharpness comparison result, the pixel with higher sharpness from the first image after edge protection and noise reduction or the second image after edge protection and noise reduction;
synthesizing a target image from the extracted pixels with higher sharpness.
5. The method according to any one of claims 1 to 3, wherein after determining the sharpness comparison result of the first original image and the second original image, the method further comprises:
determining, according to the sharpness comparison result, the image with higher overall sharpness from the first image after edge protection and noise reduction and the second image after edge protection and noise reduction;
generating image editing prompt information according to the image with higher overall sharpness, wherein the image editing prompt information is used to prompt a user to save the image with higher overall sharpness, or to delete or adjust the sharpness of images other than the image with higher overall sharpness.
6. An image sharpness determination apparatus, characterized in that the apparatus comprises:
a noise reduction module, configured to perform edge protection and noise reduction processing on a first original image and a second original image respectively, to obtain a first image after edge protection and noise reduction and a second image after edge protection and noise reduction, wherein the first original image and the second original image are two images acquired of the same shooting subject;
a detail extraction module, configured to extract first image detail data from the first image after edge protection and noise reduction, and to extract second image detail data from the second image after edge protection and noise reduction;
a sharpness determining module, configured to determine a sharpness comparison result of the first original image and the second original image according to the first image detail data and the second image detail data;
wherein the detail extraction module comprises:
a low-pass filtering unit, configured to perform low-pass filtering processing on the first image after edge protection and noise reduction, so as to extract the first image detail data;
The low-pass filtering unit includes:
a first Gaussian filtering subunit, configured to perform first Gaussian filtering processing on the first image after edge protection and noise reduction;
a downsampling subunit, configured to downsample the first image after the first Gaussian filtering processing;
an upsampling subunit, configured to upsample the downsampled first image, wherein the multiple of the upsampling is consistent with the multiple of the downsampling;
a difference subunit, configured to take the difference between the upsampled first image and the first image after the first Gaussian filtering processing, to obtain the first image detail data;
the low-pass filtering unit further includes:
a second Gaussian filtering subunit, configured to perform second Gaussian filtering processing on the upsampled first image, to obtain a first image after the second Gaussian filtering processing;
wherein the difference subunit is further configured to take the difference between the first image after the second Gaussian filtering processing and the first image after the first Gaussian filtering processing, to obtain the first image detail data.
7. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 5 when executing the computer program.
CN202010470387.2A 2020-05-28 2020-05-28 Image definition determining method, device, equipment and storage medium Active CN111833259B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010470387.2A CN111833259B (en) 2020-05-28 2020-05-28 Image definition determining method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010470387.2A CN111833259B (en) 2020-05-28 2020-05-28 Image definition determining method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111833259A CN111833259A (en) 2020-10-27
CN111833259B (en) 2024-05-07

Family

ID=72913919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010470387.2A Active CN111833259B (en) 2020-05-28 2020-05-28 Image definition determining method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111833259B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018214769A1 (en) * 2017-05-24 2018-11-29 Alibaba Group Holding Limited Image processing method, device and system
CN109089046A (en) * 2018-09-25 2018-12-25 Oppo广东移动通信有限公司 Image denoising method, device, computer readable storage medium and electronic equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018214769A1 (en) * 2017-05-24 2018-11-29 Alibaba Group Holding Limited Image processing method, device and system
CN109089046A (en) * 2018-09-25 2018-12-25 Oppo广东移动通信有限公司 Image denoising method, device, computer readable storage medium and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Random impulse noise reduction algorithm based on weighted spatial outlier measure; Yang Hao; Chen Leiting; Qiu Hang; Computer Applications (10); full text *
Low-illumination video enhancement algorithm based on dark channel prior; Liu Feng; Wang Xinjia; Yu Bo; Xu Fulong; Computer Systems &amp; Applications (06); full text *

Also Published As

Publication number Publication date
CN111833259A (en) 2020-10-27

Similar Documents

Publication Publication Date Title
CN111275626B (en) Video deblurring method, device and equipment based on ambiguity
CN109118459B (en) Image salient object detection method and device
CN110476185B (en) Depth of field information estimation method and device
CN106846241B (en) Image fusion method, device and equipment
CN107392852B (en) Super-resolution reconstruction method, device and equipment for depth image and storage medium
CN110349080B (en) Image processing method and device
CN113436112B (en) Image enhancement method, device and equipment
JP5370542B1 (en) Image processing apparatus, imaging apparatus, image processing method, and program
CN103927767B (en) Image processing method and image processing apparatus
JP2015522198A (en) Depth map generation for images
CN107798704B (en) Real-time image superposition method and device for augmented reality
JP2010510600A (en) Generating an image depth map
CN102722863A (en) Super-resolution reconstruction method for depth map by adopting autoregressive model
EP3448032B1 (en) Enhancing motion pictures with accurate motion information
KR20110014067A (en) Method and system for transformation of stereo content
JP7234057B2 (en) Image processing method, image processing device, imaging device, lens device, program, storage medium, and image processing system
CN106709879A (en) Spatial variation point diffusion function smoothing method based on simple lens calculating imaging
CN111031239A (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111402111B (en) Image blurring method, device, terminal and computer readable storage medium
Jeong et al. Multi-frame example-based super-resolution using locally directional self-similarity
EP3931795A1 (en) Depth of field image refocusing
CN112001940B (en) Image processing method and device, terminal and readable storage medium
KR101795952B1 (en) Method and device for generating depth image of 2d image
CN111833259B (en) Image definition determining method, device, equipment and storage medium
Bareja et al. An improved iterative back projection based single image super resolution approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant