
WO2021083020A1 - Image fusion method and apparatus, and storage medium and terminal device - Google Patents


Info

Publication number
WO2021083020A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
resolution
sub
difference
item
Prior art date
Application number
PCT/CN2020/122679
Other languages
French (fr)
Chinese (zh)
Inventor
陈曦
Original Assignee
RealMe重庆移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by RealMe重庆移动通信有限公司 filed Critical RealMe重庆移动通信有限公司
Publication of WO2021083020A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4015 Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • G06T3/4053 Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T5/70 Denoising; Smoothing
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular to an image fusion method, an image fusion device, a computer-readable storage medium, and terminal equipment.
  • Image noise refers to brightness or color information that does not exist in the object itself but appears in the image; it is generated by the image sensor or the signal transmission circuit. At present, increasing the pixel count of image sensors is a common development direction in the industry. For example, image sensors with millions or even tens of millions of pixels are commonly used in mobile phones, which can support the shooting of ultra-high-definition photos.
  • the present disclosure provides an image fusion method, an image fusion device, a computer-readable storage medium, and a terminal device, so as to improve at least to a certain extent the problem of high noise in existing high-pixel pictures.
  • An image fusion method is provided, which is applied to a terminal device equipped with an image sensor. The method includes: acquiring a first image and a second image collected by the image sensor, where the first image is of a first resolution, the second image is of a second resolution, and the first resolution is higher than the second resolution; extracting an edge feature image from the first image; performing super-resolution reconstruction on the second image to obtain a third image, where the third image is of the first resolution; and fusing the edge feature image and the third image to obtain a final image.
  • An image fusion device is provided, configured in a terminal device equipped with an image sensor. The device includes: an image acquisition module for acquiring a first image and a second image collected by the image sensor, where the first image is of a first resolution, the second image is of a second resolution, and the first resolution is higher than the second resolution; an edge extraction module for extracting an edge feature image from the first image; an image reconstruction module for performing super-resolution reconstruction on the second image to obtain a third image, the third image being of the first resolution; and a fusion processing module for fusing the edge feature image and the third image to obtain the final image.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the above-mentioned image fusion method and possible implementation manners thereof are realized.
  • a terminal device including: a processor; a memory for storing executable instructions of the processor; and an image sensor; wherein the processor is configured to execute the executable Executing instructions to execute the above-mentioned image fusion method and its possible implementations.
  • According to the image fusion method, image fusion device, storage medium and terminal device of the present disclosure, a relatively high-resolution first image and a relatively low-resolution second image are collected, an edge feature image is extracted from the first image, and the second image is super-resolution reconstructed into a third image, so that the third image and the first image have the same resolution; finally, the edge feature image and the third image are fused to obtain the final image.
  • The resolution of the first image is higher, but it may contain more noise; through edge feature extraction, the original detail information in the image is preserved. The resolution of the second image is lower, but its noise is less; after reconstruction into the third image, the third image has less noise, but its details may be distorted.
  • Fusing the two parts is equivalent to incorporating the original image detail information into the low-noise image, so that the advantages of the first image and the second image complement each other, solving the problem of high noise in high-pixel images and improving the quality of image shooting.
  • Fig. 1 shows a flowchart of an image fusion method in this exemplary embodiment
  • FIG. 2 shows a schematic diagram of a color filter array in this exemplary embodiment
  • FIG. 3 shows a flowchart of a method for acquiring a first image and a second image in this exemplary embodiment
  • Fig. 4 shows a schematic diagram of acquiring a first image in this exemplary embodiment
  • Fig. 5 shows a schematic diagram of acquiring a second image in this exemplary embodiment
  • Fig. 6 shows a flowchart of a method for extracting edge feature images in this exemplary embodiment
  • FIG. 7 shows a schematic diagram of obtaining gradient difference images in this exemplary embodiment
  • FIG. 8 shows a flowchart of a method for constructing a differential constraint item in this exemplary embodiment
  • Fig. 9 shows a flowchart of a method for generating a third image in this exemplary embodiment
  • Fig. 10 shows a flowchart of a method for generating a final image in this exemplary embodiment
  • FIG. 11 shows a schematic diagram of an image fusion process in this exemplary embodiment
  • FIG. 12 shows a schematic structural diagram of an image fusion device in this exemplary embodiment
  • FIG. 13 shows a schematic structural diagram of another image fusion device in this exemplary embodiment
  • FIG. 14 shows a schematic structural diagram of a terminal device in this exemplary embodiment.
  • However, as the pixel count increases, the photosensitive area of a single pixel on the image sensor decreases, and taking pictures places higher demands on illumination. In low light, the photosensitive elements on the image sensor are more susceptible to crosstalk, resulting in an insufficient signal-to-noise ratio of the input signal, so the final output image has too much noise and poor quality.
  • For this reason, the photographing resolution is usually lowered actively when the light is insufficient, so as to increase light sensitivity and reduce noise. However, this comes at the cost of lost resolution and cannot exploit the advantage of the high-resolution image sensor itself.
  • exemplary embodiments of the present disclosure provide an image fusion method, which can be applied to terminal devices such as mobile phones, tablet computers, and digital cameras.
  • the terminal device is equipped with an image sensor, which can be used to collect images.
  • Fig. 1 shows a process of this exemplary embodiment, which may include the following steps S110 to S140:
  • Step S110: acquire the first image and the second image collected by the image sensor;
  • Step S120: extract an edge feature image from the first image;
  • Step S130: perform super-resolution reconstruction on the second image to obtain a third image;
  • Step S140: fuse the edge feature image and the third image to obtain a final image.
  • In step S110, the first image and the second image collected by the image sensor are acquired.
  • The objects captured in the first image and the second image are the same, that is, the image content is the same; the difference lies in the resolution. The first image is of the first resolution, the second image is of the second resolution, and the first resolution is higher than the second resolution, that is, the first image has more pixels than the second image.
  • The above-mentioned image sensor may be a quad-Bayer (four-Bayer) image sensor, that is, an image sensor using a four-Bayer color filter array.
  • In Figure 2, the left figure shows a standard Bayer color filter array, whose unit array is GRBG (or BGGR, GBRG, RGGB); most image sensors use the standard Bayer color filter array. The right figure in Figure 2 shows a four-Bayer color filter array, in which the four adjacent cells in the unit array of the filter are of the same color. Some high-pixel image sensors use a four-Bayer color filter array.
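To make the two layouts concrete, the sketch below (illustrative helper code, not from the patent; the 0/1/2 color codes for R/G/B are our own convention) builds index maps for a standard GRBG Bayer array and the corresponding quad-Bayer array, where each unit-array cell becomes a 2x2 same-color block:

```python
import numpy as np

def standard_bayer(h, w):
    # Standard Bayer CFA with a GRBG unit array (0 = R, 1 = G, 2 = B).
    unit = np.array([[1, 0],
                     [2, 1]])
    return np.tile(unit, (h // 2, w // 2))

def quad_bayer(h, w):
    # Quad-Bayer CFA: every cell of the GRBG unit is expanded into a
    # 2x2 block of the same color, giving a 4x4 repeating unit.
    unit = np.array([[1, 0],
                     [2, 1]])
    quad_unit = np.kron(unit, np.ones((2, 2), dtype=int))
    return np.tile(quad_unit, (h // 4, w // 4))

cfa = quad_bayer(8, 8)
assert (cfa[0:2, 0:2] == 1).all()   # four adjacent cells share one color
```

The `np.kron` expansion mirrors how the quad-Bayer layout relates to the standard one: the unit array is unchanged, only each cell is duplicated into a 2x2 block.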
  • step S110 can be specifically implemented through the following steps S310 to S330:
  • In step S310, the original Bayer image based on the four-Bayer color filter array is collected by the four-Bayer image sensor.
  • the Bayer image refers to an image in RAW format, which is image data after the image sensor converts the collected light signal into a digital signal.
  • each pixel has only one color in RGB.
  • the original image data obtained is the original Bayer image.
  • In the original Bayer image, the color arrangement of the pixels is as shown in the right figure of Figure 2: each group of four adjacent pixels is of the same color.
  • Step S320: perform remosaic processing and demosaic processing on the original Bayer image to obtain a first image of the first resolution.
  • Remosaic refers to converting the original Bayer image based on the four-Bayer color filter array into a Bayer image based on the standard Bayer color filter array; Demosaic refers to converting a Bayer image into a complete RGB image.
  • The original Bayer image P can first be remosaiced to obtain a Bayer image Q1 based on the standard Bayer color filter array; Q1 can then be demosaiced to obtain the first image in RGB format.
  • Remosaicing and demosaicing can be implemented by different interpolation algorithms, and can also be implemented by other related algorithms such as neural networks, which are not limited in the present disclosure.
  • The terminal device is usually equipped with an ISP (Image Signal Processing) unit matched with the image sensor to perform the above remosaic and demosaic processing.
  • Each pixel of the first image IMG1 has pixel values of three RGB channels, denoted by C.
  • In addition, the processes of remosaicing and demosaicing can be combined into one interpolation process: based on the pixel data in the original Bayer image, each pixel is directly interpolated to obtain the pixel values of its missing color channels, which can be implemented by linear interpolation, mean interpolation and other algorithms, so as to obtain the first image.
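As an illustration of the interpolation idea described above, the following sketch applies textbook bilinear demosaicing to a standard RGGB Bayer mosaic (a generic method, not the patent's own algorithm; border pixels are dimmed by the zero padding):

```python
import numpy as np

def conv2_same(img, k):
    # Tiny zero-padded 'same' convolution (kernels here are symmetric,
    # so convolution and correlation coincide).
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def demosaic_bilinear(bayer):
    # Bilinear demosaicing of an RGGB mosaic: each channel is sampled
    # where the CFA provides it and interpolated elsewhere.
    h, w = bayer.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    kg = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    krb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    r = conv2_same(bayer * r_mask, krb)
    g = conv2_same(bayer * g_mask, kg)
    b = conv2_same(bayer * b_mask, krb)
    return np.stack([r, g, b], axis=-1)

bayer = np.full((6, 6), 3.0)      # flat mosaic: every channel should be 3
rgb = demosaic_bilinear(bayer)    # interior pixels come out as (3, 3, 3)
```

On a constant mosaic, every interior output pixel recovers the constant in all three channels, which is a quick sanity check that the interpolation weights sum correctly.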
  • In step S330, each group of four adjacent pixels of the same color in the original Bayer image is merged into one pixel, and the merged Bayer image is demosaiced to obtain a second image of the second resolution.
  • The original Bayer image P is first processed by "four-in-one" pixel binning, that is, the four pixels of the same color in each 2*2 unit are combined into one pixel.
  • The combined Bayer image Q2 is likewise based on the standard Bayer color filter array.
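The "four-in-one" binning of step S330 can be sketched as follows (illustrative code; averaging is assumed here, though summing the four pixels is also common in practice):

```python
import numpy as np

def bin_four_in_one(raw):
    # Merge each 2x2 block of same-color quad-Bayer pixels into one pixel,
    # halving both dimensions; the result follows the standard Bayer layout.
    h, w = raw.shape
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

raw = np.arange(16, dtype=float).reshape(4, 4)
binned = bin_four_in_one(raw)   # 2x2 image; binned[0, 0] == (0+1+4+5)/4 == 2.5
```

The reshape groups the frame into 2x2 blocks without copying, which is why the first resolution ends up exactly four times the second.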
  • the terminal device may be configured with two image sensors with different pixels.
  • The first image sensor, with a higher pixel count, is used to capture the first image; the second image sensor, with a lower pixel count, is used to capture the second image. Since the image acquisition of the two image sensors is completed in one shot, their exposure levels are similar; therefore the first image has a higher resolution but, affected by the amount of light received per pixel, may contain more noise, while the opposite holds for the second image.
  • In step S120, an edge feature image is extracted from the first image.
  • An edge feature refers to a location in the image where attributes such as image texture or pixel gray level change discontinuously.
  • An important difference between high-resolution images and low-resolution images is that high-resolution images have richer edge features.
  • This exemplary embodiment preserves the detailed information in the image by extracting the edge feature image from the first image with the first resolution.
  • step S120 may be specifically implemented through the following steps S610 to S630:
  • Step S610: construct a noise separation function on the first image using the Fourier transform to obtain a denoising constraint item;
  • Step S620: construct a difference constraint item according to the gray gradient differences of the first image in the x direction and the y direction;
  • Step S630: establish a constraint function from the denoising constraint item and the difference constraint item, and obtain the edge feature image corresponding to the first image by solving for the minimum of the constraint function.
  • the first image itself carries certain noise information (noise points), which is embodied as a part of the edge features.
  • noise points When extracting the edge features, it is hoped to reduce the influence of noise.
  • A noise-free image is sparse and self-similar, that is, it can be represented by fewer parameters or feature vectors, while noise is irregular, non-similar and non-sparse. Based on this, a noise separation function can be constructed to obtain the denoising constraint item used in edge extraction, formula (1), in which:
  • R represents the denoising constraint item;
  • F represents the matrix Fourier transform operator;
  • H represents the conjugate transpose;
  • D represents the transform domain;
  • IMG_E represents the edge feature image (the independent variable);
  • IMG1 represents the first image.
  • Since the energy of the noise is small and its gradient is relatively pure, a good denoising effect can be achieved by formula (1).
  • the x direction and the y direction represent the horizontal and vertical directions, respectively.
  • The edge features of the first image contain components in both of these directions. Therefore, by calculating the gray gradient differences in the x direction and the y direction, the difference constraint item can be constructed as formula (2), in which:
  • R(diff) represents the difference constraint item
  • a_1 and a_2 are the coefficients of the two parts, representing their respective weights; they are adjustable parameters that can be tuned according to experience and actual needs;
  • G_x and G_y represent the difference operators in the x direction and the y direction, respectively.
  • Adding formula (1) and formula (2) establishes the constraint function, formula (3), which expresses the conditions to be satisfied during edge feature extraction:
  • the first sub-constraint item is the L2 norm of the noise separation function;
  • the second sub-constraint item is the L1 norm of the gray gradient difference of the first image in the x direction;
  • the third sub-constraint item is the L1 norm of the gray gradient difference of the first image in the y direction;
  • the weights of the three sub-constraint items can be adjusted by setting the values of a_1 and a_2.
  • Equation (3) can be solved by iteration. When the constraint function reaches the convergence condition, the corresponding edge feature image is obtained.
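Since formulas (1) to (3) themselves are not reproduced in the text, the following is only a toy stand-in for "solving the minimum of a constraint function by iteration": it minimizes a small objective with an L2 data term plus an L1 sparsity term using ISTA-style soft-thresholding and a convergence condition. The objective, step size and threshold are all assumptions for illustration, not the patent's constraint function:

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def solve_by_iteration(t, a=0.5, lr=0.4, iters=100, tol=1e-8):
    # Minimize ||e - t||_2^2 + a * ||e||_1 by ISTA:
    # a gradient step on the smooth term, then soft-thresholding.
    e = np.zeros_like(t)
    for _ in range(iters):
        e_next = soft_threshold(e - lr * 2.0 * (e - t), lr * a)
        if np.max(np.abs(e_next - e)) < tol:   # convergence condition
            return e_next
        e = e_next
    return e

t = np.array([2.0, 0.1, -1.0])
e = solve_by_iteration(t)   # closed form: soft_threshold(t, a/2) = [1.75, 0, -0.75]
```

The L1 term drives small entries exactly to zero, which is the same mechanism that makes gradient-sparsity constraints suppress weak, noise-like edges while keeping strong ones.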
  • step S620 may include the following steps S810 to S830:
  • Step S810: perform grayscale processing on the first image;
  • Step S820: obtain the gradient difference image of the first image in the x direction and the gradient difference image in the y direction;
  • Step S830: construct the difference constraint item based on the x-direction gradient difference image and the y-direction gradient difference image.
  • The x-direction and y-direction gradient difference images can be used separately to construct difference constraint terms in the two directions; alternatively, the two images can first be combined into a comprehensive gradient difference image containing the gray gradient information of both directions, and the difference constraint can then be realized through a single sub-constraint item built on that image.
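Steps S810 to S830 can be sketched as follows, under simple assumptions: Rec. 601 luma weights for the grayscale conversion and forward differences as the operators G_x and G_y (the extracted text does not fix either choice):

```python
import numpy as np

def to_gray(rgb):
    # Step S810: grayscale conversion (Rec. 601 luma weights, assumed).
    return rgb @ np.array([0.299, 0.587, 0.114])

def gradient_diff_images(rgb):
    # Step S820: gradient difference images via forward differences
    # (one possible choice of the operators G_x and G_y).
    gray = to_gray(rgb)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, :-1] = gray[:, 1:] - gray[:, :-1]
    gy[:-1, :] = gray[1:, :] - gray[:-1, :]
    return gx, gy

img = np.zeros((4, 4, 3))
img[:, 2:] = 1.0                     # vertical edge between columns 1 and 2
gx, gy = gradient_diff_images(img)   # gx marks the edge; gy is all zero
```

A vertical edge shows up only in the x-direction image and a horizontal edge only in the y-direction image, which is why the constraint uses both.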
  • In step S130, super-resolution reconstruction is performed on the second image to obtain a third image of the first resolution.
  • Super-resolution reconstruction refers to the reconstruction of high-resolution images from low-resolution images through interpolation and other methods.
  • step S130 may include the following steps S910 to S930:
  • In step S910, one frame of the second image is selected as the reference frame, and the motion change parameters between the other frames of the second image and the reference frame are calculated.
  • Step S920 Perform image registration on at least two frames of second images according to the aforementioned motion change parameters, and process the complementary non-redundant information between the at least two frames of second images to determine interpolation parameters.
  • Image registration can use gradient image template matching alignment algorithm or feature operator-based alignment algorithm. In general, the more the number of second images selected during image registration, the more accurate the interpolation parameters obtained.
  • In step S930, the reference frame is interpolated using the above-mentioned interpolation parameters to obtain a third image of the first resolution.
  • post-processing such as blur and noise removal can also be performed to optimize the third image.
  • Alternatively, a multi-channel-input convolutional neural network model can be trained according to the number of frames of the second image: each channel is used to input one frame of low-resolution image, and the output of the model is the original high-resolution image, so that after multiple frames of the second image are input to the model, the corresponding third image can be output.
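A heavily simplified sketch of the registration-plus-interpolation route of steps S910 to S930: it assumes the motion change parameters are already known integer sub-positions on a 2x grid, so each aligned frame directly fills part of the high-resolution grid (real pipelines estimate sub-pixel motion and interpolate the unfilled positions):

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    # Place each registered low-res frame onto the high-res grid at the
    # offset given by its (integer) motion parameters, then average
    # wherever several frames contributed.
    h, w = frames[0].shape
    hi = np.zeros((h * scale, w * scale))
    count = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        hi[dy::scale, dx::scale] += frame
        count[dy::scale, dx::scale] += 1
    count[count == 0] = 1   # unfilled positions stay zero here; a real
    return hi / count       # pipeline would interpolate them instead

lo = np.full((2, 2), 5.0)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]   # the four 2x sub-grid offsets
hr = shift_and_add_sr([lo] * 4, shifts)     # 4x4 image, all values 5.0
```

With the four distinct sub-grid offsets, the frames contribute exactly the complementary non-redundant information the text describes: together they cover every high-resolution position.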
  • the present disclosure does not limit which super-resolution reconstruction algorithm is specifically adopted.
  • In step S140, the edge feature image and the third image are fused to obtain the final image.
  • The edge feature image is mainly used to enhance the edge details in the third image, incorporating the richer high-resolution detail information of the first image into the third image.
  • step S140 can be specifically implemented through the following steps S1010 to S1040:
  • Step S1010: define the independent variable of the final image;
  • Step S1020: construct the first sub-loss item according to the difference between the third image and the independent variable;
  • Step S1030: construct the second sub-loss item based on the difference between the edge feature image and the gray gradient difference of the independent variable in the x direction, and construct the third sub-loss item based on the difference between the edge feature image and the gray gradient difference of the independent variable in the y direction;
  • Step S1040: establish a loss function from the first, second and third sub-loss items, and obtain the final image by solving for the minimum of the loss function.
  • the loss function can be as follows:
  • IMG_F represents the final image (the independent variable), and IMG3 represents the third image.
  • The first sub-loss item is a measure of the difference between the third image and the final image; it can use the L2 norm or other methods. The second sub-loss item is a measure of the difference between the gray gradient difference of the final image in the x direction and the edge feature image; the L1 norm can be used, or other methods. The third sub-loss item is a measure of the difference between the gray gradient difference of the final image in the y direction and the edge feature image.
  • a_3 and a_4 are the coefficients of the second and third sub-loss items, representing their respective weights; they are adjustable parameters whose values can be tuned according to experience and actual needs to adjust the weight ratio of the three sub-loss items.
  • The above loss function actually expresses the error between the fused image and the original image content. By minimizing it, the original image information can be retained to the greatest extent, and the final image with the highest degree of restoration can be obtained.
  • In another implementation, each pixel in the edge feature image is traversed: for each pixel, if its gray value in the edge feature image is 0, the pixel value in the third image is used; if its gray value is not 0, the pixel value in the edge feature image is used. In this way, the edge feature information can also be incorporated into the third image to reconstruct detailed features.
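The traversal-based fusion just described maps directly onto a vectorized sketch (shown for a single gray channel for simplicity):

```python
import numpy as np

def fuse_by_traversal(edge, img3):
    # Where the edge feature image has a non-zero gray value, use it;
    # otherwise keep the pixel value from the third image.
    return np.where(edge != 0, edge, img3)

edge = np.array([[0.0, 0.8],
                 [0.0, 0.0]])
img3 = np.full((2, 2), 0.2)
final = fuse_by_traversal(edge, img3)   # [[0.2, 0.8], [0.2, 0.2]]
```

Unlike the loss-function route, this per-pixel rule needs no iteration, at the cost of pasting edge values in verbatim rather than blending them.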
  • After the final image is obtained, it can be directly output, for example displayed in the user interface, or automatically saved to the corresponding folder.
  • the first resolution and the second resolution may be determined in advance according to the current exposure parameters.
  • the current exposure parameters can include the current photosensitivity parameters, shutter parameters (shooting time), and lighting parameters of the surrounding environment, etc.
  • The system can estimate the amount of light available when taking pictures according to the current exposure parameters, so as to determine appropriate first and second resolutions.
  • For example, the first resolution may always be 4 times the second resolution. The calculation relationship between the current exposure parameters and the second resolution can be pre-configured based on experience, for example as a linear proportional relationship in which a higher current exposure parameter yields a higher second resolution; the second resolution is calculated from this relationship, and the first resolution follows.
  • Alternatively, the first resolution may be fixed at the maximum pixel count of the image sensor, and the second resolution is calculated according to the current exposure parameters at shooting time, so that only the second resolution needs to be adjusted.
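A hypothetical sketch of such a pre-configured mapping: the first resolution is pinned to the sensor maximum and the second resolution grows linearly with an aggregate exposure parameter, clamped to the sensor range. The specific numbers (48 MP maximum, slope k = 3) are invented for illustration:

```python
def pick_resolutions(exposure, sensor_max_mp=48.0, k=3.0):
    # First resolution: fixed at the sensor's maximum pixel count.
    first = sensor_max_mp
    # Second resolution: linear in the exposure parameter, clamped to a
    # sane range (all constants here are invented for illustration).
    second = min(max(k * exposure, 1.0), sensor_max_mp)
    return first, second

first, second = pick_resolutions(exposure=4.0)   # (48.0, 12.0)
```

The clamp matters in practice: very bright scenes should not push the second resolution above what the binned sensor can actually deliver.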
  • the first resolution and the second resolution are determined in the above-mentioned manner to adapt to the current lighting conditions and exposure settings, etc., so that the quality of image shooting can be improved.
  • Fig. 11 shows the flow of this exemplary embodiment: first, a first image IMG1 and multiple frames of a second image IMG2 are collected; gradient difference images in the x direction and the y direction are extracted from IMG1, and the edge feature image IMG_E is obtained; IMG2 is used for super-resolution reconstruction to obtain a third image IMG3, whose resolution is the same as that of IMG1 and therefore of IMG_E; finally, IMG3 and IMG_E are fused by establishing the loss function and solving its minimum to obtain the final image IMG_F.
  • To sum up, in this exemplary embodiment, a relatively high-resolution first image and a relatively low-resolution second image are collected, an edge feature image is extracted from the first image, and the second image is super-resolution reconstructed into a third image, so that the third image and the first image have the same resolution; finally, the edge feature image and the third image are fused to obtain the final image.
  • The resolution of the first image is higher, but it may contain more noise; through edge feature extraction, the original detail information in the image is preserved. The resolution of the second image is lower, but its noise is less; after reconstruction into the third image, the third image has less noise, but its details may be distorted.
  • Fusing the two parts is equivalent to incorporating the original image detail information into the low-noise image, so that the advantages of the first image and the second image complement each other, solving the problem of high noise in high-pixel images and improving the quality of image shooting.
  • Exemplary embodiments of the present disclosure also provide an image fusion device, which can be configured in a terminal device equipped with an image sensor.
  • the image fusion apparatus 1200 may include a processor 1210 and a memory 1220; wherein, the memory 1220 stores the following program modules:
  • the image acquisition module 1221 is configured to acquire a first image and a second image collected by an image sensor, where the first image is of a first resolution, the second image is of a second resolution, and the first resolution is higher than the second resolution;
  • the edge extraction module 1222 is used to extract edge feature images from the first image
  • the image reconstruction module 1223 is configured to perform super-resolution reconstruction on the second image to obtain a third image, where the third image has the first resolution;
  • the fusion processing module 1224 is used for fusing the edge feature image and the third image to obtain the final image.
  • the above-mentioned image sensor includes a four-Bayer image sensor.
  • the image acquisition module 1221 may include:
  • the Bayer image acquisition unit is used to collect the original Bayer image based on the four-Bayer color filter array through the four-Bayer image sensor;
  • the first parsing unit is configured to perform remosaic processing and demosaic processing on the original Bayer image to obtain a first image of the first resolution;
  • the second parsing unit is used to merge each group of four adjacent pixels of the same color in the original Bayer image into one pixel, and to perform demosaic processing on the merged Bayer image to obtain a second image of the second resolution, where the first resolution is four times the second resolution.
  • the above-mentioned image sensor includes a first image sensor and a second image sensor, and the pixel count of the first image sensor is higher than that of the second image sensor.
  • the image acquisition module 1221 may be used to acquire the first image acquired by the first image sensor and the second image acquired by the second image sensor.
  • the edge extraction module 1222 may include:
  • the denoising constraint unit is used to construct a noise separation function on the first image using Fourier transform to obtain a denoising constraint item;
  • the difference constraint unit is used to construct a difference constraint item according to the gray gradient difference of the first image in the x direction and the y direction;
  • the constraint function solving unit is used to establish a constraint function according to the denoising constraint item and the difference constraint item, and obtain the edge feature image corresponding to the first image by solving the minimum value of the constraint function.
  • the aforementioned denoising constraint items may include:
  • the first sub-constraint item is the L2 norm of the noise separation function.
  • the above-mentioned differential constraint item may include:
  • the second sub-constraint item is the L1 norm of the gray gradient difference of the first image in the x direction;
  • the third sub-constraint item is the L1 norm of the gray gradient difference of the first image in the y direction.
  • the differential constraint unit can be used for:
  • the difference constraint term is constructed based on the x-direction gradient difference image and the y-direction gradient difference image.
  • the second image may include multiple consecutive frames of the second image.
  • the image reconstruction module 1223 may be used to process the second image of multiple consecutive frames using a super-resolution reconstruction algorithm to obtain a third image.
  • the image reconstruction module 1223 may be used to interpolate the reference frame through the above-mentioned interpolation parameters to obtain a third image of the first resolution.
  • the image reconstruction module 1223 may be used to:
  • process the above-mentioned consecutive frames of the second image with a multi-channel-input convolutional neural network model and output the corresponding third image.
  • the fusion processing module 1224 may include:
  • the independent variable definition unit is used to define the independent variable of the final image;
  • the sub-loss construction unit is used to construct a first sub-loss term from the difference between the third image and the independent variable, a second sub-loss term from the difference between the edge feature image and the gray gradient difference of the independent variable in the x direction, and a third sub-loss term from the difference between the edge feature image and the gray gradient difference of the independent variable in the y direction;
  • the loss function solving unit is used to establish a loss function from the first, second, and third sub-loss terms, and to obtain the final image by minimizing the loss function.
  • the first sub-loss term adopts the L2 norm;
  • the second and third sub-loss terms adopt the L1 norm.
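The fusion described by these units amounts to a small optimization problem. Below is a minimal sketch, assuming forward-difference gradients and a plain (sub)gradient-descent solver; the function name and the weights `lam`, `lr`, and `iters` are illustrative assumptions, not values fixed by this disclosure.

```python
import numpy as np

def fuse_images(third, edges, lam=0.1, lr=0.2, iters=100):
    """Minimize  ||A - T||_2^2 + lam*(||dx(A) - E||_1 + ||dy(A) - E||_1)
    over the independent variable A by (sub)gradient descent.
    T: third (super-resolved) image, E: edge feature image.
    lam, lr, iters are illustrative choices."""
    T = np.asarray(third, dtype=np.float64)
    E = np.asarray(edges, dtype=np.float64)
    A = T.copy()  # warm-start at the third image
    for _ in range(iters):
        # forward-difference gradients, zero at the far border
        dx = np.zeros_like(A)
        dy = np.zeros_like(A)
        dx[:, :-1] = A[:, 1:] - A[:, :-1]
        dy[:-1, :] = A[1:, :] - A[:-1, :]
        # subgradients of the L1 terms
        sx = np.sign(dx - E)
        sy = np.sign(dy - E)
        # adjoint of the forward-difference operators
        ax = np.zeros_like(A)
        ay = np.zeros_like(A)
        ax[:, :-1] -= sx[:, :-1]
        ax[:, 1:] += sx[:, :-1]
        ay[:-1, :] -= sy[:-1, :]
        ay[1:, :] += sy[:-1, :]
        A -= lr * (2.0 * (A - T) + lam * (ax + ay))
    return A
```

In practice a proximal or split-Bregman solver would handle the L1 terms more robustly; the subgradient step is chosen here only for brevity.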
  • the fusion processing module 1224 may be used to:
  • the image acquisition module 1221 may also be used to determine the first resolution and the second resolution according to the current exposure parameters.
  • the current exposure parameter includes at least one of the current light-sensing parameter, the shutter parameter, and the ambient lighting parameter.
  • the exemplary embodiments of the present disclosure also provide another image fusion device, which can be configured in a terminal device equipped with an image sensor.
  • the image fusion apparatus 1300 may include:
  • the image acquisition module 1310 is configured to acquire a first image and a second image collected by an image sensor, where the first image has a first resolution, the second image has a second resolution, and the first resolution is higher than the second resolution;
  • the edge extraction module 1320 is used to extract an edge feature image from the first image;
  • the image reconstruction module 1330 is configured to perform super-resolution reconstruction on the second image to obtain a third image, where the third image has the first resolution;
  • the fusion processing module 1340 is used for fusing the edge feature image and the third image to obtain a final image.
  • the above-mentioned image sensor includes a four-Bayer image sensor.
  • the image acquisition module 1310 may include:
  • the Bayer image acquisition unit is used to collect, through the four-Bayer image sensor, an original Bayer image based on the four-Bayer color filter array;
  • the first parsing unit is configured to perform remosaic processing and demosaic processing on the original Bayer image to obtain a first image at the first resolution;
  • the second parsing unit is used to merge four adjacent same-color pixels in the original Bayer image into one pixel, and to perform demosaic processing on the binned Bayer image to obtain a second image at the second resolution, where the first resolution is four times the second resolution.
  • the above-mentioned image sensor includes a first image sensor and a second image sensor, and the pixel count of the first image sensor is higher than that of the second image sensor.
  • the image acquisition module 1310 may be used to acquire the first image acquired by the first image sensor and the second image acquired by the second image sensor.
  • the edge extraction module 1320 may include:
  • the denoising constraint unit is used to apply a Fourier transform to the first image to construct a noise separation function, obtaining a denoising constraint term;
  • the difference constraint unit is used to construct a difference constraint term from the gray gradient differences of the first image in the x and y directions;
  • the constraint function solving unit is used to establish a constraint function from the denoising constraint term and the difference constraint term, and to obtain the edge feature image corresponding to the first image by minimizing the constraint function.
  • the aforementioned denoising constraint term may include:
  • a first sub-constraint term, which is the L2 norm of the noise separation function.
  • the above-mentioned difference constraint term may include:
  • a second sub-constraint term, which is the L1 norm of the gray gradient difference of the first image in the x direction;
  • a third sub-constraint term, which is the L1 norm of the gray gradient difference of the first image in the y direction.
  • the difference constraint unit may be used to:
  • construct the difference constraint term from the x-direction gradient difference image and the y-direction gradient difference image.
  • the second image may include multiple consecutive frames.
  • the image reconstruction module 1330 may be used to process the multiple consecutive frames of the second image using a super-resolution reconstruction algorithm to obtain the third image.
  • the image reconstruction module 1330 may be used to:
  • interpolate the reference frame using the interpolation parameters to obtain a third image at the first resolution.
  • the image reconstruction module 1330 may be used to:
  • process the above-mentioned consecutive frames of the second image with a multi-channel-input convolutional neural network model and output the corresponding third image.
  • the fusion processing module 1340 may include:
  • the independent variable definition unit is used to define the independent variable of the final image;
  • the sub-loss construction unit is used to construct a first sub-loss term from the difference between the third image and the independent variable, a second sub-loss term from the difference between the edge feature image and the gray gradient difference of the independent variable in the x direction, and a third sub-loss term from the difference between the edge feature image and the gray gradient difference of the independent variable in the y direction;
  • the loss function solving unit is used to establish a loss function from the first, second, and third sub-loss terms, and to obtain the final image by minimizing the loss function.
  • the first sub-loss term adopts the L2 norm;
  • the second and third sub-loss terms adopt the L1 norm.
  • the fusion processing module 1340 may be used to:
  • the image acquisition module 1310 may also be used to determine the first resolution and the second resolution according to the current exposure parameters.
  • the current exposure parameter includes at least one of the current light-sensing parameter, the shutter parameter, and the ambient lighting parameter.
  • Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which can be implemented in the form of a program product including program code.
  • When the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the above "Exemplary Method" section of this specification.
  • The program product may take the form of a portable compact disc read-only memory (CD-ROM) containing program code, and may run on a terminal device such as a personal computer.
  • However, the program product of the present disclosure is not limited thereto.
  • the readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or in combination with an instruction execution system, device, or device.
  • the program product can adopt any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • The readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
  • the program code used to perform the operations of the present disclosure can be written in any combination of one or more programming languages.
  • The programming languages include object-oriented languages such as Java and C++, as well as conventional procedural languages such as the "C" language or similar programming languages.
  • The program code can be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • The remote computing device can be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computing device (for example, through the Internet using an Internet service provider).
  • Exemplary embodiments of the present disclosure also provide a terminal device capable of implementing the above method.
  • the terminal device 1400 according to this exemplary embodiment of the present disclosure will be described below with reference to FIG. 14.
  • the terminal device 1400 shown in FIG. 14 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the terminal device 1400 may be represented in the form of a general-purpose computing device.
  • the components of the terminal device 1400 may include, but are not limited to: at least one processing unit 1410, at least one storage unit 1420, a bus 1430 connecting different system components (including the storage unit 1420 and the processing unit 1410), a display unit 1440, and an image sensor 1470.
  • the image sensor 1470 is used to collect images.
  • the storage unit 1420 stores program codes, and the program codes can be executed by the processing unit 1410, so that the processing unit 1410 executes the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned "Exemplary Method" section of this specification.
  • the processing unit 1410 may execute any one or more of the method steps in FIG. 1, FIG. 3, FIG. 6, FIG. 8, FIG. 9, or FIG. 10.
  • the storage unit 1420 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 1421 and/or a cache storage unit 1422, and may further include a read-only storage unit (ROM) 1423.
  • the storage unit 1420 may also include a program/utility tool 1424 having a set of (at least one) program modules 1425.
  • the program modules 1425 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the bus 1430 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
  • the terminal device 1400 may also communicate with one or more external devices 1500 (such as keyboards, pointing devices, Bluetooth devices, etc.), with one or more devices that enable a user to interact with the terminal device 1400, and/or with any device (such as a router or modem) that enables the terminal device 1400 to communicate with one or more other computing devices. Such communication can be performed through an input/output (I/O) interface 1450.
  • the terminal device 1400 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 1460.
  • the network adapter 1460 communicates with the other modules of the terminal device 1400 through the bus 1430. It should be understood that, although not shown in the figure, other hardware and/or software modules can be used in conjunction with the terminal device 1400, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • the example embodiments described here can be implemented by software, or by combining software with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or removable hard disk) or on a network, and includes several instructions that cause a computing device (which may be a personal computer, a server, a terminal device, or a network device) to execute the method according to the exemplary embodiments of the present disclosure.
  • Although modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • the features and functions of a module or unit described above can be further divided into multiple modules or units to be embodied.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

An image fusion method and apparatus, a storage medium, and a terminal device, relating to the technical field of image processing. The method is applied to a terminal device having an image sensor and comprises: acquiring a first image and a second image collected by the image sensor, wherein the first image has a first resolution, the second image has a second resolution, and the first resolution is higher than the second resolution (S110); extracting an edge feature image from the first image (S120); performing super-resolution reconstruction on the second image to obtain a third image having the first resolution (S130); and fusing the edge feature image and the third image to obtain a final image (S140). This solves the problem of high noise in high-pixel images and improves the quality of image shooting.

Description

Image fusion method, image fusion device, storage medium and terminal device
This application claims priority to the Chinese patent application No. 201911057665.5, filed on November 1, 2019 and titled "Image fusion method, image fusion device, storage medium and terminal device", the entire content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image processing technology, and in particular to an image fusion method, an image fusion device, a computer-readable storage medium, and a terminal device.
Background
Image noise refers to brightness or color information that appears in an image but is not present in the photographed object itself; it is generally produced by the image sensor or the signal transmission circuit. At present, increasing the pixel count of image sensors is a common development direction in the industry. For example, mobile phones commonly use image sensors with millions or even tens of millions of pixels, which can support shooting ultra-high-definition photos.
Summary of the Invention
The present disclosure provides an image fusion method, an image fusion device, a computer-readable storage medium, and a terminal device, so as to improve, at least to a certain extent, the problem of high noise in existing high-pixel pictures.
According to a first aspect of the present disclosure, an image fusion method is provided, applied to a terminal device equipped with an image sensor. The method includes: acquiring a first image and a second image collected by the image sensor, where the first image has a first resolution, the second image has a second resolution, and the first resolution is higher than the second resolution; extracting an edge feature image from the first image; performing super-resolution reconstruction on the second image to obtain a third image, the third image having the first resolution; and fusing the edge feature image and the third image to obtain a final image.
According to a second aspect of the present disclosure, an image fusion device is provided, configured in a terminal device equipped with an image sensor. The device includes: an image acquisition module for acquiring a first image and a second image collected by the image sensor, where the first image has a first resolution, the second image has a second resolution, and the first resolution is higher than the second resolution; an edge extraction module for extracting an edge feature image from the first image; an image reconstruction module for performing super-resolution reconstruction on the second image to obtain a third image, the third image having the first resolution; and a fusion processing module for fusing the edge feature image and the third image to obtain a final image.
According to a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the above image fusion method and its possible implementations are realized.
According to a fourth aspect of the present disclosure, a terminal device is provided, including: a processor; a memory for storing executable instructions of the processor; and an image sensor; where the processor is configured to execute the above image fusion method and its possible implementations by executing the executable instructions.
The technical solution of the present disclosure has the following beneficial effects:
According to the above image fusion method, image fusion device, storage medium, and terminal device, a relatively high-resolution first image and a relatively low-resolution second image are collected; an edge feature image is extracted from the first image; the second image is reconstructed by super-resolution into a third image so that the third image and the first image have the same resolution; and finally the edge feature image and the third image are fused to obtain the final image. The first image has a higher resolution but possibly more noise, and edge feature extraction preserves the original detail information in the image; the second image has a lower resolution but generally less noise, and after super-resolution reconstruction the third image has less noise but may be distorted in its details. Fusing the two parts is equivalent to incorporating the original image detail information into the low-noise image, so the respective advantages of the first image and the second image complement each other, solving the problem of high noise in high-pixel images and improving the quality of image shooting.
Brief Description of the Drawings
Fig. 1 shows a flowchart of an image fusion method in this exemplary embodiment;
Fig. 2 shows a schematic diagram of color filter arrays in this exemplary embodiment;
Fig. 3 shows a flowchart of a method for acquiring a first image and a second image in this exemplary embodiment;
Fig. 4 shows a schematic diagram of acquiring a first image in this exemplary embodiment;
Fig. 5 shows a schematic diagram of acquiring a second image in this exemplary embodiment;
Fig. 6 shows a flowchart of a method for extracting an edge feature image in this exemplary embodiment;
Fig. 7 shows a schematic diagram of obtaining gradient difference images in this exemplary embodiment;
Fig. 8 shows a flowchart of a method for constructing a difference constraint term in this exemplary embodiment;
Fig. 9 shows a flowchart of a method for generating a third image in this exemplary embodiment;
Fig. 10 shows a flowchart of a method for generating a final image in this exemplary embodiment;
Fig. 11 shows a schematic diagram of an image fusion process in this exemplary embodiment;
Fig. 12 shows a schematic structural diagram of an image fusion device in this exemplary embodiment;
Fig. 13 shows a schematic structural diagram of another image fusion device in this exemplary embodiment;
Fig. 14 shows a schematic structural diagram of a terminal device in this exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be more comprehensive and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics can be combined in one or more embodiments in any suitable way. In the following description, many specific details are provided to give a sufficient understanding of the embodiments of the present disclosure. However, those skilled in the art will realize that the technical solutions of the present disclosure can be practiced while omitting one or more of the specific details, or that other methods, components, devices, steps, etc. can be used. In other cases, well-known technical solutions are not shown or described in detail, to avoid obscuring aspects of the present disclosure.
In addition, the drawings are only schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the figures denote the same or similar parts, so their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in the form of software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
As pixel counts increase, the photosensitive area of a single pixel on the image sensor decreases, and taking pictures demands more of the lighting conditions. For example, in a low-light environment, the photosensitive elements on the image sensor are more susceptible to crosstalk, resulting in an insufficient signal-to-noise ratio of the input signal, so the final output picture has too much noise and poor quality.
In the related art, the shooting resolution is usually lowered actively when the light is insufficient, so as to increase the light received per pixel and reduce noise. However, this comes at the cost of lost resolution and cannot exploit the high pixel count of the image sensor itself.
In view of one or more of the above problems, exemplary embodiments of the present disclosure provide an image fusion method, which can be applied to terminal devices such as mobile phones, tablet computers, and digital cameras. The terminal device is equipped with an image sensor, which can be used to collect images.
Fig. 1 shows a process of this exemplary embodiment, which may include the following steps S110 to S140:
Step S110, acquiring a first image and a second image collected by the image sensor;
Step S120, extracting an edge feature image from the first image;
Step S130, performing super-resolution reconstruction on the second image to obtain a third image;
Step S140, fusing the edge feature image and the third image to obtain a final image.
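The four steps above can be outlined as a single pipeline. The callables below are placeholders standing in for the concrete operations described in the following sections, not interfaces defined by this disclosure:

```python
def image_fusion_pipeline(capture_pair, extract_edges, super_resolve, fuse):
    """Outline of steps S110-S140. Each argument is a placeholder callable:
    capture_pair()    -> (first_image, second_image)   # S110
    extract_edges(f)  -> edge_feature_image            # S120
    super_resolve(s)  -> third_image                   # S130 (at first resolution)
    fuse(e, t)        -> final_image                   # S140
    """
    first, second = capture_pair()
    edges = extract_edges(first)
    third = super_resolve(second)
    return fuse(edges, third)
```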
下面分别对图1中的每个步骤进行具体说明。Each step in Figure 1 will be described in detail below.
步骤S110中,获取由图像传感器采集的第一图像和第二图像。In step S110, the first image and the second image collected by the image sensor are acquired.
其中,第一图像和第二图像所拍摄的对象相同,即图像内容相同,不同之处在于图像分辨率,第一图像为第一分辨率,第二图像为第二分辨率,第一分辨率高于第二分辨率,即第一图像比第二图像的像素数更多。Among them, the objects captured by the first image and the second image are the same, that is, the image content is the same, the difference lies in the image resolution, the first image is the first resolution, the second image is the second resolution, and the first resolution Higher than the second resolution, that is, the first image has more pixels than the second image.
在一种实施方式中,上述图像传感器可以是四拜耳(Quad Bayer)图像传感器,四拜耳图像传感器是指采用四拜耳滤色阵列的图像传感器。参考图2所示,左图示出了标准拜耳滤色阵列,其滤光片的单元阵列排布为GRBG(或BGGR、GBRG、RGGB),大部分图像传感器采用标准拜耳滤色阵列;图2中右图示出了四拜耳滤色阵列,其滤光片的单元阵列中相邻四个单元为相同颜色,目前一部分高像素的图像传感器采用四拜耳滤色阵列。基于此,参考图3所示,步骤S110可以具体通过以下步骤S310至S330实现:In an embodiment, the above-mentioned image sensor may be a Quad Bayer (Quad Bayer) image sensor, and the Quad Bayer image sensor refers to an image sensor using a Quad Bayer color filter array. Refer to Figure 2, the left figure shows the standard Bayer color filter array, the unit array of the filter is GRBG (or BGGR, GBRG, RGGB), and most image sensors use the standard Bayer color filter array; Figure 2 The right figure shows a four-Bayer color filter array. The four adjacent cells in the unit array of the filter are of the same color. At present, some high-pixel image sensors use a four-Bayer color filter array. Based on this, referring to FIG. 3, step S110 can be specifically implemented through the following steps S310 to S330:
步骤S310,通过四拜耳图像传感器采集基于四拜耳滤色阵列的原始拜耳图像。In step S310, the original Bayer image based on the four Bayer color filter array is collected by the four Bayer image sensor.
其中,拜耳图像是指RAW格式的图像,是图像传感器将采集到的光信号转化为数字信号后的图像数据,在拜耳图像中,每个像素点只有RGB中的一种颜色。本示例性实施方式中,利用四拜耳图像传感器采集图像后,得到的原始图像数据即上述原始拜耳图像,该图像中像素的颜色排列如图2中右图所示,相邻四个像素为相同颜色。Among them, the Bayer image refers to an image in RAW format, which is image data after the image sensor converts the collected light signal into a digital signal. In a Bayer image, each pixel has only one color in RGB. In this exemplary embodiment, after the four Bayer image sensor is used to collect the image, the original image data obtained is the original Bayer image. The color arrangement of the pixels in the image is as shown in the right figure in Figure 2, and the adjacent four pixels are the same. colour.
步骤S320,对原始拜耳图像进行解马赛克处理和去马赛克处理,得到第一图像。Step S320: Perform demosaic processing and demosaic processing on the original Bayer image to obtain a first image.
其中,第一图像为第一分辨率;解马赛克处理(Remosaic)是指将基于四拜耳滤色阵列的原始拜耳图像融合为基于标准拜耳滤色阵列的拜耳图像;去马赛克处理(Demosaic)是指将拜耳图像融合为完整的RGB图像。结合图4所示,可以对原始拜耳图像P进行解马赛克处理,得到基于标准拜耳滤色阵列的拜耳图像Q1;再对基于标准拜耳滤色阵列的拜耳图像Q1进行去马赛克处理,得到RGB格式的第一图像IMG1。解马赛克和去马赛克可以通过不同的插值算法实现,也可以通过神经网络等其他相关算法实现,本公开对此不做限定。终端设备中通常配置和图像传感器配套的ISP(Image Signal Processing,图像信号处理)单元,以执行上述解马赛克和去马赛克处理过程。第一图像IMG1的每个像素都具有RGB三个通道的像素值,以C表示。此外,也可以将解马赛克和去马赛克的处理过程合并为一次插值过程,即基于原始拜耳图像中的像素数据,直接对每个像素点进行插值,以得到缺失的颜色通道的像素值,例如可以采用线性插值、均值插值等算法实现,从而获得第一图像。Among them, the first image is the first resolution; Remosaic refers to the fusion of the original Bayer image based on the four Bayer color filter array into the Bayer image based on the standard Bayer color filter array; Demosaic refers to Fusion of Bayer images into complete RGB images. As shown in Figure 4, the original Bayer image P can be demosaiced to obtain the Bayer image Q1 based on the standard Bayer color filter array; then the Bayer image Q1 based on the standard Bayer color filter array can be demosaiced to obtain the RGB format The first image IMG1. Demosaicing and demosaicing can be implemented by different interpolation algorithms, and can also be implemented by other related algorithms such as neural networks, which are not limited in the present disclosure. The terminal device is usually equipped with an ISP (Image Signal Processing, image signal processing) unit that is matched with the image sensor to perform the above-mentioned demosaic and demosaic processing process. Each pixel of the first image IMG1 has pixel values of three RGB channels, denoted by C. In addition, the process of demosaicing and demosaicing can also be combined into one interpolation process, that is, based on the pixel data in the original Bayer image, each pixel is directly interpolated to obtain the pixel value of the missing color channel. For example, you can Use linear interpolation, mean interpolation and other algorithms to achieve, so as to obtain the first image.
Step S330: Merge every four adjacent same-color pixels in the original Bayer image into one pixel, and perform demosaic processing on the binned Bayer image to obtain a second image.
The second image has the second resolution. Referring to Fig. 5, the original Bayer image P first undergoes "four-in-one" pixel binning, i.e., the same-color pixels within each 2*2 cell are merged into one pixel. The binned Bayer image Q2 also follows the standard Bayer color filter arrangement. Compared with Q1 in Fig. 4, Q2 has only 1/4 as many pixels, while the area of each pixel is quadrupled, so each pixel receives more light. Q2 is then demosaiced to obtain the second image IMG2 in RGB format. It can be seen that the first resolution is four times the second resolution.
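The "four-in-one" binning step (P to Q2 in Fig. 5) can be sketched as follows. Averaging is used here for illustration; actual sensors typically combine charge in the analog domain:

```python
import numpy as np

def quad_bayer_bin(raw):
    """Bin a quad-Bayer raw frame 2x2 ("four-in-one").

    In a quad-Bayer array each color occupies a 2x2 block, so combining each
    block yields a standard Bayer image with 1/4 the pixel count and roughly
    4x the light gathered per output pixel.
    """
    return (raw[0::2, 0::2] + raw[0::2, 1::2] +
            raw[1::2, 0::2] + raw[1::2, 1::2]) / 4.0
```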
In another implementation, the terminal device may be configured with two image sensors of different pixel counts; for example, many current mobile phones are equipped with dual cameras. The first image sensor, with the higher pixel count, captures the first image, and the second image sensor, with the lower pixel count, captures the second image. Since the two image sensors complete image acquisition in a single shot, their exposure levels are similar. The first image therefore has higher resolution but, limited by the amount of light received per pixel, may contain more noise; the opposite holds for the second image.
Continuing to refer to Fig. 1, in step S120, an edge feature image is extracted from the first image.
An edge feature is a feature at positions in an image where image properties (such as texture or gray level) are distributed discontinuously. An important difference between high-resolution and low-resolution images is that high-resolution images have richer edge features. This exemplary embodiment preserves the detail information in the image by extracting an edge feature image from the first image at the first resolution.
In an optional implementation, referring to Fig. 6, step S120 may be realized through the following steps S610 to S630:
Step S610: Construct a noise separation function for the first image using the Fourier transform to obtain a denoising constraint item;
Step S620: Construct a difference constraint item according to the gray-gradient differences of the first image in the x direction and the y direction;
Step S630: Establish a constraint function according to the denoising constraint item and the difference constraint item, and obtain the edge feature image corresponding to the first image by solving for the minimum of the constraint function.
The first image itself carries a certain amount of noise information (noise points), which appears as part of the edge features; when extracting edge features, the influence of this noise should be reduced. In general, a noise-free image is sparse and self-similar, i.e., it can be represented with relatively few parameters or feature vectors, whereas noise is irregular, dissimilar, and non-sparse. Using this property, a noise separation function can be constructed, yielding the denoising constraint item for edge extraction:

R(noise) = ||F^H D F(IMG_E − IMG1)||;  (1)

where R(noise) denotes the denoising constraint item, F the matrix Fourier transform operator, H the conjugate transpose, D the transform domain, IMG_E the edge feature image (the independent variable), and IMG1 the first image. Since the energy of noise is small and the gradient is relatively pure, a good denoising effect can be achieved with formula (1).

In a two-dimensional image, the x direction and the y direction denote the horizontal and vertical directions, respectively. The edge features of the first image contain components in both directions. Accordingly, by computing the gray-gradient differences in the x and y directions, a difference constraint item can be constructed:

R(diff) = a_1·||G_x·IMG_E|| + a_2·||G_y·IMG_E||;  (2)

where R(diff) denotes the difference constraint item; a_1 and a_2 are the coefficients of the two parts, representing their respective weights — adjustable parameters that can be tuned according to experience and actual needs; and G_x and G_y denote the difference operators in the x and y directions, respectively.

Adding formula (1) and formula (2) establishes a constraint function expressing the conditions to be satisfied in edge feature extraction. By solving for the minimum of the constraint function, the edge feature image corresponding to the first image is obtained, as follows:
IMG_E = argmin ||F^H D F(IMG_E − IMG1)||_2 + a_1·||G_x·IMG_E||_1 + a_2·||G_y·IMG_E||_1;  (3)

where ||F^H D F(IMG_E − IMG1)||_2 is the first sub-constraint item, the L2 norm of the noise separation function; ||G_x·IMG_E||_1 is the second sub-constraint item, the L1 norm of the gray-gradient difference of the first image in the x direction; and ||G_y·IMG_E||_1 is the third sub-constraint item, the L1 norm of the gray-gradient difference of the first image in the y direction. By setting the values of a_1 and a_2, the weight ratio of the three sub-constraint items can be adjusted to meet actual needs. Equation (3) can be solved iteratively; when the constraint function reaches its convergence condition, the corresponding edge feature image is obtained.
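The iterative solution of equation (3) can be sketched with subgradient descent. This toy version makes two simplifying assumptions not found in the disclosure: the transform-domain term F^H D F(IMG_E − IMG1) is reduced to the plain residual (IMG_E − IMG1), and simple forward differences stand in for G_x and G_y:

```python
import numpy as np

def solve_edge_image(img1, a1=0.05, a2=0.05, lr=0.2, iters=300):
    """Toy subgradient-descent sketch of an equation-(3)-style problem.

    Assumptions: transform-domain term simplified to (IMG_E - IMG1);
    forward differences as G_x, G_y. Iterates until the fixed iteration
    budget, rather than testing a convergence condition.
    """
    e = np.zeros_like(img1, dtype=np.float64)
    for _ in range(iters):
        grad = 2.0 * (e - img1)              # from the squared L2 data term
        # subgradients of the L1 difference terms via the adjoint of the
        # forward-difference operator: adj[j] = sign(d[j-1]) - sign(d[j])
        for axis, a in ((1, a1), (0, a2)):
            s = np.sign(np.diff(e, axis=axis))
            pad = np.zeros_like(np.take(e, [0], axis=axis))
            grad += a * (np.concatenate([pad, s], axis=axis)
                         - np.concatenate([s, pad], axis=axis))
        e -= lr * grad
    return e
```

With a1 = a2 = 0 the iteration reduces to a contraction toward img1, which is a useful sanity check of the data term.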
Before edge features are extracted, referring to Fig. 7, the first image may first undergo grayscale processing; for example, a de-mean normalization may map all pixel values of the first image into the range [-127, +127] to produce a standard grayscale image. The gradient difference images of the grayscale image in the x and y directions are then obtained separately and used to compute the difference operators in the subsequent edge feature extraction. In an optional implementation, referring to Fig. 8, step S620 may include the following steps S810 to S830:
Step S810: Perform grayscale processing on the first image;
Step S820: Obtain an x-direction gradient difference image and a y-direction gradient difference image of the first image;
Step S830: Construct the difference constraint item according to the x-direction gradient difference image and the y-direction gradient difference image.
Here, difference calculations may be performed separately on the x-direction and y-direction gradient difference images to construct difference constraint items for the two directions. Alternatively, the two gradient difference images may first be combined into a composite gradient difference image containing the gray-gradient information of both directions, and a difference constraint item built on that composite image, so that the difference constraint is realized with a single sub-constraint item.
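Steps S810 and S820 can be sketched as follows, using channel averaging for grayscale conversion and forward differences for the gradient difference images; these specific operator choices are illustrative assumptions:

```python
import numpy as np

def gradient_difference_images(img_rgb):
    """Grayscale processing plus x/y gradient difference images (S810-S820).

    De-mean normalization centers the grayscale values around zero, in the
    spirit of the [-127, +127] mapping described above; forward differences
    stand in for the difference operators G_x and G_y.
    """
    gray = img_rgb.mean(axis=2)      # grayscale conversion (channel average)
    gray = gray - gray.mean()        # de-mean normalization
    gx = np.diff(gray, axis=1)       # x-direction gradient difference image
    gy = np.diff(gray, axis=0)       # y-direction gradient difference image
    return gray, gx, gy
```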
Continuing to refer to Fig. 1, in step S130, super-resolution reconstruction is performed on the second image to obtain a third image.
The third image has the first resolution. Super-resolution reconstruction refers to reconstructing a high-resolution image from low-resolution images by interpolation or similar means.
In an optional implementation, the image sensor may collect multiple consecutive frames of second images. Taking the first resolution as the target, a super-resolution reconstruction algorithm then processes the consecutive frames to obtain a third image at the first resolution. Since the frames capture the same objects in quick succession, each target in the images exhibits slight positional offsets between frames, and the super-resolution reconstruction algorithm can reconstruct a detailed image based on the information carried by these offsets. For example, referring to Fig. 9, step S130 may include the following steps S910 to S930:
Step S910: Select one frame of the second images as a reference frame, and calculate motion change parameters between the other frames and the reference frame.
Step S920: Perform image registration on at least two frames of second images according to the motion change parameters, and process the complementary non-redundant information between those frames to determine interpolation parameters. Image registration may use a template-matching alignment algorithm on gradient images or a feature-operator-based alignment algorithm. Generally, the more second-image frames selected for registration, the more accurate the resulting interpolation parameters.
Step S930: Interpolate the reference frame using the interpolation parameters to obtain the third image at the first resolution.
It should be noted that after interpolation, post-processing such as deblurring and denoising may also be performed to optimize the third image.
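Steps S910 to S930 can be illustrated with a minimal shift-and-add scheme. This sketch assumes registration has already produced integer shifts (in high-resolution pixel units, each in [0, scale)) for every frame relative to the reference; real systems estimate sub-pixel motion and apply deblurring afterwards:

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    """Toy multi-frame super-resolution by shift-and-add.

    Each low-resolution frame's samples are placed onto a high-resolution
    grid at that frame's registered offset, then overlapping samples are
    averaged. Grid positions no frame covers remain zero.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # scatter this frame's samples onto the shifted high-res grid
        acc[dy::scale, dx::scale][:h, :w] += frame
        cnt[dy::scale, dx::scale][:h, :w] += 1
    cnt[cnt == 0] = 1                 # avoid division by zero at holes
    return acc / cnt
```

With four 1-pixel frames whose shifts tile a 2*2 cell, the output is simply the frames laid out on the high-resolution grid.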
In another optional implementation, a convolutional neural network model with multi-channel input may be trained according to the number of second-image frames: each channel takes one low-resolution frame as input, and the model outputs the original high-resolution image. After the multiple frames of second images are fed into the model, the corresponding third image is output.
The present disclosure does not limit which specific super-resolution reconstruction algorithm is adopted.
Continuing to refer to Fig. 1, in step S140, the edge feature image and the third image are fused to obtain a final image.
When fusing the edge feature image and the third image, the edge feature image mainly serves to enhance the edge details of the third image, incorporating the richer high-resolution detail information of the first image into the third image.
In one implementation, referring to Fig. 10, step S140 may be realized through the following steps S1010 to S1040:
Step S1010: Define an independent variable for the final image;
Step S1020: Construct a first sub-loss item according to the difference between the third image and the independent variable;
Step S1030: Construct a second sub-loss item according to the difference between the edge feature image and the gray-gradient difference of the independent variable in the x direction, and construct a third sub-loss item according to the difference between the edge feature image and the gray-gradient difference of the independent variable in the y direction;
Step S1040: Establish a loss function according to the first, second and third sub-loss items, and obtain the final image by solving for the minimum of the loss function.
The loss function may take the following form:

IMG_F = argmin ||IMG_F − IMG3||_2 + a_3·||G_x·IMG_F − IMG_E||_1 + a_4·||G_y·IMG_F − IMG_E||_1;  (4)

where IMG_F denotes the final image (the independent variable) and IMG3 the third image. ||IMG_F − IMG3||_2 is the first sub-loss item, a measure of the difference between the third image and the final image; the L2 norm may be used, or other measures. ||G_x·IMG_F − IMG_E||_1 is the second sub-loss item, a measure of the difference between the gray-gradient difference of the final image in the x direction and the edge feature image; the L1 norm may be used, or other measures. ||G_y·IMG_F − IMG_E||_1 is the third sub-loss item, the corresponding measure in the y direction; again the L1 norm or other measures may be used. a_3 and a_4 are the coefficients of the second and third sub-loss items, representing their respective weights; they are adjustable parameters that can be tuned according to experience and actual needs to balance the three sub-loss items.
The above loss function in effect expresses the error between the fusion result and the original image content. By solving for the minimum of the loss function, the original image information is preserved to the greatest extent, yielding the final image with the highest fidelity.
In another implementation, when fusing the third image and the edge feature image, every pixel in the edge feature image is traversed: for each pixel, if its gray value in the edge feature image is 0, the pixel value from the third image is used; if its gray value is not 0, the pixel value from the edge feature image is used. This likewise incorporates the edge feature information into the third image to reconstruct the detail features.
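The pixel-wise fusion variant just described can be sketched directly. The sketch assumes a single-channel edge feature image whose values are on the same scale as the third image, with the edge value replicated across the RGB channels where it is nonzero:

```python
import numpy as np

def fuse_by_edge_mask(img3, edge):
    """Pixel-wise fusion: keep img3 where the edge image is zero, otherwise
    take the edge image's value for that pixel.

    img3 is (H, W, 3); edge is (H, W) on the same value scale (assumption).
    """
    out = img3.copy()
    mask = edge != 0
    out[mask] = edge[mask][:, None]   # broadcast edge value over R, G, B
    return out
```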
After the final image is obtained, it can be output directly, for example displayed in the user interface or automatically saved to a corresponding folder.
In an optional implementation, the first resolution and the second resolution may be determined in advance according to current exposure parameters. The current exposure parameters may include the current photosensitivity parameters, shutter parameters (shooting time), ambient lighting parameters, and so on; based on them the system can estimate the amount of light that will enter during shooting, so as to determine suitable first and second resolutions. Several specific implementations follow:
1. Fix the ratio between the first and second resolutions; for example, if the first resolution is always 4 times the second, only a suitable second resolution needs to be computed. A relation between the current exposure parameters and the second resolution can be configured in advance based on experience — for instance a linear proportional relation in which a higher current exposure parameter yields a larger second resolution — and the second resolution then computed from it.
2. Likewise, a relation between the current exposure parameters and the second resolution can be configured in advance, with the first resolution fixed at the maximum pixel count of the image sensor; during actual shooting the second resolution is computed from the current exposure parameters, so only the second resolution needs adjusting.
3. Compute several sets of suitable first and second resolutions from the current exposure parameters and display them on the shooting interface for the user to select manually. This approach is particularly suitable for environments with unstable lighting.
Determining the first and second resolutions in the above manner adapts to the current lighting conditions and exposure settings, and can improve the quality of image capture.
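Scheme 1 above can be sketched as follows. The linear relation, the clamping range, and the 48-megapixel maximum are all illustrative assumptions, not values from the disclosure:

```python
def pick_resolutions(exposure_level, max_pixels=48_000_000):
    """Hypothetical mapping from a normalized exposure estimate in [0, 1]
    to (first, second) resolutions with the ratio fixed at 4x.

    A higher exposure estimate (more light) yields a larger second
    resolution, per the linear proportional relation described above.
    """
    level = max(0.25, min(float(exposure_level), 1.0))  # clamp the estimate
    second = int(max_pixels / 4 * level)                # second resolution
    first = 4 * second                                  # fixed 4x ratio
    return first, second
```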
Fig. 11 shows the flow of this exemplary embodiment: first, a first image IMG1 and multiple frames of second images IMG2 are collected; gradient difference images in the x and y directions are extracted from IMG1, and the edge feature image IMG_E is extracted through computation of the constraint function; super-resolution reconstruction is performed with IMG2 to obtain a third image IMG3, which has the same resolution as IMG1 and therefore also as IMG_E; finally, IMG3 and IMG_E are fused by establishing the loss function and solving for its minimum, yielding the final image IMG_F.
In summary, in this exemplary embodiment, a relatively high-resolution first image and a relatively low-resolution second image are collected; an edge feature image is extracted from the first image; the second image is super-resolution reconstructed into a third image with the same resolution as the first image; and finally the edge feature image and the third image are fused to obtain the final image. The first image has higher resolution but possibly more noise, and edge feature extraction preserves its original detail information; the second image has lower resolution but generally less noise, and after super-resolution reconstruction the third image has little noise but may be distorted in its details. Fusing the two is equivalent to incorporating the original image detail information into the low-noise image, so the respective advantages of the first and second images complement each other, solving the problem of noisy high-pixel-count images and improving the quality of image capture.
Exemplary embodiments of the present disclosure further provide an image fusion apparatus, which can be configured in a terminal device equipped with an image sensor. Referring to Fig. 12, the image fusion apparatus 1200 may include a processor 1210 and a memory 1220, where the memory 1220 stores the following program modules:
An image acquisition module 1221, configured to acquire a first image and a second image collected by the image sensor, the first image having a first resolution, the second image having a second resolution, the first resolution being higher than the second resolution;
An edge extraction module 1222, configured to extract an edge feature image from the first image;
An image reconstruction module 1223, configured to perform super-resolution reconstruction on the second image to obtain a third image, the third image having the first resolution;
A fusion processing module 1224, configured to fuse the edge feature image and the third image to obtain a final image.
In an optional implementation, the above image sensor includes a quad-Bayer image sensor.
In an optional implementation, the image acquisition module 1221 may include:
A Bayer image acquisition unit, configured to collect, through the quad-Bayer image sensor, an original Bayer image based on a quad-Bayer color filter array;
A first parsing unit, configured to perform remosaic processing and demosaic processing on the original Bayer image to obtain the first image, the first image having the first resolution;
A second parsing unit, configured to merge every four adjacent same-color pixels in the original Bayer image into one pixel and demosaic the binned Bayer image to obtain the second image, the second image having the second resolution, where the first resolution is four times the second resolution.
In an optional implementation, the above image sensor includes a first image sensor and a second image sensor, the first image sensor having a higher pixel count than the second image sensor. The image acquisition module 1221 may be configured to acquire the first image collected by the first image sensor and the second image collected by the second image sensor.
In an optional implementation, the edge extraction module 1222 may include:
A denoising constraint unit, configured to construct a noise separation function for the first image using the Fourier transform to obtain a denoising constraint item;
A difference constraint unit, configured to construct a difference constraint item according to the gray-gradient differences of the first image in the x direction and the y direction;
A constraint function solving unit, configured to establish a constraint function according to the denoising constraint item and the difference constraint item, and obtain the edge feature image corresponding to the first image by solving for the minimum of the constraint function.
In an optional implementation, the denoising constraint item may include:
A first sub-constraint item, which is the L2 norm of the noise separation function.
In an optional implementation, the difference constraint item may include:
A second sub-constraint item, which is the L1 norm of the gray-gradient difference of the first image in the x direction;
A third sub-constraint item, which is the L1 norm of the gray-gradient difference of the first image in the y direction.
In an optional implementation, the difference constraint unit may be configured to:
Perform grayscale processing on the first image;
Obtain an x-direction gradient difference image and a y-direction gradient difference image of the first image;
Construct the difference constraint item according to the x-direction gradient difference image and the y-direction gradient difference image.
In an optional implementation, the second image may include multiple consecutive frames of second images.
In an optional implementation, the image reconstruction module 1223 may be configured to process the multiple consecutive frames of second images with a super-resolution reconstruction algorithm to obtain the third image.
In an optional implementation, the image reconstruction module 1223 may be configured to:
Select one frame of the second images as a reference frame, and calculate motion change parameters between the other frames and the reference frame;
Perform image registration on at least two frames of second images according to the motion change parameters, and process the complementary non-redundant information between those frames to determine interpolation parameters;
Interpolate the reference frame using the interpolation parameters to obtain the third image at the first resolution.
In an optional implementation, the image reconstruction module 1223 may be configured to:
Process the multiple consecutive frames of second images with a multi-channel-input convolutional neural network model, and output the corresponding third image.
In an optional implementation, the fusion processing module 1224 may include:
An independent variable definition unit, configured to define an independent variable of the final image;
A sub-loss construction unit, configured to construct a first sub-loss item according to the difference between the third image and the independent variable, construct a second sub-loss item according to the difference between the edge feature image and the gray-gradient difference of the independent variable in the x direction, and construct a third sub-loss item according to the difference between the edge feature image and the gray-gradient difference of the independent variable in the y direction;
A loss function solving unit, configured to establish a loss function according to the first, second and third sub-loss items, and obtain the final image by solving for the minimum of the loss function.
In an optional implementation, the first sub-loss item uses the L2 norm, and the second and third sub-loss items use the L1 norm.
In an optional implementation, the fusion processing module 1224 may be configured to:
Traverse every pixel in the edge feature image; if the gray value of a pixel is 0, use the pixel value from the third image, and if the gray value of the pixel is not 0, use the pixel value from the edge feature image.
In an optional implementation, the image acquisition module 1221 may further be configured to determine the first resolution and the second resolution according to current exposure parameters.
In an optional implementation, the current exposure parameters include at least one of current photosensitivity parameters, shutter parameters, and ambient lighting parameters.
Exemplary embodiments of the present disclosure further provide another image fusion apparatus, which can be configured in a terminal device equipped with an image sensor. Referring to Fig. 13, the image fusion apparatus 1300 may include:
An image acquisition module 1310, configured to acquire a first image and a second image collected by the image sensor, the first image having a first resolution, the second image having a second resolution, the first resolution being higher than the second resolution;
An edge extraction module 1320, configured to extract an edge feature image from the first image;
An image reconstruction module 1330, configured to perform super-resolution reconstruction on the second image to obtain a third image, the third image having the first resolution;
A fusion processing module 1340, configured to fuse the edge feature image and the third image to obtain a final image.
在一种可选的实施方式中,上述图像传感器包括四拜耳图像传感器。In an optional implementation manner, the above-mentioned image sensor includes a four-Bayer image sensor.
在一种可选的实施方式中,图像获取模块1310可以包括:In an optional implementation manner, the image acquisition module 1310 may include:
拜耳图像采集单元,用于通过四拜耳图像传感器采集基于四拜耳滤色阵列的原始拜耳图像;The Bayer image acquisition unit is used to collect the original Bayer image based on the Four Bayer color filter array through the Four Bayer image sensor;
第一解析单元,用于对原始拜耳图像进行解马赛克处理和去马赛克处理,得到第一图像,第一图像为第一分辨率;The first parsing unit is configured to perform demosaic processing and demosaic processing on the original Bayer image to obtain a first image, the first image being the first resolution;
第二解析单元,用于将原始拜耳图像中四个相邻的同颜色像素合并为一个像素,并对合并像素后的拜耳图像进行去马赛克处理,得到第二图像,第二图像为第二分辨率;其中,第一分辨率为第二分辨率的四倍。The second analysis unit is used to merge four adjacent pixels of the same color in the original Bayer image into one pixel, and perform demosaicing processing on the Bayer image after the merged pixels to obtain a second image, which is the second resolution Rate; where the first resolution is four times the second resolution.
In an optional implementation, the above image sensor includes a first image sensor and a second image sensor, the first image sensor having a higher pixel count than the second image sensor. The image acquisition module 1310 may be configured to acquire the first image captured by the first image sensor and the second image captured by the second image sensor.
In an optional implementation, the edge extraction module 1320 may include:
A denoising constraint unit, configured to construct a noise separation function for the first image using a Fourier transform, so as to obtain a denoising constraint term;
A difference constraint unit, configured to construct a difference constraint term from the gray-level gradient differences of the first image in the x direction and the y direction;
A constraint function solving unit, configured to build a constraint function from the denoising constraint term and the difference constraint term, and to obtain the edge feature image corresponding to the first image by solving for the minimum of the constraint function.
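As a simplified, non-authoritative sketch, the combined constraint can be evaluated for a candidate edge image as below. The exact form of the noise separation function is not specified by the source; here it is modeled, purely as an assumption, as the Fourier transform of the residual between the first image and the candidate edge image, with the weighting factor `lam` also assumed.

```python
import numpy as np

def edge_objective(edge, image, lam=0.1):
    """Evaluate the edge-extraction constraint function (illustrative sketch).

    Denoising term: L2 norm of an assumed noise separation function
    (Fourier transform of the residual). Difference term: L1 norms of
    the candidate edge image's gradient differences in x and y.
    """
    residual = np.fft.fft2(image - edge)          # assumed noise separation form
    denoise_term = np.linalg.norm(residual)       # first sub-constraint: L2 norm
    gx = np.diff(edge, axis=1)                    # x-direction gradient difference
    gy = np.diff(edge, axis=0)                    # y-direction gradient difference
    diff_term = np.abs(gx).sum() + np.abs(gy).sum()  # second and third: L1 norms
    return denoise_term + lam * diff_term
```

In the described apparatus, the solving unit would minimize such an objective over candidate edge images; this sketch only shows how the two constraint terms compose.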
In an optional implementation, the above denoising constraint term may include:
A first sub-constraint term, which is the L2 norm of the noise separation function.
In an optional implementation, the above difference constraint term may include:
A second sub-constraint term, which is the L1 norm of the gray-level gradient difference of the first image in the x direction; and
A third sub-constraint term, which is the L1 norm of the gray-level gradient difference of the first image in the y direction.
In an optional implementation, the difference constraint unit may be configured to:
Convert the first image to grayscale;
Obtain an x-direction gradient difference image and a y-direction gradient difference image of the first image; and
Construct the difference constraint term from the x-direction gradient difference image and the y-direction gradient difference image.
In an optional implementation, the second image may include multiple consecutive frames of the second image.
In an optional implementation, the image reconstruction module 1330 may be configured to process the multiple consecutive frames of the second image with a super-resolution reconstruction algorithm to obtain the third image.
In an optional implementation, the image reconstruction module 1330 may be configured to:
Select one frame of the second image as a reference frame, and compute motion change parameters between the other frames of the second image and the reference frame;
Register at least two frames of the second image according to the motion change parameters, and process the complementary non-redundant information between the at least two frames to determine interpolation parameters; and
Interpolate the reference frame with the interpolation parameters to obtain the third image at the first resolution.
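As an illustrative sketch of the multi-frame reconstruction steps above, the following toy implementation uses the first frame as the reference, estimates only integer translations by brute-force search (a crude stand-in for the motion change parameters), registers the remaining frames, and averages before nearest-neighbor upsampling. Real interpolation driven by complementary non-redundant information would be far more elaborate; every simplification here is an assumption.

```python
import numpy as np

def multiframe_sr(frames, scale=2):
    """Toy multi-frame super-resolution: register, merge, upsample."""
    ref = frames[0].astype(float)                 # reference frame (assumption: frame 0)
    registered = [ref]
    for frame in frames[1:]:
        best, best_err = (0, 0), np.inf
        for dy in (-1, 0, 1):                     # tiny integer search window (assumed)
            for dx in (-1, 0, 1):
                shifted = np.roll(frame, (dy, dx), axis=(0, 1))
                err = np.abs(shifted - ref).sum()
                if err < best_err:
                    best, best_err = (dy, dx), err
        registered.append(np.roll(frame, best, axis=(0, 1)).astype(float))
    fused = np.mean(registered, axis=0)           # merge complementary information
    # Nearest-neighbor upsampling stands in for the interpolation step.
    return fused.repeat(scale, axis=0).repeat(scale, axis=1)
```

For identical constant frames, the output is simply the input upscaled by `scale` per axis.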
In an optional implementation, the image reconstruction module 1330 may be configured to:
Process the multiple consecutive frames of the second image with a multi-channel-input convolutional neural network model, and output the corresponding third image.
In an optional implementation, the fusion processing module 1340 may include:
An independent variable definition unit, configured to define an independent variable for the final image;
A sub-loss construction unit, configured to construct a first sub-loss term from the difference between the third image and the independent variable, a second sub-loss term from the difference between the edge feature image and the gray-level gradient difference of the independent variable in the x direction, and a third sub-loss term from the difference between the edge feature image and the gray-level gradient difference of the independent variable in the y direction;
A loss function solving unit, configured to build a loss function from the first sub-loss term, the second sub-loss term, and the third sub-loss term, and to obtain the final image by solving for the minimum of the loss function.
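As an illustrative sketch, the three sub-loss terms can be composed into a single loss evaluated at a candidate final image `x`. The forward-difference gradient with `prepend` padding and the weighting factor `lam` are assumptions; the source only fixes the structure of the three terms (and, per the implementation below, their norms).

```python
import numpy as np

def fusion_loss(x, third, edge, lam=1.0):
    """Evaluate the fusion loss at candidate final image x (sketch).

    First sub-loss: L2 norm of (third image - x).
    Second/third sub-losses: L1 norms of the differences between the edge
    feature image and x's gradient differences in x and y directions.
    """
    data_term = np.linalg.norm(third - x)                 # first sub-loss: L2 norm
    gx = np.diff(x, axis=1, prepend=x[:, :1])             # x gradient difference
    gy = np.diff(x, axis=0, prepend=x[:1, :])             # y gradient difference
    second = np.abs(edge - gx).sum()                      # second sub-loss: L1 norm
    third_term = np.abs(edge - gy).sum()                  # third sub-loss: L1 norm
    return data_term + lam * (second + third_term)
```

The solving unit would minimize such a loss over `x`; the sketch shows only how the terms combine.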
In an optional implementation, the first sub-loss term uses the L2 norm, while the second sub-loss term and the third sub-loss term use the L1 norm.
In an optional implementation, the fusion processing module 1340 may be configured to:
Traverse each pixel of the edge feature image; if the pixel's gray value is 0, take the pixel value from the third image, and if the pixel's gray value is not 0, take the pixel value from the edge feature image.
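The pixel-wise selection rule just described can be sketched vectorized rather than as an explicit traversal (an implementation choice, not mandated by the source):

```python
import numpy as np

def fuse_by_edge_mask(edge, third):
    """Where the edge feature image is 0, keep the reconstructed third image;
    elsewhere keep the edge feature image's value."""
    return np.where(edge == 0, third, edge)
```

Smooth regions thus come from the super-resolved third image, while edge pixels retain the detail of the high-resolution edge feature image.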
In an optional implementation, the image acquisition module 1310 may further be configured to determine the first resolution and the second resolution according to current exposure parameters.
In an optional implementation, the current exposure parameters include at least one of current sensitivity parameters, shutter parameters, and ambient illumination parameters.
The specific details of the modules/units of the above apparatus 1200 and apparatus 1300 have already been described in detail in the method embodiments; for details not disclosed here, reference may be made to the method embodiments, and they are therefore not repeated.
Exemplary embodiments of the present disclosure further provide a computer-readable storage medium, which may be implemented in the form of a program product including program code. When the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Method" section of this specification. The program product may take the form of a portable compact disc read-only memory (CD-ROM) containing the program code, and may run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program which can be used by, or in combination with, an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium, capable of sending, propagating, or transmitting a program for use by, or in combination with, an instruction execution system, apparatus, or device.
The program code contained on a readable medium may be transmitted over any appropriate medium, including but not limited to wireless, wired, optical cable, RF, or any suitable combination of the above.
Program code for carrying out the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a standalone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computing device (for example, over the Internet via an Internet service provider).
Exemplary embodiments of the present disclosure further provide a terminal device capable of implementing the above method. A terminal device 1400 according to such an exemplary embodiment of the present disclosure is described below with reference to FIG. 14. The terminal device 1400 shown in FIG. 14 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 14, the terminal device 1400 may take the form of a general-purpose computing device. The components of the terminal device 1400 may include, but are not limited to: at least one processing unit 1410, at least one storage unit 1420, a bus 1430 connecting the different system components (including the storage unit 1420 and the processing unit 1410), a display unit 1440, and an image sensor 1470 for capturing images.
The storage unit 1420 stores program code, which can be executed by the processing unit 1410 so that the processing unit 1410 performs the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Method" section of this specification. For example, the processing unit 1410 may perform any one or more of the method steps in FIG. 1, FIG. 3, FIG. 6, FIG. 8, FIG. 9, or FIG. 10.
The storage unit 1420 may include readable media in the form of volatile storage units, such as a random access storage unit (RAM) 1421 and/or a cache storage unit 1422, and may further include a read-only storage unit (ROM) 1423.
The storage unit 1420 may also include a program/utility 1424 having a set of (at least one) program modules 1425, such program modules 1425 including but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The bus 1430 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus structures.
The terminal device 1400 may also communicate with one or more external devices 1500 (such as a keyboard, a pointing device, or a Bluetooth device), with one or more devices that enable a user to interact with the terminal device 1400, and/or with any device (such as a router or modem) that enables the terminal device 1400 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 1450. In addition, the terminal device 1400 may communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 1460. As shown in the figure, the network adapter 1460 communicates with the other modules of the terminal device 1400 through the bus 1430. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the terminal device 1400, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Through the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here may be implemented in software, or in software combined with the necessary hardware. Accordingly, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes instructions that cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to perform the method according to the exemplary embodiments of the present disclosure.
Furthermore, the above drawings are merely schematic illustrations of the processing included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It is easy to understand that the processing shown in the above drawings does not indicate or limit the temporal order of these processes. It is also easy to understand that these processes may be performed, for example, synchronously or asynchronously in multiple modules.
It should be noted that, although several modules or units of a device for action execution are mentioned in the detailed description above, this division is not mandatory. In fact, according to exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided among multiple modules or units.
Those skilled in the art will understand that the various aspects of the present disclosure may be implemented as a system, a method, or a program product. Accordingly, the various aspects of the present disclosure may take the following forms: an entirely hardware implementation, an entirely software implementation (including firmware, microcode, etc.), or an implementation combining hardware and software, which may collectively be referred to herein as a "circuit", "module", or "system". Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

  1. An image fusion method, applied to a terminal device equipped with an image sensor, characterized in that the method comprises:
    acquiring a first image and a second image captured by the image sensor, the first image being at a first resolution, the second image being at a second resolution, and the first resolution being higher than the second resolution;
    extracting an edge feature image from the first image;
    performing super-resolution reconstruction on the second image to obtain a third image, the third image being at the first resolution; and
    fusing the edge feature image with the third image to obtain a final image.
  2. The method according to claim 1, characterized in that the image sensor comprises a quad-Bayer image sensor.
  3. The method according to claim 2, characterized in that acquiring the first image and the second image captured by the quad-Bayer image sensor comprises:
    capturing, through the quad-Bayer image sensor, a raw Bayer image based on a quad-Bayer color filter array;
    performing remosaic processing and demosaic processing on the raw Bayer image to obtain the first image, the first image being at the first resolution;
    merging each group of four adjacent same-color pixels in the raw Bayer image into one pixel, and demosaicing the binned Bayer image to obtain the second image, the second image being at the second resolution;
    wherein the first resolution is four times the second resolution.
  4. The method according to claim 1, characterized in that the image sensor comprises a first image sensor and a second image sensor, the first image sensor having a higher pixel count than the second image sensor;
    wherein acquiring the first image and the second image captured by the image sensor comprises:
    acquiring the first image captured by the first image sensor and the second image captured by the second image sensor.
  5. The method according to claim 1, characterized in that extracting the edge feature image from the first image comprises:
    constructing a noise separation function for the first image using a Fourier transform, so as to obtain a denoising constraint term;
    constructing a difference constraint term from the gray-level gradient differences of the first image in the x direction and the y direction; and
    building a constraint function from the denoising constraint term and the difference constraint term, and obtaining the edge feature image corresponding to the first image by solving for the minimum of the constraint function.
  6. The method according to claim 5, characterized in that the denoising constraint term comprises:
    a first sub-constraint term, which is the L2 norm of the noise separation function.
  7. The method according to claim 5, characterized in that the difference constraint term comprises:
    a second sub-constraint term, which is the L1 norm of the gray-level gradient difference of the first image in the x direction; and
    a third sub-constraint term, which is the L1 norm of the gray-level gradient difference of the first image in the y direction.
  8. The method according to claim 5, characterized in that constructing the difference constraint term from the gray-level gradient differences of the first image in the x direction and the y direction comprises:
    converting the first image to grayscale;
    obtaining an x-direction gradient difference image and a y-direction gradient difference image of the first image; and
    constructing the difference constraint term from the x-direction gradient difference image and the y-direction gradient difference image.
  9. The method according to claim 1, characterized in that the second image comprises multiple consecutive frames of the second image.
  10. The method according to claim 9, characterized in that performing super-resolution reconstruction on the second image to obtain the third image comprises:
    processing the multiple consecutive frames of the second image with a super-resolution reconstruction algorithm to obtain the third image.
  11. The method according to claim 10, characterized in that processing the multiple consecutive frames of the second image with the super-resolution reconstruction algorithm to obtain the third image comprises:
    selecting one frame of the second image as a reference frame, and computing motion change parameters between the other frames of the second image and the reference frame;
    registering at least two frames of the second image according to the motion change parameters, and processing the complementary non-redundant information between the at least two frames of the second image to determine interpolation parameters; and
    interpolating the reference frame with the interpolation parameters to obtain the third image at the first resolution.
  12. The method according to claim 10, characterized in that processing the multiple consecutive frames of the second image with the super-resolution reconstruction algorithm to obtain the third image comprises:
    processing the multiple consecutive frames of the second image with a multi-channel-input convolutional neural network model, and outputting the corresponding third image.
  13. The method according to claim 1, characterized in that fusing the edge feature image with the third image to obtain the final image comprises:
    defining an independent variable for the final image;
    constructing a first sub-loss term from the difference between the third image and the independent variable;
    constructing a second sub-loss term from the difference between the edge feature image and the gray-level gradient difference of the independent variable in the x direction, and constructing a third sub-loss term from the difference between the edge feature image and the gray-level gradient difference of the independent variable in the y direction; and
    building a loss function from the first sub-loss term, the second sub-loss term, and the third sub-loss term, and obtaining the final image by solving for the minimum of the loss function.
  14. The method according to claim 13, characterized in that the first sub-loss term uses the L2 norm, and the second sub-loss term and the third sub-loss term use the L1 norm.
  15. The method according to claim 1, characterized in that fusing the edge feature image with the third image to obtain the final image comprises:
    traversing each pixel of the edge feature image; if the pixel's gray value is 0, taking the pixel value from the third image, and if the pixel's gray value is not 0, taking the pixel value from the edge feature image.
  16. The method according to any one of claims 1 to 15, characterized in that the method further comprises:
    determining the first resolution and the second resolution according to current exposure parameters.
  17. The method according to claim 16, characterized in that the current exposure parameters comprise at least one of current sensitivity parameters, shutter parameters, and ambient illumination parameters.
  18. An image fusion apparatus, configured in a terminal device equipped with an image sensor, characterized in that the apparatus comprises a processor, wherein the processor is configured to execute the following program modules stored in a memory:
    an image acquisition module, configured to acquire a first image and a second image captured by the image sensor, the first image being at a first resolution, the second image being at a second resolution, and the first resolution being higher than the second resolution;
    an edge extraction module, configured to extract an edge feature image from the first image;
    an image reconstruction module, configured to perform super-resolution reconstruction on the second image to obtain a third image, the third image being at the first resolution; and
    a fusion processing module, configured to fuse the edge feature image with the third image to obtain a final image.
  19. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 17.
  20. A terminal device, characterized in that it comprises:
    a processor;
    a memory, configured to store executable instructions of the processor; and
    an image sensor;
    wherein the processor is configured to perform the method according to any one of claims 1 to 17 by executing the executable instructions.
PCT/CN2020/122679 2019-11-01 2020-10-22 Image fusion method and apparatus, and storage medium and terminal device WO2021083020A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911057665.5A CN112767290B (en) 2019-11-01 2019-11-01 Image fusion method, image fusion device, storage medium and terminal device
CN201911057665.5 2019-11-01

Publications (1)

Publication Number Publication Date
WO2021083020A1 true WO2021083020A1 (en) 2021-05-06

Family

ID=75691921

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/122679 WO2021083020A1 (en) 2019-11-01 2020-10-22 Image fusion method and apparatus, and storage medium and terminal device

Country Status (2)

Country Link
CN (1) CN112767290B (en)
WO (1) WO2021083020A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628134A (en) * 2021-07-28 2021-11-09 商汤集团有限公司 Image noise reduction method and device, electronic equipment and storage medium
CN113689335A (en) * 2021-08-24 2021-11-23 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
WO2023155999A1 (en) * 2022-02-18 2023-08-24 Dream Chip Technologies Gmbh Method and image processor unit for processing raw image data
CN118505827A (en) * 2024-07-16 2024-08-16 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Color video snapshot compressed sensing reconstruction method based on four Bayer arrays

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115345777A (en) * 2021-05-13 2022-11-15 南京大学 Method, apparatus and computer readable medium for imaging
CN113888411A (en) * 2021-09-29 2022-01-04 豪威科技(武汉)有限公司 Resolution improving method and readable storage medium
CN114693580B (en) * 2022-05-31 2022-10-18 荣耀终端有限公司 Image processing method and related device
CN117132629B (en) * 2023-02-17 2024-06-28 荣耀终端有限公司 Image processing method and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060291751A1 (en) * 2004-12-16 2006-12-28 Peyman Milanfar Robust reconstruction of high resolution grayscale images from a sequence of low-resolution frames (robust gray super-resolution)
CN104700379A (en) * 2014-12-29 2015-06-10 烟台大学 Remote sensing image fusion method based on multi-dimensional morphologic element analysis
CN107316274A (en) * 2017-05-10 2017-11-03 重庆邮电大学 A kind of Infrared image reconstruction method that edge is kept
CN107527321A (en) * 2017-08-22 2017-12-29 维沃移动通信有限公司 A kind of image rebuilding method, terminal and computer-readable recording medium
CN108257108A (en) * 2018-02-07 2018-07-06 浙江师范大学 A kind of super-resolution image reconstruction method and system
CN109741256A (en) * 2018-12-13 2019-05-10 西安电子科技大学 Image super-resolution rebuilding method based on rarefaction representation and deep learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9064476B2 (en) * 2008-10-04 2015-06-23 Microsoft Technology Licensing, Llc Image super-resolution using gradient profile prior
US10806334B2 (en) * 2017-02-28 2020-10-20 Verily Life Sciences Llc System and method for multiclass classification of images using a programmable light source
CN108198147B (en) * 2018-01-02 2021-09-14 昆明理工大学 Multi-source image fusion denoising method based on discriminant dictionary learning


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628134A (en) * 2021-07-28 2021-11-09 商汤集团有限公司 Image noise reduction method and device, electronic equipment and storage medium
CN113689335A (en) * 2021-08-24 2021-11-23 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN113689335B (en) * 2021-08-24 2024-05-07 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
WO2023155999A1 (en) * 2022-02-18 2023-08-24 Dream Chip Technologies Gmbh Method and image processor unit for processing raw image data
CN118505827A (en) * 2024-07-16 2024-08-16 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Color video snapshot compressed sensing reconstruction method based on four Bayer arrays

Also Published As

Publication number Publication date
CN112767290B (en) 2022-11-11
CN112767290A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
WO2021083020A1 (en) Image fusion method and apparatus, and storage medium and terminal device
JP5272581B2 (en) Image processing apparatus, imaging apparatus, image processing method, and program
WO2020192483A1 (en) Image display method and device
WO2021047345A1 (en) Image noise reduction method and apparatus, and storage medium and electronic device
TWI769725B (en) Image processing method, electronic device and computer readable storage medium
WO2021115179A1 (en) Image processing method, image processing apparatus, storage medium, and terminal device
US20220261961A1 (en) Method and device, electronic equipment, and storage medium
JP2013509093A (en) System and method for demosaic processing of image data using weighted gradient
JP2013509092A (en) System and method for detection and correction of defective pixels in an image sensor
WO2004068862A1 (en) Method for creating high resolution color image, system for creating high resolution color image and program for creating high resolution color image
EP2898473A1 (en) Systems and methods for reducing noise in video streams
JP6236259B2 (en) Image processing apparatus, image processing method, and image processing program
WO2024027287A1 (en) Image processing system and method, and computer-readable medium and electronic device
KR102083721B1 (en) Stereo Super-ResolutionImaging Method using Deep Convolutional Networks and Apparatus Therefor
WO2020215180A1 (en) Image processing method and apparatus, and electronic device
CN112750092A (en) Training data acquisition method, image quality enhancement model and method and electronic equipment
CN110958363B (en) Image processing method and device, computer readable medium and electronic device
JP2021043874A (en) Image processing apparatus, image processing method, and program
EP3834170B1 (en) Apparatus and methods for generating high dynamic range media, based on multi-stage compensation of motion
CN116208812A (en) Video frame inserting method and system based on stereo event and intensity camera
US20230034109A1 (en) Apparatus and method for interband denoising and sharpening of images
JP2008293388A (en) Image processing method, image processor, and electronic equipment comprising image processor
CN114881904A (en) Image processing method, image processing device and chip
CN116567194B (en) Virtual image synthesis method, device, equipment and storage medium
US12148124B2 (en) Apparatus and method for combined intraband and interband multi-frame demosaicing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20880520

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20880520

Country of ref document: EP

Kind code of ref document: A1