
CN111724448A - Image super-resolution reconstruction method and device and terminal equipment - Google Patents


Info

Publication number
CN111724448A
Authority
CN
China
Prior art keywords
image
images
raw images
format
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910205538.9A
Other languages
Chinese (zh)
Inventor
戴俊
张一帆
王银廷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910205538.9A
Priority to PCT/CN2020/079880 (published as WO2020187220A1)
Publication of CN111724448A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 — 2D [Two Dimensional] image generation
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 — Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of this application disclose an image super-resolution reconstruction method and apparatus and a terminal device. The method includes: acquiring N original raw images through a camera, where N is 1 or an integer greater than 1; obtaining a first image from the N raw images through a preliminary operation; inputting the N raw images into a neural network to obtain first detail information output by the neural network; and superimposing the first detail information on the first image to obtain a super-resolution image corresponding to the N raw images. By superimposing the first detail information output by the neural network on the first image obtained by the preliminary operation, a super-resolution image containing more detail information is obtained, so that a clear image can be captured with the technical solution provided by the embodiments of this application.

Description

Image super-resolution reconstruction method and device and terminal equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for super-resolution image reconstruction, and a terminal device.
Background
Thin, light portable terminal devices such as smartphones and tablet computers are typically equipped with fixed-focus lenses. A user can adjust the composition of a shot by zooming, which a fixed-focus lens implements with digital zoom to magnify the displayed scene. Digital zoom, however, usually loses image sharpness; especially at high magnification, the captured image is often unclear and fails to meet the user's needs.
In order to obtain a clear image, an image super-resolution reconstruction method can be used. Image super-resolution reconstruction is a signal-processing technique that reconstructs a high-resolution image from one or more low-resolution images, improving resolution by recovering the high-frequency detail lost during image acquisition. The resolution of the resulting image is greatly improved over the original, provides more detail, and is closer to the ideal image.
Various image super-resolution reconstruction methods exist at present. For example, Google proposed a method that performs demosaicing by multi-frame registration, filling in missing pixels using inter-frame displacement: by shifting the image one pixel to the right, down, and left in sequence, a full color image can be reconstructed. Because the pixel values are recovered from the real scene rather than interpolated, the resolution of the image improves. It should be noted that this method improves resolution in theory, but is not ideal in practice: it requires shifting the image by exactly one pixel, which is difficult to achieve accurately, so images reconstructed with it are often unsatisfactory.
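The multi-frame idea described above can be sketched in NumPy. This is an illustrative toy, not the patent's or Google's implementation: the function name, the RGGB mosaic layout, and the shift bookkeeping are all assumptions. Each frame is the same RGGB mosaic captured at a known one-pixel offset, so after aligning the frames back, every pixel has been observed through every color filter at least once.

```python
import numpy as np

def merge_shifted_bayer(frames, shifts):
    """Combine RGGB mosaics captured at known one-pixel offsets.

    Model (an assumption of this sketch): frame[i, j] samples the scene at
    (i + dy, j + dx) through the CFA at site (i, j). Rolling each frame by
    (dy, dx) aligns it with the scene, and the colour it contributes at a
    pixel is the CFA colour of the site it was captured at."""
    h, w = frames[0].shape
    cfa = np.array([[0, 1], [1, 2]])              # RGGB: channel per site
    rgb = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    for f, (dy, dx) in zip(frames, shifts):
        aligned = np.roll(f, (dy, dx), axis=(0, 1))   # undo the shift
        for y in range(2):
            for x in range(2):
                c = cfa[(y - dy) % 2, (x - dx) % 2]   # colour seen here
                rgb[y::2, x::2, c] += aligned[y::2, x::2]
                count[y::2, x::2, c] += 1
    return rgb / np.maximum(count, 1)
```

With the four shifts (0,0), (0,1), (1,0), (1,1), every pixel collects one red, two green, and one blue observation, which is exactly why the method needs accurate single-pixel shifts.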
Therefore, how to enable a terminal device with a fixed-focus lens to capture clear images is a problem that urgently needs to be solved.
Disclosure of Invention
Embodiments of this application provide an image super-resolution reconstruction method and apparatus and a terminal device, so that the reconstructed image contains more detail information and is clearer.
In a first aspect, an embodiment of the present application provides an image super-resolution reconstruction method, including: acquiring N original raw images through a camera, where N is 1 or an integer greater than 1; obtaining a first image from the N raw images through a preliminary operation, where the first image is a color three-channel image; inputting the N raw images into a neural network to obtain first detail information output by the neural network; and superimposing the first detail information on the first image to obtain a super-resolution image corresponding to the N raw images.
The first detail information may be either a data set or an image representing the image details.
In a specific implementation, a user may trigger a photographing instruction by tapping a photographing key on the display screen; after the photographing instruction is obtained, the camera acquires the N raw images. It should be noted that, while previewing the scene on the display interface before shooting, the user may perform a zoom operation: spreading two fingers apart on the display screen increases the zoom magnification, and pinching them together decreases it.
In some possible embodiments, acquiring the N raw images through the camera may include: cropping the N images acquired by the camera according to the zoom magnification corresponding to the photographing instruction, to obtain N raw images corresponding to the zoom magnification.
In some possible embodiments, the preliminary operation may include: performing an interpolation operation on any one of the N images to be processed; or performing an interpolation operation on a first noise-reduced image, where the first noise-reduced image is obtained by performing multi-frame noise reduction on the N images to be processed.
In some possible embodiments, the neural network may be obtained by training a neural network to be trained, taking N simulated raw images as input and taking the difference between the high-definition image and the super-resolution image corresponding to the N simulated raw images obtained by the neural network to be trained as the loss function. The N simulated raw images are generated by degrading the high-definition image according to the data format of raw images. The super-resolution image corresponding to the N simulated raw images is obtained by superimposing second detail information, output by the neural network to be trained for the N simulated raw images, on a second image obtained from the N simulated raw images through the preliminary operation.
In some possible embodiments, the format of the N raw images is the bayer format or the quadra format. If the format of the N raw images is the quadra format, obtaining a first image from the N raw images through a preliminary operation includes: converting the N raw images into N bayer-format images through binning, and performing the preliminary operation on the N converted bayer-format images to obtain the first image. If the format of the N raw images is the quadra format, the N simulated raw images are quadra-format images, and the second image is obtained by converting the N simulated raw images into the bayer format through binning and performing the preliminary operation on the converted bayer-format images.
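The binning step can be illustrated with a minimal sketch (function name and array layout are assumptions of this sketch, not the patent's implementation): in the quadra layout each bayer color site is expanded into a 2x2 block of same-color pixels, so averaging every 2x2 block yields a bayer mosaic at half the resolution.

```python
import numpy as np

def quadra_to_bayer(quadra):
    """Bin a quadra mosaic down to a bayer mosaic.

    Every 2x2 block of the quadra mosaic holds same-colour pixels,
    so its mean becomes one bayer pixel at half the resolution."""
    h, w = quadra.shape
    return quadra.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

For example, a 4x4 quadra mosaic bins to a 2x2 bayer mosaic, with each output pixel the average of one same-color block.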
With the technical solution provided by the embodiments of this application, the first detail information output by the neural network is superimposed on the first image obtained by the preliminary operation, yielding a super-resolution image that contains more detail information; a clear image can therefore be obtained.
In a second aspect, an embodiment of the present application provides an image super-resolution reconstruction apparatus, including: an acquisition unit, configured to acquire N original raw images through a camera, where N is 1 or an integer greater than 1; a first processing unit, configured to obtain a first image from the N raw images through a preliminary operation, where the first image is a color three-channel image; a second processing unit, configured to input the N raw images into a neural network to obtain first detail information output by the neural network; and a superposition unit, configured to superimpose the first detail information on the first image to obtain a super-resolution image corresponding to the N raw images.
In some possible embodiments, the neural network is obtained by training a neural network to be trained by taking N simulated raw images as input and taking the difference between a super-resolution image and a high-definition image corresponding to the N simulated raw images obtained by the neural network to be trained as a loss function; the N simulated raw images are generated by performing degradation processing on the high-definition image according to the data format of the raw images; the super-resolution images corresponding to the N simulated raw images are obtained by superposing second detail information corresponding to the N simulated raw images output by the neural network to be trained and second images obtained according to the N simulated raw images, and the second images are obtained through the preliminary operation according to the N simulated raw images.
In some possible embodiments, the preliminary operation comprises: performing an interpolation operation on any one of the N images to be processed; or performing an interpolation operation on a first noise-reduced image, where the first noise-reduced image is obtained by performing multi-frame noise reduction on the N images to be processed.
In some possible embodiments, the format of the N raw images is the bayer format or the quadra format. If the format of the N raw images is the quadra format, the first processing unit, in obtaining a first image from the N raw images through a preliminary operation, is configured to: convert the N raw images into N bayer-format images through binning, and perform the preliminary operation on the N converted bayer-format images to obtain the first image. If the format of the N raw images is the quadra format, the N simulated raw images are quadra-format images, and the second image is obtained by converting the N simulated raw images into the bayer format through binning and performing the preliminary operation on the converted bayer-format images.
In some possible embodiments, the acquisition unit is specifically configured to crop the N images acquired by the camera according to the zoom magnification corresponding to the photographing instruction, to obtain N raw images corresponding to the zoom magnification.
In a third aspect, an embodiment of the present application provides a terminal device, including a camera, a processor, and a memory, where the camera is configured to acquire N original raw images after the processor obtains a photographing instruction, where N is 1 or an integer greater than 1; the memory is configured to store a computer program operable on the processor; and the processor is configured to perform some or all of the steps of the method described in the first aspect or any possible embodiment of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing instructions and a computer program corresponding to a neural network, where the instructions, when executed on a terminal device, cause the terminal device to perform the method described in the first aspect or any possible embodiment of the first aspect.
In a fifth aspect, an embodiment of the present application provides a terminal device, including a camera, a processor, and a neural network unit; the camera is used for acquiring N original raw images after the processor acquires a photographing instruction, wherein N is 1 or an integer greater than 1; the neural network unit is used for taking the N raw images as input to obtain first detail information; the processor is configured to perform some or all of the steps of the method as described in the first aspect or any possible embodiment of the first aspect.
In a sixth aspect, the present application provides a computer-readable storage medium, which stores a computer program, where the computer program includes program instructions, which, when executed by a processor, cause the processor to perform some or all of the steps of the method according to the first aspect or any possible embodiment of the first aspect.
In a seventh aspect, the present application provides a computer program product, where the computer program product includes a computer-readable storage medium storing a computer program, and the computer program causes a computer to execute some or all of the steps of the method described in the first aspect or any possible embodiment of the first aspect.
With the technical solution provided by the embodiments of this application, the first detail information output by the neural network is superimposed on the first image obtained by the preliminary operation, yielding a super-resolution image that contains more detail information; a clear image can therefore be obtained.
Drawings
Fig. 1 is a schematic flowchart of an image super-resolution reconstruction method according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an image super-resolution reconstruction apparatus according to another embodiment of the present application.
Fig. 3A is a schematic structural diagram of a terminal device according to another embodiment of the present application.
Fig. 3B is a schematic structural diagram of a terminal device according to another embodiment of the present application.
Fig. 4 is a schematic structural diagram of a terminal device according to another embodiment of the present application.
Fig. 5A is a low-resolution image obtained by the super-resolution processing unit according to an embodiment of the present application.
Fig. 5B is a detailed diagram of the red channel in the detail image corresponding to fig. 5A.
FIG. 5C is a detail diagram of the green channel in the detail image corresponding to FIG. 5A.
Fig. 5D is a detailed diagram of the blue channel in the detail image corresponding to fig. 5A.
Fig. 5E is a high-resolution image obtained by image super-resolution reconstruction according to this embodiment.
Fig. 6A is a partial schematic diagram of an original image in a bayer format in an embodiment of the present application.
Fig. 6B is a partial schematic diagram of an original image in the quadra format in an embodiment of the present application.
Fig. 6C is a partial schematic diagram of the conversion of the partial schematic diagram of the quadra format shown in fig. 6B to the bayer format.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The image super-resolution reconstruction method provided by the embodiments of this application includes the following steps: acquiring N original raw images through a camera, where N is 1 or an integer greater than 1; obtaining a first image from the N raw images through a preliminary operation, where the first image is a color three-channel image; inputting the N raw images into a neural network to obtain first detail information output by the neural network; and superimposing the first detail information on the first image to obtain a super-resolution image corresponding to the N raw images.
In some possible embodiments, before the N raw images are acquired, a photographing instruction may be obtained; the image acquired by the camera is then cropped according to the zoom magnification corresponding to the photographing instruction, and the cropped image corresponding to the zoom magnification is used as a raw image.
In the embodiments of the present application, the terminal device may be a thin, light portable device equipped with a fixed-focus lens, for example: mobile phones (or "cellular" phones) with camera functions, smartphones, portable wearable devices (such as smartwatches), tablet computers, personal computers (PCs), personal digital assistants (PDAs), and the like.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image super-resolution reconstruction method according to an embodiment of the present application, in which the method executed by a processor may include the following steps:
Step 101: after a photographing instruction is obtained, crop the N images acquired by the camera according to the zoom magnification corresponding to the photographing instruction, to obtain N raw images corresponding to the zoom magnification, where N is 1 or an integer greater than 1.
In some possible embodiments, the photographing instruction may be triggered by the user tapping a photographing key on the display screen. Before shooting, while previewing the scene on the display interface, the user may perform a zoom operation: spreading two fingers apart on the display screen increases the zoom magnification, and pinching them together decreases it. The N raw images acquired by the camera in real time can then be cropped according to the zoom magnification; for example, at a zoom magnification of two, each of the N images is cropped about the center of the currently displayed image, halving the horizontal and vertical edges. It should be noted that the number of frames the camera acquires may be preset for a given terminal device model before it leaves the factory, and may be determined by the device's performance: a low-performance device may be set to acquire one frame, a better-performing device 4 frames, a still better one 6 frames, and so on; the number may be preset according to device performance or other criteria.
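The center crop for a given zoom magnification described in step 101 can be sketched as follows (the function name and integer rounding are illustrative assumptions; at 2x zoom it keeps half of each edge around the image center, matching the example in the text):

```python
import numpy as np

def center_crop_for_zoom(raw, zoom):
    """Keep the central 1/zoom fraction of each edge of a raw image,
    so zoom=2 halves the horizontal and vertical edges about the centre."""
    h, w = raw.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)   # cropped height and width
    top, left = (h - ch) // 2, (w - cw) // 2
    return raw[top:top + ch, left:left + cw]
```

The cropped region would then be upscaled (here, by the super-resolution pipeline) back to the display resolution.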
Step 102: obtain a first image from the N raw images through a preliminary operation, where the first image is a color three-channel image.
In some possible embodiments, the preliminary operation includes: performing an interpolation operation on any one of the N images to be processed; or performing an interpolation operation on a first noise-reduced image, where the first noise-reduced image is obtained by performing multi-frame noise reduction on the N images to be processed.
In some possible embodiments, the preliminary operation may include one or more of the following processing steps: black level correction, lens vignetting correction, demosaicing interpolation, and the like. It should be noted that when the preliminary operation processes a single image, that image may be any one of the N raw images, or an image selected according to a preset rule, for example the clearest of the N images; this is not limited here. The way this image is selected must be consistent with the way an image is selected from the N simulated raw images for the preliminary operation when training the neural network.
In some possible embodiments, N may be an integer greater than or equal to 2, for example N = 4. The preliminary operation may include one or more of the following: multi-frame noise reduction, ghost removal, black level correction, lens vignetting correction, demosaicing interpolation, and the like.
The neural network may be obtained by training a neural network to be trained, taking N simulated raw images as input and taking the difference between the high-definition image and the super-resolution image corresponding to the N simulated raw images obtained by the neural network to be trained as the loss function. The N simulated raw images may be generated by degrading the high-definition image according to the data format of raw images. The super-resolution image corresponding to the N simulated raw images is the superposition of second detail information, output by the neural network to be trained for the N simulated raw images, and a second image obtained from the N simulated raw images through the preliminary operation.
For example, if N is 4, one clear image P is prepared in advance for training, and 4 degraded images are obtained by applying a pre-constructed degradation process to P. The degradation process may move the image randomly, for example translating it by a small random number of pixels or rotating it by a small random angle, to simulate the image degradation caused by hand shake or random displacement during photographing. A second image is then obtained from the 4 simulated raw images through the preliminary operation.
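The degradation used to synthesize a training input can be sketched as follows. This is an assumption-laden toy: the random integer translation stands in for the "random pixels in a small range" described above (a real pipeline might also rotate, blur, and add noise), and an RGGB mosaic is assumed as the raw data format.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def degrade_to_raw(hd_rgb, max_shift=2):
    """Simulate one raw training frame from a clean RGB image:
    a random integer translation (hand-shake stand-in), then
    RGGB Bayer sampling to match the raw data format."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(hd_rgb, (dy, dx), axis=(0, 1))
    h, w, _ = shifted.shape
    raw = np.empty((h, w), dtype=shifted.dtype)
    raw[0::2, 0::2] = shifted[0::2, 0::2, 0]  # R sites
    raw[0::2, 1::2] = shifted[0::2, 1::2, 1]  # G sites
    raw[1::2, 0::2] = shifted[1::2, 0::2, 1]  # G sites
    raw[1::2, 1::2] = shifted[1::2, 1::2, 2]  # B sites
    return raw
```

Calling this 4 times on the same clear image P would produce the 4 simulated raw frames for one training sample.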
Step 103: input the N raw images into the neural network to obtain the first detail information output by the neural network.
The neural network is obtained by training a neural network to be trained, taking N simulated raw images as input and taking the difference between the high-definition image and the super-resolution image corresponding to the N simulated raw images obtained by the neural network to be trained as the loss function. The N simulated raw images are generated by degrading the high-definition image according to the data format of raw images. The super-resolution image corresponding to the N simulated raw images is obtained by superimposing second detail information, output by the neural network to be trained for the N simulated raw images, on a second image obtained from the N simulated raw images through the preliminary operation.
In some possible embodiments, the loss function may be computed by directly subtracting the color or brightness values of the two images, or by squaring the difference; when the two images differ greatly, the value of the loss function is also large. Back-propagation can then deduce how to change the parameters of the neural network to be trained so that the loss becomes small enough; this is the training process of the neural network to be trained. After training, given at least one input image, the neural network can output a data set or detail image representing the image details.
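The loss just described can be sketched under the assumption of a simple per-pixel difference (the function name is illustrative; real training would minimize this with an autodiff framework rather than NumPy):

```python
import numpy as np

def sr_loss(detail, base, hd, squared=True):
    """Training loss as described above: superimpose the network's detail
    output on the preliminary (second) image, then take the mean per-pixel
    difference against the high-definition reference, optionally squared."""
    diff = (base + detail) - hd
    return float(np.mean(diff ** 2)) if squared else float(np.mean(np.abs(diff)))
```

A perfect detail prediction drives the loss to zero; a large mismatch between the superimposed result and the reference yields a large loss, which is exactly what training pushes down.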
Step 104: superimpose the first detail information on the first image to obtain the super-resolution image corresponding to the N raw images.
In some possible embodiments, the image corresponding to the first detail information may differ in size from the first image. In that case the first image may be scaled so that its size matches that of the image corresponding to the first detail information, and the image corresponding to the first detail information is then added to the scaled first image to obtain the super-resolution image.
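This scale-then-add step can be sketched as follows; nearest-neighbour resizing and the function name are assumptions of the sketch (a production pipeline would typically use bilinear or better resampling):

```python
import numpy as np

def superimpose_detail(first_image, detail):
    """Resize the first image to the detail map's size (nearest-neighbour,
    for brevity), then add the detail to obtain the super-resolution image."""
    th, tw = detail.shape[:2]
    h, w = first_image.shape[:2]
    rows = np.arange(th) * h // th   # nearest-neighbour source rows
    cols = np.arange(tw) * w // tw   # nearest-neighbour source columns
    resized = first_image[rows][:, cols]
    return resized + detail
```

With a zero detail map the result is simply the upscaled first image, which makes the role of the detail information easy to see: it is purely additive on top of the preliminary reconstruction.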
In some possible embodiments, the superimposed image may be further enhanced.
The image enhancement operation may include: gamma correction, contrast enhancement, sharpening, and the like.
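A toy post-processing chain in the order just listed can be sketched as follows; the parameter values and the unsharp-mask formulation of sharpening are illustrative assumptions, not from the patent:

```python
import numpy as np

def enhance(img, gamma=1 / 2.2, contrast=1.1, sharpen=0.5):
    """Illustrative enhancement chain on a [0, 1] image:
    gamma correction, contrast stretch about mid-grey, then
    unsharp-mask sharpening with a 4-neighbour box blur."""
    x = np.clip(img, 0.0, 1.0) ** gamma                       # gamma
    x = np.clip((x - 0.5) * contrast + 0.5, 0.0, 1.0)         # contrast
    blur = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
    return np.clip(x + sharpen * (x - blur), 0.0, 1.0)        # sharpen
```

On a flat image the sharpening term vanishes and only the tone adjustments act, which is a quick sanity check on the chain.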
According to the technical solution provided by the embodiments of this application, a super-resolution image containing more detail information can be obtained by superimposing the first detail information output by the neural network on the first image obtained by the preliminary operation; a clear image can therefore be obtained.
The embodiments of this application are mainly applied to scenarios in which, after the terminal device obtains a photographing instruction, the images acquired by the camera are processed to obtain a clear image. For ease of understanding, the following 5 specific scenarios are described in terms of the format of the original images captured by the camera, the number N of original images acquired after the photographing instruction is obtained, and the number M of images used in the preliminary operation. It should be understood that this application is not limited to the following 5 application scenarios:
the application scene 1, N equals 4, M equals 4, and the original image shot by the camera is in a bayer format.
The application scene 2, N is 4, M is 1, and the original image shot by the camera is in a bayer format.
The application scene 3, N is 4, M is 2, and the original image shot by the camera is in a bayer format.
The application scene 4, N is 1, M is 1, and the original image shot by the camera is in a bayer format.
And an application scene 5, wherein N is 4, M is 4, and an original image shot by the camera is in a quadra format.
As shown in fig. 3A, the terminal device corresponding to the above 5 scenarios may include: a camera 301, a memory 302, a processor 303, and a display unit 304. The neural network may be software, stored in the memory 302 in the form of application software or a dynamic link library; when the processor 303 receives a photographing instruction, it may call the neural network stored in the memory 302. It should be noted that in some embodiments the neural network may also exist in hardware form, that is, as a hardware structure integrating the software functions; as shown in fig. 3B, the terminal device 300 may include: a camera 301, a memory 302, a processor 303, a display unit 304, and a neural network unit 305. Apart from the form in which the neural network exists, the terminal devices of fig. 3A and 3B do not differ substantially in how they perform image super-resolution reconstruction; for ease of description, the implementation of each application scenario is described with reference to the structure of fig. 3A.
Example one
Corresponding to scenario 1: N = 4, M = 4, and the original images captured by the camera 301 are in the bayer format.
In this embodiment, after the processor 303 obtains the photographing instruction, the camera 301 acquires 4 frames of original images in real time. If the zoom magnification corresponding to the photographing instruction is 2, the processor 303 crops each original image about the center of the currently displayed image, halving the horizontal and vertical edges. Four images are obtained, denoted T1, T2, T3, and T4.
The processor 303 invokes the neural network stored in the memory 302, and the neural network outputs the detail information.
The processor 303 performs the preliminary operation on the four images T1, T2, T3, and T4: operations such as multi-frame noise reduction, ghost removal, black level correction, lens vignetting correction, demosaicing interpolation, and image scaling yield the first image T5. The neural network processes the four input images T1, T2, T3, and T4 to obtain a detail image covering the three RGB colors: a detail image for the red channel, one for the green channel, and one for the blue channel. Fig. 5A shows a first image T5 obtained with this embodiment; the first detail information output by the neural network comprises the 3 images shown in figs. 5B, 5C, and 5D, where fig. 5B is the detail diagram for the red channel corresponding to fig. 5A, fig. 5C the detail diagram for the green channel, and fig. 5D the detail diagram for the blue channel. In figs. 5B, 5C, and 5D, darker pixels indicate smaller color values and brighter pixels larger ones. The processor may then superimpose the color information of figs. 5B, 5C, and 5D on the first image shown in fig. 5A and perform image enhancement operations such as gamma correction, contrast enhancement, and sharpening to obtain the clear super-resolution image shown in fig. 5E.
It should be noted that the preliminary operation is not limited to the operations mentioned above (multi-frame noise reduction, ghost noise reduction, black level correction, lens vignetting correction, demosaic interpolation, and image scaling); it may include only some of these operations, or other image sharpening operations as well, which is not limited here.
Multi-frame noise reduction is a mature prior art. One image is selected from the plurality of acquired images as a reference frame, and the other images are registered so that their content is aligned with the reference frame. The multiple images are then averaged to remove noise. If a moving object exists in the images, its position differs from image to image; in the averaging process, the region of the moving object does not participate in averaging, and only the content of the reference frame is used there. This process is called ghost detection. Because the region detected as a ghost is not averaged, its noise remains higher, so additional noise reduction can be applied to the ghost region; existing single-frame noise reduction techniques can be used for this purpose.
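A minimal NumPy sketch of the averaging and ghost-fallback logic described above, assuming the frames are already registered; the threshold-based ghost mask is a crude illustrative stand-in for a real ghost detector:

```python
import numpy as np

def multi_frame_denoise(aligned_frames, ref_idx=0, ghost_thresh=0.1):
    """Average registered frames; where the average deviates strongly
    from the reference frame (a likely moving object), keep the
    reference content instead, as in the ghost detection described."""
    stack = np.stack([f.astype(np.float64) for f in aligned_frames])
    ref = stack[ref_idx]
    mean = stack.mean(axis=0)
    ghost = np.abs(mean - ref) > ghost_thresh   # crude ghost mask (assumption)
    return np.where(ghost, ref, mean)
```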
Image processing steps such as black level correction and lens vignetting correction are general processing performed to obtain an image with correct brightness and color. The processed bayer image is then subjected to a demosaic interpolation operation to obtain a color three-channel image (each pixel has red, green, and blue components). The demosaic operation may use any interpolation mode; generally, the simpler the demosaic interpolation, the more easily the neural network matches it and the better the effect, so in this embodiment demosaicing may be performed by bilinear interpolation.
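One common form of bilinear demosaicing can be sketched as a normalized convolution, assuming an RGGB arrangement and even image dimensions; this is a generic scheme for illustration, not necessarily the exact interpolation used in the patent:

```python
import numpy as np

def _conv3(x, k):
    """3x3 convolution with zero padding (avoids a SciPy dependency)."""
    p = np.pad(x.astype(np.float64), 1)
    h, w = x.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def bilinear_demosaic_rggb(bayer):
    """Each missing color sample is the average of its nearest
    same-color neighbors (assumed RGGB mosaic)."""
    h, w = bayer.shape
    masks = np.zeros((3, h, w))
    masks[0, 0::2, 0::2] = 1                              # red sites
    masks[1, 0::2, 1::2] = 1
    masks[1, 1::2, 0::2] = 1                              # green sites
    masks[2, 1::2, 1::2] = 1                              # blue sites
    k = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    planes = [_conv3(bayer * m, k) / np.maximum(_conv3(m, k), 1e-8)
              for m in masks]
    return np.stack(planes, axis=-1)                      # H x W x 3
```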
Example two
Corresponding to scene 2, where N is 4 and M is 1, the original image captured by the camera 301 is in bayer format.
In this embodiment, after the processor 303 obtains the photographing instruction, the camera 301 obtains 4 frames of original images in real time. If the zoom magnification corresponding to the photographing instruction is 2, the processor 303 cuts each original image, specifically taking the center of the currently displayed image as the center and halving both the horizontal and vertical sides. Four images are obtained, denoted T1, T2, T3, and T4.
The processor 303 invokes the neural network stored in the memory 302, and the neural network outputs the detail information.
The processor 303 performs a preliminary operation on T1, such as black level correction, lens vignetting correction, demosaic interpolation, and image scaling, to obtain the first image T5. The neural network performs a learning process on the four images T1, T2, T3, and T4 input by the processor 303 and outputs a detail image; the detail image corresponds to the three RGB colors (red, green, and blue) and includes a detail image corresponding to the red channel, a detail image corresponding to the green channel, and a detail image corresponding to the blue channel. Tests of the technical solution provided in this embodiment show that a clear image can be obtained.
It should be noted that when M is 1, the image on which the preliminary operation is performed may be an image randomly selected from T1, T2, T3, and T4, or an image selected according to another criterion, for example, the sharpest of T1, T2, T3, and T4. In this embodiment, the preliminary operation may omit multi-frame operations such as multi-frame noise reduction and ghost noise reduction; it is not limited to the above-mentioned black level correction, lens vignetting correction, demosaic interpolation, and image scaling, and may include only some of these operations or other single-frame image sharpening operations, which is not limited here.
When M is 1, the method of determining the image on which the preliminary operation is performed during image super-resolution construction is the same as the method used during training of the neural network to be trained. For example, if the processor performs the preliminary operation on the sharpest of the four images T1, T2, T3, and T4 during super-resolution image construction, the neural network to be trained also performs the preliminary operation on the sharpest of the 4 input images during training.
EXAMPLE III
Corresponding to scene 3, where N is 4 and M is 2, the original image captured by the camera is in bayer format.
In this embodiment, after the processor 303 obtains the photographing instruction, the camera 301 obtains 4 frames of original images in real time. If the zoom magnification corresponding to the photographing instruction is 2, the processor 303 cuts each original image, specifically taking the center of the currently displayed image as the center and halving both the horizontal and vertical sides. Four images are obtained, denoted T1, T2, T3, and T4.
The processor 303 performs a preliminary operation on the two images T1 and T3, such as multi-frame noise reduction, ghost noise reduction, black level correction, lens vignetting correction, demosaic interpolation, and image scaling, to obtain a low-definition image T5. The neural network learns from the four images T1, T2, T3, and T4 input by the processor 303 to obtain a detail image; the detail image corresponds to the three RGB colors (red, green, and blue) and includes a detail image corresponding to the red channel, a detail image corresponding to the green channel, and a detail image corresponding to the blue channel. Tests of the technical solution provided in this embodiment show that a clear image can be obtained.
When M is 2, the images on which the processor performs the preliminary operation may be two images randomly selected from T1, T2, T3, and T4, or two images selected according to another criterion, for example, the two sharpest of T1, T2, T3, and T4; how the images are determined may be set as needed and is not limited here. It should be noted that the method of determining T1 and T3 is consistent with the method of determining the images for the preliminary operation when the neural network to be trained is trained.
In this embodiment, the preliminary operation is not limited to the operations mentioned above (multi-frame noise reduction, ghost noise reduction, black level correction, lens vignetting correction, demosaic interpolation, and image scaling); it may include only some of these operations, or other image sharpening operations as well, which is not limited here.
Example four
Corresponding to scene 4, where N is 1 and M is 1, the original image captured by the camera 301 is in bayer format.
In this embodiment, after the processor 303 obtains the photographing instruction, the camera 301 obtains 1 frame of original image in real time. If the zoom magnification corresponding to the photographing instruction is 2, the processor 303 cuts the original image, specifically taking the center of the currently displayed image as the center and halving both the horizontal and vertical sides. An image T1 is obtained.
The processor 303 performs a preliminary operation on T1, such as black level correction, lens vignetting correction, demosaic interpolation, and image scaling, to obtain an image T5. The processor inputs T1 into the neural network, which performs the learning process to obtain a detail image; the detail image corresponds to the three RGB colors (red, green, and blue) and includes a detail image corresponding to the red channel, a detail image corresponding to the green channel, and a detail image corresponding to the blue channel. Tests of the technical solution provided in this embodiment show that a clear super-resolution image can be obtained.
It should be noted that, in this embodiment, the preliminary operation may omit multi-frame operations such as multi-frame noise reduction and ghost noise reduction; it is not limited to the above-mentioned black level correction, lens vignetting correction, demosaic interpolation, and image scaling, and may include only some of these operations or other single-frame image sharpening operations, which is not limited here.
EXAMPLE five
Corresponding to scene 5, where N is 4 and M is 4, the original image captured by the camera 301 is in the quadra format.
In this embodiment, the format of the 4 original images acquired by the camera is not the conventional bayer format but a special quadra format. An image in the quadra format is characterized in that each group of four adjacent pixels (a 2×2 block) has the same color.
After the processor 303 obtains the photographing instruction, the camera 301 obtains 4 original images in the quadra format in real time. If the zoom magnification corresponding to the photographing instruction is 2, the processor 303 cuts each original image, specifically taking the center of the currently displayed image as the center and halving both the horizontal and vertical sides. Four images are obtained, denoted T1, T2, T3, and T4.
The processor 303 bins the four images T1, T2, T3, and T4 to obtain four images T1', T2', T3', and T4'. Binning means that four pixels having the same color are averaged to obtain one pixel. Fig. 6A is a partial schematic view of an original image in the bayer format; it can be seen from Fig. 6A that the colors of any two adjacent pixels differ. Fig. 6B is a partial schematic view of an original image in the quadra format, and Fig. 6C shows the result of converting the quadra-format region of Fig. 6B into the bayer format.
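The binning step described above can be sketched with a reshape-and-average, assuming a quadra mosaic whose 2×2 blocks share one color and an even image size; the function name is an illustrative assumption:

```python
import numpy as np

def quadra_to_bayer_binning(quadra):
    """Average every 2x2 block of same-color pixels into one pixel,
    turning a quadra mosaic into a half-resolution bayer mosaic."""
    h, w = quadra.shape
    blocks = quadra.astype(np.float64).reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))
```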
The processor 303 performs a preliminary operation on the binned four images T1', T2', T3', and T4', such as multi-frame noise reduction, ghost noise reduction, black level correction, lens vignetting correction, demosaic interpolation, and image scaling, to obtain an image T5'. The neural network performs a learning process on the four images T1, T2, T3, and T4 input by the processor 303 to obtain a detail image; the detail image corresponds to the three RGB colors (red, green, and blue) and includes a detail image corresponding to the red channel, a detail image corresponding to the green channel, and a detail image corresponding to the blue channel.
It should be noted that, when the neural network to be trained is trained, the image in the quadra format obtained by image degradation is also subjected to binning processing, so that the format of the image in which the neural network to be trained performs the preliminary operation during training is also in the bayer format.
It should be noted that the technical solution provided by the present application is not limited to the bayer and quadra formats; when the image sensor in the camera uses another pixel arrangement and format, the technical solution provided by the present application can still be used, with an arbitrary demosaic interpolation mode cooperating with the neural network to implement super-resolution reconstruction of the image and obtain a clear image. It should also be understood that binning is applied to the quadra-format image so that a common demosaic interpolation method for the bayer format can be used; the quadra format is therefore not limited to binning processing and may instead be handled by another demosaic interpolation method.
Referring to fig. 2, an embodiment of the present invention further provides an image super-resolution reconstruction apparatus, and as shown in fig. 2, an image super-resolution reconstruction apparatus 200 provided in the embodiment of the present invention includes: an acquisition unit 201, a first processing unit 202, a second processing unit 203, and a superimposition unit 204.
An obtaining unit 201, configured to obtain N original raw images through a camera, where N is 1 or an integer greater than 1. A first processing unit 202, configured to obtain a first image through a preliminary operation according to the N raw images. And the second processing unit 203 is configured to input the N raw images into a neural network, so as to obtain first detail information output by the neural network. And the superimposing unit 204 is configured to superimpose the first detail information on the first image to obtain a super-resolution image corresponding to the N raw images.
The neural network is obtained by taking N simulated raw images as input and training the neural network to be trained by taking the difference between the super-resolution image and the high-definition image corresponding to the N simulated raw images obtained by the neural network to be trained as a loss function; the N simulated raw images are generated by performing degradation processing on the high-definition image according to the data format of the raw images; the super-resolution images corresponding to the N simulated raw images are obtained by superposing second detail information corresponding to the N simulated raw images output by the neural network to be trained and second images obtained according to the N simulated raw images, and the second images are obtained through the preliminary operation according to the N simulated raw images.
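The training objective described above (the difference between the reconstructed super-resolution image and the high-definition target) can be sketched as an L1 loss; the function name, the [0, 1] clipping, and the choice of L1 rather than another distance are illustrative assumptions, since the patent only says "difference":

```python
import numpy as np

def reconstruction_loss(detail, second_image, hd_image):
    """L1 difference between the reconstructed super-resolution image
    (second image plus detail output) and the high-definition target."""
    sr = np.clip(second_image + detail, 0.0, 1.0)
    return float(np.mean(np.abs(sr - hd_image)))

# Hypothetical training step (pseudo form):
#   detail = net(simulated_raws)                 # second detail information
#   loss = reconstruction_loss(detail, second_image, hd_image)
#   loss drives the parameter update of the network to be trained.
```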
The preliminary operation comprises: carrying out interpolation operation on any image in the N images to be processed; or, performing interpolation operation on a first noise-reduced image, wherein the first noise-reduced image is an image obtained by performing multi-frame noise reduction on N images to be processed.
The format of the N raw images may be the Bayer (bayer) format or the quadra format. If the format of the N raw images is the quadra format, the first processing unit, in obtaining a first image according to the N raw images through a preliminary operation, is configured to: convert the N raw images into N images in the bayer format through binning processing, and perform the preliminary operation on the N images converted into the bayer format to obtain a first image. If the format of the N raw images is the quadra format, the N simulated raw images are images in the quadra format; and the second image is obtained by converting the N simulated raw images into the bayer format through binning processing and performing the preliminary operation on the N converted bayer-format simulated raw images.
The obtaining unit 201 is specifically configured to cut the N images obtained by the camera according to the zoom magnification corresponding to the photographing instruction to obtain N raw images corresponding to the zoom magnification.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present application, where the terminal device 400 includes: a radio frequency unit 410, a memory 420, an input unit 430, a camera 440, an audio circuit 450, a processor 460, an external interface 470, and a power supply 480. Among other things, the input unit 430 includes a touch screen 431 and other input devices 432, and the audio circuit 450 includes a speaker 451, a microphone 452, and a headphone jack 453. The touch screen 431 may be a display screen having a touch function. In this embodiment, the user may trigger the photographing instruction by tapping a photographing key displayed on the touch screen 431, and change the zoom magnification by changing the distance between contact points on the touch screen. When the processor 460 obtains a photographing instruction, the processor 460 cuts the N frames of original images obtained by the camera 440 according to the zoom magnification corresponding to the photographing instruction to obtain N cut images, where N is 1 or an integer greater than 1. The processor 460 performs a preliminary operation on M of the N images to obtain a first image, where M is a positive integer less than or equal to N. The processor 460 calls the neural network and inputs the N cut images to the neural network; the neural network learns from the N input images and outputs first detail information, and the processor 460 superimposes the first detail information on the first image to obtain a super-resolution image. Further, the processor 460 saves the super-resolution image to the memory 420, and when the user triggers an instruction to view the super-resolution image through the touch screen 431, the processor 460 acquires the super-resolution image from the memory 420 and displays it through the touch screen 431 as a display interface.
The neural network is a trained convolutional neural network, obtained by taking N simulated raw images as input and training the neural network to be trained with, as the loss function, the difference between the super-resolution image corresponding to the N simulated raw images obtained by the neural network to be trained and the high-definition image; the N simulated raw images are generated by performing degradation processing on the high-definition image according to the data format of the raw images; the super-resolution images corresponding to the N simulated raw images are obtained by superposing second detail information corresponding to the N simulated raw images output by the neural network to be trained and second images obtained according to the N simulated raw images, and the second images are obtained through the preliminary operation according to the N simulated raw images.
The embodiment of the present application further provides a computer-readable storage medium, where instructions and a computer program corresponding to the neural network are stored in the computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is caused to perform part of or all of the steps of the image super-resolution reconstruction method according to any one of the foregoing embodiments.
Embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to perform some or all of the steps of an image super-resolution reconstruction method.
The explanations and expressions of the technical features and the extensions of various implementation forms in the above specific method embodiments and embodiments are also applicable to the method execution in the apparatus, and are not repeated in the apparatus embodiments.
It should be understood that the division of the modules in the above apparatus is only a logical division, and in actual implementation the modules may be wholly or partially integrated into one physical entity, or may be physically separated. For example, each of the above modules may be a separately arranged processing element, or may be implemented by being integrated in a certain chip of the terminal, or may be stored in a storage element of the controller in the form of program code, with a certain processing element of the processor calling and executing the functions of each of the above modules. In addition, the modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit chip having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software. The processing element may be a general-purpose processor, such as a central processing unit (CPU), or may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs), among others.
It is to be understood that the terms "first," "second," and the like in the description and in the claims, and in the drawings, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Moreover, the terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. An image super-resolution reconstruction method, comprising:
acquiring N original raw images through a camera, wherein N is 1 or an integer greater than 1;
obtaining a first image through preliminary operation according to the N raw images, wherein the first image is a colorful three-channel image;
inputting the N raw images into a neural network to obtain first detail information output by the neural network;
and superposing the first detail information to the first image to obtain super-resolution images corresponding to the N raw images.
2. The method of claim 1,
the neural network is obtained by taking N simulated raw images as input and training the neural network to be trained by taking the difference between the super-resolution image and the high-definition image corresponding to the N simulated raw images obtained by the neural network to be trained as a loss function; the N simulated raw images are generated by performing degradation processing on the high-definition image according to the data format of the raw images; the super-resolution images corresponding to the N simulated raw images are obtained by superposing second detail information corresponding to the N simulated raw images output by the neural network to be trained and second images obtained according to the N simulated raw images, and the second images are obtained through the preliminary operation according to the N simulated raw images.
3. The method according to claim 1 or 2, wherein the preliminary operation comprises:
carrying out interpolation operation on any image in the N images to be processed;
or, performing interpolation operation on a first noise-reduced image, wherein the first noise-reduced image is an image obtained by performing multi-frame noise reduction on N images to be processed.
4. The method according to claim 2, wherein the format of the N raw images is a Bayer bayer format or a quadra format;
if the format of the N raw images is the quadra format, obtaining a first image through a preliminary operation according to the N raw images comprises: converting the N raw images into N images in a bayer format through binning processing, and performing the preliminary operation on the N images converted into the bayer format to obtain a first image;
if the format of the N pieces of raw images is the quadra format, the N pieces of simulated raw images are images in the quadra format; and the second image is obtained by converting the N simulated raw images into a bayer format through binning processing and performing the preliminary operation on the N simulated raw images in the bayer format after conversion.
5. The method according to any one of claims 1 to 4, wherein the acquiring N raw images by the camera comprises:
and cutting the N images acquired by the camera according to the zooming magnification corresponding to the photographing instruction to obtain N raw images corresponding to the zooming magnification.
6. An image super-resolution reconstruction apparatus, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring N original raw images through a camera, and N is 1 or an integer greater than 1;
the first processing unit is used for obtaining a first image through preliminary operation according to the N raw images, and the first image is a colorful three-channel image;
the second processing unit is used for inputting the N raw images into a neural network to obtain first detail information output by the neural network;
and the superposition unit is used for superposing the first detail information to the first image to obtain the super-resolution image corresponding to the N raw images.
7. The apparatus of claim 6,
the neural network is obtained by taking N simulated raw images as input and training the neural network to be trained by taking the difference between the super-resolution image and the high-definition image corresponding to the N simulated raw images obtained by the neural network to be trained as a loss function; the N simulated raw images are generated by performing degradation processing on the high-definition image according to the data format of the raw images; the super-resolution images corresponding to the N simulated raw images are obtained by superposing second detail information corresponding to the N simulated raw images output by the neural network to be trained and second images obtained according to the N simulated raw images, and the second images are obtained through the preliminary operation according to the N simulated raw images.
8. The apparatus according to claim 6 or 7, wherein the preliminary operation comprises:
carrying out interpolation operation on any image in the N images to be processed;
or, performing interpolation operation on a first noise-reduced image, wherein the first noise-reduced image is an image obtained by performing multi-frame noise reduction on N images to be processed.
9. The apparatus according to claim 7, wherein the format of the N raw images is a Bayer bayer format or a quadra format;
if the format of the N raw images is the quadra format, the first processing unit, in obtaining a first image according to the N raw images through a preliminary operation, is configured to: convert the N raw images into N images in a bayer format through binning processing, and perform the preliminary operation on the N images converted into the bayer format to obtain a first image;
if the format of the N pieces of raw images is the quadra format, the N pieces of simulated raw images are images in the quadra format; and the second image is obtained by converting the N simulated raw images into a bayer format through binning processing and performing the preliminary operation on the N simulated raw images in the bayer format after conversion.
10. The apparatus according to any one of claims 6 to 9,
the acquiring unit is specifically configured to cut the N images acquired by the camera according to the zoom magnification corresponding to the photographing instruction to obtain N raw images corresponding to the zoom magnification.
11. A terminal device, comprising a camera, a processor and a memory, wherein,
the camera is used for acquiring N original raw images after the processor acquires a photographing instruction, wherein N is 1 or an integer greater than 1;
the memory for storing a computer program operable on the processor;
the processor configured to perform the method of any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that it stores a computer program of instructions corresponding to a neural network, which instructions, when run on a terminal device, cause the terminal device to carry out the method according to any one of claims 1 to 5.
13. A terminal device, comprising a camera, a processor and a neural network unit, wherein,
the camera is used for acquiring N original raw images after the processor acquires a photographing instruction, wherein N is 1 or an integer greater than 1;
the neural network unit is used for taking the N raw images as input to obtain first detail information;
the processor configured to perform the method of any one of claims 1 to 5.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1 to 5.
15. A computer program product, characterized in that it comprises a computer-readable storage medium storing a computer program for causing a computer to perform some or all of the steps of a method according to any one of claims 1 to 5.
CN201910205538.9A 2019-03-18 2019-03-18 Image super-resolution reconstruction method and device and terminal equipment Pending CN111724448A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910205538.9A CN111724448A (en) 2019-03-18 2019-03-18 Image super-resolution reconstruction method and device and terminal equipment
PCT/CN2020/079880 WO2020187220A1 (en) 2019-03-18 2020-03-18 Image super-resolution reconstruction method and apparatus, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910205538.9A CN111724448A (en) 2019-03-18 2019-03-18 Image super-resolution reconstruction method and device and terminal equipment

Publications (1)

Publication Number Publication Date
CN111724448A true CN111724448A (en) 2020-09-29

Family

ID=72519585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910205538.9A Pending CN111724448A (en) 2019-03-18 2019-03-18 Image super-resolution reconstruction method and device and terminal equipment

Country Status (2)

Country Link
CN (1) CN111724448A (en)
WO (1) WO2020187220A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689335A (en) * 2021-08-24 2021-11-23 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN114338958A (en) * 2020-09-30 2022-04-12 华为技术有限公司 Image processing method and related equipment
WO2022141265A1 (en) * 2020-12-30 2022-07-07 华为技术有限公司 Image processing method and device
CN118474556A (en) * 2023-11-22 2024-08-09 荣耀终端有限公司 Image processing method and related equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1124530A * 1993-03-25 1996-06-12 利福图象公司 Method and system for image processing
US20130051519A1 * 2011-08-31 2013-02-28 Dong Yang Methods and apparatus for super-resolution scanning for CBCT system and cone-beam image reconstruction
CN107767343A * 2017-11-09 2018-03-06 BOE Technology Group Co., Ltd. Image processing method, processing apparatus, and processing device
CN108391060A * 2018-03-26 2018-08-10 Huawei Technologies Co., Ltd. Image processing method, image processing apparatus, and terminal
CN109360151A * 2018-09-30 2019-02-19 BOE Technology Group Co., Ltd. Image processing method and system, resolution enhancement method, and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015192316A1 (en) * 2014-06-17 2015-12-23 Beijing Kuangshi Technology Co., Ltd. Face hallucination using convolutional neural networks
US20170019615A1 (en) * 2015-07-13 2017-01-19 Asustek Computer Inc. Image processing method, non-transitory computer-readable storage medium and electrical device thereof
CN107492070B * 2017-07-10 2019-12-03 North China Electric Power University Single-image super-resolution method based on a dual-channel convolutional neural network


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114338958A * 2020-09-30 2022-04-12 Huawei Technologies Co., Ltd. Image processing method and related device
WO2022141265A1 * 2020-12-30 2022-07-07 Huawei Technologies Co., Ltd. Image processing method and device
CN113689335A * 2021-08-24 2021-11-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, electronic equipment, and computer-readable storage medium
CN113689335B * 2021-08-24 2024-05-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, electronic equipment, and computer-readable storage medium
CN118474556A * 2023-11-22 2024-08-09 Honor Device Co., Ltd. Image processing method and related device

Also Published As

Publication number Publication date
WO2020187220A1 (en) 2020-09-24

Similar Documents

Publication Publication Date Title
US20220207680A1 (en) Image Processing Method and Apparatus
CN111353948B (en) Image noise reduction method, device and equipment
JP5784587B2 (en) Method and device for image selection and combination
JP6066536B2 (en) Generation of high dynamic range images without ghosting
US20090161982A1 (en) Restoring images
JP5933105B2 (en) Image processing apparatus, imaging apparatus, filter generation apparatus, image restoration method, and program
WO2018176925A1 (en) Hdr image generation method and apparatus
WO2020187220A1 (en) Image super-resolution reconstruction method and apparatus, and terminal device
US10853926B2 (en) Image processing device, imaging device, and image processing method
TW201003566A (en) Interpolation system and method
US8861846B2 (en) Image processing apparatus, image processing method, and program for performing superimposition on raw image or full color image
CN111275653A (en) Image denoising method and device
CN111784603A (en) RAW domain image denoising method, computer device and computer readable storage medium
US20150116464A1 (en) Image processing apparatus and image capturing apparatus
CN111696039B (en) Image processing method and device, storage medium and electronic equipment
US10863148B2 (en) Tile-selection based deep demosaicing acceleration
CN113689335B (en) Image processing method and device, electronic equipment and computer readable storage medium
US20170256035A1 (en) Method and system for processing a multi-channel image
US20240029460A1 (en) Apparatus and method for performing image authentication
CN117768774A (en) Image processor, image processing method, photographing device and electronic device
EP3550818B1 (en) Demosaicing method and device
CN115471417A (en) Image noise reduction processing method, apparatus, device, storage medium, and program product
CN117440241A (en) Video processing method and device
CN117115593A (en) Model training method, image processing method and device thereof
CN117726564A (en) Image processing method, apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200929