
CN111353948B - Image noise reduction method, device and equipment - Google Patents

Image noise reduction method, device and equipment

Info

Publication number
CN111353948B
CN111353948B (application CN201811583920.5A)
Authority
CN
China
Prior art keywords
image
frame
motion vector
aligned
adjacent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811583920.5A
Other languages
Chinese (zh)
Other versions
CN111353948A (en)
Inventor
李松南
马岚
俞大海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Technology Group Co Ltd
Original Assignee
TCL Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Technology Group Co Ltd filed Critical TCL Technology Group Co Ltd
Priority to CN201811583920.5A priority Critical patent/CN111353948B/en
Publication of CN111353948A publication Critical patent/CN111353948A/en
Application granted granted Critical
Publication of CN111353948B publication Critical patent/CN111353948B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/207Analysis of motion for motion estimation over a hierarchy of resolutions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

An image denoising method includes: acquiring a multi-frame image and determining the base frame and adjacent frames it contains; calculating a motion vector for each adjacent frame relative to the base frame, and transforming the adjacent frame according to that motion vector into an aligned image registered to the base frame; and fusing the aligned images with the base frame through a convolutional neural network to obtain a denoised image. Because the denoised image is formed by automatically fusing the base frame and the aligned images through the convolutional neural network, noise can be reduced more accurately, real scene content can be effectively preserved, and the quality of the denoised image can be greatly improved.

Description

Image noise reduction method, device and equipment
Technical Field
The application belongs to the field of image processing, and particularly relates to an image noise reduction method, device and equipment.
Background
Due to the popularity of smartphones, the continuous improvement of mobile phone camera hardware, and the convenience of mobile phone photography, more and more people use mobile phones to shoot, edit and share their pictures and video content. It is therefore becoming increasingly important to improve the quality of pictures shot by mobile phones.
The quality of a picture taken by a mobile phone is affected by various factors, such as noise, resolution, sharpness and color fidelity, among which noise is a critical factor. Noise in mobile phone pictures has many sources, such as photon shot noise, dark current noise, dead pixels, fixed pattern noise and readout noise. Among these, photon shot noise is the main source; it is governed by physical law and will always exist no matter how far hardware technology develops. Therefore, algorithms are generally designed to distinguish scene content from noise, thereby reducing the noise intensity in mobile phone images.
Beyond the single-frame image denoising algorithms in current use, theoretical derivation shows that increasing the luminous flux effectively improves the signal-to-noise ratio (SNR) of the image. There are various ways to increase the luminous flux, one of which is to lengthen the exposure time. With a hand-held camera, however, a longer exposure introduces motion blur. The mainstream approach in the industry is therefore to increase the effective exposure time, and thus the SNR, through multi-frame fusion. Yet existing multi-frame fusion denoising cannot reduce noise accurately while effectively preserving real scene content.
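For the shot-noise-limited case this paragraph invokes, the relationship can be made explicit. The following is a standard derivation under a Poisson noise assumption, not a formula taken from the patent:

```latex
% Photon arrivals are Poisson-distributed, so a mean signal of S photons
% has noise standard deviation \sqrt{S}:
\mathrm{SNR} = \frac{S}{\sqrt{S}} = \sqrt{S}
% Fusing N aligned frames collects N times the light:
\mathrm{SNR}_N = \frac{N S}{\sqrt{N S}} = \sqrt{N}\,\sqrt{S}
```

That is, fusing N frames improves the SNR by a factor of √N, which is why multi-frame fusion can substitute for a longer single exposure.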
Disclosure of Invention
In view of this, embodiments of the present application provide an image denoising method, apparatus, and device, so as to solve the problems that, in the prior art, denoising by multi-frame fusion cannot reduce noise accurately and cannot effectively preserve real scene content.
A first aspect of an embodiment of the present application provides an image noise reduction method, including:
acquiring a multi-frame image, and determining a basic frame and an adjacent frame included in the multi-frame image;
calculating a motion vector of the adjacent frame according to a basic frame, and converting the adjacent frame into an aligned image aligned with the basic frame according to the motion vector;
and fusing the aligned image and the basic frame through a convolutional neural network to obtain a noise reduction image.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the step of determining a base frame included in the multi-frame image includes:
acquiring a main body in the multi-frame image;
and calculating the definition of the main body of the multi-frame image, and selecting the image frame with the highest definition as a basic frame.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the step of calculating the sharpness of the main body of the multi-frame image and selecting the image frame with the highest sharpness as the base frame includes:
converting the multi-frame image into a brightness map;
and carrying out edge filtering in a preset area around the main body center of the multi-frame image, acquiring a response average value of the edge filtering, and selecting an image with the highest response average value as a basic frame.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the step of calculating a motion vector of the neighboring frame according to a base frame, and transforming the neighboring frame into an aligned image aligned with the base frame according to the motion vector includes:
dividing a base frame and an adjacent frame into a plurality of image blocks respectively;
determining a motion vector for each image block of the neighboring frame based on the motion estimation of the block;
and rearranging the image blocks in the adjacent frames according to the motion vectors to obtain an aligned image.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the step of dividing the base frame and the adjacent frame into a plurality of image blocks includes:
performing Gaussian downsampling on the base frame and the adjacent frames multiple times to obtain versions of each frame at different resolutions;
the step of determining a motion vector for each image block of an adjacent frame based on the block-based motion estimation comprises:
performing motion estimation on the image blocks of the basic frame and the adjacent frames with the first resolution, and determining a first motion vector corresponding to the image block with the first resolution;
and propagating the first motion vector to the image blocks at a second resolution for motion estimation, and correcting the first motion vector to obtain a second motion vector, wherein the first resolution is lower than the second resolution.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, the step of fusing, by a convolutional neural network, the aligned image and the base frame to obtain a noise reduction image includes:
converting the aligned image and the base frame into single-color multi-channel images, and concatenating the channels of the same color from the aligned image and the base frame;
and fusing the concatenated images through a convolutional neural network to obtain a noise reduction image.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, before the step of fusing, by a convolutional neural network, the aligned image and the base frame to obtain a noise reduction image, the method further includes:
obtaining M sample pictures by using the same exposure parameters;
determining basic frames and adjacent frames in M sample pictures, and aligning the determined adjacent frames with the basic frames;
and selecting one or more of white balance, black level removal, lens correction, demosaicing, color space conversion, sharpening and enhancement, and performing image processing on the basic frame and the aligned image to obtain a noise reduction picture corresponding to the sample picture.
A second aspect of the embodiments of the present application provides an image noise reduction apparatus, including:
the frame image acquisition unit is used for acquiring a multi-frame image and determining a basic frame and an adjacent frame included in the multi-frame image;
an alignment unit for calculating a motion vector of the neighboring frame according to a base frame, and transforming the neighboring frame according to the motion vector into an aligned image aligned with the base frame;
and the fusion unit is used for fusing the aligned image and the basic frame through a convolutional neural network to obtain a noise reduction image.
A third aspect of embodiments of the present application provides an image noise reduction device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of the first aspects when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of the first aspects.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: a base frame and adjacent frames are determined from a multi-frame image; motion vectors of the adjacent frames are calculated from the base frame; the adjacent frames are transformed by the calculated motion vectors into aligned images; and the base frame and the aligned images are fused by a convolutional neural network to obtain the denoised image. Because the denoised image is formed by automatically fusing the base frame and the aligned images through the convolutional neural network, noise can be reduced more accurately, real scene content can be effectively preserved, and the quality of the denoised image can be greatly improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flowchart of an implementation of an image denoising method according to an embodiment of the present application;
FIG. 1a is a schematic diagram of an image noise reduction frame according to an embodiment of the present application;
fig. 2 is a schematic implementation flow chart of a method for determining a base frame according to an embodiment of the present application;
fig. 3 is a schematic implementation flow chart of a method for acquiring an alignment image according to an embodiment of the present application;
fig. 4 is a schematic implementation flow chart of a method for obtaining training samples according to an embodiment of the present application;
fig. 5 is a schematic diagram of an image noise reduction device according to an embodiment of the present application;
fig. 6 is a schematic diagram of an image noise reduction apparatus provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical solutions described in the present application, the following description is made by specific examples.
Fig. 1 is a schematic implementation flow chart of an image denoising method according to an embodiment of the present application, which is described in detail below:
in step S101, a multi-frame image is acquired, and a base frame and an adjacent frame included in the multi-frame image are determined;
specifically, the multi-frame image described in the present application may be a picture in a RAW (unprocessed) domain, or an RGB (red, green and blue) three-channel picture, or may also be a Y-channel picture.
The multi-frame images may be captured within a very short time using the camera's burst (continuous shooting) mode, and may share the same ISP (image signal processing) parameters, for example the same exposure, white balance parameters, and noise reduction or sharpening strength. The typical number of input frames N varies from 3 to 8 depending on scene brightness. The multi-frame noise reduction framework diagram in fig. 1a shows the case of three input frames; other frame counts are handled similarly.
After acquiring the multi-frame input, we need to select one base frame and then align all other frames, i.e. the neighboring frames, with it. The selected base frame may be the sharpest of all input frames, which ensures that the final output is clear. Moreover, to guarantee the sharpness of the photographed subject, we can select the frame in which the subject appears sharpest as the base frame.
The position of the subject may be determined, for example, by face detection, which locates the image area containing a face, or from the point where the user touches the screen. When the subject cannot be identified, we assume it lies at the picture center. Assuming the subject's center position has been obtained in one of these ways, the base frame determination flow is as shown in fig. 2 and includes:
in step S201, a multi-frame image is converted into a luminance map;
the luminance map may be fused by using a weighted average of a multi-frame image including a plurality of colors, for example, a bell image including GBRG four color values.
In step S202, edge filtering is performed in a predetermined area around the main body center of the multi-frame image, and a response average value of the edge filtering is obtained;
In the luminance map, a predetermined area around the subject center may be selected for edge filtering, for example with the Sobel operator, to obtain a response average value. If the subject has been identified, it may of course be edge-filtered directly. If the subject is not determined, it may be assumed to lie at the image center, and a predetermined area around that position is edge-filtered to obtain the response average; the size of the predetermined area may be set according to the size of the image, or according to the size of the photographed subject.
In step S203, an image having the highest response average value is selected as a base frame.
Because only one base frame needs to be determined from the multi-frame image, the image with the highest response average value, i.e., the one with the sharpest subject, can be selected as the base frame, which helps ensure the sharpness of the fused noise reduction image.
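The selection procedure of steps S201 to S203 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the GBRG channel weighting, the 64-pixel window size, and the Sobel kernel size are assumptions.

```python
import numpy as np
import cv2

def bayer_to_luma(raw: np.ndarray) -> np.ndarray:
    """Fuse a GBRG Bayer frame into a half-resolution luminance map via a
    weighted average of each 2x2 cell (step S201 / fig. 2)."""
    g1 = raw[0::2, 0::2].astype(np.float32)   # G
    b  = raw[0::2, 1::2].astype(np.float32)   # B
    r  = raw[1::2, 0::2].astype(np.float32)   # R
    g2 = raw[1::2, 1::2].astype(np.float32)   # G
    # Rec.601-style weights, with the green weight split over both greens.
    return 0.299 * r + 0.587 * 0.5 * (g1 + g2) + 0.114 * b

def sharpness_score(raw: np.ndarray, center: tuple, half: int = 64) -> float:
    """Mean Sobel edge response in a window around the subject center
    (center given in luminance-map coordinates)."""
    luma = bayer_to_luma(raw)
    cy, cx = center
    patch = luma[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)
    return float(np.mean(np.abs(gx) + np.abs(gy)))

def select_base_frame(frames: list, center: tuple) -> int:
    """Index of the frame whose subject region responds most strongly."""
    return int(np.argmax([sharpness_score(f, center) for f in frames]))
```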
In step S102, motion vectors of the neighboring frames are calculated according to the base frame, and the neighboring frames are transformed into aligned images aligned with the base frame according to the motion vectors;
and dividing the image blocks of the base frame and the adjacent frame, and performing motion estimation on the image blocks of the adjacent frame according to the similarity between the adjacent frame and the image blocks in the base frame to determine the motion vectors of the image blocks in the adjacent frame. After the motion vector of the image block of the adjacent frame is determined, the image block of the adjacent frame can be rearranged according to the motion vector, so that the adjacent frame is aligned with the base frame, and an aligned image is obtained.
Specifically, as shown in fig. 3, the process of generating the alignment image may specifically include the following steps:
in step S301, the base frame and the adjacent frame are divided into a plurality of image blocks, respectively;
In a preferred embodiment, the base frame and the adjacent frames may each be Gaussian-downsampled multiple times to obtain versions of every frame at several resolutions: the base frame yields a set of base-frame images at different resolutions, and each adjacent frame likewise yields a set of adjacent-frame images at different resolutions. The downsampled images at each resolution are then segmented into image blocks so that step S302 can perform block-based motion estimation.
In step S302, a motion vector of each image block of the neighboring frame is determined based on the motion estimation of the block;
after obtaining a plurality of blurred images with different resolutions in step S301, motion estimation may be performed on image blocks of a base frame and an adjacent frame with a first resolution, to determine a first motion vector corresponding to the image block with the first resolution; and transmitting the first motion vector to an image block with a second resolution for motion estimation, and correcting the first motion vector to obtain a second motion vector, wherein the first resolution is lower than the second resolution. And repeating the transmission until the original brightness map is transmitted, and correcting the original brightness map.
In this pyramid-based block alignment scheme, a Gaussian pyramid is built for each frame, yielding a series of pictures at different resolutions. Block-based motion estimation is first performed at the lowest resolution, producing for each block a motion vector that points to the most similar block in the adjacent frame. Each motion vector is then propagated to the next, higher-resolution level, where motion estimation continues in a search window centered on it and refines it. This process repeats until the motion vectors have been propagated to, and corrected at, the bottom-level original-resolution luminance map.
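A compact sketch of this coarse-to-fine estimation, assuming single-channel luminance inputs; the block size, search radius, pyramid depth and SAD cost are illustrative choices, not values fixed by the patent:

```python
import numpy as np
import cv2

def block_search(base, neigh, y, x, bs, guess, radius):
    """Refine a motion vector around `guess` by minimizing block SAD."""
    ref = base[y:y + bs, x:x + bs].astype(np.float32)
    best, best_mv = np.inf, guess
    for dy in range(guess[0] - radius, guess[0] + radius + 1):
        for dx in range(guess[1] - radius, guess[1] + radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > neigh.shape[0] or xx + bs > neigh.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(neigh[yy:yy + bs, xx:xx + bs].astype(np.float32) - ref).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

def pyramid_motion_field(base, neigh, levels=3, bs=16, radius=4):
    """One motion vector per block, estimated coarse to fine (steps S301-S302)."""
    pyr_b, pyr_n = [base], [neigh]
    for _ in range(levels - 1):
        pyr_b.append(cv2.pyrDown(pyr_b[-1]))   # Gaussian downsampling
        pyr_n.append(cv2.pyrDown(pyr_n[-1]))
    mvs = None
    for b, n in zip(reversed(pyr_b), reversed(pyr_n)):   # lowest resolution first
        gh, gw = b.shape[0] // bs, b.shape[1] // bs
        new = np.zeros((gh, gw, 2), dtype=np.int32)
        for by in range(gh):
            for bx in range(gw):
                if mvs is None:
                    guess = (0, 0)            # no coarser level to inherit from
                else:                         # scale the coarser vector up 2x
                    py = min(by // 2, mvs.shape[0] - 1)
                    px = min(bx // 2, mvs.shape[1] - 1)
                    guess = (int(mvs[py, px, 0]) * 2, int(mvs[py, px, 1]) * 2)
                new[by, bx] = block_search(b, n, by * bs, bx * bs, bs, guess, radius)
        mvs = new
    return mvs
```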
In step S303, the image blocks in the adjacent frames are rearranged according to the motion vector, so as to obtain an aligned image.
After the motion vectors are determined, the image blocks of each adjacent frame are rearranged accordingly, yielding the aligned image corresponding to that adjacent frame.
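Step S303 then amounts to copying each block from its motion-compensated position, as in this sketch, which reuses the motion field produced above and leaves border pixels not covered by the block grid unchanged:

```python
import numpy as np

def rearrange_blocks(neigh: np.ndarray, mvs: np.ndarray, bs: int = 16) -> np.ndarray:
    """Copy each block of `neigh` from its motion-compensated position."""
    aligned = neigh.copy()            # borders outside the grid stay as-is
    gh, gw = mvs.shape[:2]
    for by in range(gh):
        for bx in range(gw):
            y, x = by * bs, bx * bs
            dy, dx = int(mvs[by, bx, 0]), int(mvs[by, bx, 1])
            yy = int(np.clip(y + dy, 0, neigh.shape[0] - bs))
            xx = int(np.clip(x + dx, 0, neigh.shape[1] - bs))
            aligned[y:y + bs, x:x + bs] = neigh[yy:yy + bs, xx:xx + bs]
    return aligned
```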
The pyramid-type motion estimation method has two benefits: (1) in low-resolution pictures the signal-to-noise ratio increases, so motion estimation is less affected by noise; (2) the complexity of motion estimation is reduced while its effective search range is enlarged. Note that besides pyramid block alignment, other conventional image alignment methods, such as feature-point-based alignment or frequency-domain alignment, could be used instead. Deep-learning-based alignment also gives very good results, but at a markedly higher complexity.
After the above block alignment process, a set of motion vectors is obtained for each adjacent frame; using these motion vectors, we can rearrange the pixel positions of the adjacent frame so that the aligned image obtained after rearrangement resembles the base frame. Note that because an estimated motion vector may be inaccurate, or the motion between frames may not be describable by simple translation, the similarity between the aligned image and the base frame may be very low at some locations. In the next step, where the base frame is fused with the aligned frames, a deep learning method helps us obtain a good fusion result at these inaccurately aligned positions.
In step S103, the aligned image and the base frame are fused by a convolutional neural network, so as to obtain a noise reduction image.
The convolutional neural network may be one that has been trained in advance on sample data, or one that is still in the process of being trained.
Before fusion, the image format must be converted: the base frame and the aligned images are converted from the Bayer pattern, which mixes multiple colors, into a single-color multi-channel format; the color channels are concatenated together; and fusion is performed with a convolutional neural network (CNN) to generate an RGB three-channel image. The output RGB image has low noise and very good image quality.
It should be noted that for different input frame numbers, we can use the same image alignment method, but need to train different CNN models.
In addition, the CNN stage may be implemented with a classical convolutional neural network from which the pooling layers and fully connected layers have been removed, such as AlexNet or VGG (Visual Geometry Group network), and a skip connection may be added so that the network learns a residual, which accelerates training. Existing CNN denoising models, such as DnCNN (a feed-forward denoising convolutional neural network), can also be used to implement this processing step. Note that because multi-frame information is provided as input, the difficulty of denoising is reduced: the goal can be reached with relatively few convolution layers, which simplifies the computation of the CNN stage.
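As a concrete illustration, a DnCNN-style fusion network of the kind described here could look as follows. The depth, width, and the learned projection on the skip path are assumptions; the patent only specifies plain convolutions (no pooling or fully connected layers) plus a skip connection:

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Plain conv + ReLU stack, no pooling or fully connected layers,
    with a skip connection so the body learns a residual."""
    def __init__(self, in_frames: int = 3, width: int = 64, depth: int = 8):
        super().__init__()
        in_ch = in_frames * 4                 # 4 packed Bayer channels per frame
        layers = [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)
        # Upsample from (H/2, W/2) back to (H, W) and emit RGB.
        self.head = nn.Sequential(
            nn.ConvTranspose2d(width, width, 2, stride=2),
            nn.Conv2d(width, 3, 3, padding=1),
        )
        # Learned projection on the skip path (input/output channels differ,
        # so an identity skip is not possible -- this choice is an assumption).
        self.skip = nn.ConvTranspose2d(in_ch, 3, 2, stride=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, in_frames*4, H/2, W/2) stack of packed, aligned Bayer frames
        return self.head(self.body(x)) + self.skip(x)
```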
In the noise reduction framework shown in fig. 1a, the number of acquired frames is N=3; N may of course be any natural number greater than 3. As shown in fig. 1a, RAW-domain Bayer images of the same exposure are acquired, each with resolution H×W. One frame is set as the base frame and the other frames as adjacent frames. A RAW-domain multi-frame alignment operation is performed on each adjacent frame to align it with the base frame. After alignment, the base frame and the aligned adjacent frames are converted into the single-color multi-channel format, in which each channel has resolution (H/2)×(W/2). The channel images are then concatenated (Concat) together and processed by the convolutional neural network, producing a denoised RGB three-channel image whose color channels have the same resolution as the input frames, i.e. the three-channel image has resolution H×W.
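The packing and concatenation just described can be sketched as follows for GBRG Bayer input; the channel ordering is an illustrative choice:

```python
import numpy as np

def pack_bayer(raw: np.ndarray) -> np.ndarray:
    """(H, W) GBRG Bayer -> (4, H/2, W/2) stack of single-color channels."""
    return np.stack([raw[0::2, 0::2], raw[0::2, 1::2],
                     raw[1::2, 0::2], raw[1::2, 1::2]])

def stack_frames(base: np.ndarray, aligned: list) -> np.ndarray:
    """Concatenate packed channels of the base frame and each aligned frame
    along the channel axis, ready for the CNN."""
    return np.concatenate([pack_bayer(f) for f in [base] + list(aligned)], axis=0)
```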
Of course, before the fusion is performed by using the convolutional neural network, the method further comprises a step of training the convolutional neural network by acquiring sample images, as shown in fig. 4, wherein the image sample acquisition process comprises the following steps:
in step S401, M sample pictures are acquired using the same exposure parameters;
the same exposure parameters can be used for shooting M pictures by a handheld device such as a mobile phone, and the larger M is, the better M is, so that the accuracy of training the convolutional neural network is improved.
In step S402, a base frame and an adjacent frame in M sample pictures are determined, and the determined adjacent frame is aligned to the base frame;
the selection of the base frame and the alignment of the adjacent frames of the sample picture can be completed according to the determination mode of the base frame and the adjacent frames in the multi-frame image and the alignment mode of the adjacent frames and the base frame in fig. 1.
In step S403, one or more of white balance, black level removal, lens correction, demosaicing, color space conversion, sharpening, and enhancement are selected, and image processing is performed on the base frame and the aligned image, so as to obtain a noise reduction picture corresponding to the sample picture.
A traditional image signal processing (ISP) flow can apply white balance, black level removal, lens correction, demosaicing, color space conversion, sharpening, enhancement and other processing to the denoised RAW-domain picture, finally generating a high-quality RGB three-channel picture that serves as the target output for training the convolutional neural network.
Since preparing training data need not account for processing time, we can use very complex multi-frame fusion algorithms, white balance algorithms, demosaicing algorithms, image enhancement algorithms and so on, with a large number of input frames (e.g. M=30), to obtain an ideal output image. Multi-frame deep learning denoising algorithms, by contrast, generally use only a relatively small number of input frames because execution complexity must be considered. Our training process therefore uses a conventional image processing algorithm that takes M frames as input to supervise a deep learning algorithm that takes N frames as input (M > N). Compared with current ways of constructing image data containing real noise, this method is faster and more convenient, since no tripod is needed and no screening of the multi-frame images is required.
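Putting the pieces together, the training-pair construction of steps S401 to S403 might be sketched like this. It reuses the helper sketches above; the simple mean fusion and the run_isp helper are placeholders for the complex conventional algorithms the text mentions and are purely hypothetical:

```python
import numpy as np

def run_isp(raw: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the conventional ISP chain named in the text
    (white balance, demosaicing, color conversion, sharpening, ...)."""
    raise NotImplementedError

def make_training_pair(burst: list, n_input: int = 3):
    """burst: M same-exposure Bayer frames, M > n_input (e.g. M = 30)."""
    h, w = burst[0].shape
    center = (h // 4, w // 4)              # subject center in luma coordinates
    base_idx = select_base_frame(burst, center)      # sketched earlier
    base = burst[base_idx]
    aligned = []
    for i, frame in enumerate(burst):
        if i == base_idx:
            continue
        mvs = pyramid_motion_field(bayer_to_luma(base), bayer_to_luma(frame))
        # Luma is half the Bayer resolution: double the vectors and block size
        # for the raw domain (even offsets also preserve the Bayer phase).
        aligned.append(rearrange_blocks(frame, mvs * 2, bs=32))
    # Target: fuse ALL M frames (a mean stands in for the complex fusion),
    # then run the conventional ISP chain to get the clean RGB target.
    target_rgb = run_isp(np.mean([base] + aligned, axis=0))
    # Network input: base frame plus the first n_input - 1 aligned neighbors.
    net_input = stack_frames(base, aligned[:n_input - 1])
    return net_input, target_rgb
```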
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Fig. 5 is a schematic structural diagram of an image noise reduction device according to an embodiment of the present application, which is described in detail below:
the image noise reduction apparatus includes:
a frame image acquiring unit 501 configured to acquire a multi-frame image, and determine a base frame and an adjacent frame included in the multi-frame image;
an alignment unit 502, configured to calculate a motion vector of the neighboring frame according to a base frame, and transform the neighboring frame into an aligned image aligned with the base frame according to the motion vector;
and a fusion unit 503, configured to fuse the aligned image and the base frame through a convolutional neural network, so as to obtain a noise reduction image.
The image denoising apparatus shown in fig. 5 corresponds to the image denoising method shown in fig. 1.
Fig. 6 is a schematic diagram of an image noise reduction apparatus according to an embodiment of the present application. As shown in fig. 6, the image noise reduction device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62, such as an image noise reduction program, stored in the memory 61 and executable on the processor 60. The processor 60, when executing the computer program 62, implements the steps of the various image denoising method embodiments described above, such as steps 101 through 103 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 501 to 503 shown in fig. 5.
By way of example, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 62 in the image noise reduction device 6. For example, the computer program 62 may be divided into a frame image acquisition unit, an alignment unit and a fusion unit, each functioning specifically as follows:
the frame image acquisition unit is used for acquiring a multi-frame image and determining a basic frame and an adjacent frame included in the multi-frame image;
an alignment unit for calculating a motion vector of the neighboring frame according to a base frame, and transforming the neighboring frame according to the motion vector into an aligned image aligned with the base frame;
and the fusion unit is used for fusing the aligned image and the basic frame through a convolutional neural network to obtain a noise reduction image.
The image noise reduction device 6 may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, etc. The image noise reduction device may include, but is not limited to, a processor 60, a memory 61. It will be appreciated by those skilled in the art that fig. 6 is merely an example of the image noise reduction device 6 and is not meant to be limiting of the image noise reduction device 6, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the image noise reduction device may also include input and output devices, network access devices, buses, etc.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the image noise reduction device 6, for example, a hard disk or a memory of the image noise reduction device 6. The memory 61 may also be an external storage device of the image noise reduction device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the image noise reduction device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the image noise reduction device 6. The memory 61 is used to store the computer program and other programs and data required for the image noise reduction apparatus. The memory 61 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Each of the foregoing embodiments is described with its own emphasis; for parts not detailed or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium and, when executed by a processor, implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. An image denoising method, comprising:
acquiring a multi-frame image, and determining a basic frame and an adjacent frame included in the multi-frame image;
calculating a motion vector of the adjacent frame according to a basic frame, and converting the adjacent frame into an aligned image aligned with the basic frame according to the motion vector;
fusing the aligned image and the basic frame through a convolutional neural network to obtain a noise reduction image;
the step of calculating the motion vector of the adjacent frame according to the base frame, and transforming the adjacent frame into an aligned image aligned with the base frame according to the motion vector includes:
dividing a base frame and an adjacent frame into a plurality of image blocks respectively;
determining a motion vector for each image block of the neighboring frame based on the motion estimation of the block;
rearranging the image blocks in the adjacent frames according to the motion vectors to obtain aligned images;
the step of determining a motion vector for each image block of an adjacent frame based on the block-based motion estimation comprises:
performing motion estimation on the image blocks of the basic frame and the adjacent frames with the first resolution, and determining a first motion vector corresponding to the image block with the first resolution;
and propagating the first motion vector to the image blocks at a second resolution for motion estimation, and correcting the first motion vector to obtain a second motion vector, wherein the first resolution is lower than the second resolution.
2. The image noise reduction method according to claim 1, wherein the step of determining a base frame included in the multi-frame image includes:
acquiring a main body in the multi-frame image;
and calculating the definition of the main body of the multi-frame image, and selecting the image frame with the highest definition as a basic frame.
3. The image denoising method according to claim 2, wherein the step of calculating the sharpness of the main body of the multi-frame image and selecting the image frame with the highest sharpness as the base frame comprises:
converting the multi-frame image into a brightness map;
edge filtering is carried out on a preset area around the main body center of the multi-frame image, and a response average value of the edge filtering is obtained;
and selecting the image with the highest response average value as a basic frame.
4. A method of image denoising according to claim 3, wherein the step of dividing the base frame and the adjacent frame into a plurality of image blocks, respectively, comprises:
and performing Gaussian downsampling on the basic frame and the adjacent frames for a plurality of times to obtain images of each frame of image at different resolutions.
5. The image denoising method according to claim 1, wherein the step of fusing the aligned image and the base frame by a convolutional neural network to obtain a denoised image comprises:
converting the aligned image and the base frame into single-color multi-channel images, and concatenating the channels of the same color from the aligned image and the base frame;
and fusing the concatenated images through a convolutional neural network to obtain a noise reduction image.
6. The image denoising method according to claim 1, wherein before the step of fusing the aligned image and the base frame by a convolutional neural network to obtain a denoised image, the method further comprises:
obtaining M sample pictures by using the same exposure parameters;
determining basic frames and adjacent frames in M sample pictures, aligning the determined adjacent frames with the basic frames, wherein M is greater than the frame number of the multi-frame images;
and selecting one or more of white balance, black level removal, lens correction, demosaicing, color space conversion, sharpening and enhancement, and performing image processing on the basic frame and the aligned image to obtain a noise reduction picture corresponding to the sample picture.
7. An image noise reduction apparatus, characterized by comprising:
the frame image acquisition unit is used for acquiring a multi-frame image and determining a basic frame and an adjacent frame included in the multi-frame image;
an alignment unit for calculating a motion vector of the neighboring frame according to a base frame, and transforming the neighboring frame according to the motion vector into an aligned image aligned with the base frame;
the fusion unit is used for fusing the aligned image and the basic frame through a convolutional neural network to obtain a noise reduction image;
the alignment unit includes:
the segmentation module is used for respectively segmenting the basic frame and the adjacent frames into a plurality of image blocks;
a motion amount determining module for determining a motion vector of each image block of the neighboring frame based on motion estimation of the block;
the arrangement module is used for rearranging the image blocks in the adjacent frames according to the motion vectors to obtain aligned images;
the motion amount determination module includes:
the motion estimation sub-module is used for performing motion estimation on the image blocks of the basic frame and the adjacent frames with the first resolution, and determining a first motion vector corresponding to the image block with the first resolution;
and the motion amount correction module is used for propagating the first motion vector to the image blocks at the second resolution for motion estimation, and correcting the first motion vector to obtain the second motion vector, wherein the first resolution is lower than the second resolution.
8. An image noise reduction device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 6.
CN201811583920.5A 2018-12-24 2018-12-24 Image noise reduction method, device and equipment Active CN111353948B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811583920.5A CN111353948B (en) 2018-12-24 2018-12-24 Image noise reduction method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811583920.5A CN111353948B (en) 2018-12-24 2018-12-24 Image noise reduction method, device and equipment

Publications (2)

Publication Number Publication Date
CN111353948A CN111353948A (en) 2020-06-30
CN111353948B true CN111353948B (en) 2023-06-27

Family

ID=71195534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811583920.5A Active CN111353948B (en) 2018-12-24 2018-12-24 Image noise reduction method, device and equipment

Country Status (1)

Country Link
CN (1) CN111353948B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784733B (en) * 2020-07-06 2024-04-16 深圳市安健科技股份有限公司 Image processing method, device, terminal and computer readable storage medium
KR20230034384A (en) * 2020-08-07 2023-03-09 나노트로닉스 이미징, 인코포레이티드 Deep Learning Model for Noise Reduction in Low SNR Imaging Conditions
CN111932459A (en) * 2020-08-10 2020-11-13 Oppo广东移动通信有限公司 Video image processing method and device, electronic equipment and storage medium
CN112351271A (en) * 2020-09-22 2021-02-09 北京迈格威科技有限公司 Camera shielding detection method and device, storage medium and electronic equipment
RU2764395C1 (en) 2020-11-23 2022-01-17 Самсунг Электроникс Ко., Лтд. Method and apparatus for joint debayering and image noise elimination using a neural network
CN112488027B (en) * 2020-12-10 2024-04-30 Oppo(重庆)智能科技有限公司 Noise reduction method, electronic equipment and computer storage medium
CN114677287A (en) * 2020-12-25 2022-06-28 北京小米移动软件有限公司 Image fusion method, image fusion device and storage medium
CN112801908B (en) * 2021-02-05 2022-04-22 深圳技术大学 Image denoising method and device, computer equipment and storage medium
CN113112428A (en) * 2021-04-16 2021-07-13 维沃移动通信有限公司 Image processing method and device, electronic equipment and readable storage medium
CN113469908B (en) * 2021-06-29 2022-11-18 展讯通信(上海)有限公司 Image noise reduction method, device, terminal and storage medium
CN113628134B (en) * 2021-07-28 2024-06-14 商汤集团有限公司 Image noise reduction method and device, electronic equipment and storage medium
US20230097592A1 (en) * 2021-09-30 2023-03-30 Waymo Llc Systems, Methods, and Apparatus for Aligning Image Frames
CN114331902B (en) * 2021-12-31 2022-09-16 英特灵达信息技术(深圳)有限公司 Noise reduction method and device, electronic equipment and medium
CN117716705A (en) * 2022-06-20 2024-03-15 北京小米移动软件有限公司 Image processing method, image processing device and storage medium
CN117616455A (en) * 2022-06-20 2024-02-27 北京小米移动软件有限公司 Multi-frame image alignment method, multi-frame image alignment device and storage medium
CN115187491B (en) * 2022-09-08 2023-02-17 阿里巴巴(中国)有限公司 Image denoising processing method, image filtering processing method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600538A (en) * 2016-12-15 2017-04-26 武汉工程大学 Human face super-resolution algorithm based on regional depth convolution neural network
CN107292850B (en) * 2017-07-03 2019-08-02 北京航空航天大学 A kind of light stream parallel acceleration method based on Nearest Neighbor Search
CN107680043B (en) * 2017-09-29 2020-09-22 杭州电子科技大学 Single image super-resolution output method based on graph model
CN108898567B (en) * 2018-09-20 2021-05-28 北京旷视科技有限公司 Image noise reduction method, device and system

Also Published As

Publication number Publication date
CN111353948A (en) 2020-06-30

Similar Documents

Publication Publication Date Title
CN111353948B (en) Image noise reduction method, device and equipment
US9591237B2 (en) Automated generation of panning shots
US10708525B2 (en) Systems and methods for processing low light images
KR102306283B1 (en) Image processing method and device
US9117134B1 (en) Image merging with blending
EP4150559A1 (en) Machine learning based image adjustment
US20220138964A1 (en) Frame processing and/or capture instruction systems and techniques
CN110602467B (en) Image noise reduction method and device, storage medium and electronic equipment
CN111194458A (en) Image signal processor for processing image
US20090161982A1 (en) Restoring images
CN107395991B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN113850367B (en) Network model training method, image processing method and related equipment thereof
CN105427263A (en) Method and terminal for realizing image registering
CN107704798B (en) Image blurring method and device, computer readable storage medium and computer device
CN110930301A (en) Image processing method, image processing device, storage medium and electronic equipment
WO2021179764A1 (en) Image processing model generating method, processing method, storage medium, and terminal
CN113628134B (en) Image noise reduction method and device, electronic equipment and storage medium
CN107633497A (en) A kind of image depth rendering intent, system and terminal
CN113962859A (en) Panorama generation method, device, equipment and medium
CN113379609B (en) Image processing method, storage medium and terminal equipment
CN110838088B (en) Multi-frame noise reduction method and device based on deep learning and terminal equipment
CN113689335B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2021145913A1 (en) Estimating depth based on iris size
CN107454328B (en) Image processing method, device, computer readable storage medium and computer equipment
US20230319401A1 (en) Image capture using dynamic lens positions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: TCL Technology Group Co.,Ltd.

Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District

Applicant before: TCL Corp.

GR01 Patent grant