
CN111275626B - Video deblurring method, device and equipment based on ambiguity - Google Patents

Video deblurring method, device and equipment based on ambiguity

Info

Publication number
CN111275626B
CN111275626B
Authority
CN
China
Prior art keywords
frame
pixel point
blurred
optical flow
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811477483.9A
Other languages
Chinese (zh)
Other versions
CN111275626A (en)
Inventor
王雪松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Weibo Technology Co ltd
Original Assignee
Shenzhen Weibo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Weibo Technology Co ltd filed Critical Shenzhen Weibo Technology Co ltd
Priority to CN201811477483.9A priority Critical patent/CN111275626B/en
Publication of CN111275626A publication Critical patent/CN111275626A/en
Application granted granted Critical
Publication of CN111275626B publication Critical patent/CN111275626B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention is applicable to the technical fields of computer vision and image processing, and discloses a video deblurring method, device and equipment based on ambiguity, comprising the following steps: calculating the ambiguity of the video frames; determining clear frames and blurred frames according to the ambiguity; generating a reference frame from the clear frame and the blurred frame; extracting image blocks from the blurred frame and the reference frame; performing weighted fusion according to the weights corresponding to the pixel points in the image blocks to obtain fused image blocks; and recombining the fused image blocks to obtain an output image. Because this embodiment does not need to estimate a blur kernel, but instead calculates the ambiguity of the video frames and determines the clear frames and blurred frames according to the ambiguity, the computational complexity is effectively reduced and the computation speed is improved; moreover, the weights of the reference frame are taken into account, and weighted fusion is performed according to the weights corresponding to the pixel points in the extracted image blocks, so that the finally output image has higher definition.

Description

Video deblurring method, device and equipment based on ambiguity
Technical Field
The invention belongs to the technical field of computer vision and image processing, and particularly relates to a video deblurring method, device and equipment based on ambiguity.
Background
A video sequence is affected by factors such as equipment shake, uneven roads while driving, and hand tremor; these change the pose of the video capture device or disturb its motion, producing irregular movement, so that the video picture obtained after imaging is blurred. Blurred video not only gives a very poor viewing experience, but also hinders viewing the video and extracting useful information from it; therefore, blurred video requires deblurring processing.
Currently, methods for video deblurring mainly rely on a blur kernel. Depending on whether the blur kernel is known, they can be classified into non-blind deblurring and blind deblurring. Non-blind deblurring must be performed with the blur kernel known, but for video shot in different scenes the blur kernel cannot be known in advance. Blind deblurring, on the other hand, requires estimating the blur kernel, and that estimation requires a large number of operations, making the computational complexity too high.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a video deblurring method, apparatus and terminal device based on ambiguity, so as to solve the problem in the prior art that the computational complexity of video deblurring is too high.
A first aspect of an embodiment of the present invention provides a video deblurring method based on ambiguity, including:
calculating the ambiguity of the video frame;
determining a clear frame and a blurred frame according to the blurring degree;
generating a reference frame according to the clear frame and the blurred frame;
extracting image blocks from the blurred frame and the reference frame;
performing weighted fusion according to the weights corresponding to the pixel points in the image block to obtain a fused image block;
and recombining the fused image blocks to obtain an output image.
A second aspect of an embodiment of the present invention provides a video deblurring apparatus based on ambiguity, including:
the ambiguity calculating module is used for calculating the ambiguity of the video frame;
the clear frame and fuzzy frame determining module is used for determining a clear frame and a fuzzy frame according to the fuzziness;
the reference frame generation module is used for generating a reference frame according to the clear frame and the fuzzy frame;
the image block extraction module is used for extracting the image blocks of the blurred frame and the reference frame;
the weighting fusion module is used for carrying out weighting fusion according to the weights corresponding to the pixel points in the image block to obtain a fused image block;
and the image block reorganization module is used for reorganizing the fused image blocks to obtain an output image.
A third aspect of an embodiment of the present invention provides a video deblurring device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method of the first aspect when executing the computer program.
A fourth aspect of an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects: because this embodiment does not need to estimate a blur kernel, but calculates the ambiguity of the video frames and determines the clear frames and blurred frames according to the ambiguity, the computational complexity is effectively reduced and the computation speed is improved; moreover, the weights of the reference frame are considered, and weighted fusion is performed according to the weights corresponding to the pixel points in the extracted image blocks, so that the finally output image has higher definition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments or in the description of the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a video deblurring method based on ambiguity provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of specific steps for generating a reference frame from the sharp frame and the blurred frame;
FIG. 3 shows the image blocks obtained by extracting image blocks from an image;
FIG. 4 is a schematic diagram of a video deblurring apparatus based on blur degree according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a video deblurring apparatus according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Embodiment one:
referring to fig. 1, fig. 1 is a schematic flowchart of a video deblurring method based on ambiguity according to an embodiment of the present invention, and is described in detail as follows:
step S101: the ambiguity of the video frame is calculated.
Preferably, before calculating the ambiguity of the video frames, the method includes: acquiring the video to be processed. After the video to be processed is acquired, it is parsed into video frames at its frame rate, for example 50 fps.
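By way of illustration only (the following sketch is not part of the original disclosure), this parsing step can be prototyped in Python with OpenCV; the function name and input path are assumptions:

```python
import cv2

def load_frames(video_path):
    """Parse a video file into a list of frames (illustrative sketch)."""
    cap = cv2.VideoCapture(video_path)  # video_path is a hypothetical input
    frames = []
    while True:
        ok, frame = cap.read()  # reads one decoded frame per call
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames
```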
Specifically, the calculating the ambiguity of the video frame specifically includes:
carrying out graying treatment on the video frame to obtain a gray image;
filtering the gray level image to obtain a filtered image;
and calculating the variance of the filtered image to obtain the ambiguity of the video frame.
In general, since the video frame is a color image, in order to reduce the amount of computation, it is necessary to perform a grayscale process on the video frame, that is, to convert the color image into a grayscale image. Of course, if the video frame itself is a grayscale image, the graying process is not required.
The grayscale image obtained after the graying treatment is then filtered in order to compute the image edge gradients. In general, Laplacian filtering is selected to filter the grayscale image, and the filtered image is obtained after the Laplacian filtering.
The variance of each filtered image may be calculated according to the variance function:

D(f) = Σ_y Σ_x |f(x, y) − μ|²

In the above formula, D(f) is the calculated variance, f(x, y) is the gray value, i.e. the pixel value, of a certain pixel in the image, μ is the average gray value of the whole image, x is the abscissa of the pixel, and y is the ordinate of the pixel. The variance is used to represent the blur of the filtered image, giving the ambiguity of the video frame.
It should be noted that, since a sharp image has larger edge gradients than a blurred image, the variance and the ambiguity are inversely correlated; that is, the larger the variance, the smaller the blur of the corresponding filtered image, i.e. the higher the sharpness, while the smaller the variance, the greater the blur, i.e. the lower the sharpness.
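As a minimal sketch (not part of the original disclosure) of this variance-of-Laplacian blur measure, assuming 8-bit BGR input frames:

```python
import cv2

def ambiguity(frame):
    """Variance of the Laplacian-filtered grayscale image.
    Larger variance => sharper frame; smaller variance => more blur."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # graying treatment
    lap = cv2.Laplacian(gray, cv2.CV_64F)           # Laplacian filtering
    # var() applies a 1/N factor; up to that constant this is
    # D(f) = sum_y sum_x |f(x, y) - mu|^2
    return lap.var()
```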
Step S102: and determining a clear frame and a blurred frame according to the blurring degree.
Specifically, the determining the clear frame and the blurred frame according to the blur degree specifically includes:
sequencing the video frames according to the ambiguity to obtain a sequenced image sequence;
and selecting a plurality of video frames with larger blur degree from the image sequence as blur frames, and selecting a plurality of video frames with smaller blur degree as clear frames.
The video frames are ranked according to the obtained ambiguity; the ranking may be from large to small or from small to large. For illustration, in this embodiment the frames are ranked by ambiguity from large to small, the a video frames ranked highest are selected as the blurred frames, and the b video frames ranked lowest are selected as the clear frames. Both a and b may be set as needed; for convenience of explanation, in this embodiment a is 3 and b is 2, that is, the three video frames in the top 3 of the ambiguity ranking are selected as blurred frames, and the two video frames in the bottom 2 are selected as clear frames.
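A sketch of this selection step (illustrative only), assuming the ambiguity() helper above and the example values a = 3 and b = 2:

```python
def split_frames(frames, a=3, b=2):
    """Rank frames by blur and pick the a most blurred and b sharpest."""
    # Smaller Laplacian variance means more blur, so ascending variance
    # orders frames from most blurred to sharpest.
    order = sorted(range(len(frames)), key=lambda i: ambiguity(frames[i]))
    blurred = [frames[i] for i in order[:a]]  # a most blurred frames
    clear = [frames[i] for i in order[-b:]]   # b sharpest frames
    return clear, blurred
```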
Step S103: and generating a reference frame according to the clear frame and the blurred frame.
Preferably, before the reference frame is generated from the clear frame and the blurred frame, a downsampling operation is performed on the clear frame and the blurred frame in order to reduce the amount of calculation and increase the processing speed, that is, the resolution of the images is reduced. The principle of downsampling is as follows: assuming the size of an image is M×N (M and N are numbers of pixels), downsampling the image at a sampling rate s results in an image of resolution (M/s)×(N/s).
Referring to fig. 2, the specific steps of generating a reference frame from the clear frame and the blurred frame are as follows:
A1. calculating the forward optical flow and the backward optical flow between two adjacent frames by using an optical flow method, wherein the two adjacent frames comprise a clear frame and a blurred frame;
A2. taking a certain pixel point of the blurred frame as the current pixel point, wherein the forward optical flow refers to the forward motion vector of the current pixel point pointing from the blurred frame to the clear frame;
A3. according to the forward optical flow, finding the pixel point of the clear frame to which the forward optical flow of the current pixel point points, namely the forward pixel point, wherein the backward optical flow refers to the backward motion vector of the forward pixel point pointing from the clear frame to the blurred frame;
A4. according to the backward optical flow, finding the pixel point of the blurred frame to which the backward optical flow of the forward pixel point points, namely the backward pixel point;
A5. calculating the position error between the current pixel point and the backward pixel point;
A6. generating a reference frame according to the position error.
Further, the step A6 specifically includes the following steps:
B1. constructing a mask according to the position error, specifically: if the position error is smaller than a first preset value, marking the current pixel point as 1, otherwise marking the current pixel point as 0;
B2. if the position error is smaller than a second preset value, the forward pixel point is used as a reference point of the current pixel point;
B3. generating a reference alignment frame according to the reference point;
B4. and generating a reference frame according to the mask, the blurred frame and the reference alignment frame.
The optical flow method used is the TVL1 optical flow method. Various optical flow methods are currently in use, such as the TVL1, LK, ctfLK and HS methods; since the TVL1 optical flow method achieves better results when objects are occluded, the optical flow method adopted in this embodiment is the TVL1 method.
The TVL1 optical flow method is used to calculate the forward optical flow and the backward optical flow between two adjacent frames, where the two adjacent frames comprise a clear frame and a blurred frame; the blurred frame serves as the backward reference frame and the clear frame as the forward reference frame. Alignment of the images can be achieved by the TVL1 optical flow method: feature points are extracted from the blurred frame, and feature points similar to them are then found at the corresponding positions of the clear frame.
A certain pixel point of the blurred frame, say point A, is selected as the current pixel point; the forward optical flow refers to the forward motion vector of the current pixel point pointing from the blurred frame to the clear frame. The forward optical flow can be calculated by the TVL1 optical flow method, and then, according to the forward optical flow, the pixel point of the clear frame to which the forward optical flow of point A points, namely the forward pixel point, is found; assume the forward pixel point is point B.
The backward optical flow refers to the backward motion vector of the forward pixel point, i.e. point B, pointing from the clear frame to the blurred frame. The backward optical flow is calculated by the TVL1 optical flow method, and then, according to the backward optical flow, the pixel point of the blurred frame to which the backward optical flow of point B points, namely the backward pixel point, is found; assume the backward pixel point is point C.
It should be noted that, after the forward optical flow and the backward optical flow are obtained, an upsampling operation is also required on the images: to reduce the amount of computation and increase the processing speed, a downsampling operation was first performed on the clear frame and the blurred frame, which shrank the images, and the shrunken images must now be restored to their original size, that is, upsampled. Upsampling is typically achieved by inserting new elements between the pixels of the original image and can be done in a variety of ways; for better results, bicubic spline interpolation may be used, while bilinear interpolation may be used instead if faster computation is desired.
As can be seen from the above, both point A and point C are located on the blurred frame, while point B is located on the clear frame. In order to align the feature points of the blurred frame and the clear frame, the position error of points A and C is calculated. Assuming the position of point A on the blurred frame is (x₁, y₁) and the position of point C on the blurred frame is (x₂, y₂), the position error of points A and C can be calculated by the following equation:

e = √((x₁ − x₂)² + (y₁ − y₂)²)

In the above equation, e is the calculated position error.
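For illustration (not part of the original disclosure), the forward-backward position error can be computed densely as in the following sketch; it assumes 8-bit grayscale inputs and the TVL1 implementation from opencv-contrib-python:

```python
import cv2
import numpy as np

def fb_position_error(blurred_gray, clear_gray):
    """Per-pixel forward-backward consistency error between two frames."""
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()
    fwd = tvl1.calc(blurred_gray, clear_gray, None)  # blurred -> clear
    bwd = tvl1.calc(clear_gray, blurred_gray, None)  # clear -> blurred

    h, w = blurred_gray.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # forward pixel point B = A + forward flow at A
    bx, by = xs + fwd[..., 0], ys + fwd[..., 1]
    # sample the backward flow at B (bilinear), then C = B + backward flow at B
    bwd_x = cv2.remap(bwd[..., 0], bx, by, cv2.INTER_LINEAR)
    bwd_y = cv2.remap(bwd[..., 1], bx, by, cv2.INTER_LINEAR)
    cx, cy = bx + bwd_x, by + bwd_y
    # e = sqrt((x1 - x2)^2 + (y1 - y2)^2) between A and C
    return np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
```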
And then constructing a mask according to the position error, specifically: and if the position error is smaller than a first preset value, marking the current pixel point as 1, otherwise marking the current pixel point as 0.
The first preset value may be set as required, for example, may be set to 1e-3 or other values, which are not limited herein.
The mask referred to above is a selected image used to occlude parts of the processed image and thereby control the region in which image processing takes place.
After the mask is obtained, Gaussian blur needs to be applied to it so that the transition between the replaced pixel points and the surrounding non-replaced pixel points is smooth and the fusion is not abrupt.
The reference point is then determined in the following specific manner: and if the position error is smaller than a second preset value, taking the forward pixel point as a reference point of the current pixel point.
The second preset value may be set as needed, for example, 1e-4, that is, 0.0001.
When the position error is smaller than the second preset value, the current pixel point A and the backward pixel point C can be approximately regarded as the same pixel point, and then the current pixel point A and the forward pixel point B are the same pixel point positioned on different images, so that the point B is taken as a reference point of the point A. However, if the calculated position error is greater than or equal to the second preset value, it indicates that the point a and the point C are not the same pixel point, i.e. the point a has no reference point.
After determining the reference point, generating a reference alignment frame according to the reference point, wherein the specific method is as follows: the reference point is replaced with the current pixel point, and if the current pixel point has no reference point, 0 may be used to replace the current pixel point.
A reference frame is then generated from the mask, the blurred frame and the reference alignment frame. The specific method is: taking the mask as the weight, the blurred frame and the reference alignment frame are weighted and fused, and the reference frame can be obtained according to the following formula:

I_ref = cMap · I_m + (1 − cMap) · I_M

In the above formula, cMap is the mask, I_m is the generated reference alignment frame, I_M is the blurred frame, and I_ref is the generated reference frame.
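The mask construction, Gaussian smoothing and weighted fusion of steps B1–B4 could then be sketched as follows (illustrative only; the kernel size is an assumption, and the reference alignment frame is assumed to have been built beforehand, e.g. by warping the clear frame with the forward flow):

```python
import cv2
import numpy as np

def build_reference_frame(blurred, aligned, err, t1=1e-3):
    """Fuse the blurred frame and the reference alignment frame using a
    Gaussian-blurred mask as the per-pixel weight.
    aligned: reference alignment frame; err: forward-backward position error."""
    cmap = (err < t1).astype(np.float32)        # 1 where alignment is trusted
    cmap = cv2.GaussianBlur(cmap, (15, 15), 0)  # smooth mask transitions
    cmap = cmap[..., None]                      # broadcast over color channels
    # reference frame = cMap * aligned + (1 - cMap) * blurred
    return cmap * aligned.astype(np.float32) + (1 - cmap) * blurred.astype(np.float32)
```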
Step S104: and extracting image blocks from the blurred frame and the reference frame.
And combining the blurred frame and the reference frame to form a frame set to be processed, and extracting image blocks from each frame to be processed in the frame set.
For each frame to be processed, a plurality of image blocks are extracted; for example, if the size of an image is 512×512 and the size of an extracted image block is 128×128, 16 image blocks can be extracted from the image, as shown in fig. 3.
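A sketch of this non-overlapping block extraction (illustrative only; it assumes the image dimensions divide evenly by the block size, as in the 512×512 example):

```python
def extract_blocks(img, bs=128):
    """Split an image into non-overlapping bs x bs blocks;
    a 512x512 image yields 16 blocks of 128x128."""
    h, w = img.shape[:2]
    return [img[y:y + bs, x:x + bs]
            for y in range(0, h, bs)
            for x in range(0, w, bs)]
```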
Step S105: and carrying out weighted fusion according to the weights corresponding to the pixel points in the image block to obtain the fused image block.
Specifically, the weighting and fusing are performed according to weights corresponding to pixel points in the image block, so as to obtain a fused image block, which specifically includes:
performing a fast Fourier transform (FFT) on the pixel points in the image block to obtain the FFT values of the pixel points;
calculating the weight of the pixel point, specifically: carrying out Gaussian blur on the FFT value to obtain a blurred FFT value, and taking the 11th power of the blurred FFT value as the weight of the pixel point;
and then weighting and fusing the pixel points at the same position in all the image blocks according to the following formula:

F_fused = (Σ_m W_m · F_m) / (Σ_m W_m + ε)

wherein W_m is the weight of the pixel point, F_m is the FFT value of the pixel point, m is the index of the pixel point, ε is a value for preventing the denominator from being zero, and F_fused is the obtained weighted fusion value;
performing an inverse Fourier transform (IFFT) on the weighted fusion value to obtain the fused pixel value of the pixel point;
and combining all the pixel points to obtain a fused image block.
Further, performing fourier FFT on the pixel points in the image block to obtain an FFT value of the pixel points, which specifically includes:
respectively calculating FFT component values corresponding to three channels of red R, green G and blue B of the pixel point;
and averaging the FFT component values to obtain the FFT value of the pixel point.
The ambiguity of an image is positively correlated with the attenuation of the Fourier coefficients of the image; that is, when the ambiguity of the image is large, its Fourier coefficients are strongly attenuated; when the ambiguity is small, the Fourier coefficients are only slightly attenuated; and when the image is not blurred, its Fourier coefficients are not attenuated. Therefore, the image can be transformed into the frequency domain by the FFT and the weight of the pixel points of each image block calculated in the frequency domain, which effectively enlarges the weight values of the pixel points from the clear frames, so that the finally output restored image is clearer.
Since the pixel value of each pixel is composed of the components of the three channels red R, green G and blue B, the FFT component values corresponding to the R, G and B channels of the pixel must be calculated separately. Since the image is two-dimensional, the calculated FFT component values are complex numbers. Assuming the three FFT component values are a₁ + jb₁, a₂ + jb₂ and a₃ + jb₃, the FFT value of the pixel point can be calculated according to the following equation:

F = ((a₁ + a₂ + a₃) + j(b₁ + b₂ + b₃)) / 3

In the above equation, F is the FFT value of the pixel point.
The weight of the pixel point is then calculated, specifically: Gaussian blur is applied to the FFT value to obtain a blurred FFT value, and the 11th power of the blurred FFT value is taken as the weight of the pixel point.
The pixel points at the same position in all the image blocks are then weighted and fused according to the following formula:

F_fused = (Σ_m W_m · F_m) / (Σ_m W_m + ε)

wherein W_m is the weight of the pixel point, F_m is the FFT value of the pixel point, m is the index of the pixel point, ε is a value for preventing the denominator from being zero, and F_fused is the obtained weighted fusion value.
Here, ε may be set as needed, for example to 1e-8; its purpose is to prevent the denominator in the above formula from being zero, which would make the next step of the calculation impossible.
After the weighted fusion value is obtained, since it is a result calculated in the frequency domain, an inverse Fourier transform (IFFT) must be applied to it to obtain the corresponding time-domain value, i.e. the fused pixel value of the pixel point. The fused pixel value is the final pixel value of the weighted-fused pixel point, and combining all the pixel points yields the fused image block.
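For illustration only (not part of the original disclosure), the frequency-domain fusion of co-located blocks might be sketched as below; applying the shared weight to every color channel is our reading of the text, and the Gaussian kernel size is an assumption:

```python
import cv2
import numpy as np

def fuse_blocks(blocks, eps=1e-8):
    """Weighted frequency-domain fusion of co-located blocks from all frames.
    blocks: list of HxWx3 float arrays, one per frame to be processed."""
    ffts, weights = [], []
    for blk in blocks:
        # per-channel 2-D FFT of the block
        f = np.stack([np.fft.fft2(blk[..., c]) for c in range(3)], axis=-1)
        # average the R, G, B FFT component values -> FFT value per pixel
        f_avg = f.mean(axis=-1)
        # weight: Gaussian-blurred FFT magnitude raised to the 11th power
        # (float64 keeps the 11th power in range; magnitudes may also be
        # normalized first in practice)
        mag = cv2.GaussianBlur(np.abs(f_avg), (5, 5), 0)
        ffts.append(f)
        weights.append((mag ** 11)[..., None])  # broadcast over channels
    # weighted fusion: sum(W_m * F_m) / (sum(W_m) + eps)
    fused = sum(w * f for w, f in zip(weights, ffts)) / (sum(weights) + eps)
    # inverse FFT back to the spatial domain; real part = fused pixel values
    return np.stack([np.real(np.fft.ifft2(fused[..., c])) for c in range(3)],
                    axis=-1)
```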
Step S106: and recombining the fused image blocks to obtain an output image.
The image blocks obtained after fusion are recombined, that is, all the image blocks are spliced together to obtain the output image, which is the deblurred clear image. As shown in fig. 3, the original image is divided into 16 image blocks for the weighted fusion processing; after the processing, the image blocks need to be spliced back together to obtain the output restored image.
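And a matching sketch of the recombination step (illustrative only; the inverse of the block extraction above):

```python
import numpy as np

def reassemble(blocks, h, w, bs=128):
    """Stitch fused blocks back into an h x w x 3 output image."""
    out = np.zeros((h, w, 3), dtype=np.float32)
    it = iter(blocks)
    for y in range(0, h, bs):
        for x in range(0, w, bs):
            out[y:y + bs, x:x + bs] = next(it)
    return out
```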
In this embodiment, the computational complexity is effectively reduced by calculating the ambiguity of the video frames and determining the clear frames and blurred frames according to the ambiguity, so that the computation speed is improved; moreover, the weights of the reference frame are considered, and weighted fusion is performed according to the weights corresponding to the pixel points in the extracted image blocks, so that the finally output image has higher definition.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation process of the embodiments of the present invention.
Embodiment two:
fig. 4 is a schematic diagram of a video deblurring apparatus based on ambiguity according to an embodiment of the present invention, where the apparatus includes: the blur level calculation module 41, the clear and blurred frame determination module 42, the reference frame generation module 43, the image block extraction module 44, the weighted fusion module 45 and the image block reorganization module 46.
Wherein the ambiguity calculating module 41 is configured to calculate the ambiguity of the video frame.
Further, the ambiguity calculating module 41 specifically includes:
a graying processing unit 411, configured to perform graying processing on the video frame to obtain a gray image;
a filtering unit 412, configured to filter the gray-scale image to obtain a filtered image;
and a variance calculating unit 413, configured to calculate a variance of the filtered image, so as to obtain the ambiguity of the video frame.
A clear and blurred frame determination module 42 for determining a clear frame and a blurred frame from said blur.
Still further, the clear frame and blurred frame determination module 42 specifically includes:
a ambiguity ranking unit 421, configured to rank the video frames according to the ambiguities, so as to obtain a ranked image sequence;
a blurred and clear frame selection unit 422, configured to select, from the image sequence, a plurality of video frames with greater blur as blurred frames and a plurality of video frames with less blur as clear frames.
A reference frame generating module 43, configured to generate a reference frame according to the clear frame and the blurred frame.
Further, the reference frame generating module 43 further includes:
an optical flow calculation unit 431 for calculating a forward optical flow and a backward optical flow between two adjacent frames by using an optical flow method; wherein the two adjacent frames comprise a clear frame and a blurred frame;
a current pixel point selecting unit 432, configured to take a certain pixel point of the blurred frame as a current pixel point, where the forward optical flow refers to a forward motion vector of the current pixel point from the blurred frame to the clear frame;
a forward pixel point calculating unit 433, configured to find, according to the forward optical flow, the pixel point of the clear frame to which the forward optical flow of the current pixel point points, namely the forward pixel point; the backward optical flow refers to the backward motion vector of the forward pixel point pointing from the clear frame to the blurred frame;
a backward pixel point calculating unit 434, configured to find, according to the backward optical flow, the pixel point of the blurred frame to which the backward optical flow of the forward pixel point points, namely the backward pixel point;
a position error calculating unit 435 configured to calculate a position error of the current pixel point and the backward pixel point;
a reference frame generating unit 436 for generating a reference frame based on the position error.
Further, the reference frame generating unit 436 specifically includes:
mask construction subunit 4361, configured to construct a mask according to the position error, specifically: if the position error is smaller than a first preset value, marking the current pixel point as 1, otherwise marking the current pixel point as 0;
a reference point determining subunit 4362, configured to take the forward pixel point as a reference point of the current pixel point if the position error is smaller than a second preset value;
a reference alignment frame generation subunit 4363, configured to generate a reference alignment frame according to the reference point;
a reference frame generation subunit 4364, configured to generate a reference frame according to the mask, the blurred frame, and the reference alignment frame.
And an image block extraction module 44, configured to perform image block extraction on the blurred frame and the reference frame.
And the weighted fusion module 45 is configured to perform weighted fusion according to weights corresponding to pixel points in the image block, so as to obtain a fused image block.
Further, the weighted fusion module 45 specifically includes:
an FFT conversion unit 451, configured to perform fourier FFT on pixel points in the image block, so as to obtain FFT values of the pixel points;
the weight calculating unit 452, configured to calculate the weight of the pixel point, specifically: carrying out Gaussian blur on the FFT value to obtain a blurred FFT value, and taking the 11th power of the blurred FFT value as the weight of the pixel point;
the weighted fusion unit 453 is configured to perform weighted fusion on the pixel points in the same position in each image block according to the following equation:
F_fused = (Σ_m W_m · F_m) / (Σ_m W_m + ε)

wherein W_m is the weight of the pixel point, F_m is the FFT value of the pixel point, m is the index of the pixel point, ε is a value for preventing the denominator from being zero, and F_fused is the obtained weighted fusion value;
the IFFT transformation unit 454 is configured to perform inverse fourier IFFT on the weighted fusion value to obtain a fused pixel value of the pixel point;
the pixel point combining unit 455 is configured to combine all the pixel points to obtain a fused image block.
And the image block reorganizing module 46 is configured to reorganize the fused image blocks to obtain an output image.
Embodiment III:
Fig. 5 is a schematic diagram of a video deblurring device according to an embodiment of the present invention. As shown in fig. 5, the video deblurring device 5 of this embodiment includes: a processor 50, a memory 51 and a computer program 52 stored in the memory 51 and executable on the processor 50, for example an ambiguity-based video deblurring program. When executing the computer program 52, the processor 50 implements the steps of the above embodiments of the ambiguity-based video deblurring method, such as steps S101 to S106 shown in fig. 1. Alternatively, when executing the computer program 52, the processor 50 implements the functions of the modules/units in the above apparatus embodiments, such as the functions of modules 41 to 46 shown in fig. 4.
By way of example, the computer program 52 may be partitioned into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 52 in the video deblurring device 5. For example, the computer program 52 may be partitioned into an ambiguity calculation module, a clear frame and blurred frame determination module, a reference frame generation module, an image block extraction module, a weighted fusion module, and an image block reorganization module, each of which function specifically as follows:
the ambiguity calculating module is used for calculating the ambiguity of the video frame;
the clear frame and fuzzy frame determining module is used for determining a clear frame and a fuzzy frame according to the fuzziness;
the reference frame generation module is used for generating a reference frame according to the clear frame and the fuzzy frame;
the image block extraction module is used for extracting the image blocks of the blurred frame and the reference frame;
the weighting fusion module is used for carrying out weighting fusion according to the weights corresponding to the pixel points in the image block to obtain a fused image block;
and the image block reorganization module is used for reorganizing the fused image blocks to obtain an output image.
The video deblurring device 5 may be a computing device such as a desktop computer, a notebook computer, a palm computer, and a cloud server. The video deblurring device may include, but is not limited to, a processor 50, a memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the video deblurring device 5 and is not limiting of the video deblurring device 5, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the video deblurring device may also include input-output devices, network access devices, buses, etc.
The processor 50 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the video deblurring device 5, such as a hard disk or a memory of the video deblurring device 5. The memory 51 may also be an external storage device of the video deblurring device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the video deblurring device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the video deblurring device 5. The memory 51 is used for storing the computer program as well as other programs and data required by the video deblurring device. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the descriptions of the embodiments have different emphases; for parts not described or detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by means of a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, it implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium may be appropriately added or deleted according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (9)

1. A method for deblurring video based on ambiguity, comprising:
calculating the ambiguity of the video frame;
determining a clear frame and a blurred frame according to the blurring degree;
calculating a forward optical flow and a backward optical flow between two adjacent frames by using an optical flow method; wherein the two adjacent frames comprise a clear frame and a blurred frame;
taking a certain pixel point of the blurred frame as a current pixel point, wherein the forward optical flow refers to a forward motion vector of the current pixel point from the blurred frame to the clear frame;
according to the forward optical flow, finding the pixel point of the clear frame to which the forward optical flow of the current pixel point points, namely the forward pixel point, wherein the backward optical flow refers to the backward motion vector of the forward pixel point pointing from the clear frame to the blurred frame;
according to the backward optical flow, finding the pixel point of the blurred frame to which the backward optical flow of the forward pixel point points, namely the backward pixel point;
calculating the position errors of the current pixel point and the backward pixel point;
generating a reference frame according to the position error;
extracting image blocks from the blurred frame and the reference frame;
performing weighted fusion according to the weights corresponding to the pixel points in the image block to obtain a fused image block;
and recombining the fused image blocks to obtain an output image.
2. The method of claim 1, wherein calculating the ambiguity of the video frame specifically comprises:
carrying out graying treatment on the video frame to obtain a gray image;
filtering the gray level image to obtain a filtered image;
and calculating the variance of the filtered image to obtain the ambiguity of the video frame.
3. The method of claim 2, wherein said determining a clear frame and a blurred frame from said blur comprises:
sequencing the video frames according to the ambiguity to obtain a sequenced image sequence;
and selecting from the image sequence the a video frames ranked highest in ambiguity as the blurred frames, and the b video frames ranked lowest as the clear frames, wherein a and b are positive integers greater than or equal to 1.
4. The method according to claim 1, wherein the generating a reference frame according to the position error specifically comprises:
and constructing a mask according to the position error, wherein the mask is specifically: if the position error is smaller than a first preset value, marking the current pixel point as 1, otherwise marking the current pixel point as 0;
if the position error is smaller than a second preset value, the forward pixel point is used as a reference point of the current pixel point;
generating a reference alignment frame according to the reference point;
and generating a reference frame according to the mask, the blurred frame and the reference alignment frame.
5. The method of claim 1, wherein the weighting and fusing are performed according to weights corresponding to pixel points in the image block, so as to obtain a fused image block, and specifically include:
performing Fourier FFT (fast Fourier transform) on pixel points in the image block to obtain FFT values of the pixel points;
calculating the weight of the pixel point, specifically: carrying out Gaussian blur on the FFT value to obtain a blurred FFT value, and taking the 11th power of the blurred FFT value as the weight of the pixel point;
and then weighting and fusing all pixel points in the same position in all the image blocks according to the following formula:
F_fused = (Σ_m W_m · F_m) / (Σ_m W_m + ε)

wherein W_m is the weight of the pixel point, F_m is the FFT value of the pixel point, m is the index of the pixel point, ε is a value for preventing the denominator from being zero, and F_fused is the obtained weighted fusion value;
performing Fourier IFFT on the weighted fusion value to obtain a fusion pixel value of the pixel point;
and combining all the pixel points to obtain a fused image block.
6. The method of claim 5, wherein performing fourier FFT on the pixels in the image block to obtain FFT values of the pixels, specifically comprises:
respectively calculating FFT component values corresponding to three channels of red R, green G and blue B of the pixel point;
and averaging the FFT component values to obtain the FFT value of the pixel point.
7. A video deblurring apparatus based on ambiguity, comprising:
the ambiguity calculating module is used for calculating the ambiguity of the video frame;
the clear frame and fuzzy frame determining module is used for determining a clear frame and a fuzzy frame according to the fuzziness;
an optical flow calculation unit for calculating a forward optical flow and a backward optical flow between two adjacent frames by using an optical flow method; wherein the two adjacent frames comprise the clear frame and the blurred frame;
a current pixel point selecting unit, configured to select a certain pixel point of the blurred frame as a current pixel point, where the forward optical flow refers to a forward motion vector of the current pixel point from the blurred frame to the clear frame;
a forward pixel point calculating unit, configured to find, according to the forward optical flow, the pixel point of the clear frame to which the forward optical flow of the current pixel point points, namely the forward pixel point; the backward optical flow refers to the backward motion vector of the forward pixel point pointing from the clear frame to the blurred frame;
a backward pixel point calculating unit, configured to find, according to the backward optical flow, the pixel point of the blurred frame to which the backward optical flow of the forward pixel point points, namely the backward pixel point;
a position error calculation unit, configured to calculate a position error of the current pixel point and the backward pixel point;
a reference frame generating unit for generating a reference frame according to the position error;
the image block extraction module is used for extracting the image blocks of the blurred frame and the reference frame;
the weighting fusion module is used for carrying out weighting fusion according to the weights corresponding to the pixel points in the image block to obtain a fused image block;
and the image block reorganization module is used for reorganizing the fused image blocks to obtain an output image.
8. A video deblurring device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 6.
CN201811477483.9A 2018-12-05 2018-12-05 Video deblurring method, device and equipment based on ambiguity Active CN111275626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811477483.9A CN111275626B (en) 2018-12-05 2018-12-05 Video deblurring method, device and equipment based on ambiguity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811477483.9A CN111275626B (en) 2018-12-05 2018-12-05 Video deblurring method, device and equipment based on ambiguity

Publications (2)

Publication Number Publication Date
CN111275626A CN111275626A (en) 2020-06-12
CN111275626B (en) 2023-06-23

Family

ID=71001439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811477483.9A Active CN111275626B (en) 2018-12-05 2018-12-05 Video deblurring method, device and equipment based on ambiguity

Country Status (1)

Country Link
CN (1) CN111275626B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111416937B (en) * 2020-03-25 2021-08-20 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and mobile equipment
CN112001355A (en) * 2020-09-03 2020-11-27 杭州云栖智慧视通科技有限公司 Training data preprocessing method for fuzzy face recognition under outdoor video
CN112801890B (en) * 2021-01-08 2023-07-25 北京奇艺世纪科技有限公司 Video processing method, device and equipment
CN112767250B (en) * 2021-01-19 2021-10-15 南京理工大学 Video blind super-resolution reconstruction method and system based on self-supervision learning
CN113067979A (en) * 2021-03-04 2021-07-02 北京大学 Imaging method, device, equipment and storage medium based on bionic pulse camera
CN113327206B (en) * 2021-06-03 2022-03-22 江苏电百达智能科技有限公司 Image fuzzy processing method of intelligent power transmission line inspection system based on artificial intelligence
CN113409203A (en) * 2021-06-10 2021-09-17 Oppo广东移动通信有限公司 Image blurring degree determining method, data set constructing method and deblurring method
CN113409209B (en) * 2021-06-17 2024-06-21 Oppo广东移动通信有限公司 Image deblurring method, device, electronic equipment and storage medium
CN113706414B (en) * 2021-08-26 2022-09-09 荣耀终端有限公司 Training method of video optimization model and electronic equipment
CN113781336B (en) * 2021-08-31 2024-02-02 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN113781357B (en) * 2021-09-24 2024-10-29 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN115546042B (en) * 2022-03-31 2023-09-29 荣耀终端有限公司 Video processing method and related equipment thereof
CN116934654B (en) * 2022-03-31 2024-08-06 荣耀终端有限公司 Image ambiguity determining method and related equipment thereof
CN114708166A (en) * 2022-04-08 2022-07-05 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and terminal
CN115187446A (en) * 2022-05-26 2022-10-14 北京健康之家科技有限公司 Face changing video generation method and device, computer equipment and readable storage medium
CN115311175B (en) * 2022-10-10 2022-12-09 季华实验室 Multi-focus image fusion method based on no-reference focus quality evaluation
CN115866295A (en) * 2022-11-22 2023-03-28 东南大学 Video key frame secondary extraction method and system for terminal row of convertor station
CN116385302A (en) * 2023-04-07 2023-07-04 北京拙河科技有限公司 Dynamic blur elimination method and device for optical group camera
CN116128769B (en) * 2023-04-18 2023-06-23 聊城市金邦机械设备有限公司 Track vision recording system of swinging motion mechanism
CN116630220B (en) * 2023-07-25 2023-11-21 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2680568A1 (en) * 2012-06-25 2014-01-01 ST-Ericsson SA Video stabilisation with deblurring
CN104103050A (en) * 2014-08-07 2014-10-15 重庆大学 Real video recovery method based on local strategies
US9355439B1 (en) * 2014-07-02 2016-05-31 The United States Of America As Represented By The Secretary Of The Navy Joint contrast enhancement and turbulence mitigation method
CN107895349A (en) * 2017-10-23 2018-04-10 电子科技大学 A kind of endoscopic video deblurring method based on synthesis

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7119837B2 (en) * 2002-06-28 2006-10-10 Microsoft Corporation Video processing system and method for automatic enhancement of digital video
US7728909B2 (en) * 2005-06-13 2010-06-01 Seiko Epson Corporation Method and system for estimating motion and compensating for perceived motion blur in digital video
TWI381719B (en) * 2008-02-18 2013-01-01 Univ Nat Taiwan Full-frame video stabilization with a polyline-fitted camcorder path
KR102094506B1 (en) * 2013-10-14 2020-03-27 삼성전자주식회사 Method for measuring changes of distance between the camera and the object using object tracking , Computer readable storage medium of recording the method and a device measuring changes of distance
EP3316212A1 (en) * 2016-10-28 2018-05-02 Thomson Licensing Method for deblurring a video, corresponding device and computer program product

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2680568A1 (en) * 2012-06-25 2014-01-01 ST-Ericsson SA Video stabilisation with deblurring
US9355439B1 (en) * 2014-07-02 2016-05-31 The United States Of America As Represented By The Secretary Of The Navy Joint contrast enhancement and turbulence mitigation method
CN104103050A (en) * 2014-08-07 2014-10-15 重庆大学 Real video recovery method based on local strategies
CN107895349A (en) * 2017-10-23 2018-04-10 电子科技大学 A kind of endoscopic video deblurring method based on synthesis

Also Published As

Publication number Publication date
CN111275626A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
CN111275626B (en) Video deblurring method, device and equipment based on ambiguity
Wang et al. Real-esrgan: Training real-world blind super-resolution with pure synthetic data
CN110324664B (en) Video frame supplementing method based on neural network and training method of model thereof
Yu et al. A unified learning framework for single image super-resolution
CN108765343B (en) Image processing method, device, terminal and computer readable storage medium
US9998666B2 (en) Systems and methods for burst image deblurring
CN110163237B (en) Model training and image processing method, device, medium and electronic equipment
EP2164040B1 (en) System and method for high quality image and video upscaling
US9258518B2 (en) Method and apparatus for performing super-resolution
CN110827200A (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and mobile terminal
CN110570356B (en) Image processing method and device, electronic equipment and storage medium
US20140354886A1 (en) Device, system, and method of blind deblurring and blind super-resolution utilizing internal patch recurrence
KR102481882B1 (en) Method and apparaturs for processing image
CN112164011B (en) Motion image deblurring method based on self-adaptive residual error and recursive cross attention
CN109743473A (en) Video image 3 D noise-reduction method, computer installation and computer readable storage medium
CN110874827B (en) Turbulent image restoration method and device, terminal equipment and computer readable medium
CN110136055B (en) Super resolution method and device for image, storage medium and electronic device
CN111091503A (en) Image out-of-focus blur removing method based on deep learning
Liu et al. A motion deblur method based on multi-scale high frequency residual image learning
CN108600783B (en) Frame rate adjusting method and device and terminal equipment
Liang et al. Improved non-local iterative back-projection method for image super-resolution
Jeong et al. Multi-frame example-based super-resolution using locally directional self-similarity
CN110490822B (en) Method and device for removing motion blur of image
CN113905147A (en) Method and device for removing jitter of marine monitoring video picture and storage medium
Choi et al. Sharpness enhancement and super-resolution of around-view monitor images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant