CN109949251B - Riesz pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video - Google Patents
Riesz pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video
- Publication number
- CN109949251B CN109949251B CN201910265487.9A CN201910265487A CN109949251B CN 109949251 B CN109949251 B CN 109949251B CN 201910265487 A CN201910265487 A CN 201910265487A CN 109949251 B CN109949251 B CN 109949251B
- Authority
- CN
- China
- Prior art keywords
- local
- pyramid
- image
- atmospheric turbulence
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 52
- 230000015556 catabolic process Effects 0.000 title claims abstract description 19
- 238000006731 degradation reaction Methods 0.000 title claims abstract description 19
- 230000000087 stabilizing effect Effects 0.000 title claims description 11
- 238000001914 filtration Methods 0.000 claims abstract description 42
- 230000006641 stabilisation Effects 0.000 claims abstract description 14
- 238000011105 stabilization Methods 0.000 claims abstract description 14
- 238000012546 transfer Methods 0.000 claims abstract description 13
- 238000000354 decomposition reaction Methods 0.000 claims abstract description 9
- 238000012545 processing Methods 0.000 claims description 23
- 230000009466 transformation Effects 0.000 claims description 10
- 230000008569 process Effects 0.000 claims description 8
- 238000010276 construction Methods 0.000 claims description 2
- 238000003384 imaging method Methods 0.000 abstract description 29
- 230000000694 effects Effects 0.000 abstract description 27
- 230000008859 change Effects 0.000 abstract description 10
- 238000004422 calculation algorithm Methods 0.000 description 10
- 239000000443 aerosol Substances 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 4
- 239000011159 matrix material Substances 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 3
- 230000002829 reductive effect Effects 0.000 description 3
- 230000002123 temporal effect Effects 0.000 description 3
- 230000007547 defect Effects 0.000 description 2
- 238000012634 optical imaging Methods 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 238000004088 simulation Methods 0.000 description 2
- 238000010521 absorption reaction Methods 0.000 description 1
- 230000002411 adverse Effects 0.000 description 1
- 230000004075 alteration Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000000593 degrading effect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
- 238000003379 elimination reaction Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000003331 infrared imaging Methods 0.000 description 1
- 230000000670 limiting effect Effects 0.000 description 1
- 238000013178 mathematical model Methods 0.000 description 1
- 230000036961 partial effect Effects 0.000 description 1
- 230000010363 phase shift Effects 0.000 description 1
- 230000008092 positive effect Effects 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Image Processing (AREA)
Abstract
An atmospheric-turbulence-degraded video rapid stabilization method based on a Riesz pyramid comprises the following steps. First step: pyramid decomposition is carried out on each frame of the video. Second step: the local amplitude and local phase of each pyramid sub-band layer are calculated based on the Riesz transform. Third step: the local amplitude and the local phase are temporally low-pass filtered. Fourth step: each single frame is deblurred based on the atmospheric turbulence modulation transfer function. The invention uses the idea of low-pass filtering to eliminate the imaging distortion produced by the atmospheric turbulence effect. The Riesz transform is a multidimensional extension of the Hilbert transform and yields the local amplitude and local phase information of a two-dimensional signal. The local amplitude reflects the light-dark variation of the local grey level in each video frame, and changes of the local phase represent the local motion information in the video. Low-pass filtering of the monogenic local amplitude and local phase suppresses local high-frequency noise and image distortion.
Description
Technical Field
The invention relates to the technical field of optical imaging and digital images, in particular to a Riesz-pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video.
Background
Random variations of meteorological parameters such as atmospheric temperature and humidity cause the refractive index of the atmosphere to fluctuate randomly. When light propagates through atmosphere with a randomly fluctuating refractive index, its propagation direction wanders randomly, so the imaging system suffers image blurring and dynamic distortion. This degradation phenomenon is known as the optical turbulence effect. The optical turbulence effect reduces imaging resolution and dynamically distorts the imaged target, affecting the visual quality of the imaging system as well as target detection, tracking and recognition. Eliminating the optical turbulence effect has long been a difficult problem in the fields of optical imaging and image processing. Conventional digital image stabilization mainly addresses image shaking caused by camera shake during shooting and consists of two steps, motion estimation and motion compensation, which remove rotation and translation between two frames. Atmospheric turbulence, however, is characterized by spatio-temporally random variation, and its effect on the imaging system is difficult to describe analytically with the mathematical models used in conventional digital image stabilization.
The stabilization of atmospheric-turbulence-degraded video mainly comprises two major steps: the first is to eliminate the imaging dynamic distortion produced by the atmospheric turbulence effect, and the second is to eliminate the imaging blur produced by the atmospheric turbulence effect.
Methods for eliminating the imaging dynamic distortion produced by the atmospheric turbulence effect mainly include the low-rank matrix decomposition method proposed by Oreifej et al., the phase-based method proposed by Wadhwa et al., and a Laplacian-based method. The low-rank matrix decomposition method decomposes the turbulent video into three parts, background, turbulence and moving objects; it can separate turbulence-induced aberrations, but its computational complexity is very high. The phase-based method applies temporal filtering to the local phase and can remove the distortion produced by weak-fluctuation atmospheric turbulence; however, because it does not consider the intensity flicker caused by atmospheric turbulence, shading artifacts appear in the stabilized images. The Laplacian-based method applies Sobel gradient sharpening to each frame and then uses the Laplacian to remove temporal distortion and suppress imaging distortion and blur, but it cannot handle image stabilization under stronger atmospheric turbulence and its computational cost is large.
Another aspect of atmospheric-turbulence-degraded video stabilization is removing the blur from the distortion-corrected video frames. Deblurring turbulence-degraded images is effectively an image deconvolution process, whose difficulty lies in estimating the point spread function of the turbulence effect. Various algorithms exist for estimating the blur kernel; the most straightforward is to use the atmospheric modulation transfer function, set its coefficients according to the turbulent imaging environment, and then perform the deconvolution. A temporal regression algorithm can be used to obtain an approximately diffraction-limited blurred image, after which a spatially invariant deblurring algorithm recovers a sharp image. This approach must operate on the whole image sequence and is computationally inefficient.
The imaging distortion caused by the atmospheric turbulence effect appears in video as random wobbling of local pixels, i.e. high-frequency local motion, and the most common way of suppressing high-frequency noise is low-pass filtering. Therefore, addressing the problems of the existing methods, a method for stabilizing atmospheric-turbulence-degraded video with moderate computational load and high computational efficiency is designed.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art and provides a video stabilization method that suppresses the imaging distortion produced by the atmospheric turbulence effect, eliminates the blur caused by turbulence, and overcomes the low operating efficiency of existing turbulence-processing algorithms, so as to quickly obtain a stable, clear video.
The invention solves the technical problems by adopting the following technical scheme:
A Riesz-pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video comprises the following steps:
The first step: pyramid decomposition is carried out on each frame of the video. The invention selects the approximate Laplacian pyramid proposed by Wadhwa et al. to represent the image frames. The pyramid is made up of a series of high-pass components and one low-pass component. In the construction process, the original image is high-pass filtered and low-pass filtered respectively, and the high-pass component serves as the first pyramid layer; the low-pass component is downsampled and then again high-pass and low-pass filtered, its high-pass component serving as the second pyramid layer, and the iteration continues until the required number of layers is reached.
The second step: the local amplitude and local phase of each sub-band layer of the pyramid are calculated based on the Riesz transform.
The third step: the local amplitude and the local phase are temporally low-pass filtered. Temporal low-pass filtering of the local phase filters out the high-frequency random wobble produced by atmospheric turbulence disturbance in the video, so that only the low-frequency, motionless background information is obtained.
Let the filtered local amplitude and local phase be A_k' and φ_k' respectively. The current pyramid sub-band is reconstructed according to I_k' = A_k'·cos(φ_k'), and pyramid reconstruction then yields the processing result of the current frame.
The fourth step: the single-frame image is deblurred based on the atmospheric turbulence modulation transfer function. Image degradation affected by the atmosphere can be seen as the convolution of a blur-free image with the atmospheric turbulence point spread function. The atmospheric turbulence modulation transfer function is the modulus of the Fourier transform of the atmospheric turbulence point spread function and can be established from weather parameters according to the method proposed by Yitzhaky et al. Wiener filtering is the most commonly used deconvolution filter and recovers the original signal in the minimum-mean-square-error sense.
Preferably, the Riesz pyramid is obtained by decomposing the image with an approximate Laplacian pyramid and then applying the Riesz transform to each sub-band layer.
Preferably, the two-dimensional frequency-domain representation of the Riesz transform in the second step is:

(R_1(ω), R_2(ω))^T = -j·(ω_1, ω_2)^T/‖ω‖·I_F(ω)

where (·,·)^T denotes the transpose of a vector, ω = (ω_1, ω_2) represents the frequency-domain coordinates, j is the imaginary unit, R_1 and R_2 are the two results of the Riesz transform, R_1(ω) and R_2(ω) are their frequency-domain representations, and I_F is the Fourier transform of the original image I. In the spatial domain, the two-dimensional Riesz transform can be implemented approximately with the two convolution kernels [0.5, 0, -0.5] and [0.5, 0, -0.5]^T to improve the computational efficiency.
For the k-th layer I_k of the image pyramid, whose Riesz transform is (R_1^k, R_2^k), the vector (I_k, R_1^k, R_2^k) can be represented by the local amplitude A_k, the local direction θ_k and the local phase φ_k as:

I_k = A_k·cos(φ_k),  R_1^k = A_k·sin(φ_k)·cos(θ_k),  R_2^k = A_k·sin(φ_k)·sin(θ_k)

The local amplitude represents the local brightness of the image and is defined as:

A_k = sqrt(I_k^2 + (R_1^k)^2 + (R_2^k)^2)
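By way of illustration only, the following Python sketch (not part of the claimed method; the function and variable names are illustrative assumptions) computes the exact frequency-domain Riesz transform of one real sub-band with NumPy and then derives the local amplitude and local phase from it.

```python
import numpy as np

def riesz_transform_fft(subband):
    """Frequency-domain Riesz transform of one real 2-D sub-band.

    Implements (R1(w), R2(w))^T = -j * (w1, w2)^T / ||w|| * I_F(w); returns (R1, R2).
    """
    rows, cols = subband.shape
    w1 = np.fft.fftfreq(rows).reshape(-1, 1)      # vertical frequency coordinate
    w2 = np.fft.fftfreq(cols).reshape(1, -1)      # horizontal frequency coordinate
    norm = np.sqrt(w1 ** 2 + w2 ** 2)
    norm[0, 0] = 1.0                              # avoid division by zero at DC
    I_F = np.fft.fft2(subband)
    R1 = np.real(np.fft.ifft2(-1j * (w1 / norm) * I_F))
    R2 = np.real(np.fft.ifft2(-1j * (w2 / norm) * I_F))
    return R1, R2

# Local amplitude A_k = sqrt(I_k^2 + R1^2 + R2^2) and local phase phi_k = arccos(I_k / A_k)
# for an example sub-band (random data stands in for a real pyramid layer):
I_k = np.random.rand(64, 64)
R1, R2 = riesz_transform_fft(I_k)
A_k = np.sqrt(I_k ** 2 + R1 ** 2 + R2 ** 2)
phi_k = np.arccos(np.clip(I_k / np.maximum(A_k, 1e-12), -1.0, 1.0))
```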
Preferably, the Wiener filter in the fourth step is expressed as:

W(u, v) = (1/H(u, v))·|H(u, v)|^2/(|H(u, v)|^2 + K)

where H(u, v) is the degradation function, i.e. the atmospheric turbulence modulation transfer function, (u, v) are the frequency-domain coordinates, and K is a constant whose magnitude represents the noise-to-signal ratio; K = 0.01 in the invention. Multiplying the Wiener filter by the degraded image G(u, v) yields the blur-free image:

f̂(x, y) = F^-1[W(u, v)·G(u, v)]

where F^-1 denotes the inverse Fourier transform.
The invention has the advantages and positive effects that:
1. The invention uses the idea of low-pass filtering to eliminate the imaging distortion produced by the atmospheric turbulence effect. The Riesz transform is a multidimensional extension of the Hilbert transform and yields the local amplitude and local phase information of a two-dimensional signal. The local amplitude reflects the light-dark variation of the local grey level in each video frame, and changes of the local phase represent the local motion information in the video. Low-pass filtering of the monogenic local amplitude and local phase suppresses local high-frequency noise and image distortion.
2. For modelling the degradation function of the deconvolution used to remove the imaging blur produced by the atmospheric turbulence effect, namely the atmospheric turbulence modulation transfer function, the invention adopts the method proposed by Yitzhaky et al.: an atmospheric turbulence modulation transfer function model is established from weather parameters and integrated with the distortion-elimination method, and the atmospherically blurred image is restored according to the weather-predicted atmospheric modulation transfer function.
3. The invention has strong universality, can be applied to near-ground natural light and infrared imaging, or scenes such as remote sensing, astronomical observation and the like, can inhibit imaging and turbulence noise in the filtering process, improves the signal-to-noise ratio of images, has high operation efficiency due to the adoption of spatial pyramid and Riesz transformation, and can retain moving targets in videos while eliminating turbulence distortion.
Drawings
FIG. 1 is a general flow chart of the present invention;
FIG. 2 is a graph of the effect of the temporal low-pass filtering of the present invention;
FIG. 3 is an image of the change in pixel value at a point after temporal filtering in accordance with the present invention;
FIG. 4 is a graph comparing results of checkerboard simulation experiments of the present invention;
FIG. 5 is a graph comparing infrared video image stabilization results in atmospheric turbulence in accordance with the present invention;
FIG. 6 is a graph comparing the results of natural light video processing taken by oneself in the present invention;
fig. 7 is a graph comparing the results of the processing of the present invention for turbulent video with low speed moving objects.
Detailed Description
Embodiments of the invention are described in further detail below with reference to the attached drawing figures:
As shown in Fig. 1, the Riesz-pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video provided by the invention comprises the following steps:
The first step: image pyramid decomposition
Strong noise adversely affects the temporal filtering. The image pyramid decomposes the image into sub-bands of different spatial frequencies, which raises the signal-to-noise ratio of the lower-spatial-frequency sub-bands and reduces the influence of noise on the temporal filtering. It is therefore necessary to apply pyramid decomposition to each frame, reducing the spatial frequency, before performing the Riesz transform.
The Laplacian pyramid is a compact pyramid in the spatial domain that decomposes an image into a series of high-pass components and one low-pass component. However, the Laplacian pyramid cannot accurately reconstruct the original image after each sub-band layer has been processed, and its narrow impulse-response bandwidth is unfavourable for temporal filtering. It is necessary to design a self-inverting pyramid P, i.e. P^T·P = I, where T denotes the matrix transpose, so that applying the synthesis operator after the analysis operator returns the original image I; this ensures that the image can be recovered after each sub-band is processed and gives a wider impulse response.
The pyramid used in the invention is, like the Laplacian pyramid, made up of a series of high-pass components and one low-pass component. First, the original image is high-pass filtered and low-pass filtered respectively, and the high-pass component serves as the first pyramid layer; the low-pass component is downsampled and then again high-pass and low-pass filtered, its high-pass component serving as the second pyramid layer, and the iteration continues until the required number of layers is reached.
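As an illustration of this construction, the sketch below builds a simplified, non-decimated variant of such a pyramid with SciPy's Gaussian filter: repeated low-pass filtering separates one high-pass component per level plus a final low-pass residual, and summation reconstructs the image exactly. The omission of the downsampling step and the choice of Gaussian filters are simplifying assumptions of this sketch, not the exact filters of the invention.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_pyramid(image, levels=4, sigma=1.0):
    """Approximate Laplacian-style decomposition: high-pass components plus a
    low-pass residual.  Non-decimated for simplicity; the method described above
    additionally downsamples the low-pass component before the next level."""
    bands, residual = [], image.astype(np.float64)
    for k in range(levels):
        low = gaussian_filter(residual, sigma * (2 ** k))   # low-pass component
        bands.append(residual - low)                        # high-pass component = layer k
        residual = low                                      # iterate on the low-pass part
    return bands, residual

def collapse_pyramid(bands, residual):
    """Exact reconstruction: sum of all high-pass components and the residual."""
    return residual + np.sum(bands, axis=0)
```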
The second step: computing local information using the Riesz transform
Pyramid decomposition reduces the spatial frequency of the image and benefits the temporal filtering. Next, the Riesz transform is applied to each sub-band layer of the pyramid to obtain the objects to be filtered, namely the local amplitude and the local phase.
The Riesz transform is a multidimensional extension of the Hilbert transform; under two-dimensional conditions its frequency-domain representation is:

(R_1(ω), R_2(ω))^T = -j·(ω_1, ω_2)^T/‖ω‖·I_F(ω)
the product in the frequency domain corresponds to a convolution in the spatial domain where the two-dimensional Riesz transform is expressed as:
wherein x= [ x, y]. To improve the computational efficiency, the two-dimensional Riesz transform may use two three-tap filters in the spatial domain: [0.5,0, -0.5]And [0.5,0, -0.5] T To be implemented approximately.
The two-dimensional Riesz transform is effectively a phase-shifting system that produces, for each point of the original image, a quadrature pair with a 90° phase shift. The two-dimensional signal and its Riesz transform together constitute a monogenic signal:

I_M(x) = I(x) + i·R_1(x) + j·R_2(x)
For the k-th layer I_k of the image pyramid, whose Riesz transform is (R_1^k, R_2^k), the vector (I_k, R_1^k, R_2^k) can be represented by the local amplitude A_k, the local direction θ_k and the local phase φ_k as:

I_k = A_k·cos(φ_k),  R_1^k = A_k·sin(φ_k)·cos(θ_k),  R_2^k = A_k·sin(φ_k)·sin(θ_k)

The local amplitude represents the local intensity or strength and is defined as:

A_k = sqrt(I_k^2 + (R_1^k)^2 + (R_2^k)^2)
the phase contains most of the important "information" and the image reconstructed using the phase highlights edges, lines and other narrow structures.
The third step: temporal low-pass filtering of the local information
Temporal low-pass filtering of the local phase removes high-frequency motion, leaving only the low-frequency, motionless background information. As shown in Fig. 2, (a) is the first frame of the roof video, where the vertical line marks the pixel column extracted from every frame; (b) is the time series of the pixels along that line in the original video; and (c) is the time series of the pixels along that line in the filtered video. At positions with little geometric distortion the video shows no deflection or vibration of the structure, yet the brightness there still varies because the light intensity reaching the imaging plane is unstable; turbulence distortion is therefore better eliminated by processing the phase and the amplitude simultaneously.
For a turbulence-degraded video of a static scene, the sequence unaffected by turbulence is expressed as I(x, y) = A·cos(φ), wherein A and φ represent the amplitude and phase of the background; the video sequence affected by turbulence can then be expressed as:

I_Turbu(x, y, t) = (A + ΔA(x, y, t))·cos(φ + Δφ(x, y, t))
From each frame I_Turbu(x, y, t) the Riesz transform extracts the local amplitude A + ΔA(x, y, t) and the local phase φ + Δφ(x, y, t). The high-frequency components ΔA(x, y, t) and Δφ(x, y, t) are filtered out with a low-pass filter to obtain the fixed A and φ. Fig. 3 shows the change of the pixel value at a single point after temporal filtering: (d) is the pixel position extracted from every frame of the roof video, and (e) is the comparison curve of the pixel value before and after filtering, where the solid line represents the original video and the dotted line the processed video. The grey value at the marked (red) point is extracted from every frame of the roof video and from the video processed with amplitude and phase low-pass filtering; as Fig. 3 shows, the grey-level fluctuation at this position is clearly reduced after low-pass filtering.
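The temporal filtering of one sub-band can be sketched as follows, assuming a simple first-order IIR low-pass applied per pixel to the amplitude and phase sequences; the filter form, the coefficient value and the neglect of phase wrapping are simplifying assumptions of this sketch rather than requirements of the invention.

```python
import numpy as np

class TemporalLowpass:
    """First-order IIR low-pass applied per pixel to the local amplitude and the
    local phase of one sub-band across frames.  'alpha' plays the role of the
    low-pass filter coefficient (e.g. 0.02); smaller values smooth more strongly."""

    def __init__(self, alpha=0.02):
        self.alpha = alpha
        self.amp = None
        self.phase = None

    def update(self, amplitude, phase):
        if self.amp is None:                     # the first frame initialises the state
            self.amp, self.phase = amplitude.copy(), phase.copy()
        else:
            a = self.alpha
            self.amp = (1.0 - a) * self.amp + a * amplitude
            self.phase = (1.0 - a) * self.phase + a * phase
        # Reconstruct the stabilised sub-band: I_k' = A_k' * cos(phi_k')
        return self.amp * np.cos(self.phase)
```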
Fourth step: atmospheric MTF deblurring
The degradation of an image affected by the atmosphere can be seen as the convolution of a blur-free image with the atmospheric point spread function (PSF); if the PSF is known, the original image can be restored simply by deconvolving the blurred image. The atmospheric modulation transfer function (MTF) is the modulus of the Fourier transform of the atmospheric PSF. Wiener filtering is the most commonly used deconvolution filter and recovers the original signal in the minimum-mean-square-error sense; it is expressed as:

W(u, v) = (1/H(u, v))·|H(u, v)|^2/(|H(u, v)|^2 + K)

where H(u, v) is the degradation function, i.e. the MTF, (u, v) are the frequency-domain coordinates, and K is a constant whose magnitude indicates the noise-to-signal ratio; K = 0.01 in the invention. Multiplying the Wiener filter by the degraded image G(u, v) yields the restored image:

f̂(x, y) = F^-1[W(u, v)·G(u, v)]

where F^-1 denotes the inverse Fourier transform.
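A minimal sketch of this deconvolution, assuming the atmospheric MTF has already been sampled on the same frequency grid as the frame's FFT; H, G and K follow the notation above, and the function name is an illustrative assumption.

```python
import numpy as np

def wiener_deblur(degraded, H, K=0.01):
    """Wiener deconvolution of one temporally stabilised frame.

    degraded : real 2-D image
    H        : atmospheric MTF on the np.fft.fft2 frequency grid of 'degraded'
    K        : noise-to-signal constant (0.01 in the text above)
    """
    G = np.fft.fft2(degraded)
    # (1/H) * |H|^2 / (|H|^2 + K) simplifies to conj(H) / (|H|^2 + K)
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(W * G))
```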
The atmospheric MTF comprises a turbulence MTF and an aerosol MTF; the restoration quality is better when the influence of aerosols in the air on imaging is also considered. For a short-exposure imaging system, the turbulence MTF_SE is computed as a function of the angular spatial frequency ν, the refractive-index structure coefficient C_n^2, the imaging distance R, the optical wavelength λ and the aperture diameter D, with μ = 0.5 for long-distance imaging and μ = 1 for short-distance imaging. The aerosol MTF is calculated from the following formula:
MTF_a = (exp[-(A_a + S_a)·R])^(-0.1)
in which A_a and S_a are the absorption and scattering coefficients, respectively. The total atmospheric MTF is the product of the turbulence MTF and the aerosol MTF: M_t = MTF_SE·MTF_a.
Fig. 4 compares the results of the checkerboard simulation experiment: (f) is the original video, (g) the result of Wadhwa's method, (h) the result of the conventional Laplacian-operator algorithm, and (i) the result of the method of the invention. A long-range imaging environment was simulated with an imaging distance of 2000 m; the simulated turbulence sequence is shown in Fig. 4(f) and the experimental results in Figs. 4(g), (h) and (i). Although Wadhwa's phase-filtering method effectively reduces the turbulence distortion, the local brightness fluctuation remains obvious; the conventional Laplacian-operator algorithm also removes distortion to some extent, but colour distortion appears in the restored images and the result is not stable enough. In contrast, the method of the invention shows a better stabilizing effect with a low-pass filter coefficient of 0.02.
Fig. 5 compares the infrared video image-stabilization results under atmospheric turbulence: (j) is the original video, (k) the result of Wadhwa's method, (l) the result of the conventional Laplacian-operator algorithm, and (m) the result of the method of the invention. In the processing, the imaging distance was set to 1000 m, the atmospheric refractive-index structure coefficient C_n^2 to 2.0×10^-14, the light wavelength to 8 μm and the low-pass filter coefficient to 0.02; the result is shown in Fig. 5(m).
Fig. 6 shows the processing results for a self-captured natural-light video: (n) is the original video, (o) the result of Wadhwa's method, (p) the result of the conventional Laplacian-operator algorithm, and (q) the result of the method of the invention. The video in Fig. 6 contains not only the distortion and blur caused by turbulence but also the imaging noise of the system itself and camera displacement during shooting; in the experiment the refractive-index structure coefficient C_n^2, an imaging distance of 600 m and a low-pass filter coefficient of 0.02 were set. The result, shown in Fig. 6(q), demonstrates that the invention not only eliminates the turbulence influence but is also robust to noise.
Fig. 7 shows the processing results for a turbulence-degraded video containing a slowly moving object: (r) is the 300th frame of the original video, (s) the temporal variation of the original video, (t) the 300th frame of the result of the invention, and (u) the temporal variation of the result of the invention. The video contains not only turbulence effects but also the moon moving slowly; with the low-pass filter coefficient set to 0.08, the result not only suppresses the distortion but also retains the motion of the moon.
A runtime comparison was made by running the proposed method and the conventional Laplacian-operator algorithm on the same video-stabilization task; the data show that the running time of the method of this application is reduced by half compared with the conventional Laplacian-operator algorithm, so the method stabilizes video more efficiently.
Since the MTF-based deblurring is performed in the frequency domain and the Fourier transform and its inverse are time-consuming, generating a spatial convolution kernel from the MTF and carrying out the processing in the spatial domain would increase the speed further.
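One way to realise this speed-up is sketched below: the Wiener restoration filter derived from the MTF is converted once into a small spatial kernel by an inverse FFT, and every frame is then deblurred by a single spatial convolution. The truncation window size and the normalisation are illustrative assumptions of this sketch, not values specified by the invention.

```python
import numpy as np
from scipy.ndimage import convolve

def wiener_kernel_from_mtf(H, K=0.01, size=15):
    """Derive a truncated spatial restoration kernel from the atmospheric MTF so
    that frames can be deblurred without per-frame Fourier transforms.
    'size' is an odd, illustrative truncation window."""
    W = np.conj(H) / (np.abs(H) ** 2 + K)                  # Wiener filter in the frequency domain
    kernel = np.real(np.fft.fftshift(np.fft.ifft2(W)))     # its spatial impulse response
    cy, cx = kernel.shape[0] // 2, kernel.shape[1] // 2
    r = size // 2
    small = kernel[cy - r:cy + r + 1, cx - r:cx + r + 1]   # keep the central size x size part
    return small / small.sum()                             # keep the overall (DC) gain near 1

def deblur_spatial(frame, kernel):
    """Spatial-domain deblurring of one stabilised frame with the truncated kernel."""
    return convolve(frame, kernel, mode='nearest')
```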
The invention uses the idea of low-pass filtering to eliminate the imaging distortion produced by the atmospheric turbulence effect. The Riesz transform is a multidimensional extension of the Hilbert transform and yields the local amplitude and local phase information of a two-dimensional signal. The local amplitude reflects the light-dark variation of the local grey level in each video frame, and changes of the local phase represent the local motion information in the video. Low-pass filtering of the monogenic local amplitude and local phase suppresses local high-frequency noise and image distortion.
It should be emphasized that the embodiments described herein are illustrative rather than limiting; the invention is therefore not limited to the embodiments given in the detailed description, and other embodiments derived by those skilled in the art from the technical solutions of the invention likewise fall within the scope of protection of the invention.
Claims (3)
1. A Riesz-pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video, characterized by comprising the following steps:
the first step: pyramid decomposition is carried out on each frame of the video; an approximate Laplacian pyramid of the structure proposed by Wadhwa is selected to represent the image frame, the pyramid being composed of a series of high-pass components and one low-pass component; in the construction process, the original image is high-pass filtered and low-pass filtered respectively, the high-pass component serving as the first pyramid layer; the low-pass component is downsampled and then high-pass and low-pass filtered, the high-pass component serving as the second pyramid layer, and the iteration continues until the required number of layers is reached;
the second step: calculating the local amplitude and local phase of each sub-band layer of the pyramid based on the Riesz transform;
the third step: performing temporal low-pass filtering on the local amplitude and the local phase; the local phase is temporally low-pass filtered to filter out the high-frequency random wobble produced by atmospheric turbulence disturbance in the video, so that only the low-frequency, motionless background information is obtained;
letting the filtered local amplitude and local phase be A_k' and φ_k' respectively, the current pyramid sub-band is reconstructed according to I_k' = A_k'·cos(φ_k'), and pyramid reconstruction is carried out to obtain the processing result of the current frame;
the fourth step: deblurring the single-frame image based on the atmospheric turbulence modulation transfer function; image degradation affected by the atmosphere is a convolution of a blur-free image with the atmospheric turbulence point spread function; the atmospheric turbulence modulation transfer function is the modulus of the Fourier transform of the atmospheric turbulence point spread function and is established from weather parameters according to the method proposed by Yitzhaky; Wiener filtering, the most commonly used deconvolution filter, is used to recover the original signal in the minimum-mean-square-error sense;
the frequency-domain representation of the Riesz transform in the second step under two-dimensional conditions is:

(R_1(ω), R_2(ω))^T = -j·(ω_1, ω_2)^T/‖ω‖·I_F(ω)

where (·,·)^T denotes the transpose of a vector, ω = (ω_1, ω_2) represents the frequency-domain coordinates, j represents the imaginary unit, R_1 and R_2 represent the two results of the Riesz transform, R_1(ω) and R_2(ω) are their frequency-domain representations, and I_F is the Fourier transform of the original image I; in the spatial domain, the two-dimensional Riesz transform is implemented approximately with the two convolution kernels [0.5, 0, -0.5] and [0.5, 0, -0.5]^T to improve the computational efficiency;
for the k-th layer I_k of the image pyramid, whose Riesz transform is (R_1^k, R_2^k), the vector (I_k, R_1^k, R_2^k) is represented by the local amplitude A_k, the local direction θ_k and the local phase φ_k as:

I_k = A_k·cos(φ_k),  R_1^k = A_k·sin(φ_k)·cos(θ_k),  R_2^k = A_k·sin(φ_k)·sin(θ_k)

the local amplitude represents the local brightness of the image and is defined as:

A_k = sqrt(I_k^2 + (R_1^k)^2 + (R_2^k)^2).
2. The Riesz-pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video as claimed in claim 1, characterized in that the Riesz pyramid is obtained by decomposing the image with an approximate Laplacian pyramid and then applying the Riesz transform to each sub-band layer.
3. The Riesz-pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video as claimed in claim 1, characterized in that the Wiener filter in the fourth step is expressed as:

W(u, v) = (1/H(u, v))·|H(u, v)|^2/(|H(u, v)|^2 + K)

wherein H(u, v) is the degradation function, i.e. the atmospheric turbulence modulation transfer function, (u, v) are the frequency-domain coordinates, and K is a constant whose magnitude represents the noise-to-signal ratio, with K = 0.01; the product of the Wiener filter and the degraded image G(u, v) yields the blur-free image:

f̂(x, y) = F^-1[W(u, v)·G(u, v)]

where F^-1 denotes the inverse Fourier transform.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910265487.9A CN109949251B (en) | 2019-04-03 | 2019-04-03 | Riesz pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910265487.9A CN109949251B (en) | 2019-04-03 | 2019-04-03 | Riesz pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video
Publications (2)
Publication Number | Publication Date |
---|---|
CN109949251A CN109949251A (en) | 2019-06-28 |
CN109949251B true CN109949251B (en) | 2023-06-30 |
Family
ID=67013588
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910265487.9A Active CN109949251B (en) | 2019-04-03 | 2019-04-03 | Riesz pyramid-based rapid stabilization method for atmospheric-turbulence-degraded video
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109949251B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114564685B (en) * | 2022-03-07 | 2024-07-16 | 重庆大学 | Engineering structure bidirectional dynamic displacement measurement method based on phase |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5841911A (en) * | 1995-06-06 | 1998-11-24 | Ben Gurion, University Of The Negev | Method for the restoration of images disturbed by the atmosphere |
CN106780385A (en) * | 2016-12-16 | 2017-05-31 | 北京华航无线电测量研究所 | A kind of fog-degraded image clarification method based on turbulent flow infra-red radiation model |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ES2272192B1 (en) * | 2005-10-14 | 2008-03-16 | Consejo Superior Invet. Cientificas | METHOD OF BLIND DECONVOLUTION AND SUPERRESOLUTION FOR SEQUENCES AND SETS OF IMAGES, AND THEIR APPLICATIONS. |
US8976259B2 (en) * | 2012-02-27 | 2015-03-10 | Greywolf Technical Services, Inc. | Methods and systems for modified wavelength diversity image compensation |
US9338331B2 (en) * | 2014-01-09 | 2016-05-10 | Massachusetts Institute Of Technology | Riesz pyramids for fast phase-based video magnification |
US10389940B2 (en) * | 2015-07-02 | 2019-08-20 | Mission Support and Test Services, LLC | Passive method to measure strength of turbulence |
-
2019
- 2019-04-03 CN CN201910265487.9A patent/CN109949251B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN109949251A (en) | 2019-06-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||