CN105303544A - Video splicing method based on minimum boundary distance - Google Patents
Video splicing method based on minimum boundary distance
Info
- Publication number
- CN105303544A (application number CN201510728737.XA)
- Authority
- CN
- China
- Prior art keywords
- points
- point
- video frame
- calculating
- video
- Prior art date: 2015-10-30
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a video splicing method based on a minimum boundary distance. The method performs inter-frame registration of the source video frames using the scale-invariant feature transform (SIFT) algorithm, then detects moving objects with a minimum boundary distance algorithm and fuses the videos. The method effectively overcomes interference from objective factors such as illumination changes and ghosting, and offers an excellent splicing effect, simple computation, and straightforward parameter setting.
Description
Technical Field
The invention relates to a video splicing method based on minimum boundary distance, and belongs to the technical field of image processing and image fusion.
Background
With the rapid development of the national economy, images and videos play an increasingly prominent role as information carriers in daily life, and large-field-of-view images and videos are attracting growing attention from social media and the public.
Video splicing is a technique in which two or more cameras, calibrated with a camera calibration method, capture video data containing an overlapping area; the videos are then spliced so that the output has a wider field of view. Video splicing technology and its applications currently form one of the new research hotspots in fields such as virtual reality and computer vision, and are mainly applied to live broadcasting, video surveillance, panorama fusion, very-large-image sampling, and similar tasks. Because of the complexity of camera calibration, image registration and image splicing, existing video splicing methods struggle with time efficiency.
Disclosure of Invention
Purpose of the invention: aiming at the problems in the prior art, the invention provides a video splicing method based on the minimum boundary distance, which can significantly improve the imaging quality of video splicing.
The technical scheme is as follows: a video splicing method based on the minimum boundary distance comprises the following steps:
Step A: collecting two segments of video data containing an overlapping area;
Step B: performing median filtering on the video frame data obtained in step A to obtain a Gaussian pyramid;
Step C: convolving the Gaussian pyramid obtained in step B with the image data to obtain the scale space of the video frames obtained in step A;
Step D: performing extreme point detection in the scale space obtained in step C to obtain the maximum and minimum extreme points of the scale space;
Step E: removing key points with contrast lower than 0.03 and unstable edge response points from the extreme points obtained in step D to determine the positions and scales of the key points;
Step F: determining the gradient directions of the neighborhood pixels using the positions and scales of the key points from step E to obtain the key point direction parameters;
Step G: according to the key point direction parameters obtained in step F and the key point positions and scales from step E, computing an 8-direction gradient orientation histogram on each 4×4 block and accumulating the value of each gradient direction to form a seed point; each key point is composed of 2×2 = 4 seed points, and each seed point carries 8 direction vector components; this yields multiple groups of mutually matched feature point descriptors;
Step H: randomly sampling the groups of mutually matched feature point descriptors obtained in step G and refining them to obtain the mutually matched feature point descriptors of the two video frames;
Step I: obtaining the final splicing result of the video splicing by using the mutually matched feature point descriptors obtained in step H and a minimum boundary distance algorithm.
Further, in step H, the method for refining the mutually matched feature point descriptors is as follows:
Step H-1: randomly selecting 4 groups of mutually matched feature point descriptors to form a random sample and computing a transformation matrix; computing the distance between the feature points of each group of matching points and counting the number of inliers consistent with the transformation matrix; after multiple rounds of sampling, selecting the transformation matrix with the largest number of inliers and, when the numbers of inliers are equal, selecting the transformation matrix whose inliers have the smallest standard deviation;
Step H-2: refining the transformation matrix with an iterative method that minimizes a cost function using the LM algorithm;
Step H-3: using the transformation matrix refined in step H-2 to define a nearby search area and further refine the matched feature point descriptors;
Step H-4: repeating steps H-2 and H-3 until the number of matched feature points is stable.
By adopting the method, mismatching point pairs can be effectively reduced.
Further, in step I, the minimum boundary distance algorithm is as follows:
Step I-1: extracting the object edges in each input video frame with the Sobel edge detection algorithm, thereby obtaining the edge differences of the overlapping area;
Step I-2: computing the gray-level differences of all matched feature points in the overlapping area of the input video frames and averaging them;
Step I-3: comparing the object edges of the overlapping areas of the two video frames obtained in step I-1 to obtain the non-coincident edges;
Step I-4: computing the gray values of the pixels on both sides of the non-coincident edges contained in one input video frame, subtracting the gray values of the pixels at the corresponding positions in the other input video frame, and comparing each difference with the average gray difference from step I-2; if a difference is inconsistent with the average, the pixel is a constituent pixel of a moving object in that input video frame; processing the remaining pixels in turn until another edge or the boundary of the overlapping area is reached;
Step I-5: processing each pixel of the other input video frame with the same method as in step I-4;
Step I-6: computing the gray values of the remaining pixels of the fused video frame with a weighted average formula to finally obtain the fused video.
This method better achieves the goal of eliminating ghosting.
Advantageous effects: compared with the prior art, the video splicing method based on the minimum boundary distance can effectively overcome interference from objective factors such as illumination and ghosting, and offers an excellent splicing effect, simple computation, and straightforward parameter setting.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 shows two input video frames according to an embodiment of the present invention;
FIG. 3 is a video frame spliced by a conventional method;
fig. 4 shows a video frame spliced by the method of the present invention.
Detailed Description
The present invention is further illustrated by the following examples. It should be understood that these examples are intended only to illustrate the invention and not to limit its scope; after reading the present disclosure, various equivalent modifications made by those skilled in the art fall within the scope defined by the appended claims.
As shown in fig. 1, the video splicing method based on the minimum boundary distance of the present invention comprises the following steps:
Step A: collecting two segments of video data containing an overlapping area;
Step B: performing median filtering on the video frame data obtained in step A to obtain a Gaussian pyramid;
Step C: convolving the Gaussian pyramid obtained in step B with the image data to obtain the scale space of the video frames obtained in step A;
the method for obtaining the scale space comprises the following steps:
Step C-1: let the input image be I; a Gaussian pyramid is formed by filtering with Gaussian kernels of different scales σ. The scale space of I is defined as L(x, y, σ), which is obtained by convolving a Gaussian kernel of scale σ with I(x, y):
L(x,y,σ)=I(x,y)*G(x,y,σ)
where G(x, y, σ) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²)) is the scale-variable Gaussian kernel function, I(x, y) denotes the input video frame, x and y are respectively the abscissa and ordinate of the pixels of the video frame, and σ is the scale of the current Gaussian kernel.
Step C-2: to detect stable key points effectively in the scale space, a difference-of-Gaussians scale space (DoG for short) is used. Let the difference-of-Gaussians scale space be D(x, y, σ); it is generated by convolving difference-of-Gaussians kernels of different scales with the image:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ)
where k is the constant multiplicative factor between two adjacent scales.
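For illustration, the following is a minimal NumPy/OpenCV sketch of steps C-1 and C-2 under the usual SIFT conventions; the base scale σ0 = 1.6, the factor k and the number of scales per octave are illustrative assumptions, not values specified by the invention.

```python
import cv2
import numpy as np

def gaussian_scale_space(image, sigma0=1.6, k=2 ** (1 / 3), num_scales=5):
    """Build one octave of the Gaussian scale space L(x, y, sigma) and its DoG stack."""
    gray = image.astype(np.float32)
    # L(x, y, sigma) = I(x, y) * G(x, y, sigma) for a geometric series of scales
    L = [cv2.GaussianBlur(gray, (0, 0), sigmaX=sigma0 * (k ** i))
         for i in range(num_scales)]
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)
    D = [L[i + 1] - L[i] for i in range(num_scales - 1)]
    return L, D

# Usage: frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE); L, D = gaussian_scale_space(frame)
```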
Step D: performing extreme point detection in the scale space obtained in step C to obtain the maximum and minimum extreme points of the scale space;
To find the extreme points of the scale space, every detection point is compared with all of its neighbors. A detection point is compared with its 8 neighbors at the same scale and with the 9×2 = 18 corresponding points at the adjacent scales above and below, i.e. 26 points in total, so that maxima and minima can be detected both in scale space and in the two-dimensional image space.
Step E: removing low-contrast key points and unstable edge response points from the extreme points obtained in step D to determine the positions and scales of the key points;
The extreme points found in the discrete space are not necessarily true extreme points; this error can be reduced by fitting a curve to the DoG function in scale space to locate the extreme points more precisely. Apart from points with a low DoG response, some points with a strong response are also unstable: the DoG responds strongly to edges in the image, so a point lying on an image edge is not a stable feature point either. The positions and scales of the key points are therefore determined precisely, while low-contrast key points and unstable edge response points are removed; this strengthens the matching stability and improves noise resistance.
Step F: determining the gradient directions of the neighborhood pixels using the positions and scales of the key points from step E to obtain the key point direction parameters;
For each key point obtained in the previous step, i.e. each extremum that satisfies the conditions, a direction parameter is assigned based on the gradient direction distribution of its neighborhood pixels, so that the operator is rotation invariant.
The gradient magnitude m(x_j, y_j) of the extreme point (x_j, y_j) is calculated as
m(x_j, y_j) = sqrt( (L(x_j+1, y_j) − L(x_j−1, y_j))² + (L(x_j, y_j+1) − L(x_j, y_j−1))² )
and its gradient direction θ(x_j, y_j) as
θ(x_j, y_j) = arctan( (L(x_j, y_j+1) − L(x_j, y_j−1)) / (L(x_j+1, y_j) − L(x_j−1, y_j)) )
where L denotes the scale-space value at the scale of each key point, and x_j and y_j are respectively the abscissa and ordinate of the j-th extreme point.
During the actual calculation, samples are taken in a neighborhood window centered on the key point, and the gradient directions of the neighborhood pixels are accumulated in a histogram. The gradient histogram covers 0 to 360 degrees, with one bin every 10 degrees, 36 bins in total. The peak of the histogram represents the main direction of the neighborhood gradients at the key point and is taken as the direction of the key point.
If the gradient direction histogram contains another peak whose value reaches 80% of the energy of the main peak, that direction is taken as a secondary direction of the key point. A key point may therefore be assigned several directions, one main direction and possibly more than one secondary direction, which enhances the robustness of matching.
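A simplified sketch of the 36-bin orientation histogram described above, assuming the Gaussian layer L_layer at the key point's scale from the earlier sketch; the window radius is an illustrative assumption, and the Gaussian weighting used by full SIFT is omitted.

```python
import numpy as np

def keypoint_orientations(L_layer, y, x, radius=8, num_bins=36):
    """36-bin gradient-orientation histogram around a keypoint; returns the
    main direction and any secondary directions reaching 80% of the peak."""
    hist = np.zeros(num_bins)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 < yy < L_layer.shape[0] - 1 and 0 < xx < L_layer.shape[1] - 1:
                gx = L_layer[yy, xx + 1] - L_layer[yy, xx - 1]
                gy = L_layer[yy + 1, xx] - L_layer[yy - 1, xx]
                m = np.hypot(gx, gy)                        # gradient magnitude
                theta = np.degrees(np.arctan2(gy, gx)) % 360.0
                hist[int(theta // (360 // num_bins)) % num_bins] += m
    peak = hist.max()
    return [b * (360 // num_bins) for b in range(num_bins) if hist[b] >= 0.8 * peak]
```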
Step G: according to the key point direction parameters obtained in step F and the key point positions and scales from step E, computing an 8-direction gradient orientation histogram on each 4×4 block and accumulating the value of each gradient direction to form a seed point; each key point is composed of 2×2 = 4 seed points, and each seed point carries 8 direction vector components; this yields multiple groups of mutually matched feature point descriptors;
Each key point now carries three pieces of information: position, scale and direction. A SIFT (scale-invariant feature transform) feature region, i.e. a feature descriptor, can therefore be determined around the key point.
The coordinate axes are first rotated to the main direction of the key point to guarantee rotation invariance, and an 8×8 window centered on the key point is taken. An 8-direction gradient orientation histogram is then computed on each 4×4 block, and the accumulated value of each gradient direction forms a seed point. A key point is thus composed of 2×2 = 4 seed points, each carrying 8 direction vector components. Combining the directional information of the neighborhood in this way strengthens the noise resistance of the algorithm and provides good fault tolerance for feature matches that contain localization errors.
In practice, to strengthen the robustness of matching, it is recommended to describe each key point with 4×4 = 16 seed points, so that one key point generates 128 values, i.e. a 128-dimensional SIFT feature vector.
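Steps C through G together amount to standard SIFT feature extraction and matching, for which OpenCV provides a ready-made implementation; a minimal sketch follows, where the ratio-test threshold 0.75 is a common convention and not a value taken from the patent.

```python
import cv2

def sift_matches(frame1_gray, frame2_gray, ratio=0.75):
    """Detect 128-dimensional SIFT descriptors in both frames and keep the
    mutually matched pairs that pass Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(frame1_gray, None)
    kp2, des2 = sift.detectAndCompute(frame2_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
    return kp1, kp2, good
```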
Step H: randomly sampling the groups of mutually matched feature point descriptors obtained in step G and refining them to obtain the mutually matched feature point descriptors of the two video frames;
The invention uses the RANSAC algorithm to solve for the image transformation matrix H; the specific procedure is as follows:
Step H-1: randomly selecting 4 groups of matching points to form a random sample and computing a transformation matrix H; computing the distance d of each group of matching points and counting the number of inliers consistent with H, i.e. the number of matching points whose distance is not greater than d; after multiple rounds of sampling, selecting the transformation matrix H with the largest number of inliers and, when the numbers of inliers are equal, selecting the H whose inliers have the smallest standard deviation.
Step H-2: refining the transformation matrix H with an iterative method that minimizes a cost function using the LM (Levenberg-Marquardt) algorithm.
Step H-3: using the transformation matrix H' refined in step H-2 to define a nearby search area and further refine the matched feature point descriptors.
Step H-4: repeating steps H-2 and H-3 until the number of matched feature point descriptors is stable.
By adopting this procedure, mismatched point pairs are reduced.
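A minimal OpenCV sketch of step H, assuming the keypoints and matches from the sketch above; the reprojection threshold is an illustrative assumption, and cv2.findHomography performs the RANSAC sampling and Levenberg-Marquardt refinement internally rather than the exact H-2 to H-4 iteration described above.

```python
import cv2
import numpy as np

def ransac_homography(kp1, kp2, matches, reproj_thresh=3.0):
    """Estimate the transformation matrix H with RANSAC and keep only inlier matches."""
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC over random 4-point samples, then LM refinement over the inliers.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
    return H, inliers
```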
Step I: obtaining the final splicing result of the video splicing by using the mutually matched feature point descriptors obtained in step H and a minimum boundary distance algorithm.
After the frame registration is completed, frame fusion can be carried out; the goal is to remove the overlapping area of the input video frames and synthesize a complete video frame. If only the data of the first or the second video frame is used for the overlapping area, blurring and obvious splicing seams can be avoided; however, if the illumination of the input video frames differs greatly, the spliced image shows an obvious brightness change.
The invention therefore proposes a minimum boundary distance algorithm. The specific method is as follows:
Step I-1: extracting the object edges in each input video frame with the Sobel edge detection algorithm, thereby obtaining the edge differences of the overlapping area;
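A short OpenCV sketch of the Sobel edge extraction in step I-1; the magnitude threshold is an illustrative assumption.

```python
import cv2
import numpy as np

def sobel_edges(frame_gray, thresh=50):
    """Extract object edges in a grayscale frame with the Sobel operator."""
    gx = cv2.Sobel(frame_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(frame_gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    return magnitude > thresh   # boolean edge map of the frame
```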
Step I-2: computing the gray-level differences of all matched feature points f1(x_i, y_i) and f2(x_i, y_i) in the overlapping area of the input video frames, and averaging them:
Δf = (1/Knum)·Σ_{i=1..Knum} ( f1(x_i, y_i) − f2(x_i, y_i) )
where Δf is the average gray difference of the overlapping area of the two input frames, f1(x_i, y_i) and f2(x_i, y_i) are the gray values of the corresponding matching points in the overlapping area of the two video frames, x_i and y_i are respectively the abscissa and ordinate of the i-th matching point, and Knum is the number of matched feature point pairs in the overlapping area.
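A short sketch of step I-2, assuming the grayscale frames and the matched keypoint/inlier lists produced by the earlier sketches (frame1_gray, frame2_gray, kp1, kp2 and inliers are assumed names).

```python
import numpy as np

def average_gray_difference(frame1_gray, frame2_gray, kp1, kp2, inliers):
    """Average gray-level difference of the matched feature points (the quantity Δf)."""
    diffs = []
    for m in inliers:
        x1, y1 = map(int, kp1[m.queryIdx].pt)
        x2, y2 = map(int, kp2[m.trainIdx].pt)
        diffs.append(float(frame1_gray[y1, x1]) - float(frame2_gray[y2, x2]))
    return np.mean(diffs)
```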
Step I-3: comparing the object edges of the overlapping areas in the two video frames obtained in the step I-1 to obtain non-overlapped edges;
step I-4: computing an input video frame f1The gray values of the pixel points at two sides of the self-contained non-coincident edge are respectively compared with the input video frame f2The gray values of the pixel points at the corresponding positions are subtracted, and each obtained difference value is equal toComparing; if not, the pixel point is proved to be f1And constituting pixel points of the medium-moving object. Because most edges of moving objects are obvious and the gray values on both sides of the edge are different. The conventional weighted smoothing formula is rewritten as:
wherein, f (x)i,yi) And k is a weighting coefficient for the fused pixel points. The definition of which is different from the traditional weighted fusion weighting coefficient.
In conventional fusion methods, the weight of a pixel is a linear function of its distance ratio to the image boundary; this treats all pixels in the same way and performs well on typical samples. However, when there is a severe parallax between the video frames, the linear combination blurs the fusion result and ghosting appears over the whole overlapping area. The parameter k is therefore set to a special piecewise prior function to avoid these differences in the overlapping area of the two cameras.
In the present invention, k is defined as a nonlinear function of β and N, where β = min(x, y, |W − x|, |H − y|) is the minimum distance from the pixel to the boundary of the frame image, W and H are respectively the width and height of the frame image, and N is the width of the nonlinear transition boundary; the masked portion between the frame images is trimmed according to the fused region. The value of k remains constant in the central part of the base frame image and begins to decrease rapidly as the distance to the boundary of the other frame shrinks, and the speed of this fade is determined by N: the larger N is, the smoother and more natural the transition between the frames, at the cost of lower resolution in the central area, and vice versa.
The remaining pixels on that side are processed in turn in the same way until another edge or the boundary of the overlapping area is reached.
Step I-5: processing each pixel of the other input video frame with the same method as in step I-4; the weighted fusion formula is the same as the formula above.
Step I-6: computing the gray values of the remaining pixels of the fused video frame with the weighted average formula f(x_i, y_i) = k·f1(x_i, y_i) + (1 − k)·f2(x_i, y_i) to finally obtain the fused video, where the weighting coefficient k satisfies 0 ≤ k ≤ 1.
As shown in FIG. 2, FIG. 2(a) shows the input video frame I1 and FIG. 2(b) shows the input video frame I2. As shown in FIG. 3, when a moving object is present, the splicing result of the conventional algorithm is prone to ghosting, i.e. the moving object appears partially overlapped and partially non-overlapped in the result. As shown in FIG. 4, the method provided by the present invention uses Sobel edge detection to compute the edge contour of the object and the gray values of the pixels on its two sides; the mean gray difference of the matching points provides the gray/brightness difference of the overlapping area of the two images, indirectly reveals the change in gray value of the pixels in the region where the moving object is located, and locates the moving object, so that ghosting is effectively eliminated. The method provided by the invention effectively overcomes interference from objective factors such as illumination and ghosting, gives a better splicing effect, and is computationally simple with straightforward parameter setting.
Claims (3)
1. A video splicing method based on a minimum boundary distance, characterized by comprising the following steps:
Step A: collecting two segments of video data containing an overlapping area;
Step B: performing median filtering on the video frame data obtained in step A to obtain a Gaussian pyramid;
Step C: convolving the Gaussian pyramid obtained in step B with the image data to obtain the scale space of the video frames obtained in step A;
Step D: performing extreme point detection in the scale space obtained in step C to obtain the maximum and minimum extreme points of the scale space;
Step E: removing key points with contrast lower than 0.03 and unstable edge response points from the extreme points obtained in step D to determine the positions and scales of the key points;
Step F: determining the gradient directions of the neighborhood pixels using the positions and scales of the key points from step E to obtain the key point direction parameters;
Step G: according to the key point direction parameters obtained in step F and the key point positions and scales from step E, computing an 8-direction gradient orientation histogram on each 4×4 block and accumulating the value of each gradient direction to form a seed point, each key point being composed of 2×2 = 4 seed points and each seed point carrying 8 direction vector components, thereby obtaining a plurality of groups of mutually matched feature point descriptors;
Step H: randomly sampling the plurality of groups of mutually matched feature point descriptors obtained in step G and refining them to obtain the mutually matched feature point descriptors of the two video frames;
Step I: obtaining the final splicing result of the video splicing by using the mutually matched feature point descriptors obtained in step H and a minimum boundary distance algorithm.
2. The video splicing method based on a minimum boundary distance according to claim 1, wherein in step H, the method for refining the plurality of groups of mutually matched feature point descriptors comprises:
Step H-1: randomly selecting 4 groups of mutually matched feature point descriptors to form a random sample and computing a transformation matrix; computing the distance between the feature points of each group of matching points and counting the number of inliers consistent with the transformation matrix; after multiple rounds of sampling, selecting the transformation matrix with the largest number of inliers and, when the numbers of inliers are equal, selecting the transformation matrix whose inliers have the smallest standard deviation;
Step H-2: refining the transformation matrix with an iterative method that minimizes a cost function using the LM algorithm;
Step H-3: using the transformation matrix refined in step H-2 to define a nearby search area and further refine the matched feature point descriptors;
Step H-4: repeating steps H-2 and H-3 until the number of matched feature points is stable;
whereby mismatched point pairs are effectively reduced.
3. The video splicing method based on a minimum boundary distance according to claim 1, wherein in step I, the minimum boundary distance algorithm comprises:
Step I-1: extracting the object edges in each input video frame with a Sobel edge detection algorithm, thereby obtaining the edge differences of the overlapping area;
Step I-2: computing the gray-level differences of all matched feature points in the overlapping area of the input video frames and averaging them;
Step I-3: comparing the object edges of the overlapping areas of the two video frames obtained in step I-1 to obtain the non-coincident edges;
Step I-4: computing the gray values of the pixels on both sides of the non-coincident edges contained in one input video frame, subtracting the gray values of the pixels at the corresponding positions in the other input video frame, and comparing each difference with the average gray difference from step I-2; if a difference is inconsistent with the average, the pixel is a constituent pixel of a moving object in that input video frame; processing the remaining pixels in turn until another edge or the boundary of the overlapping area is reached;
Step I-5: processing each pixel of the other input video frame with the same method as in step I-4;
Step I-6: computing the gray values of the remaining pixels of the fused video frame with a weighted average formula to finally obtain the fused video.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510728737.XA CN105303544A (en) | 2015-10-30 | 2015-10-30 | Video splicing method based on minimum boundary distance |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510728737.XA CN105303544A (en) | 2015-10-30 | 2015-10-30 | Video splicing method based on minimum boundary distance |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105303544A true CN105303544A (en) | 2016-02-03 |
Family
ID=55200768
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510728737.XA (CN105303544A, pending) | Video splicing method based on minimum boundary distance | 2015-10-30 | 2015-10-30 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105303544A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2221764A1 (en) * | 2009-02-20 | 2010-08-25 | Samsung Electronics Co., Ltd. | Method of creating a composite image |
CN101951487A (en) * | 2010-08-19 | 2011-01-19 | 深圳大学 | Panoramic image fusion method, system and image processing equipment |
CN103593832A (en) * | 2013-09-25 | 2014-02-19 | 重庆邮电大学 | Method for image mosaic based on feature detection operator of second order difference of Gaussian |
CN104134200A (en) * | 2014-06-27 | 2014-11-05 | 河海大学 | Mobile scene image splicing method based on improved weighted fusion |
Non-Patent Citations (1)
Title |
---|
Liu Peng, Wang Min: "Moving Scene Image Stitching Based on an Improved Weighted Fusion Algorithm", 《信息技术》 (Information Technology) * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106530407A (en) * | 2016-12-14 | 2017-03-22 | 深圳市金大象文化发展有限公司 | Three-dimensional panoramic splicing method, device and system for virtual reality |
CN107301661A (en) * | 2017-07-10 | 2017-10-27 | 中国科学院遥感与数字地球研究所 | High-resolution remote sensing image method for registering based on edge point feature |
CN107301661B (en) * | 2017-07-10 | 2020-09-11 | 中国科学院遥感与数字地球研究所 | High-resolution remote sensing image registration method based on edge point features |
CN109859104A (en) * | 2019-01-19 | 2019-06-07 | 创新奇智(重庆)科技有限公司 | A kind of video generates method, computer-readable medium and the converting system of picture |
CN112163996A (en) * | 2020-09-10 | 2021-01-01 | 沈阳风驰软件股份有限公司 | Flat-angle video fusion method based on image processing |
CN112163996B (en) * | 2020-09-10 | 2023-12-05 | 沈阳风驰软件股份有限公司 | Flat angle video fusion method based on image processing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104134200B (en) | Mobile scene image splicing method based on improved weighted fusion | |
Choi et al. | Thermal image enhancement using convolutional neural network | |
Engin et al. | Cycle-dehaze: Enhanced cyclegan for single image dehazing | |
Bhat et al. | Multi-focus image fusion techniques: a survey | |
Mo et al. | Attribute filter based infrared and visible image fusion | |
Trulls et al. | Dense segmentation-aware descriptors | |
CN105809626A (en) | Self-adaption light compensation video image splicing method | |
CN112364865B (en) | Method for detecting small moving target in complex scene | |
CN106846289A (en) | A kind of infrared light intensity and polarization image fusion method based on conspicuousness migration with details classification | |
Yao et al. | Fast human detection from joint appearance and foreground feature subset covariances | |
CN109215053A (en) | Moving vehicle detection method containing halted state in a kind of unmanned plane video | |
CN105303544A (en) | Video splicing method based on minimum boundary distance | |
CN104766319A (en) | Method for improving registration precision of images photographed at night | |
CN108205657A (en) | Method, storage medium and the mobile terminal of video lens segmentation | |
Pan et al. | Depth map completion by jointly exploiting blurry color images and sparse depth maps | |
Zhao et al. | Region-and pixel-level multi-focus image fusion through convolutional neural networks | |
Hadfield et al. | Hollywood 3d: what are the best 3d features for action recognition? | |
CN106257537A (en) | A kind of spatial depth extracting method based on field information | |
Liu et al. | Depth-guided sparse structure-from-motion for movies and tv shows | |
WO2024016632A1 (en) | Bright spot location method, bright spot location apparatus, electronic device and storage medium | |
CN110120012B (en) | Video stitching method for synchronous key frame extraction based on binocular camera | |
Chen et al. | Visual depth guided image rain streaks removal via sparse coding | |
Fazlali et al. | Single image rain/snow removal using distortion type information | |
CN111127353A (en) | High-dynamic image ghost removing method based on block registration and matching | |
Furnari et al. | Generalized Sobel filters for gradient estimation of distorted images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20160203 |