US8724854B2 - Methods and apparatus for robust video stabilization
- Publication number: US8724854B2 (application US 13/301,572)
- Authority: US (United States)
- Prior art keywords: factorization, window, windows, technique, frames
- Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T2207/10016: Video; image sequence
- G06T2207/20028: Bilateral filtering
- G06T2207/30241: Trajectory
Definitions
- 2-D stabilization is widely implemented in commercial software. This approach applies 2-D motion models, such as affine or projective transforms, to each video frame. Though conventional 2-D stabilization tends to be robust and fast, the amount of stabilization it can provide is very limited because the motion model is too weak; it cannot account for the parallax induced by 3-D camera motion.
- conventional 3-D video stabilization techniques may perform much stronger stabilization, and may even simulate 3-D motions such as linear camera paths.
- a 3-D model of the scene and camera motion are reconstructed using structure-from-motion (SFM) techniques, and then novel views are rendered from a new, smooth 3-D camera path.
- a problem with 3-D stabilization is the opposite of 2-D: the motion model is too complex to compute quickly and robustly.
- SFM is a fundamentally difficult problem, and the generality of conventional solutions is limited when applied to the diverse camera motions of amateur-level video. In general, requiring 3-D reconstruction hinders the practicality of the 3-D stabilization pipeline.
- Three-dimensional video stabilization typically begins by computing a 3-D model of the input camera motion and scene.
- Image-based rendering techniques can then be used to render novel views from new camera paths for videos of static scenes.
- Dynamic scenes are more challenging, however, since blending multiple frames may cause ghosting.
- ghosting may be reduced or avoided by fitting a homography to each frame; however, this approach cannot handle parallax.
- Content-preserving warps or content-aware warps
- a content-preserving warp is content-aware in that it attempts to maintain as much as possible the original characteristics of the objects in the scene that are most likely to be noticeable to a viewer.
- the reconstructed 3-D point cloud is projected to both the input and output cameras, producing a sparse set of displacements that guide a spatially-varying warping technique.
- the robust video stabilization technique applies a feature tracking technique to the video to generate feature trajectories.
- U.S. patent application Ser. No. 12/953,703 describes a feature tracking technique that may be used in some embodiments. Note that other techniques may be used in some embodiments of the robust video stabilization technique to track features.
- the robust video stabilization technique may apply a video partitioning technique to segment the input video sequence into one or more factorization windows and one or more transition windows. At least some embodiments may use a conservative factorization approach to partition the video into overlapping windows. The transition windows may be extended to overlap adjacent windows.
- the robust video stabilization technique may smooth the trajectories in each of the windows, in sequence.
- a subspace-based optimization technique may be used to smooth the tracks while respecting the boundary constraints from the previous window.
- transition window trajectory smoothing a direct track optimization technique that uses a similarity motion model may be used.
- the robust video stabilization technique may determine and apply warping models to the frames in the video sequence.
- a warping score is determined for each frame in the video sequence, and a warping model is determined according to the warping score of the frame.
- the technique may adjust the warping score for a frame according to the scores of adjacent or nearby frames to help achieve a smoother transition between frames.
- the warping models may include a content-preserving warping model, a homography model, and a similarity transform model.
- the robust video stabilization technique may crop all of the frames to generate an output video.
- a cropping technique may be used that places all frames into respective canvases, finds maximum possible cropping windows for all frames, forms an array of the anchor points (e.g., centers) of the cropping windows, and temporally smoothes the array. The cropping windows are then adjusted according to the smoothed anchor points.
- while the various techniques described above may be used in combination in a robust video stabilization technique as described herein, these techniques may also be used, alone or in combination, in other video stabilization techniques.
- the techniques for stabilizing factorization windows may be used in the subspace video stabilization technique described in patent application Ser. No. 12/953,703.
- the technique for determining and applying warping models may be applied in the subspace video stabilization technique described in patent application Ser. No. 12/953,703 or in other video stabilization techniques to apply warping to frames.
- the cropping technique may be applied in the subspace video stabilization technique described in patent application Ser. No. 12/953,703 or in other video stabilization techniques to crop warped frames.
- FIG. 1 illustrates an input video sequence divided into two types of overlapping windows, referred to as factorization windows and transition windows, according to at least some embodiments.
- FIG. 2 is a high-level flowchart of the robust video stabilization technique, according to at least some embodiments.
- FIGS. 3A and 3B illustrate portions of a cropping technique applied to example frames from an uncropped but stabilized video, according to at least some embodiments.
- FIG. 4A shows that, on each frame, the cropping technique according to at least some embodiments first determines the scene center, the maximum possible cropping window, and the distances from the center to the four edges.
- FIG. 4B shows that, in the cropping technique according to at least some embodiments, after temporal smoothing, the scene center position is shifted, and its distances to the four edges are updated accordingly.
- FIG. 5 illustrates an example video stabilization module, and data flow and processing within the module, according to at least some embodiments.
- FIG. 6 illustrates a module that may implement video stabilization methods as illustrated in FIGS. 1 through 5 and 7 through 11 , according to at least some embodiments.
- FIG. 7 is a flowchart of a video partitioning technique according to some embodiments.
- FIG. 8 is a high-level flowchart of a factorization window stabilization technique, according to at least some embodiments.
- FIG. 9A illustrates a technique for subdividing a transition window into two types of subwindows and processing the two types of subwindows differently, according to at least some embodiments.
- FIG. 9B is a high-level flowchart of a transition window stabilization technique, according to at least some embodiments.
- FIG. 10 is a high-level flowchart of a method for determining and applying warping models, according to at least some embodiments.
- FIG. 11 is a high-level flowchart of a cropping technique, according to at least some embodiments.
- FIG. 12 illustrates an example computer system that may be used in embodiments.
- such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared or otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals or the like. It should be understood, however, that all of these or similar terms are to be associated with appropriate physical quantities and are merely convenient labels. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic computing device.
- a special purpose computer or a similar special purpose electronic computing device is capable of manipulating or transforming signals, typically represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the special purpose computer or similar special purpose electronic computing device.
- a subspace video stabilization technique is described in U.S. patent application Ser. No. 12/953,703, entitled “Methods and Apparatus for Subspace Video Stabilization,” filed Nov. 24, 2010, the content of which is incorporated by reference herein in its entirety.
- the subspace video stabilization technique described in patent application Ser. No. 12/953,703 may provide an approach to video stabilization that achieves high-quality camera motion for a relatively wide range of videos.
- the subspace video stabilization technique may transform a set of input two-dimensional (2-D) motion trajectories so that they are both smooth and resemble visually plausible views of the imaged scene; this may be achieved by enforcing subspace constraints on feature trajectories while smoothing them.
- the subspace video stabilization technique may assemble tracked features in the video into a trajectory matrix, factor the trajectory matrix into two low-rank matrices, and perform filtering or curve fitting in a low-dimensional linear space.
- the subspace video stabilization technique may employ a moving factorization technique that is both efficient and streamable to perform the factorization.
- the moving factorization technique may factor two-dimensional (2D) feature trajectories from an input video sequence into a coefficient matrix representing features in the input video sequence and basis vectors representing camera motion over time in the input video sequence.
- the coefficient matrix may describe each feature as a linear combination of two or more of the basis vectors.
- the moving factorization technique iteratively: performs factorization in a window of k frames of the input video sequence; moves the window forward δ frames; and performs factorization in the moved window.
- the parameters k and δ are positive integers, where k is greater than δ so that the factored windows overlap
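For illustration, the following sketch shows one way such a moving factorization could be organized. It is a minimal sketch, not the patent's implementation: it assumes complete tracks within each window, uses a truncated SVD for the per-window factorization, and the function names, the rank r, and the defaults k = 50 and δ = 5 are illustrative assumptions.

```python
import numpy as np

def factor_window(M, r):
    """Factor a 2n x k trajectory matrix M into C (2n x r) and E (r x k)
    via truncated SVD, so that M is approximately C @ E."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    C = U[:, :r] * s[:r]   # coefficient matrix: one 2-row block per feature
    E = Vt[:r, :]          # basis vectors ("eigen-trajectories") over the window
    return C, E

def moving_factorization(trajectories, n_frames, k=50, delta=5, r=9):
    """Slide a k-frame window forward delta frames at a time (k > delta, so
    consecutive windows overlap) and factor the tracks in each window.
    `trajectories` is a 2n x n_frames array; missing-data handling, which the
    real technique requires, is omitted here for brevity."""
    windows = []
    start = 0
    while start + k <= n_frames:
        M = trajectories[:, start:start + k]   # (x, y) rows per tracked feature
        windows.append((start,) + factor_window(M, r))
        start += delta
    return windows
```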
- the subspace video stabilization technique described in patent application Ser. No. 12/953,703 tends to work well for carefully-shot, relatively short video sequences that typically yield a relatively large number of long tracks.
- Tracks are trajectories of feature points in time across frames of the video, and may be referred to herein as tracks, feature tracks, trajectories, or feature trajectories.
- the length of a track is determined by how many frames the track crosses.
- parts of the video may contain relatively few tracks, and/or relatively short tracks, and thus the subspace video stabilization technique may not produce satisfactory results or may even fail to produce a result at all.
- the following may be limitations of the subspace video stabilization technique when applied to challenging cases:
- Embodiments of a robust video stabilization technique are described herein that may handle more challenging video sequences than can be handled by the subspace video stabilization technique described in patent application Ser. No. 12/953,703 and other conventional video stabilization techniques.
- Embodiments of the video stabilization technique as described herein are robust and efficient, and provide high quality results over a wider range of videos than previous techniques.
- the robust video stabilization techniques described herein are relatively simple and may require no special handling of the known problems in techniques employing SFM, since none of the problems change the subspace properties of motion trajectories on which embodiments of the robust video stabilization technique may rely.
- embodiments of the robust video stabilization technique may be performed in real-time or near real-time, may use linear approximations to bilinear optimizations for efficiency, and may be computed in a streaming fashion.
- the robust video stabilization technique may apply a factorization technique conservatively, and may only apply factorization to parts of an input video sequence where the factorization works well. For the rest of the input video sequence, the robust video stabilization technique may apply a different optimization technique that is more reliable under conditions where there are insufficient tracks to apply the factorization technique.
- At least some embodiments of the robust video stabilization technique may also allow the user to change the underlying motion model of the stabilization technique, so that for normal examples the robust video stabilization technique may take full advantage of a subspace video stabilization technique to generate high quality results, while for examples of poor quality the robust video stabilization technique may still manage to generate reasonable results using simpler motion models.
- the robust video stabilization technique may thus work better on more challenging videos to produce more satisfactory results, and may be more controllable, than the video stabilization technique described in patent application Ser. No. 12/953,703 and other conventional video stabilization techniques.
- an input video sequence may be divided into two types of overlapping windows, referred to as factorization windows and transition windows, as shown in FIG. 1 .
- Each window may include multiple sequential frames from the input video sequence.
- the robust video stabilization technique may decompose an input video sequence to be stabilized into the two types of windows (factorization windows 100 and transition windows 102 ) for optimization.
- FIG. 1 shows three factorization windows 100 A, 100 B, and 100 C, and two transition windows 102 A and 102 B.
- Transition window 102 A includes one or more frames that appear between the end of factorization window 100 A and factorization window 100 B.
- Transition window 102 A also includes one or more frames that overlap factorization window 100 A and one or more frames that overlap factorization window 100 B, as indicated by overlaps 104 .
- a factorization window 100 generally contains sufficiently many long feature tracks so that the factorization technique works well.
- a transition window 102 generally contains fewer long feature tracks than a factorization window 100 , and thus the factorization technique may not work as well or at all on a transition window 102 .
- the robust video stabilization technique optimizes the windows sequentially with respect to the time axis. For example, the windows in FIG. 1 may be optimized in this order: factorization window 100 A, transition window 102 A, factorization window 100 B, transition window 102 B, factorization window 100 C.
- the overlapping portions between two adjacent windows may allow the robust video stabilization technique to use the previous window to constrain the next window for temporal smoothness, since the transition from one window to the next should be smooth. While FIG. 1 shows transition windows 102 overlapping adjacent factorization windows 100 , in at least some embodiments two adjacent factorization windows 100 may overlap in some cases.
- FIG. 2 is a high-level flowchart of the robust video stabilization technique, according to at least some embodiments.
- the robust video stabilization technique applies a feature tracking technique to the video to generate feature trajectories, as indicated at 202 .
- Patent application Ser. No. 12/953,703 describes a feature tracking technique that may be used in some embodiments. Note that other techniques may be used in some embodiments of the robust video stabilization technique to track features.
- the robust video stabilization technique then performs video partitioning, as indicated at 204 , to segment the input video sequence 200 into one or more factorization windows and one or more transition windows (see FIG. 1 ). At least some embodiments may use a conservative factorization approach to partition the video into overlapping windows, as described below in the section titled Video partitioning technique. Note that other techniques may be used in some embodiments of the robust video stabilization technique to partition the video.
- the robust video stabilization technique may smooth the tracks in each of the windows, in sequence, thus alternating between factorization window track smoothing 206 and transition window track smoothing 208 as the windows are stabilized in sequence.
- a subspace-based optimization technique may be used to smooth the tracks while respecting the boundary constraints from the previous window.
- a subspace-based optimization technique that may be used at 206 to stabilize factorization windows in at least some embodiments is described below in the section titled Factorization window stabilization techniques.
- a direct track optimization technique that uses a similarity motion model may be used.
- a direct track optimization technique that may be used at 208 to stabilize transition windows in at least some embodiments is described below in the section titled Transition window stabilization techniques.
- the robust video stabilization technique may determine and apply warping models to the frames in the video sequence, as indicated at 210 .
- a technique that may be used at 210 in at least some embodiments is described below in the section titled Determining and applying warping models.
- a warping score is determined for each frame in the video sequence, and a warping model is determined according to the warping score of the frame.
- the technique may adjust the warping score for a frame according to the scores of adjacent frames to help achieve a smoother transition between frames. Note that other techniques may be used in some embodiments of the robust video stabilization technique to warp the frames.
- the robust video stabilization technique may crop all of the frames, as indicated at 212 , to generate an output video 214 .
- a technique that may be used at 212 in at least some embodiments to crop the frames is described below in the section titled Cropping technique. Note that other cropping techniques may be used in some embodiments of the robust video stabilization technique.
- elements 202 through 212 of the robust video stabilization technique as illustrated in FIG. 2 are explained in more detail below. While elements 202 through 212 are shown in FIG. 2 as being used in combination in a robust video stabilization technique as described herein, these elements may be used, alone or in combination, in other video stabilization techniques.
- the techniques for stabilizing factorization windows described in the section Factorization window stabilization techniques that may be used at 206 of FIG. 2 may be used in the subspace video stabilization technique described in patent application Ser. No. 12/953,703.
- the technique that may be used at 210 described in the section titled Determining and applying warping models may be applied in the subspace video stabilization technique described in patent application Ser. No. 12/953,703 or in other video stabilization techniques to apply warping to frames.
- the technique that may be used at 212 to crop the frames described in the section titled Cropping technique may be applied in the subspace video stabilization technique described in patent application Ser. No. 12/953,703 or in other video stabilization techniques to crop warped frames.
- the robust video stabilization technique applies a feature tracking technique to the video to generate feature trajectories.
- the robust video stabilization technique tracks multiple feature points across the frames of the input video sequence to generate feature trajectories throughout the entire video.
- a feature tracking technique is applied to find the locations of the same feature point in a sequence of two or more frames. Trajectories should run as long as possible, and as many feature points as possible should be identified and tracked.
- Kanade-Lucas-Tomasi (KLT) feature tracker technology may be used as the 2-D feature tracking technique. Other techniques may be used for 2-D feature tracking in other embodiments.
- the result of the feature tracking technique is a set of feature trajectories {T_i}.
- Each feature trajectory indicates the locations of a respective point in a contiguous series of frames.
- a feature tracking technique that may be used in some embodiments is further described in patent application Ser. No. 12/953,703. Note that other techniques may be used in some embodiments of the robust video stabilization technique to track features.
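As a concrete illustration of this stage, the sketch below builds feature trajectories with OpenCV's pyramidal Lucas-Kanade (KLT) tracker. This is an assumed implementation, not the one from the referenced application: the function name, parameter values, and the trajectory data structure are all illustrative, and the re-seeding of new features as old tracks die (which a production tracker needs) is reduced to a comment.

```python
import cv2
import numpy as np

def track_features(video_path, max_corners=500):
    """Track KLT features across a video; returns trajectories as
    {track_id: [(frame_index, x, y), ...]} over contiguous frames."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, max_corners, 0.01, 8)
    ids = list(range(len(pts)))
    tracks = {tid: [(0, *pts[i].ravel())] for i, tid in enumerate(ids)}
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        kept_pts, kept_ids = [], []
        for p, s, tid in zip(new_pts, status.ravel(), ids):
            if s:  # feature still tracked: extend its trajectory
                tracks[tid].append((frame_idx, *p.ravel()))
                kept_pts.append(p)
                kept_ids.append(tid)
        if not kept_pts:
            break  # all tracks lost; a full implementation would re-seed here
        pts, ids, prev = np.array(kept_pts), kept_ids, gray
    return tracks
```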
- the robust video stabilization technique performs video partitioning to segment the input video sequence into one or more factorization windows and one or more transition windows (see FIG. 1 ).
- a window in this context is a contiguous set of frames.
- a factorization window may be defined as a set of contiguous frames to which a moving factorization technique, such as the moving factorization technique described in patent application Ser. No. 12/953,703, can be applied.
- the frames must contain at least a minimum number of tracks (feature trajectories) for the factorization technique to be applied.
- some embodiments may employ a threshold that specifies a minimum number of tracks.
- a transition window may be defined as a window to which frames that do not qualify for factorization windows are assigned.
- a transition window generally lies between two factorization windows, and partially overlaps each adjacent factorization window (see FIG. 1 ).
- the video partitioning technique may favor factorization windows because better stabilization results may be obtained with a technique for stabilizing factorization windows, and thus as many frames as possible should be assigned to factorization windows. Therefore, a partitioning technique may be used that may attempt to find as many factorization windows as possible, with as many frames as possible being assigned to the factorization windows. The remaining frames that are not assigned to the factorization windows are assigned to transition windows.
- the following video partitioning technique may be used. Given a contiguous set of frames (e.g., an input video sequence), the technique starts at the beginning of the sequence (e.g., frame 0) and finds the first frame in the sequence at which a moving factorization algorithm can be applied (i.e., the first frame at which there are sufficient feature trajectories to apply the moving factorization algorithm). This frame will be the beginning of the first factorization window. If there are frames before the beginning of the first factorization window, the frames are assigned to a transition window. The frames in the input video sequence after the first frame of the factorization window are then sequentially checked to see if the frames can be assigned to the factorization window.
- the checking and adding of frames to the current factorization window stops.
- a window length threshold may also be applied, and the video partitioning technique may stop adding frames to the current factorization window when the threshold is reached.
- the first factorization window includes all frames from the first frame in the window to the last frame added before a terminating condition is met. If there are still frames left in the input video sequence, the video partitioning technique begins again at the first frame not already assigned to a window.
- the frames are checked until a frame that can be factorized is found (i.e., a frame that has a sufficient number of tracks), which is the start of a next factorization window. Any frames between this frame and the previous factorization window are assigned to a transition window, and frames after this frame are sequentially checked to see if the frames can be added to the current factorization window, stopping when a terminating condition is met (e.g., when a frame is found that does not qualify for factorization due to an insufficient number of trajectories, or when a window length threshold is reached).
- the transition windows may be expanded to overlap the adjacent factorization windows by one or more frames, for example by 20 frames. See FIG. 1 for an example segmentation of an input video sequence into factorization windows and transition windows that overlap adjacent factorization windows.
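A minimal sketch of this greedy partitioning follows. It reduces "qualifies for factorization" to a per-frame track-count threshold; the threshold values, the window length cap, and the function names are assumptions for illustration.

```python
def partition(track_counts, min_tracks=20, max_len=400, overlap=20):
    """Greedily split frames into factorization windows (enough tracks) and
    transition windows (the rest), then extend each transition window by
    `overlap` frames into its neighbors."""
    n = len(track_counts)
    factorization, transition = [], []
    i = 0
    while i < n:
        start = i
        while i < n and track_counts[i] < min_tracks:
            i += 1                      # frames that cannot be factorized
        if i > start:
            transition.append([start, i - 1])
        start = i
        while i < n and track_counts[i] >= min_tracks and i - start < max_len:
            i += 1                      # frames added to a factorization window
        if i > start:
            factorization.append([start, i - 1])
    # extend transition windows to overlap adjacent factorization windows
    transition = [[max(0, s - overlap), min(n - 1, e + overlap)]
                  for s, e in transition]
    return factorization, transition
```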
- a flowchart of the video partitioning technique according to some embodiments is shown in FIG. 7 .
- the technique finds the first frame that qualifies for factorization.
- a frame qualifies for factorization if there are sufficient feature trajectories (e.g., above a specified threshold) to apply the moving factorization algorithm.
- the technique assigns the first frame to a new factorization window.
- the technique assigns any frames prior to the first frame that are not in a window to a transition window.
- the next frame is checked.
- if this frame qualifies for factorization, the frame is added to the current factorization window, as indicated at 710 .
- at 712, if there are more frames to check, the technique returns to element 706 . If not, the video partitioning technique proceeds to element 716 .
- a window length threshold may be applied at 712 , and the technique may stop adding frames to the current factorization window when the threshold is reached.
- if the frame does not qualify for factorization, the technique checks to see if there are more frames to process, as indicated at 714 . If so, the technique returns to element 700 . If not, the technique proceeds to element 716 .
- the technique may overlap adjacent windows.
- in at least some embodiments, a window is extended by N frames, where N is the number of frames that overlap the adjacent window (e.g., 20 frames).
- only the transition windows are extended; factorization windows are not extended.
- the transition windows are extended to overlap the factorization windows.
- the basic video partitioning technique described above may work aggressively to assign as many frames as possible to factorization windows. However, in some cases, this basic video partitioning technique is too aggressive, as it may result in factorization windows in which there are relatively few good tracks for performing factorization. In at least some of these factorization windows, the factorization algorithm may barely succeed, and the final stabilization results may contain artifacts due to the lack of a sufficient number of good tracks. Therefore, in at least some embodiments, a more conservative video partitioning technique may be used to partition the input video sequence into factorization windows and transition windows. This conservative video partitioning technique may help to ensure that the generated factorization windows include sufficient tracks for the factorization algorithm to succeed and to produce final stabilization results with fewer or no artifacts. Using this conservative video partitioning technique, more frames may be assigned to transition windows than with the previously described aggressive technique.
- a factorization window [t_start, t_end] may be generated using the basic video partitioning technique described above; t_start represents the first frame in the window, and t_end represents the last frame in the window. For each frame in the window, a quality score may be computed as:
- the conservative video partitioning technique searches for the first frame in the window that has a quality score that is lower than a predefined threshold. If no such frame is found, then the factorization window passes the quality check. Otherwise, suppose on frame t (t_start ≤ t ≤ t_end, where ≤ indicates relative position in a temporal sequence) the quality score is lower than the threshold. The conservative video partitioning technique then truncates the factorization window to [t_start, t-1], and restarts from frame t.
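The truncate-and-restart loop could look like the sketch below. The quality score formula itself is not reproduced in the text above, so it is abstracted as a callable; the threshold value and the decision to resume the search after the failing frame are assumptions.

```python
def conservative_windows(basic_windows, quality_score, q_min=0.5):
    """Truncate each basic factorization window [t_start, t_end] at the first
    frame whose quality score falls below q_min, then restart from there.
    `quality_score(t)` stands in for the per-frame score described above."""
    result, pending = [], list(basic_windows)
    while pending:
        t_start, t_end = pending.pop(0)
        t = t_start
        while t <= t_end and quality_score(t) >= q_min:
            t += 1
        if t > t_end:
            result.append((t_start, t_end))    # window passes the quality check
            continue
        if t - 1 >= t_start:
            result.append((t_start, t - 1))    # truncated window [t_start, t-1]
        if t + 1 <= t_end:
            pending.insert(0, (t + 1, t_end))  # restart past the failing frame;
                                               # frame t falls to a transition window
    return result
```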
- the techniques may be adapted to partition an input video sequence into more than two different types of windows. More than two different video stabilization techniques may then be applied to the different types of windows.
- the video partitioning techniques are described as partitioning the video into factorization windows and transition windows for processing by different types of video stabilization techniques, the video partitioning techniques may be applied to partition a video into other or different types of windows for processing by other types of video or image processing techniques.
- the r row vectors of E may be referred to as eigen-trajectories, in that they represent the basis vectors that can be linearly combined to form a 2-D motion trajectory over the window of k frames.
- the coefficient matrix C represents each observed feature as such a linear combination.
- the technique performs temporal Gaussian smoothing directly on the matrix E, and re-multiplies with coefficient matrix C to form a new matrix of trajectories.
- this approach may have a number of problems.
- the factorization may not complete if the tracks are poor.
- strong Gaussian smoothing has a well-known “shrinkage” problem, where the ends of an open curve will shrink in; the beginning and end of sequences produced by the subspace video stabilization technique as described in patent application Ser. No. 12/953,703 may exhibit this problem.
- the eigen-trajectories are treated as unknowns in an optimization.
- a solution for these unknowns is computed as those that minimize an energy function encoding the goals defined above.
- FIG. 8 is a high-level flowchart of a factorization window stabilization technique, according to at least some embodiments.
- the two-dimensional (2-D) feature trajectories from an input video sequence are factored into a coefficient matrix representing features in the input video sequence and basis vectors representing camera motion over time in the input video sequence.
- an energy minimization technique is applied to the basis vectors to generate smoothed basis vectors.
- the energy minimization technique treats the basis vectors as unknowns in an optimization framework and computes a solution for each unknown in the optimization framework that minimizes an energy function that is the sum of a first data term that keeps the smoothed feature trajectories close in position to the original feature trajectories, a smoothness term that smoothes the feature trajectories over time, and a second data term that preserves temporal consistency between overlapping windows.
- the smoothed basis vectors are re-multiplied with the original coefficient matrix to yield a set of smoothed output trajectories.
- the energy function is a sum of a data term and smoothness term over each tracked point in each frame.
- the data term D(E) indicates that the output, smoothed trajectories should be close in position to the original trajectories: D(E) = Σ_{i,j} δ_{i,j} w_{i,j} ||C_j E_i - x_{i,j}||², where:
- δ_{i,j} is an indicator function that is 1 if the jth track exists on the ith frame and 0 otherwise
- x_{i,j} is the original tracked location of the jth track in the ith frame
- w_{i,j} is a weight on each trajectory (the computation of this weight is described later in this document)
- C_j indicates the two rows (2×r) of the matrix C that contain the coefficients for the jth track
- E_i is the column (r×1) of the matrix E that contains the eigen-trajectories at frame i. Note that the matrix E is the only unknown in this term.
- the smoothness term indicates that the smoothed trajectories should move smoothly over time.
- One method to maximize smoothness is to minimize the second derivative of the motion of each trajectory. Therefore, the smoothness term S(E) is: S(E) = Σ_i ||E_{i-1} - 2E_i + E_{i+1}||²
- a second data term is added that preserves temporal consistency between any overlapping windows (see, e.g., FIG. 1 ).
- a data term D′(E) is added:
- the weights of the smoothness term and the second data term, referred to herein as λ and γ respectively, may be user-settable parameters.
- some embodiments may use λ = 100 as the default; however, other values for λ may be used.
- the default value of γ may be 200; however, other default values for γ may be used.
- the weight w_{i,j} in equation (3) may be used to fade-in and fade-out the contribution of each trajectory over time to preserve temporal coherence.
- a technique that may be used to set this weight is described in reference to FIG. 5 of the published paper Content-Preserving Warps for 3D Video Stabilization, which appeared in ACM Transactions on Graphics 28, 3, Article No. 44, 2009, the content of which was previously incorporated by reference.
- the overall energy function is a linear least squares problem, i.e., quadratic in the unknowns E, and so can be minimized to its global minimum by solving a single sparse linear system.
- a complication may arise if there are fewer than r trajectories for a frame.
- at least as many constraints are needed as the number of variables. If there are not enough constraints (e.g., if there are fewer than r trajectories for a frame), the least squares problem is under-constrained, and there are multiple possible solutions.
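Setting that complication aside, here is a dense, didactic construction of the least squares problem described above. A real implementation would assemble a sparse system as noted; the exact form of the overlap term D′(E) is not reproduced in the text, so tying C_j E_i to the previous window's smoothed locations on overlapping frames is an assumption, as are the default weights.

```python
import numpy as np

def smooth_eigen_trajectories(C_blocks, X, W, lam=100.0, gamma=200.0,
                              X_prev=None, n_overlap=0):
    """Solve for E (r x k) minimizing
        sum_ij delta_ij w_ij ||C_j E_i - x_ij||^2        (data term D)
      + lam * sum_i ||E_{i-1} - 2 E_i + E_{i+1}||^2      (smoothness S)
      + gamma * overlap-consistency term                 (second data term D')
    C_blocks[j]: 2 x r coefficient block of track j; X[j][i]: observed (x, y)
    of track j on frame i, or None if absent; W[j][i]: per-observation weight."""
    k = len(X[0])                       # frames in this window
    r = C_blocks[0].shape[1]            # subspace rank
    rows, rhs = [], []

    def add_obs(j, i, target, weight):
        row = np.zeros((2, r * k))
        row[:, i * r:(i + 1) * r] = np.sqrt(weight) * C_blocks[j]
        rows.append(row)
        rhs.append(np.sqrt(weight) * np.asarray(target, float))

    for j in range(len(C_blocks)):
        for i in range(k):
            if X[j][i] is not None:                     # data term
                add_obs(j, i, X[j][i], W[j][i])
            if (X_prev is not None and i < n_overlap
                    and X_prev[j][i] is not None):      # overlap consistency
                add_obs(j, i, X_prev[j][i], gamma)
    I = np.sqrt(lam) * np.eye(r)
    for i in range(1, k - 1):                           # second differences
        row = np.zeros((r, r * k))
        row[:, (i - 1) * r:i * r] = I
        row[:, i * r:(i + 1) * r] = -2 * I
        row[:, (i + 1) * r:(i + 2) * r] = I
        rows.append(row)
        rhs.append(np.zeros(r))

    A, b = np.vstack(rows), np.concatenate([np.ravel(v) for v in rhs])
    e, *_ = np.linalg.lstsq(A, b, rcond=None)
    return e.reshape(k, r).T            # column i holds E_i
```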
- the general paradigm of a stabilization algorithm is to compute a set of displacements for each image; the displacements are used to compute a warp for the image.
- at least some embodiments may require the displacements for an image to follow a 2-D parametric transformation (see details below). Therefore, embodiments may only need to compute a 2-D parametric transformation for each image (i.e., each frame of the transition window).
- however, the techniques used to compute the transformations for transition windows are different from those used for factorization windows.
- a technique for computing smooth motion for transition windows is described, followed by a description of a no motion technique for transition windows.
- a technique for handling the case where there is insufficient information to process the entire transition window is described.
- a transition window may be subdivided into subwindows, with one or more subwindows each including frames that do not have a sufficient number of features for performing a 2-D parametric transformation, and one or more other subwindows each including frames that do have a sufficient number of features for performing a 2-D parametric transformation.
- the first set of subwindows that include frames with insufficient features may then be processed separately from the second set of windows to which a 2-D parametric transformation is applied.
- FIG. 9A illustrates a technique for subdividing a transition window into two types of subwindows and processing the two types of subwindows differently, according to at least some embodiments.
- let M be the number of frames in the transition window of interest.
- feature trajectories generated by the feature tracking technique are assumed. However, only the trajectories that overlap the transition window of interest need to be considered.
- let N be the number of feature trajectories that overlap the transition window of interest. From these N trajectories, the number of features on each frame in the window can be computed, and this number can be compared against a threshold. There are two possibilities.
- the first case is that there is no frame that has a sufficient number of features (e.g., greater than or equal to the threshold) for performing the 2-D parametric transformation.
- the technique simply skips the entire optimization algorithm and sets the output transformations to the 2-D identity transformation for both smooth motion and no motion cases, essentially keeping the feature trajectories as they originally were.
- the second case is that there are frames with sufficient numbers of features for performing the 2-D parametric transformation.
- the technique finds the first contiguous set of frames in which there are a sufficient number of features for performing the 2-D parametric transformation, as indicated at 900 of FIG. 9A .
- as indicated at 902 of FIG. 9A , frames before this set of frames in the transition window that do not have a sufficient number of features for performing the 2-D parametric transformation, if any, are handled by setting the output transformations to the 2-D identity transformation; no optimization is performed.
- the length of this contiguous set of frames is compared against a window length threshold. If the length is greater than the threshold, the window length is truncated to the threshold.
- a transition window stabilization technique that employs a 2-D parametric transformation is applied to the frames in this set.
- the transition window stabilization technique may be a technique for computing smooth motion for transition windows as described below, or a no motion technique for transition windows as described below.
- FIG. 9B is a high-level flowchart of a transition window stabilization technique, according to at least some embodiments.
- the transition window may be extended to overlap a previous window, which may be a factorization window or another transition window.
- a global 2-D parametric transformation may be applied to the extended transition window, for example to smooth the feature trajectories in the window.
- the global 2-D parametric transformation may be a global 2-D similarity transformation.
- the global 2-D parametric transformation may be configured to produce smooth motion, or optionally to produce no motion. Further details of these transition window stabilization techniques are described in the sections titled Smooth motion in transition windows and No motion option in transition windows.
- the transition window processing technique is restarted at the frame after the set of frames to find another contiguous set of frames in which there are a sufficient number of features. This may be repeated until all frames in the transition window have been processed, as indicated at 906 of FIG. 9A .
- although these sets of frames are each subwindows of a transition window produced by the previously described video partitioning technique, each set can be considered as a transition window that is processed by the transition window stabilization techniques as described in the following sections.
- This section describes a technique for computing smooth motion for transition windows, according to at least some embodiments.
- let M be the number of frames in a transition window.
- this transition window is extended O frames to overlap with a previous window, which can be either a factorization window or another transition window.
- let N be the number of feature trajectories that have overlaps larger than a threshold with the transition window of interest.
- let x_{i,j} denote the location of the jth feature trajectory on the ith image. It may be assumed that in the overlapping frames, for each x_{i,j}, there is a corresponding smooth feature location x̂_{i,j}, which is computed from the optimization result of the previous window.
- a feature trajectory may not span all the frames, i.e. x_{i,j} is not defined for all the combinations of i and j.
- a characteristic function δ_{i,j} may be used to denote this information; in at least some embodiments, δ_{i,j} may be set to 1 if the jth trajectory is available on the ith image, and δ_{i,j} may be set to 0 otherwise.
- for factorization windows, displacements may be based on subspace constraints, as previously described. For transition windows, instead, a technique may be used that restricts all the displacements in one image (frame) to follow a global 2-D transformation. At least some embodiments may employ 2-D similarity transformations; however, the technique may be generalized to other parametric transformations. Note that a global 2-D similarity transformation for an image is fairly strict. However, an advantage is that the transformations may be computed robustly from very few trajectories.
- all of the similarity transformations in a transition window may be computed jointly by optimizing a cost function.
- the cost function may be implemented as follows. First, the output video should be close to the input video. This may be manifested through a data cost that encourages the transformations to follow the input feature locations, including the smooth locations x̂_{i,j} in the overlapping frames:
- α_{i,j} is the weight for each term, which can vary according to both i and j.
- a method for computing the weights is discussed later in this document.
- the transformations should yield a video with smooth motion. There are several ways to encode this requirement in a cost function. For example, the following two smoothness terms may be used:
- the final cost function is a combination of the data and smoothness terms.
- This cost function is nonlinear least squares in terms of (θ_i, s_i, t_i).
- an iterative global optimization technique such as the Levenberg-Marquardt technique may be applied to perform the optimization.
- parameter initialization may be performed as follows:
- the Jacobian matrix computed in the Levenberg-Marquardt technique has a block structure and is very sparse. In at least some embodiments, this sparsity may be leveraged to implement the algorithm more efficiently.
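A sketch of this joint optimization using SciPy's Levenberg-Marquardt solver is shown below. The smoothness terms referenced above are not reproduced in the text, so second differences of the per-frame similarity parameters are used as a simple stand-in, and parameterizing scale as log s is an implementation choice of this sketch, not of the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def apply_similarity(params_i, pts):
    """Apply a 2-D similarity (theta, log_s, tx, ty) to an (N, 2) point array."""
    theta, log_s, tx, ty = params_i
    c, s = np.cos(theta), np.sin(theta)
    R = np.exp(log_s) * np.array([[c, -s], [s, c]])
    return pts @ R.T + np.array([tx, ty])

def optimize_transition(tracks, weights, lam=100.0):
    """Jointly optimize one similarity per frame so that transformed feature
    locations follow their (smooth) targets while varying smoothly over time.
    tracks[i] = (pts_i, targets_i) as (N_i, 2) arrays; weights[i] is (N_i,)."""
    M = len(tracks)

    def residuals(p):
        P = p.reshape(M, 4)
        res = []
        for i, (pts, targets) in enumerate(tracks):
            if len(pts):        # data cost: follow input / smoothed locations
                r = (apply_similarity(P[i], pts) - targets) * weights[i][:, None]
                res.append(r.ravel())
        # stand-in smoothness: penalize second differences of the parameters
        res.append(np.sqrt(lam) * (P[:-2] - 2 * P[1:-1] + P[2:]).ravel())
        return np.concatenate(res)

    p0 = np.zeros(4 * M)        # identity initialization: theta=0, scale=1, t=0
    sol = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt
    return sol.x.reshape(M, 4)
```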
- the weights α_{i,j} and β_i may be computed as follows. First, a weight w_{i,j} is computed for each point x_{i,j}. A technique that may be used to compute this weight is described in the published paper Content-Preserving Warps for 3D Video Stabilization, which appeared in ACM Transactions on Graphics 28, 3, Article No. 44, 2009, the content of which was previously incorporated by reference. The technique then counts the number of non-zero weights in an image i, denoted as n_i. The technique then finds the maximum value of n_i over all the images, denoted as n_max. The weights α_{i,j} may be given as:
- β_i is a set of per-frame weights as given below:
- λ is a user-adjustable parameter with a default value, for example 100.
- An alternative to smoothing motion is to attempt to achieve no motion at all, similar to what a camera on a tripod would see.
- this “no motion” effect may be provided as an option to the user via a user interface.
- Different techniques may be used to achieve the no motion effect for transition windows.
- a first technique that may be used in some embodiments is to simply use the same technique as given above for smooth motion, but with much larger weights for the smoothness terms.
- Another technique that may be used in some embodiments is described below.
- an iterative global optimization technique such as the Levenberg-Marquardt technique may be applied to perform the optimization.
- parameter initialization may be performed as follows:
- Δ_i is a 2-D offset computed between image i and image i+1. This offset may be computed from tracked points between the two images, for example using a relatively simple least squares algorithm as follows:
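The least squares formula referenced above did not survive extraction; under a pure-translation model, however, the least squares offset reduces to the (weighted) mean displacement of the tracked points, as in this small sketch:

```python
import numpy as np

def frame_offset(pts_i, pts_next, w=None):
    """Least-squares 2-D offset between corresponding tracked points on
    image i and image i+1: minimizing sum ||pts_next - (pts_i + delta)||^2
    gives the (weighted) mean displacement."""
    d = pts_next - pts_i
    if w is None:
        return d.mean(axis=0)
    return (d * w[:, None]).sum(axis=0) / w.sum()
```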
- the Jacobian matrix computed in the Levenberg-Marquardt technique has a block structure and is very sparse. This sparsity may be leveraged to implement the algorithm efficiently.
- S_O may be set to the identity transformation.
- ΔS_{O,O+1} may be a relative 2-D similarity transformation between frame O and frame O+1; otherwise, ΔS_{O,O+1} may be set to be the identity transformation.
- the previously mentioned relative 2-D similarity transformation may be computed by optimizing the following cost function:
- the techniques as described above generate a new, smoothed location for each tracked point in each frame, either using a subspace method applied to frames in factorization windows (described in the section titled Factorization window stabilization techniques) or a technique applied to frames in transition windows (described in the section titled Transition window stabilization techniques).
- the vector between each original track or trajectory and its smoothed location may be referred to as a displacement; the displacements indicate how to warp an input video frame so that its motion is stabilized.
- FIG. 10 is a high-level flowchart of a method for determining and applying warping models, according to at least some embodiments.
- the method may assign a warping score to each frame that indicates one of a plurality of warping models.
- the method may determine a quality metric or metrics for each frame according to the smoothed feature trajectories for the respective frame.
- the method may then adjust the warping score for each frame according to the determined quality metric(s) of the frame.
- each frame may then be warped according to one of the plurality of warping models indicated by the adjusted warping score for the respective frame. Further details of the method for determining and applying warping models are given below.
- the primary warping technique that is used may be a content-preserving warping technique.
- a content-preserving warping technique that may be used in at least some embodiments is described in U.S. patent application Ser. No. 12/276,119, entitled “Content-Aware Video Stabilization,” filed Nov. 21, 2008, the content of which is incorporated by reference herein in its entirety.
- a content-preserving warping technique that may be used in at least some embodiments is also described in the published paper Content-Preserving Warps for 3D Video Stabilization, which appeared in ACM Transactions on Graphics 28, 3, Article No. 44, 2009, the content of which was previously incorporated by reference.
- the content-preserving warping technique described in patent application Ser. No. 12/276,119 applies a homography to get a rough approximation of the overall warp.
- the content-preserving warping technique then uses known trajectories to guide a deformation of a mesh. Even though the results of such a warp may not be physically accurate, the results are generally visually plausible.
- the content-preserving warping technique may be well-suited for achieving a stable look when there are a sufficient number of high-quality displacements.
- however, when there are too few displacements, or the displacements are of low quality, the content-preserving warping technique may lead to distorted results.
- the quality of the displacements may be evaluated, and the warping technique that is applied may be scaled back conservatively if the quality of the displacements is low, for example below a specified displacement quality threshold.
- each frame may be assigned a warping score that indicates a warping method or model.
- a warping score of 4 may be used to indicate a content-preserving warp
- scores 1 through 3 may be used to indicate more restricted warps that are fit to the displacements in a least squares manner.
- a warping score of 3 may indicate a homography
- a warping score of 2 may indicate a similarity transform
- a warping score of 1 may indicate a whole-frame translation.
- each frame in a subspace window may be initially assigned a warping score of 4, and each frame in a transition window may be initially assigned a warping score of 2, since transition windows optimize similarity transforms to begin with.
- the whole-frame translation may not be included in the warping techniques.
- a warping score of 4 may be used to indicate a content-preserving warp
- a warping score of 3 may indicate a homography
- a warping score of 2 may indicate a similarity transform.
- a warping score of 3 may be used to indicate a content-preserving warp
- a warping score of 2 may indicate a homography
- a warping score of 1 may indicate a similarity transform.
- the values of the scores used to represent the warping techniques are not intended to be limiting; any scale of scores may be used.
- a warping score of 4 indicates a content-preserving warp
- a warping score of 3 indicates a homography
- a warping score of 2 indicates a similarity transform
- a series of sanity checks may be performed that might reduce the warping score for at least some frames.
- outlier displacements may be rejected by fitting a similarity transform to the set of displacements for a frame and computing the median error from the similarity transform. Any displacement whose error is more than a threshold (e.g., 4.75 times) the median error is rejected outright. Points whose errors fall within a range (e.g., between 3.0 times and 4.75 times the median error) have their weights reduced by an exponential function, for example 1 at 3.0 times the median error and nearly 0 at 4.75 times. Finally, if the median error is more than a specified percentage (e.g., 15%) of the frame width, this indicates that the displacements are fairly messy. In this case, the warping score may be reduced, for example to 2, to indicate a similarity transform.
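The sketch below implements this first sanity check under stated assumptions: the similarity transform is fit by linear least squares, and the exponential fall-off between 3.0x and 4.75x the median error is one plausible shape (the exact function is not given above).

```python
import numpy as np

def weight_displacements(src, dst, frame_width):
    """Fit a similarity x' = [a -b; b a] x + t to displacement endpoints,
    then reject/down-weight displacements by error relative to the median."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.c_[src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)]
    A[1::2] = np.c_[src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)]
    p, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
    err = np.linalg.norm((A @ p).reshape(-1, 2) - dst, axis=1)
    med = np.median(err)

    w = np.ones(n)
    w[err > 4.75 * med] = 0.0                        # reject outright
    mid = (err > 3.0 * med) & (err <= 4.75 * med)    # fade out exponentially:
    w[mid] = np.exp(-(err[mid] / med - 3.0) / 0.25)  # ~1 at 3.0x, ~0 at 4.75x
    # messy displacements: cap the warping score at 2 (similarity transform)
    score_cap = 2 if med > 0.15 * frame_width else 4
    return w, score_cap
```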
- another sanity check is performed that compares the best-fit similarity transform and best-fit homography. If these two warps are very different from each other, this indicates the homography contains significant distortions such as shearing and keystoning, which are not possible in similarity transforms.
- the technique may take the L1 distance between transform matrices of the homography and similarity transform; if this distance is greater than a threshold (e.g., 50), the warping score may be reduced, for example to 2, to indicate a similarity transform.
- the warping scores may be temporally smoothed, since the warp should not jump between models over time; this smoothing can produce non-integral warping scores, e.g., 2.5.
- a low score at one frame limits the score at nearby frames, with a linear fade-out of N frames (e.g., 30 frames) per warping score increment. For example, if frame 100 has a score of 2, frame 130 may have at most a score of 3, and frame 160 is the first frame that can have a full score of 4.
- an upside-down pyramid function may be placed at each frame, with the tip of the pyramid having that frame's warping score.
- Each frame's warping score is then set as the minimum value of the superimposed pyramids from all neighboring frames.
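This min-of-pyramids rule is compact enough to state directly; the sketch assumes the 30-frames-per-increment fade described above:

```python
import numpy as np

def smooth_warping_scores(scores, frames_per_increment=30):
    """Each frame's smoothed score is the minimum, over all frames j, of
    score[j] + |i - j| / frames_per_increment (an upside-down pyramid at j).
    E.g., with a score of 2 at frame 100, frame 130 is capped at 3 and
    frame 160 is the first frame that can reach a full score of 4."""
    scores = np.asarray(scores, float)
    idx = np.arange(len(scores))
    return np.array([np.min(scores + np.abs(idx - i) / frames_per_increment)
                     for i in idx])
```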
- non-integral warping scores may be rendered by cross-fading the warped grids between two warps.
- a warping score of 3.5 may be applied by first computing grids from both a content-preserving warp and a homography, and averaging the two results.
- a warping score of 2.8 may be applied by first computing grids from both a homography and a similarity transform, and combining the results with appropriate weighting towards the homography results.
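A sketch of the cross-fade, assuming the warped mesh grid for each integral score has already been computed (grid_by_model is an assumed structure keyed by warping score):

```python
import numpy as np

def blend_grids(score, grid_by_model):
    """Cross-fade warped grids for a non-integral warping score: 3.5 averages
    the homography (3) and content-preserving (4) grids; 2.8 weights the
    homography grid (3) at 0.8 against the similarity grid (2) at 0.2."""
    lo = int(np.floor(score))
    t = score - lo                        # fractional part = blend weight
    if t == 0:
        return grid_by_model[lo]
    return (1 - t) * grid_by_model[lo] + t * grid_by_model[lo + 1]
```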
- an uncropped video may be generated by directly rendering each warped frame onto a respective large canvas that is the union of all meshes, as shown in FIGS. 3A and 3B .
- FIGS. 3A and 3B show example frames 617 and 641 , respectively, from an uncropped but stabilized video.
- the transparent regions 300 represent the regions of the canvases that are not covered by the frames.
- a cropping technique may then be applied to remove the transparent edges on each frame and generate a final video that contains no transparent pixels.
- the cropping technique may determine the width and height of the cropping window (W_c, H_c), and its center on frame t, (x_c^t, y_c^t) (used as an anchor point), which together satisfy the following constraints for each frame:
- FIG. 11 is a high-level flowchart of a cropping technique, according to at least some embodiments.
- the technique may determine an anchor point at each warped frame according to a maximum bounding box for the respective frame.
- the technique may then temporally smooth the determined anchor points.
- the warped frames may then be cropped according to the temporally smoothed anchor points. Further details of the cropping technique are given below.
- FIG. 4A shows that, on each frame, the cropping technique first determines the scene center, the maximum possible cropping window 404 , and the distances from the center to the four edges.
- FIG. 4B shows that, after temporal smoothing, the scene center position is shifted, and its distances to the four edges are updated accordingly.
- the cropping technique may generate cropping windows that satisfy all of these constraints. The technique starts by determining the maximum possible cropping window (maximum cropping window 404 in FIG. 4A ) on each frame independently.
- the maximum cropping window 404 may be denoted as $(x_m^t,\, y_m^t,\, L_{left}^t,\, L_{right}^t,\, L_{top}^t,\, L_{bottom}^t)$, as shown in FIG. 4A, where:
- $(x_m^t, y_m^t)$ is the center of the mesh on this frame, which is also the center of the scene that the camera captures at time $t$
- $(L_{left}^t, L_{right}^t, L_{top}^t, L_{bottom}^t)$ are the lengths from the scene center to the left, right, top, and bottom edges of the maximum cropping window.
- the scene center point may be used as the origin or anchor point to decide the location of the cropping window on the current frame.
- the cropping technique collects all of the scene centers $(x_m^t, y_m^t)$ on all frames to form a point array, and temporally smoothes the array, for example using a bilateral smoothing filter.
- the filter may be applied to the X and Y coordinates separately. Using X coordinates as an example, the coordinates may be smoothed as:
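The smoothing itself may take a standard 1-D bilateral form; the kernel and normalization below are an assumed reconstruction, consistent with the parameters $\sigma_t$ and $\sigma_d$ described next:

$$\hat{x}_m^t = \frac{\sum_{s} w(t,s)\, x_m^s}{\sum_{s} w(t,s)}, \qquad w(t,s) = \exp\!\left(-\frac{(s-t)^2}{2\sigma_t^2}\right)\exp\!\left(-\frac{(x_m^s - x_m^t)^2}{2\sigma_d^2}\right)$$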
- $\sigma_t$ determines the length of the filter in terms of the number of neighboring frames
- $\sigma_d$ is the range parameter, which in some embodiments may be set at one tenth of the original video width.
- Increasing $\sigma_t$ and $\sigma_d$ allows the cropping windows to move more smoothly across time, resulting in more stable final results.
- the downside is that the final video size is often smaller with heavier smoothing. In the extreme case, if $\sigma_t$ and $\sigma_d$ are set to be extremely large, and the filter is applied a large number of times, the cropping window will not move at all, and the final cropping window is the intersection of all maximum possible windows on all frames.
- a reason for using a bilateral filter instead of a Gaussian filter is to avoid letting bad frames affect good frames.
- Bad frames may, for example, occur in examples where during a few frames the rendered image suddenly drifts away from the stable location at which the majority of frames are rendered.
- with the bilateral filter, the few bad frames will not affect the good frames, since the weights between them are low.
- a new scene center location $(\hat{x}_m^t, \hat{y}_m^t)$ has been generated for each frame.
- the technique then updates the distances to the four edges as $\hat{L}_{left}^t, \hat{L}_{right}^t, \hat{L}_{top}^t, \hat{L}_{bottom}^t$, as shown in FIG. 4B.
- the size of the cropping window may be determined by taking the minimal values of the four distances across all frames: $\hat{L}_{left}^{min}, \hat{L}_{right}^{min}, \hat{L}_{top}^{min}, \hat{L}_{bottom}^{min}$.
- the four fixed distances may be applied to the scene center to generate the final cropping window for the frame.
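A minimal sketch of this final placement step, assuming the smoothed scene centers and updated edge distances have already been computed; names and the output layout are illustrative:

```python
import numpy as np

def final_crop_windows(centers, left, right, top, bottom):
    """centers: (T, 2) smoothed scene centers; left/right/top/bottom: (T,)
    distances from each center to the maximum window's four edges."""
    l, r, t, b = left.min(), right.min(), top.min(), bottom.min()
    wc, hc = l + r, t + b                       # fixed crop size (Wc, Hc)
    return [(cx - l, cy - t, wc, hc)            # (x0, y0, width, height)
            for cx, cy in centers]
```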
- While the cropping technique described above relies on the center of the cropping window as the anchor point, other points within the cropping window may be used as the anchor point, for example the top left corner of the cropping window may be used.
- a video stabilization module may receive an input video sequence, and may perform robust video stabilization to generate a stabilized, and cropped, output video as described herein.
- the video stabilization module may in some embodiments be implemented by a non-transitory, computer-readable storage medium and one or more processors (e.g., CPUs and/or GPUs) of a computing apparatus.
- the computer-readable storage medium may store program instructions executable by the one or more processors to cause the computing apparatus to receive a video sequence as input, perform feature tracking on the sequence, partition the video sequence into factorization windows and transition windows, apply track smoothing techniques to the windows, determine and apply warping techniques to the frames in the video sequence, and crop the warped frames, as described herein.
- Other embodiments of the video stabilization module may be at least partially implemented by hardware circuitry and/or firmware stored, for example, in a non-volatile memory.
- Embodiments of the robust video stabilization technique and/or of the various techniques described as parts of the robust video stabilization technique as described herein may be implemented in software, hardware, or a combination thereof.
- embodiments of the robust video stabilization techniques may be performed by a video stabilization module implemented by program instructions stored in a computer-readable storage medium and executable by one or more processors (e.g., one or more CPUs or GPUs).
- a video stabilization module may, for example, be implemented as a stand-alone application, as a module of an application, as a plug-in for applications including image or video processing applications, and/or as a library function or functions that may be called by other applications such as image processing or video processing applications.
- Embodiments of the video stabilization module may be implemented in any image or video processing application, or more generally in any application in which video sequences may be processed.
- Example applications in which embodiments may be implemented may include, but are not limited to, Adobe® Premiere® and Adobe® After Effects®. “Adobe,” “Adobe Premiere,” and “Adobe After Effects” are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries.
- An example video stabilization module that may implement the robust video stabilization methods as described herein is illustrated in FIGS. 5 and 6 .
- An example computer system on which a video stabilization module may be implemented is illustrated in FIG. 12 .
- embodiments of the video stabilization methods as described herein may be implemented in other devices, for example in digital video cameras for video stabilization in captured video sequences, as a software module, hardware module, or a combination thereof.
- FIG. 5 illustrates an example video stabilization module 500 , and data flow and processing within the module 500 , according to at least some embodiments.
- FIG. 12 illustrates an example computer system on which embodiments of module 500 may be implemented.
- an input video sequence 550 may be obtained.
- a feature tracking technique may be applied to estimate 2-D feature trajectories 552 from the input video 550 .
- a video partitioning technique may be applied to segment the video sequence 550 into factorization windows and transition windows.
- the feature trajectories 552 may be smoothed by applying smoothing techniques to the factorization windows and transition windows, as described in the sections titled Factorization window stabilization techniques and Transition window stabilization techniques, to generate smoothed trajectories 554 .
- the input video sequence 550 may be warped with the guidance of the new, smoothed feature trajectories 554 according to the warping techniques described in the section titled Determining and applying warping models to generate as output a warped, stabilized video sequence 556.
- the frames in video sequence 556 may then be cropped according to the cropping technique described in the section titled Cropping technique.
- although FIG. 5 shows the warping technique 508 as part of the video stabilization module 500, the warping technique 508 may be implemented external to module 500, for example as a separate video image frame warping module that accepts smoothed feature trajectories 554 and input video sequence 550 as input.
- cropping technique 510 may be implemented as a separate module, or in a separate warping module.
- FIG. 6 illustrates an example video stabilization module that may implement the video stabilization methods as illustrated in FIGS. 1 through 5 and 7 through 11 .
- FIG. 12 illustrates an example computer system on which embodiments of module 600 may be implemented.
- Module 600 receives as input a video sequence 610 .
- module 600 may receive user input 612 via user interface 602 specifying one or more video stabilization parameters as previously described, for example to select between smoothing motion and no motion modes, to change the underlying motion model used in stabilization, to set parameters or weights that control the degree of smoothness or other parameters of the smoothing techniques, and so on.
- Module 600 then applies a robust video stabilization technique as described herein, according to user input 612 received via user interface 602 , if any.
- Module 600 generates as output a stabilized and cropped output video sequence 620 .
- Output video sequence 620 may, for example, be stored to a storage medium 640 , such as system memory, a disk drive, DVD, CD, etc. Output video sequence 620 may, in addition or instead, be displayed to a display device 650 . Output video sequence 620 may, in addition or instead, be provided to one or more other video processing modules 660 for further processing.
- Embodiments of a video stabilization module and/or of the video stabilization techniques as described herein may be executed on one or more computer systems, which may interact with various other devices.
- One such computer system is illustrated by FIG. 12 .
- computer system 2000 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device.
- computer system 2000 includes one or more processors 2010 coupled to a system memory 2020 via an input/output (I/O) interface 2030 .
- Computer system 2000 further includes a network interface 2040 coupled to I/O interface 2030 , and one or more input/output devices 2050 , such as cursor control device 2060 , keyboard 2070 , and display(s) 2080 .
- embodiments may be implemented using a single instance of computer system 2000 , while in other embodiments multiple such systems, or multiple nodes making up computer system 2000 , may be configured to host different portions or instances of embodiments.
- some elements may be implemented via one or more nodes of computer system 2000 that are distinct from those nodes implementing other elements.
- computer system 2000 may be a uniprocessor system including one processor 2010 , or a multiprocessor system including several processors 2010 (e.g., two, four, eight, or another suitable number).
- processors 2010 may be any suitable processor capable of executing instructions.
- processors 2010 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA.
- each of processors 2010 may commonly, but not necessarily, implement the same ISA.
- At least one processor 2010 may be a graphics processing unit.
- a graphics processing unit or GPU may be considered a dedicated graphics-rendering device for a personal computer, workstation, game console or other computing or electronic device.
- Modern GPUs may be very efficient at manipulating and displaying computer graphics, and their highly parallel structure may make them more effective than typical CPUs for a range of complex graphical algorithms.
- a graphics processor may implement a number of graphics primitive operations in a way that makes executing them much faster than drawing directly to the screen with a host central processing unit (CPU).
- the video stabilization methods disclosed herein may, at least in part, be implemented by program instructions configured for execution on one of, or parallel execution on two or more of, such GPUs.
- the GPU(s) may implement one or more application programmer interfaces (APIs) that permit programmers to invoke the functionality of the GPU(s). Suitable GPUs may be commercially available from vendors such as NVIDIA Corporation, ATI Technologies (AMD), and others.
- System memory 2020 may be configured to store program instructions and/or data accessible by processor 2010 .
- system memory 2020 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory.
- program instructions and data implementing desired functions, such as those described above for embodiments of a video stabilization module, are shown stored within system memory 2020 as program instructions 2025 and data storage 2035, respectively.
- program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory 2020 or computer system 2000 .
- a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or CD/DVD-ROM coupled to computer system 2000 via I/O interface 2030 .
- Program instructions and data stored via a computer-accessible medium may be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 2040 .
- I/O interface 2030 may be configured to coordinate I/O traffic between processor 2010 , system memory 2020 , and any peripheral devices in the device, including network interface 2040 or other peripheral interfaces, such as input/output devices 2050 .
- I/O interface 2030 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 2020 ) into a format suitable for use by another component (e.g., a processor 2010 ).
- I/O interface 2030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example.
- I/O interface 2030 may be split into two or more separate components, such as a north bridge and a south bridge, for example.
- some or all of the functionality of I/O interface 2030 such as an interface to system memory 2020 , may be incorporated directly into processor 2010 .
- Network interface 2040 may be configured to allow data to be exchanged between computer system 2000 and other devices attached to a network, such as other computer systems, or between nodes of computer system 2000 .
- network interface 2040 may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
- Input/output devices 2050 may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems 2000.
- Multiple input/output devices 2050 may be present in computer system 2000 or may be distributed on various nodes of computer system 2000 .
- similar input/output devices may be separate from computer system 2000 and may interact with one or more nodes of computer system 2000 through a wired or wireless connection, such as over network interface 2040 .
- memory 2020 may include program instructions 2025 , configured to implement embodiments of a video stabilization module as described herein, and data storage 2035 , comprising various data accessible by program instructions 2025 .
- program instructions 2025 may include software elements of embodiments of a video stabilization module as illustrated in the above Figures.
- Data storage 2035 may include data that may be used in embodiments. In other embodiments, other or different software elements and data may be included.
- computer system 2000 is merely illustrative and is not intended to limit the scope of a video stabilization module as described herein.
- the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including a computer, personal computer system, desktop computer, laptop, notebook, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a digital video camera, a set top box, a mobile device, network device, internet appliance, PDA, wireless phones, pagers, a consumer device, video game console, handheld video game device, application server, storage device, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device.
- Computer system 2000 may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system.
- the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components.
- the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available.
- instructions stored on a computer-accessible medium separate from computer system 2000 may be transmitted to computer system 2000 via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
- Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Accordingly, the present invention may be practiced with other computer system configurations.
- a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as network and/or a wireless link.
Description
- The factorization technique as described in patent application Ser. No. 12/953,703 may not always work. The factorization technique requires a relatively large number of long tracks to work well, which may not be the case when the camera is moving too fast or the scene is textureless.
- When the factorization technique fails, the subspace video stabilization technique fails completely. No method has previously been provided that allows the subspace video stabilization technique to step back and try a less aggressive approach. In other words, the subspace video stabilization technique does not fail gracefully.
- The subspace video stabilization technique applies a low-pass filter to the eigen-trajectories (also referred to as basis vectors) after factorization. However, low-pass filtering may not be sufficient in many cases, especially on enforcing boundary constraints.
where $\lambda_{t,j}$ refers to the weight for the $j$th trajectory on frame $t$, which is used to fade in and fade out the contribution of each trajectory over time to preserve temporal coherence. $\lambda_{t,j}$ is positive only when the $j$th trajectory appears on frame $t$; otherwise it is zero. A technique that may be used to set this weight is described in relation to FIG. 5 of the published paper Content-preserving warps for 3D video stabilization, which appeared in ACM Trans. Graphics 28, 3, Article No. 44, 2009, the content of which is incorporated by reference herein in its entirety.
$$M_{2n \times k} = W \odot (C_{2n \times r} E_{r \times k}) \qquad (2)$$

where $W$ is a binary mask matrix with 0 indicating missing data and 1 indicating existing data, $r$ is the chosen rank (typically 9), and $\odot$ indicates component-wise multiplication. The $r$ row vectors of $E$ may be referred to as eigen-trajectories, in that they represent the basis vectors that can be linearly combined to form a 2-D motion trajectory over the window of $k$ frames. The coefficient matrix $C$ represents each observed feature as such a linear combination. The technique performs temporal Gaussian smoothing directly on the matrix $E$, and re-multiplies with coefficient matrix $C$ to form a new matrix of trajectories.
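As a hedged illustration of this step: the sketch below factorizes with a plain SVD, smoothes the eigen-trajectories with a temporal Gaussian, and recomposes. The rank of 9 follows the text, but note that the factorization referenced above handles missing data, which a plain SVD does not:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_tracks(M, rank=9, sigma=20.0):
    """M: (2n, k) matrix of n tracks over k frames (no missing data here)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    C = U[:, :rank] * s[:rank]              # coefficients, 2n x r
    E = Vt[:rank]                           # eigen-trajectories, r x k
    E_smooth = gaussian_filter1d(E, sigma=sigma, axis=1)
    return C @ E_smooth                     # new matrix of smoothed trajectories
```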
In this equation, $\chi_{i,j}$ is an indicator function that is 1 if the $j$th track exists on the $i$th frame and 0 otherwise, $x_{i,j}$ is the original tracked location of the $j$th track in the $i$th frame, and $\lambda_{i,j}$ is a weight on each trajectory (the computation of this weight is described later in this document). $C_j$ indicates the two rows ($2 \times r$) of the matrix $C$ that contain the coefficients for the $j$th track, and $E_i$ is the column ($r \times 1$) of the matrix $E$ that contains the eigen-trajectories at frame $i$. Note that the matrix $E$ is the only unknown in this term.
Note that this data term is always zero for frames that do not overlap with other windows.
$$\xi = D(E) + \alpha D'(E) + \beta S(E) \qquad (6)$$

where $\beta$ controls the degree of smoothness. In at least some embodiments, $\beta$ and/or $\alpha$ may be user-settable parameters. In at least some embodiments, the default is $\alpha = 100$; however, other values for $\alpha$ may be used. In at least some embodiments, the default value of $\beta$ may be 200; however, other default values for $\beta$ may be used.
$$t = [t_x, t_y]^T \in \mathbb{R}^2.$$

The application of a 2-D similarity transformation on a point $x = [x, y]^T$ is given by:
$$(\theta_1 + \theta_2,\; s_1 s_2,\; S_1(t_2) + t_1) \qquad (9)$$
The inverse of a transformation is given by:
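A minimal sketch, assuming a 2-D similarity is represented as $(\theta, s, t)$ acting as $S(x) = sR(\theta)x + t$; composition matches the form of equation (9), and the inverse follows algebraically from that representation:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def apply_sim(T, x):
    theta, s, t = T
    return s * rot(theta) @ x + t

def compose(T1, T2):
    """S1 o S2: angles add, scales multiply, and the translation is the
    linear part of S1 applied to t2, plus t1 (cf. equation (9))."""
    th1, s1, t1 = T1
    th2, s2, t2 = T2
    return (th1 + th2, s1 * s2, s1 * rot(th1) @ t2 + t1)

def invert(T):
    theta, s, t = T
    return (-theta, 1.0 / s, -(1.0 / s) * rot(-theta) @ t)
```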
where $\alpha_{i,j}$ is the weight for each term, which can vary according to both $i$ and $j$. A method for computing the weights is discussed later in this document. Second, the transformations should yield a video with smooth motion. There are several ways to encode this requirement in a cost function. For example, the following two smoothness terms may be used:
where $\beta_{i,j}$ are a different set of weights and $S_i^{-1}$ is the inverse of $S_i$. Note that $\chi_{i-1,j}\chi_{i,j}\chi_{i+1,j}$ indicates that a smoothness term is active only if the corresponding feature is available on all three images.
where $\alpha_i$ is a set of weights as given below:
where $\alpha$ is a user-adjustable parameter with a default value, for example 100. The weights $\beta_{i,j}$ may be given as:

$$\beta_{i,j} = \gamma_{i,j}\,\beta \qquad (18)$$

where $\beta$ is a user-adjustable parameter with a default value, for example 100 or 200.
No Motion Option in Transition Windows
$$S_i(\hat{x}_j) = (S_i S)\left(S^{-1}(\hat{x}_j)\right). \qquad (20)$$
where $\Delta_t$ is a 2-D offset computed between image $i$ and image $i+1$. This offset may be computed from tracked points between the two images, for example using a relatively simple least squares algorithm as follows:
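Under a least squares formulation, the offset that minimizes the summed squared residuals of the tracked point pairs is simply their mean displacement; a minimal sketch (function name illustrative):

```python
import numpy as np

def frame_offset(pts_i, pts_next):
    """pts_i, pts_next: (N, 2) corresponding tracked points on images i, i+1.
    The translation minimizing sum ||pts_next - (pts_i + dt)||^2 is the mean."""
    return (pts_next - pts_i).mean(axis=0)
```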
$$S_O\, \Delta S_{O,O+1}\, S_i^{-1}, \quad i = O+1, O+2, \ldots, O+M \qquad (23)$$

where $S_i$ are the results of optimizing equation (19) and $S_O$ is the 2-D similarity transformation of the last frame in the previous window. When there is no previous window, $S_O$ may be set to the identity transformation. When there is an overlap with a previous window, $\Delta S_{O,O+1}$ may be a relative 2-D similarity transformation between frame $O$ and frame $O+1$; otherwise, $\Delta S_{O,O+1}$ may be set to the identity transformation. In at least some embodiments, this relative 2-D similarity transformation may be computed by optimizing the following cost function:
Determining and Applying Warping Models
- The cropping window should contain no transparent pixels;
- The size of the cropping window should be as large as possible, so that the final video contains as much content as possible; and
- The center (anchor point) of the cropping window should move smoothly across time to avoid introducing additional camera motion into the final video.
The maximum cropping window may be denoted as:

$$(x_m^t,\, y_m^t,\, L_{left}^t,\, L_{right}^t,\, L_{top}^t,\, L_{bottom}^t)$$

as shown in FIG. 4A, where $(x_m^t, y_m^t)$ is the center of the mesh on this frame, which is also the center of the scene that the camera captures at time $t$, and $(L_{left}^t, L_{right}^t, L_{top}^t, L_{bottom}^t)$ are the lengths from the scene center to the left, right, top, and bottom edges of the maximum cropping window. In at least some embodiments, this window may be found by a greedy algorithm which initially sets:

$$L_{left}^t = L_{right}^t = L_{top}^t = L_{bottom}^t = 0$$

and increases each length, for example by one pixel, at each iteration to gradually expand the window. If one edge of the window reaches a transparent pixel, that edge stops moving. When all edges stop, the maximum cropping window has been found. The cropping technique then collects all of the scene centers $(x_m^t, y_m^t)$ on all frames to form a point array, and temporally smoothes the array, for example using a bilateral smoothing filter. The filter may be applied to the X and Y coordinates separately. After smoothing, the distances to the four edges are updated as:

$$\hat{L}_{left}^t,\, \hat{L}_{right}^t,\, \hat{L}_{top}^t,\, \hat{L}_{bottom}^t,$$

as shown in FIG. 4B, and the size of the cropping window may be determined by taking the minimal values of the four distances across all frames:

$$\hat{L}_{left}^{min},\, \hat{L}_{right}^{min},\, \hat{L}_{top}^{min},\, \hat{L}_{bottom}^{min}.$$
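A minimal sketch of the greedy expansion described above, assuming a boolean coverage mask per warped frame (True where pixels exist); the bookkeeping is illustrative:

```python
import numpy as np

def max_cropping_window(alpha, cx, cy):
    """Grow each edge from the scene center (cx, cy) one pixel per iteration,
    freezing an edge once expanding it would cover a transparent pixel."""
    h, w = alpha.shape
    L = {"left": 0, "right": 0, "top": 0, "bottom": 0}
    frozen = {k: False for k in L}
    while not all(frozen.values()):
        for edge in ("left", "right", "top", "bottom"):
            if frozen[edge]:
                continue
            trial = dict(L)
            trial[edge] += 1
            x0, x1 = cx - trial["left"], cx + trial["right"]
            y0, y1 = cy - trial["top"], cy + trial["bottom"]
            if (0 <= x0 and x1 < w and 0 <= y0 and y1 < h
                    and alpha[y0:y1 + 1, x0:x1 + 1].all()):
                L = trial                    # expanded window is fully opaque
            else:
                frozen[edge] = True          # this edge stops moving
    return L["left"], L["right"], L["top"], L["bottom"]
```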
Claims (22)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/301,572 US8724854B2 (en) | 2011-04-08 | 2011-11-21 | Methods and apparatus for robust video stabilization |
US13/368,284 US8611602B2 (en) | 2011-04-08 | 2012-02-07 | Robust video stabilization |
US13/368,282 US8675918B2 (en) | 2011-04-08 | 2012-02-07 | Methods and apparatus for robust video stabilization |
US13/368,279 US8929610B2 (en) | 2011-04-08 | 2012-02-07 | Methods and apparatus for robust video stabilization |
US13/367,994 US8885880B2 (en) | 2011-04-08 | 2012-02-07 | Robust video stabilization |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161473354P | 2011-04-08 | 2011-04-08 | |
US13/301,572 US8724854B2 (en) | 2011-04-08 | 2011-11-21 | Methods and apparatus for robust video stabilization |
Related Child Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/368,279 Continuation US8929610B2 (en) | 2011-04-08 | 2012-02-07 | Methods and apparatus for robust video stabilization |
US13/367,994 Continuation US8885880B2 (en) | 2011-04-08 | 2012-02-07 | Robust video stabilization |
US13/368,282 Continuation US8675918B2 (en) | 2011-04-08 | 2012-02-07 | Methods and apparatus for robust video stabilization |
US13/368,284 Continuation US8611602B2 (en) | 2011-04-08 | 2012-02-07 | Robust video stabilization |
Publications (2)
Publication Number | Publication Date |
---|---|
US20130128066A1 US20130128066A1 (en) | 2013-05-23 |
US8724854B2 true US8724854B2 (en) | 2014-05-13 |
Family
ID=48426466
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/301,572 Active 2032-05-28 US8724854B2 (en) | 2011-04-08 | 2011-11-21 | Methods and apparatus for robust video stabilization |
US13/368,282 Active 2032-02-25 US8675918B2 (en) | 2011-04-08 | 2012-02-07 | Methods and apparatus for robust video stabilization |
US13/368,284 Active US8611602B2 (en) | 2011-04-08 | 2012-02-07 | Robust video stabilization |
US13/368,279 Active 2032-11-21 US8929610B2 (en) | 2011-04-08 | 2012-02-07 | Methods and apparatus for robust video stabilization |
US13/367,994 Active 2032-10-18 US8885880B2 (en) | 2011-04-08 | 2012-02-07 | Robust video stabilization |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/368,282 Active 2032-02-25 US8675918B2 (en) | 2011-04-08 | 2012-02-07 | Methods and apparatus for robust video stabilization |
US13/368,284 Active US8611602B2 (en) | 2011-04-08 | 2012-02-07 | Robust video stabilization |
US13/368,279 Active 2032-11-21 US8929610B2 (en) | 2011-04-08 | 2012-02-07 | Methods and apparatus for robust video stabilization |
US13/367,994 Active 2032-10-18 US8885880B2 (en) | 2011-04-08 | 2012-02-07 | Robust video stabilization |
Country Status (1)
Country | Link |
---|---|
US (5) | US8724854B2 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8885880B2 (en) | 2011-04-08 | 2014-11-11 | Adobe Systems Incorporated | Robust video stabilization |
US20140351185A1 (en) * | 2012-05-23 | 2014-11-27 | Amazon Technologies, Inc. | Machine learning memory management and distributed rule evaluation |
US9525821B2 (en) | 2015-03-09 | 2016-12-20 | Microsoft Technology Licensing, Llc | Video stabilization |
US9530215B2 (en) | 2015-03-20 | 2016-12-27 | Qualcomm Incorporated | Systems and methods for enhanced depth map retrieval for moving objects using active sensing technology |
US9635339B2 (en) | 2015-08-14 | 2017-04-25 | Qualcomm Incorporated | Memory-efficient coded light error correction |
US9846943B2 (en) | 2015-08-31 | 2017-12-19 | Qualcomm Incorporated | Code domain power control for structured light |
US9948920B2 (en) | 2015-02-27 | 2018-04-17 | Qualcomm Incorporated | Systems and methods for error correction in structured light |
US10068338B2 (en) | 2015-03-12 | 2018-09-04 | Qualcomm Incorporated | Active sensing spatial resolution improvement through multiple receivers and code reuse |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8797414B2 (en) * | 2010-12-23 | 2014-08-05 | Samsung Electronics Co., Ltd. | Digital image stabilization device |
TWI469062B (en) * | 2011-11-11 | 2015-01-11 | Ind Tech Res Inst | Image stabilization method and image stabilization device |
US8810666B2 (en) * | 2012-01-16 | 2014-08-19 | Google Inc. | Methods and systems for processing a video for stabilization using dynamic crop |
US9147226B2 (en) * | 2012-09-10 | 2015-09-29 | Nokia Technologies Oy | Method, apparatus and computer program product for processing of images |
TWI606418B (en) * | 2012-09-28 | 2017-11-21 | 輝達公司 | Computer system and method for gpu driver-generated interpolated frames |
CN104823444A (en) * | 2012-11-12 | 2015-08-05 | 行为识别系统公司 | Image stabilization techniques for video surveillance systems |
US9374532B2 (en) | 2013-03-15 | 2016-06-21 | Google Inc. | Cascaded camera motion estimation, rolling shutter detection, and camera shake detection for video stabilization |
CA2849563A1 (en) * | 2013-04-22 | 2014-10-22 | Martin Julien | Live panning system and method |
US9466092B2 (en) | 2013-11-27 | 2016-10-11 | Microsoft Technology Licensing, Llc | Content-aware image rotation |
CN103826102B (en) * | 2014-02-24 | 2018-03-30 | 深圳市华宝电子科技有限公司 | A kind of recognition methods of moving target, device |
US9854168B2 (en) | 2014-03-07 | 2017-12-26 | Futurewei Technologies, Inc. | One-pass video stabilization |
US10341561B2 (en) * | 2015-09-11 | 2019-07-02 | Facebook, Inc. | Distributed image stabilization |
US10506235B2 (en) | 2015-09-11 | 2019-12-10 | Facebook, Inc. | Distributed control of video encoding speeds |
US10499070B2 (en) | 2015-09-11 | 2019-12-03 | Facebook, Inc. | Key frame placement for distributed video encoding |
US10063872B2 (en) | 2015-09-11 | 2018-08-28 | Facebook, Inc. | Segment based encoding of video |
US10375156B2 (en) | 2015-09-11 | 2019-08-06 | Facebook, Inc. | Using worker nodes in a distributed video encoding system |
US10602153B2 (en) | 2015-09-11 | 2020-03-24 | Facebook, Inc. | Ultra-high video compression |
US10602157B2 (en) | 2015-09-11 | 2020-03-24 | Facebook, Inc. | Variable bitrate control for distributed video encoding |
GB2544786A (en) * | 2015-11-27 | 2017-05-31 | Univ Of East Anglia | Method and system for generating an output image from a plurality of corresponding input image channels |
CN106204458B (en) * | 2016-07-12 | 2019-04-23 | 北京理工大学 | A kind of Video Stabilization cutting control method based on the constraint of kinematic geometry amount |
AU2016231661A1 (en) * | 2016-09-27 | 2018-04-12 | Canon Kabushiki Kaisha | Method, system and apparatus for selecting a video frame |
CN110307791B (en) * | 2019-06-13 | 2020-12-29 | 东南大学 | Vehicle length and speed calculation method based on three-dimensional vehicle boundary frame |
US11423654B2 (en) * | 2019-10-01 | 2022-08-23 | Adobe Inc. | Identification of continuity errors in video by automatically detecting visual inconsistencies in video frames |
US20210136135A1 (en) * | 2019-10-31 | 2021-05-06 | Sony Interactive Entertainment Inc. | Image stabilization cues for accessible game stream viewing |
CN113313188B (en) * | 2021-06-10 | 2022-04-12 | 四川大学 | Cross-modal fusion target tracking method |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050275727A1 (en) | 2004-06-15 | 2005-12-15 | Shang-Hong Lai | Video stabilization method |
US20070009034A1 (en) | 2005-07-05 | 2007-01-11 | Jarno Tulkki | Apparatuses, computer program product, and method for digital image processing |
US20080016469A1 (en) * | 2001-12-28 | 2008-01-17 | Jong Yeul Suh | Apparatus and method for generating thumbnail images |
US20080112642A1 (en) | 2006-11-14 | 2008-05-15 | Microsoft Corporation | Video Completion By Motion Field Transfer |
US20080165280A1 (en) * | 2007-01-05 | 2008-07-10 | Deever Aaron T | Digital video stabilization with manual control |
US7548256B2 (en) * | 2003-10-18 | 2009-06-16 | Hewlett-Packard Development Company, L.P. | Image processing scheme |
US20090214078A1 (en) | 2008-02-26 | 2009-08-27 | Chia-Chen Kuo | Method for Handling Static Text and Logos in Stabilized Images |
US20090278921A1 (en) | 2008-05-12 | 2009-11-12 | Capso Vision, Inc. | Image Stabilization of Video Play Back |
US20090295930A1 (en) * | 2008-06-02 | 2009-12-03 | Micron Technology Inc. | Method and apparatus providing motion smoothing in a video stabilization system |
US20100046624A1 (en) | 2008-08-20 | 2010-02-25 | Texas Instruments Incorporated | Method and apparatus for translation motion stabilization |
US20100053347A1 (en) | 2008-08-28 | 2010-03-04 | Agarwala Aseem O | Content-Aware Video Stabilization |
US20100208086A1 (en) | 2009-02-19 | 2010-08-19 | Texas Instruments Incorporated | Reduced-memory video stabilization |
US20110085049A1 (en) | 2009-10-14 | 2011-04-14 | Zoran Corporation | Method and apparatus for image stabilization |
US20110150093A1 (en) | 2009-12-22 | 2011-06-23 | Stephen Mangiat | Methods and apparatus for completion of video stabilization |
US20120120264A1 (en) | 2010-11-12 | 2012-05-17 | Samsung Electronics Co., Ltd. | Method and apparatus for video stabilization by compensating for view direction of camera |
US20120162454A1 (en) | 2010-12-23 | 2012-06-28 | Samsung Electronics Co., Ltd. | Digital image stabilization device and method |
US20130002814A1 (en) | 2011-06-30 | 2013-01-03 | Minwoo Park | Method for automatically improving stereo images |
US20130120600A1 (en) * | 2010-09-14 | 2013-05-16 | Hailin Jin | Methods and Apparatus for Subspace Video Stabilization |
US20130128064A1 (en) * | 2011-04-08 | 2013-05-23 | Hailin Jin | Methods and Apparatus for Robust Video Stabilization |
US20130148738A1 (en) | 2010-08-31 | 2013-06-13 | St-Ericsson Sa | Global Motion Vector Estimation |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6636220B1 (en) | 2000-01-05 | 2003-10-21 | Microsoft Corporation | Video-based rendering |
US8054335B2 (en) | 2007-12-20 | 2011-11-08 | Aptina Imaging Corporation | Methods and system for digitally stabilizing video captured from rolling shutter cameras |
US8929655B2 (en) | 2008-10-16 | 2015-01-06 | Nikon Corporation | Image evaluation apparatus and camera |
US8179446B2 (en) | 2010-01-18 | 2012-05-15 | Texas Instruments Incorporated | Video stabilization and reduction of rolling shutter distortion |
US9071851B2 (en) | 2011-01-10 | 2015-06-30 | Qualcomm Incorporated | Adaptively performing smoothing operations |
2011
- 2011-11-21 US US13/301,572 patent/US8724854B2/en active Active
2012
- 2012-02-07 US US13/368,282 patent/US8675918B2/en active Active
- 2012-02-07 US US13/368,284 patent/US8611602B2/en active Active
- 2012-02-07 US US13/368,279 patent/US8929610B2/en active Active
- 2012-02-07 US US13/367,994 patent/US8885880B2/en active Active
Patent Citations (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080016469A1 (en) * | 2001-12-28 | 2008-01-17 | Jong Yeul Suh | Apparatus and method for generating thumbnail images |
US7548256B2 (en) * | 2003-10-18 | 2009-06-16 | Hewlett-Packard Development Company, L.P. | Image processing scheme |
US20050275727A1 (en) | 2004-06-15 | 2005-12-15 | Shang-Hong Lai | Video stabilization method |
US20070009034A1 (en) | 2005-07-05 | 2007-01-11 | Jarno Tulkki | Apparatuses, computer program product, and method for digital image processing |
US20080112642A1 (en) | 2006-11-14 | 2008-05-15 | Microsoft Corporation | Video Completion By Motion Field Transfer |
US20080165280A1 (en) * | 2007-01-05 | 2008-07-10 | Deever Aaron T | Digital video stabilization with manual control |
US20090214078A1 (en) | 2008-02-26 | 2009-08-27 | Chia-Chen Kuo | Method for Handling Static Text and Logos in Stabilized Images |
US20090278921A1 (en) | 2008-05-12 | 2009-11-12 | Capso Vision, Inc. | Image Stabilization of Video Play Back |
US20090295930A1 (en) * | 2008-06-02 | 2009-12-03 | Micron Technology Inc. | Method and apparatus providing motion smoothing in a video stabilization system |
US20100046624A1 (en) | 2008-08-20 | 2010-02-25 | Texas Instruments Incorporated | Method and apparatus for translation motion stabilization |
US20100053347A1 (en) | 2008-08-28 | 2010-03-04 | Agarwala Aseem O | Content-Aware Video Stabilization |
US8102428B2 (en) | 2008-08-28 | 2012-01-24 | Adobe Systems Incorporated | Content-aware video stabilization |
US20100208086A1 (en) | 2009-02-19 | 2010-08-19 | Texas Instruments Incorporated | Reduced-memory video stabilization |
US20110085049A1 (en) | 2009-10-14 | 2011-04-14 | Zoran Corporation | Method and apparatus for image stabilization |
US20110150093A1 (en) | 2009-12-22 | 2011-06-23 | Stephen Mangiat | Methods and apparatus for completion of video stabilization |
US20130148738A1 (en) | 2010-08-31 | 2013-06-13 | St-Ericsson Sa | Global Motion Vector Estimation |
US20130120600A1 (en) * | 2010-09-14 | 2013-05-16 | Hailin Jin | Methods and Apparatus for Subspace Video Stabilization |
US20120120264A1 (en) | 2010-11-12 | 2012-05-17 | Samsung Electronics Co., Ltd. | Method and apparatus for video stabilization by compensating for view direction of camera |
US20120162454A1 (en) | 2010-12-23 | 2012-06-28 | Samsung Electronics Co., Ltd. | Digital image stabilization device and method |
US20120162452A1 (en) | 2010-12-23 | 2012-06-28 | Erwin Sai Ki Liu | Digital image stabilization method with adaptive filtering |
US20130128064A1 (en) * | 2011-04-08 | 2013-05-23 | Hailin Jin | Methods and Apparatus for Robust Video Stabilization |
US20130128063A1 (en) * | 2011-04-08 | 2013-05-23 | Hailin Jin | Methods and Apparatus for Robust Video Stabilization |
US20130128062A1 (en) | 2011-04-08 | 2013-05-23 | Hailin Jin | Methods and Apparatus for Robust Video Stabilization |
US20130128065A1 (en) | 2011-04-08 | 2013-05-23 | Hailin Jin | Methods and Apparatus for Robust Video Stabilization |
US8611602B2 (en) | 2011-04-08 | 2013-12-17 | Adobe Systems Incorporated | Robust video stabilization |
US8675918B2 (en) | 2011-04-08 | 2014-03-18 | Adobe Systems Incorporated | Methods and apparatus for robust video stabilization |
US20130002814A1 (en) | 2011-06-30 | 2013-01-03 | Minwoo Park | Method for automatically improving stereo images |
Non-Patent Citations (53)
Title |
---|
"Corrected Notice of Allowance", U.S. Appl. No. 13/368,282, Feb. 5, 2014, 2 pages. |
"Corrected Notice of Allowance", U.S. Appl. No. 13/368,284, (Nov. 19, 2013), 2 pages. |
"Corrected Notice of Allowance", U.S. Appl. No. 13/368,284, (Sep. 20, 2013), 2 pages. |
"Non-Final Office Action", U.S Appl. No. 13/368,284, (Mar. 11, 2013), 6 pages. |
"Non-Final Office Action", U.S. Appl. No. 13/368,282, (Oct. 3, 2013), 10 pages. |
"Notice of Allowance", U.S. Appl. No. 13/368,282, (Dec. 23, 2013), 7 pages. |
"Notice of Allowance", U.S. Appl. No. 13/368,284, (Aug. 6, 2013), 8 pages. |
Agarwala, A., Dontcheva, M., Agrawala, M., Drucker, S., Colburn, A., Curless, B., Salesin, D., and Cohen, M. 2004. Interactive digital photomontage. ACM Transactions on Graphics (TOG) (Jan). |
Agarwala, A., Hertzmann, A., Salesin, D., and Seitz, S. 2004. Keyframe-based tracking for rotoscoping and animation. SIGGRAPH '04: SIGGRAPH 2004 Papers (Aug). |
Avidan, et al. "Seam Carving for Content-Aware Image Resizing" ACM Transactions on Graphics 2007. |
Barnes, C., Shechtman, E., Finkelstein, A., and Goldman, D. 2009. Patchmatch: a randomized correspondence algorithm for structural image editing. ACM Transactions on Graphics 28, 3, 2. |
Bhat, P., Zitnick, C. L., Snavely, N., Agarwala, A., Agrawala, M., Cohen, M., Curless, B., and Kang, S. B. 2007. Using photographs to enhance videos of a static scene. In Rendering Techniques 2007: 18th Eurographics Workshop on Rendering, 327-338. |
Brand, M. 2002. Incremental singular value decomposition of uncertain data with missing values. In 7th European Conference on Computer Vision (ECCV 2002), 707-720. |
Bruce D. Lucas and Takeo Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. International Joint Conference on Artificial Intelligence, pp. 674-679, 1981. |
Buchanan, A. M., and Fitzgibbon, A. 2005. Damped Newton algorithms for matrix factorization with missing data. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 316-322. |
Buehler, C., Bosse, M., and McMillan, L. 2001. Non-metric image-based rendering for video stabilization. In 2001 Conference on Computer Vision and Pattern Recognition (CVPR 2001), 609-614. |
Chen, B.-Y., Lee, K.-Y., Huang, W.-T., and Lin, J.-S. 2008. Capturing intention-based full-frame video stabilization. Computer Graphics Forum 27, 7, 1805-1814. |
Chen, P. 2008. Optimization algorithms on subspaces: Revisiting missing data problem in low-rank matrix. Int. J. Comput. Vision 80, 1, 125-142. |
Davison, A. J., Reid, I. D., Molton, N. D., and Stasse, O. 2007. MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence 26, 6, 1052-1067. |
Feng Liu, Michael Gleicher, Hailin Jin, Aseem Agarwala. Content-preserving warps for 3D video stabilization. ACM Trans Graphics 28, 3, Article No. 44, 2009, 9 pages. |
Feng Liu, Michael Gleicher, Jue Wang, Hailin Jin and Aseem Agarwala. Subspace Video Stabilization. ACM Trans Graphics, 30, 1, Article No. 4, 2011, 10 pages. |
Fitzgibbon, A., Wexler, Y., and Zisserman, A. 2005. Image-based rendering using image-based priors. International Journal of Computer Vision 63, 2 (July), 141-151. |
Gal, et al. "Feature-aware Texturing" School of Computer Science, Tel Aviv University, Israel; Draft version. The original paper appeared in EGSR '06 proceedings. |
Goh, A., and Vidal, R. 2007. Segmenting motions of different types by unsupervised manifold clustering. In IEEE Computer Vision and Pattern Recognition (CVPR), 1-6. |
Igarashi, et al. "As-Rigid-As-Possible Shape Manipulation" ACM Transactions on Graphics 2005. |
Irani, M. 2002. Multi-frame correspondence estimation using subspace constraints. International Journal of Computer Vision 48, 1, 39-51. |
Jain, V., and Narayanan, P. 2006. Video completion for indoor scenes. Computer Vision, Graphics and Image Processing: 5th Indian Conference, Icvgip 2006, Madurai, India, Dec. 13-16, 2006, Proceedings, 409. |
Jia, Y., Hu, S., and Martin, R. 2005. Video completion using tracking and fragment merging. The Visual Computer 21, 8, 601-610. |
Jianbo Shi and Carlo Tomasi. Good Features to Track. IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600, 1994. |
Ke, Q., and Kanade, T. 2001. A subspace approach to layer extraction. IEEE Computer Society Conference on Computer Vision and Pattern Recognition 1. |
Kokaram, A., Collis, B., and Robinson, S. 2003. A Bayesian framework for recursive object removal in movie post production. IEEE International Conference on Image Processing, Barcelona. |
Lee, J., and Shin, S. Y. 2002. General construction of time domain filters for orientation data. IEEE Transactions on Visualization and Computer Graphics 8, 2 (April-June), 119-128. |
Lee, K.-Y., Chuang, Y.-Y., Chen, B.-Y., and Ouhyoung, M. 2009. Video stabilization using robust feature trajectories. In IEEE ICCV. |
Liu, F., Gleicher, M., Jin, H., and Agarwala, A. 2009. Content-preserving warps for 3d video stabilization. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2009) 28, 3 (August), Article No. 44. |
Lucas, B., and Kanade, T. 1981. An iterative image registration technique with an application to stereo vision. International joint conference on artificial Intelligence (IJCIA) . . . (Jan). |
Matsushita, Y., Ofek, E., Ge, W., Tang, X., and Shum, H.-Y. 2006. Full-frame video stabilization with motion inpainting. IEEE Transactions on Pattern Analysis andMachine Intelligence 28, 7, 1150-1163. |
Morimoto, C., and Chellappa, R. 1997. Evaluation of image stabilization algorithms. In DARPA Image Understanding Workshop DARPA97, 295-302. |
Nister, D., Naroditsky, O., and Bergen, J. 2004. Visual odometry. In IEEE Computer Vision and Pattern Recognition (CVPR), 652-659. |
Perez, P., Gangnet, M., and Blake, A. 2003. Poisson image editing. ACM Transactions on Graphics (Jan). |
Shi, J., and Tomasi, C. 1994. Good features to track. In IEEE Conference on Computer Vision and Pattern Recognition, 593-600. |
Shih, T., Tang, N., Yeh, W., Chen, T., and Lee, W. 2006. Video inpainting and implant via diversified temporal continuations. Proceedings of the 14th annual ACM international conference on Multimedia, 136. |
Shiratori, T., Matsushita, Y., Kang, S.B., and Tang, X. 2006. Video completion by motion field transfer. Proceedings of CVPR 2006 (Jan). |
Singular value decomposition, downloaded from http://en.wikipedia.org/wiki/Singualr-value-decomposition[Nov. 29, 2010 10:47:04 AM] on Nov. 29, 2010, 14 pages. |
Sinha, S., Frahm, J.-M., Pollefeys, M., and Genc, Y. 2006. Gpu-based video feature tracking and matching. In Workshop on Edge Computing Using New Commodity Architectures (EDGE 2006). |
Tomasi, C., and Kanade, T. 1992. Shape and motion from image streams under orthography: a factorization method. Int. J. Comput. Vision 9, 2, 137-154. |
Tomasi, et al., "Shape and Motion form Image Streams under Orthography: a Factorization Method", International Journal of Computer Vision, 9:2, (Nov. 1992), pp. 137-154. |
Torr, P. H. S., Fitzgibbon, A. W., and Zisserman, A. 1999. The problem of degeneracy in structure and motion recovery from uncalibrated image sequences. International Journal of Computer Vision 32, 1, 27-44. |
U.S. Appl. No. 12/953,703, filed Nov. 24, 2010, Adobe Systems Incorporated, all pages. |
U.S. Appl. No. 12/954,445, filed Nov. 24, 2010, Adobe Systems Incorporated, all pages. |
U.S. Appl. No. 13/301,572, filed Nov. 21, 2011, 68 pages. |
Vidal, R., Tron, R., and Hartley, R. 2008. Multiframe motion segmentation with missing data using power factorization and gpca. Int. J. Comput. Vision 79, 1, 85-105. |
Wexler, Y., Shechtman, E., and Irani, M. 2007. Space-time completion of video. IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (Dec), 1-14. |
Zhang, G., Hua, W., Qin, X., Shao, Y., and Bao, H. 2009. Video stabilization based on a 3d perspective camera model. The Visual Computer 25, 11, 997-1008. |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8885880B2 (en) | 2011-04-08 | 2014-11-11 | Adobe Systems Incorporated | Robust video stabilization |
US8929610B2 (en) | 2011-04-08 | 2015-01-06 | Adobe Systems Incorporated | Methods and apparatus for robust video stabilization |
US20140351185A1 (en) * | 2012-05-23 | 2014-11-27 | Amazon Technologies, Inc. | Machine learning memory management and distributed rule evaluation |
US9235814B2 (en) * | 2012-05-23 | 2016-01-12 | Amazon Technologies, Inc. | Machine learning memory management and distributed rule evaluation |
US9948920B2 (en) | 2015-02-27 | 2018-04-17 | Qualcomm Incorporated | Systems and methods for error correction in structured light |
US9525821B2 (en) | 2015-03-09 | 2016-12-20 | Microsoft Technology Licensing, Llc | Video stabilization |
US10068338B2 (en) | 2015-03-12 | 2018-09-04 | Qualcomm Incorporated | Active sensing spatial resolution improvement through multiple receivers and code reuse |
US9530215B2 (en) | 2015-03-20 | 2016-12-27 | Qualcomm Incorporated | Systems and methods for enhanced depth map retrieval for moving objects using active sensing technology |
US9635339B2 (en) | 2015-08-14 | 2017-04-25 | Qualcomm Incorporated | Memory-efficient coded light error correction |
US9846943B2 (en) | 2015-08-31 | 2017-12-19 | Qualcomm Incorporated | Code domain power control for structured light |
US10223801B2 (en) | 2015-08-31 | 2019-03-05 | Qualcomm Incorporated | Code domain power control for structured light |
Also Published As
Publication number | Publication date |
---|---|
US8885880B2 (en) | 2014-11-11 |
US20130128066A1 (en) | 2013-05-23 |
US8929610B2 (en) | 2015-01-06 |
US8611602B2 (en) | 2013-12-17 |
US20130128064A1 (en) | 2013-05-23 |
US20130128063A1 (en) | 2013-05-23 |
US20130128062A1 (en) | 2013-05-23 |
US20130128065A1 (en) | 2013-05-23 |
US8675918B2 (en) | 2014-03-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8724854B2 (en) | Methods and apparatus for robust video stabilization | |
US9013634B2 (en) | Methods and apparatus for video completion | |
Yu et al. | Robust video stabilization by optimization in cnn weight space | |
US8872928B2 (en) | Methods and apparatus for subspace video stabilization | |
US8897562B2 (en) | Adaptive trimap propagation for video matting | |
EP2633682B1 (en) | Methods and systems for processing a video for stabilization and retargeting | |
US8792718B2 (en) | Temporal matte filter for video matting | |
US10778949B2 (en) | Robust video-based camera rotation estimation | |
US8428390B2 (en) | Generating sharp images, panoramas, and videos from motion-blurred videos | |
US9367922B2 (en) | High accuracy monocular moving object localization | |
Wang et al. | Video stabilization: A comprehensive survey | |
CN102216957A (en) | Visual tracking of objects in images, and segmentation of images | |
US20180005039A1 (en) | Method and apparatus for generating an initial superpixel label map for an image | |
JP3557982B2 (en) | Optical flow estimation method | |
US8320620B1 (en) | Methods and apparatus for robust rigid and non-rigid motion tracking | |
WO2017154045A1 (en) | 3d motion estimation device, 3d motion estimation method, and program | |
Favorskaya et al. | Warping techniques in video stabilization | |
Glantz et al. | Automatic MPEG-4 sprite coding—Comparison of integrated object segmentation algorithms | |
Yu | Robust Selfie and General Video Stabilization | |
KR101192162B1 (en) | Method and apparatus for robust object tracking by combining histogram-wise and pixel-wise matching approaches | |
ZHANG et al. | 3D Scene Reconstruction from RGB-Depth Images | |
Scholz et al. | Editing object behaviour in video sequences | |
Al-Anizy | Super Resolution Image from Low Resolution of Sequenced Frames-Text Image and Image-Based on POCS | |
Sandeep | Full Frame Video Stabilization Using Motion Inpainting |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ADOBE SYSTEMS INCORPORATED, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JIN, HAILIN;AGARWALA, ASEEM O.;WANG, JUE;SIGNING DATES FROM 20111109 TO 20111118;REEL/FRAME:027257/0356 |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551) Year of fee payment: 4 |
AS | Assignment |
Owner name: ADOBE INC., CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:ADOBE SYSTEMS INCORPORATED;REEL/FRAME:048867/0882 Effective date: 20181008 |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |