CA2891836C - System and method for tracking moving objects in videos - Google Patents
System and method for tracking moving objects in videos
- Publication number
- CA2891836C · CA2891836A
- Authority
- CA
- Canada
- Prior art keywords
- tracklets
- representative
- images
- generate
- targets
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
A system and method are provided for tracking objects in a scene from a sequence of images captured by an imaging device. The method includes processing the sequence of images to generate sequential images at a plurality of hierarchical levels to generate a set of regions of interest; and, at each of the hierarchical levels: examining pairs of sequential images to link pixels into short tracklets; and grouping short tracklets that indicate similar motion patterns to generate representative tracklets. The representative tracklets are grouped to generate a tracking result for at least one object.
Description
SYSTEM AND METHOD FOR TRACKING MOVING OBJECTS IN VIDEOS
TECHNICAL FIELD
[0001] The following relates to systems and methods for tracking moving objects in videos.
DESCRIPTION OF THE RELATED ART
[0002] Object tracking is typically considered a fundamental task for any high-level video content analysis system. Decades of research on this topic have produced a diverse set of approaches and a rich collection of tracking algorithms. In the majority of the traditional approaches, only the object itself and/or its background are modeled, and significant progress has been made in this setting. This class of tracking methods can be referred to as "object-centric" approaches [1].
[0003] On the other hand, detection cannot be performed when there is no prior knowledge about the specific objects (i.e., targets) being tracked; methods that operate under this constraint are referred to as "generic object tracking" or "model-free tracking".
Manually annotating sufficient numbers of objects is often prohibitively expensive and impractical; thus, model-free tracking approaches have recently received increased interest.
Model-free tracking is a challenging task because there is little information available about the objects to be tracked (i.e. targets). Another challenge is the presence of an unknown and ever-changing number of objects that could be targets [2].
[0004] To date, most of the reported approaches for tracking rely on either robust motion or appearance models of each individual object or on object detection, i.e., they are object-centric.
Thus a key assumption is that a reliable object detection algorithm exists, e.g., see references [3-7]. This remains a challenge, particularly in complex and crowded situations.
These methods use the detection response to construct an object trajectory.
This is accomplished by using data association based on either the detection responses or a set of short tracks called tracklets that are associated with each detected object.
Tracklets are mid-level features that provide more spatial and temporal context than raw sensor data during the process of creating consistent object trajectories.
[0005] Subsequently, data association links these tracklets into multi-frame trajectories. The issue of associating tracklets across time, the so-called "data association", is usually formulated as a Maximum A Posteriori (MAP) problem and has been solved using different methods. For example, network flow graphs and cost-flow networks are employed for data association to determine globally optimal solutions for an entire sequence of tracklets.
Other data association approaches include the Hungarian algorithm, maximum weight independent sets, the Markov Chain Monte Carlo, and the iterative hierarchical tracklet linking methods.
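By way of example only, the Hungarian algorithm mentioned above solves a one-to-one assignment of tracklets given a matrix of pairwise linking costs. The sketch below substitutes an exhaustive search for a full Hungarian implementation (adequate only for small problems), and the cost values are hypothetical:

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustive minimum-cost one-to-one assignment of rows to columns.

    Stands in for the Hungarian algorithm on small square cost matrices;
    for real tracklet sets, a polynomial-time implementation such as
    scipy.optimize.linear_sum_assignment scales far better.
    """
    n = len(cost)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

# Hypothetical costs: cost[i][j] = dissimilarity between the end of
# tracklet i and the start of tracklet j across a temporal gap.
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.5, 5.0],
        [3.0, 2.0, 2.0]]
assignment, total = best_assignment(cost)
```

Here tracklet 0 is continued by tracklet 1, tracklet 1 by tracklet 0, and tracklet 2 by itself, at minimum total cost.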
[0006] There are other tracking algorithms, which are based on local spatio-temporal motion patterns in the scene. For example, hidden Markov models are employed to learn local motion patterns that are subsequently used as prior statistics for a particle filter.
Alternatively, other methods employ the global motion patterns of a crowd to learn the local motion patterns of neighboring local regions. Individual moving entities are detected by associating similar trajectories based on their features, assuming that objects move in distinct directions; this disregards possible, and very likely, local motion inconsistencies between different body parts. Thus, a single pedestrian could be detected as multiple objects, or multiple individuals as the same object. Approaches to overcoming these difficulties are described below.
SUMMARY
[0007] In the following, trajectories are analyzed at multiple hierarchical levels, in which the higher levels account for the inconsistency between local motions of a single object.
[0008] The following relates to multiple object tracking with visual information, and particularly to tracking without target detection or prior knowledge about the targets being tracked. The system described herein also provides the flexibility of adding additional information to a model-free multi-object tracking system, including target detectors and entrance/exit regions, to improve tracking results.
[0009] In an implementation of the system, methods and algorithms are provided for multiple-object tracking in videos either with or without employing target detection systems.
The algorithms are directed to creating long-term trajectories for moving objects in the scene by using a model-free tracking algorithm. Each individual object is tracked by modeling the temporal relationship between sequentially occurring local motion patterns. In this implementation, the algorithm is based on shape and motion descriptors of moving objects, obtained at multiple hierarchical levels from an event understanding or video segmentation system.
[0010] In one aspect, there is provided a method of tracking objects in a scene from a sequence of images captured by an imaging device, the method comprising:
processing the sequence of images to generate sequential images at a plurality of hierarchical levels to generate a set of regions of interest; at each of the hierarchical levels:
examining pairs of sequential images to link pixels into short tracklets; and grouping short tracklets that indicate similar motion patterns to generate representative tracklets; and grouping the representative tracklets to generate a tracking result for at least one object.
[0011] In another aspect, there is provided a computer readable storage medium comprising computer executable instructions for tracking objects in a scene from a sequence of images captured by an imaging device, the computer executable instructions comprising instructions for: processing the sequence of images to generate sequential images at a plurality of hierarchical levels to generate a set of regions of interest; at each of the hierarchical levels:
examining pairs of sequential images to link pixels into short tracklets; and grouping short tracklets that indicate similar motion patterns to generate representative tracklets; and grouping the representative tracklets to generate a tracking result for at least one object.
[0012] In yet another aspect, there is provided a tracking system comprising a processor and memory, the memory comprising computer executable instructions for causing the processor to track objects in a scene from a sequence of images captured by an imaging device, the computer executable instructions comprising instructions for:
processing the sequence of images to generate sequential images at a plurality of hierarchical levels to generate a set of regions of interest; at each of the hierarchical levels:
examining pairs of sequential images to link pixels into short tracklets; and grouping short tracklets that indicate similar motion patterns to generate representative tracklets; and grouping the representative tracklets to generate a tracking result for at least one object.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Embodiments will now be described by way of example only with reference to the appended drawings wherein:
[0014] FIG. 1 is an example of a configuration for a system for performing multi-object tracking without using detection responses;
[0015] FIG. 2 is a flow chart illustrating a process for multiple-object tracking by hierarchical data association;
[0016] FIG. 3 illustrates an example of a process for generating hierarchical short tracklets and tracklets;
[0017] FIG. 4 illustrates an example of an over-segmented sample video frame at different hierarchical levels;
[0018] FIG. 5 provides three-dimensional graphical illustrations of short tracklet and representative tracklet construction for the video frame depicted in FIG. 4;
[0019] FIG. 6 illustrates graphically a data association and tracklet rejection process; and
[0020] FIG. 7 illustrates low- and high-level codebooks.
DETAILED DESCRIPTION
[0021] It has been recognized that by creating long-term trajectories for unknown moving targets using a model-free tracking algorithm, object tracking can be performed without relying on prior knowledge of the targets. As opposed to tracking-by-detection algorithms, no target detection is involved in the base algorithm. However, a target detection system can also be optionally used in conjunction with the model-free tracking algorithm to improve accuracy, as discussed in greater detail below.
[0022] Each individual object can be tracked by only modeling the temporal relationship between sequentially occurring local motion/appearance patterns. This is achieved by constructing multiple sets of initial tracks that code local and global motion patterns in videos.
These local motion patterns are obtained by analyzing spatially and temporally varying structures in videos, e.g., according to references [8-10].
[0023] The system described herein introduces the use of local and global motion descriptors obtained by either a video segmentation or an event detection algorithm [11] for multiple object tracking. The video segmentation assigns a codeword to each pixel from a hierarchical codebook structure obtained at different levels by considering local and global visual context. By examining pairs of sequential video frames, the matching codewords for each video pixel are transitively linked into distinct tracks, whose total number is unknown a priori and which are referred to herein as "short tracklets" or "linklets". The linking process is separately performed at each hierarchical level. This is done under the hard constraint that no two short tracklets may share the same pixel at the same time, i.e. the assigned codewords. The end result at this step is multiple sets of independent short tracklets obtained from the low- to high-level codebooks.
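By way of illustration only, the frame-to-frame linking described above may be sketched as follows, assuming each frame is available as a mapping from pixel coordinates to the codeword assigned at one hierarchical level; the Chebyshev-distance matching rule and its threshold are assumptions for this sketch, not the patented implementation:

```python
def link_short_tracklets(frames, max_dist=1):
    """Transitively link matching codewords across consecutive frames.

    `frames` is a list of dicts mapping pixel (x, y) -> codeword at one
    hierarchical level. A pixel is appended to an open short tracklet
    when an adjacent-frame pixel carries the same codeword and lies
    within `max_dist` (Chebyshev distance). The hard constraint that no
    two short tracklets share the same pixel at the same time is
    enforced via the `claimed` set.
    """
    tracklets = []          # each tracklet: list of ((x, y), t)
    open_ends = {}          # pixel in previous frame -> tracklet index
    for t, frame in enumerate(frames):
        new_ends, claimed = {}, set()
        for (x, y), cw in frame.items():
            linked = None
            # search the previous frame's open tracklet ends for a match
            for (px, py), idx in open_ends.items():
                if (px, py) in claimed:
                    continue
                prev_cw = frames[t - 1][(px, py)] if t > 0 else None
                if prev_cw == cw and max(abs(px - x), abs(py - y)) <= max_dist:
                    linked = idx
                    claimed.add((px, py))
                    break
            if linked is None:
                tracklets.append([((x, y), t)])   # start a new tracklet
                linked = len(tracklets) - 1
            else:
                tracklets[linked].append(((x, y), t))
            new_ends[(x, y)] = linked
        open_ends = new_ends
    return tracklets
```

Running the sketch separately on the low-level and high-level codeword maps yields the multiple independent sets of short tracklets described above.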
[0024] Subsequently, a set of sparse tracks, referred to herein as representative tracklets or simply "tracklets" in the literature, are produced by grouping the short tracklets that indicate similar motion patterns (see also FIG. 3 described below). This produces multiple sets of independent representative tracklets. The Markov Chain Monte Carlo Data Association (MCMCDA) is adopted to estimate an initially unspecified number of trajectories. To this end, the tracklet association problem can be formulated as a Maximum A Posteriori (MAP) problem to produce a chain of tracklets. The final output of the data association algorithm is a partition of the set of tracklets such that those belonging to each individual object have been grouped together (see also FIG. 6 described below). It can be appreciated that MCMCDA
can be replaced by any other data association system as long as it is within the framework of MAP.
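By way of example only, a simple greedy associator operating on tracklet affinities can stand in for MCMCDA within the MAP framework; the affinity function, its threshold, and the (start_time, end_time) tracklet representation below are assumptions for this sketch:

```python
def associate_tracklets(tracklets, affinity, min_affinity=0.5):
    """Greedy stand-in for MCMCDA: chain tracklets into trajectories.

    `tracklets` is a list of (start_time, end_time) tuples and
    `affinity(i, j)` returns the likelihood that tracklet j continues
    tracklet i. Greedily taking the highest-affinity temporally
    consistent link is a crude approximation of the MAP partition; any
    associator within the MAP framework could be substituted.
    """
    order = sorted(range(len(tracklets)), key=lambda i: tracklets[i][0])
    trajectories, tails = [], []   # tails[k] = last tracklet in trajectory k
    for i in order:
        best_k, best_a = None, min_affinity
        for k, tail in enumerate(tails):
            if tracklets[tail][1] < tracklets[i][0]:   # strict temporal order
                a = affinity(tail, i)
                if a > best_a:
                    best_k, best_a = k, a
        if best_k is None:
            trajectories.append([i])
            tails.append(i)
        else:
            trajectories[best_k].append(i)
            tails[best_k] = i
    return trajectories
```

With hypothetical affinities, two overlapping mid-sequence tracklets are resolved into one continued trajectory and one left unlinked, mirroring the partition-and-reject behavior described for FIG. 6.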
[0025] Turning now to the figures, FIG. 1 illustrates an example of a system 10 for multi-target tracking without using detection responses. The system 10 includes a tracking system 12 that receives image and/or video data from an imaging device 14, e.g., for tracking objects in a video. The tracking system 12 includes an image/video segmentation module 16 for segmenting the data received from the imaging device 14, and a tracklet generation module 18 for generating tracklets. It can be appreciated that the image/video segmentation module 16 may instead be any object detection mechanism that processes the images to identify a region in which an object is present and differentiate regions from the rest of the video. In other words, using this module, the sequence of images of the video are processed to generate sequential images at multiple hierarchical levels to generate a set of regions of interest. The tracking system also includes a trajectory creation module 20 for using the tracklets to generate trajectories for the objects being tracked. As noted above, the system 10 may optionally include a target detector/extractor module 22 to augment the data used in tracklet generation, which is used in conjunction with prior knowledge 24 obtained from an external source, which is used to increase the accuracy of the trajectory creation stage.
[0026] The prior knowledge 24 can optionally be used in the trajectory construction and data association. The system 10, however, does not require target detection, but the target detector/extractor module 22 can be added to improve the output results.
[0027] FIG. 2 illustrates an example of a process for multiple-target tracking by hierarchical data association. As shown in FIG. 2, the process includes performing a video over-segmentation step 30 based on shape/motion/appearance in the video being analyzed. Short tracklet generation is then performed at step 32 at multiple hierarchical levels as described in greater detail below. The short tracklets are then used at step 34 to generate the representative tracklets at the multiple hierarchical levels. Data association and trajectory creation are then performed at step 36. As illustrated using dashed lines, the system 10 may optionally perform target detection at step 38 (i.e. when a target detector/extractor module 22 and prior knowledge 24 are available) to assist in tracklet generation based on detection responses at step 40.
[0028] FIG. 2 therefore illustrates how the system 10 can operate without target detection, and in the presence of a target detection system. If a target detector exists, the tracklets generated from the detection responses are considered to be the highest-level tracklets in the data association step 36.
[0029] FIG. 3 visually depicts an overview of the process shown in FIG. 2.
The goal is to estimate the trajectory of the moving objects in the video without invoking object detection.
Initially two sets of short tracklets are constructed in stage (a) by chaining. In this stage, the low-level portion (upper diagram) considers small window fragments, while the high-level portion (lower diagram) analyzes a larger region in order to impose a contextual influence. The results in stage (a) are obtained by exploiting a system configured for over-segmenting videos. The resultant short tracks (chains) are filtered and replaced in stage (b) by a set of sparse representative tracks, the so-called representative tracklets. Longer trajectories, i.e. tracking results, are then generated by using the Markov Chain Monte Carlo Data Association (MCMCDA) algorithm in stage (c) to solve the Maximum A Posteriori (MAP) problem using tracklet affinities. Thus this procedure uses low-level tracklets to connect high-level tracklets when there is a discontinuity in motion or time to produce a final tracking result in stage (d).
[0030] FIG. 4 illustrates a series of the same image from a video to depict codeword assignment for each pixel. In image (a), a sample video frame from the CAVIAR
dataset [12] is shown. In image (b), color-coded low-level codewords are assigned to every pixel in the video frame. In this case, there are a large number of low-level codewords. Finally, in image (c), high-level codewords, which represent compositions, are also assigned to every pixel in the video frame. This would generally produce a small number of codewords since it deals with objects in the scene. Each object might be represented by a large number of low-level codewords, while the high-level codebook assigns a small number of codewords to an object, in most cases one or two.
[0031] FIG. 5 graphically illustrates a process for short tracklet and representative tracklet construction. In graph (a), a set of short tracklets (short tracks) is constructed using observations obtained from the low-level codebook, X^L. In graph (b), a set of short tracklets is constructed using observations obtained from the high-level codebook, X^H.
Next, in graph (c), low-level tracklets, T^L, are obtained by grouping similar short tracklets in X^L. Finally, graph (d) illustrates high-level tracklets, T^H, obtained by grouping similar short tracklets in X^H. The black rectangle shown in FIG. 5(d) indicates the area in XYT-space occupied by a single person. It can be seen from this that a single person may produce more than a single trajectory. This is expected since the process illustrated in FIG. 5 does not involve any person or object detection, an issue which is also dealt with below, in which a data association process is described that rejects certain tracklets as false positives.
[0032] FIG. 6 illustrates data association and tracklet rejection. It has been found that formulating the MAP estimation as a combination of low and high level tracklets makes it possible to reject some trajectories by considering them as false positives.
Here, T2 is a rejected tracklet. A low-level tracklet, T4, is used to connect T1 and T3 based on motion consistency and temporal continuity.
[0033] FIG. 7 illustrates observations represented by low- and high-level codebooks, by way of example only. First, the video is densely sampled to produce a set of overlapping spatio-temporal video volumes (STVs) and, subsequently, a two-level hierarchical codebook is created. It will be appreciated that this example illustrates a two-level hierarchy but may be extended to three or more levels. In diagram (a), at the lower level of the hierarchy, similar video volumes are dynamically grouped to form a conventional fixed-size low-level codebook.
In diagram (b), at the higher level, a much larger spatio-temporal 3D volume is created from a large contextual region containing many STVs (in space and time) around each pixel being examined, with their compositional relationships approximated using a probabilistic framework. This volume contains many STVs and captures the spatio-temporal arrangement of the volumes, called an ensemble of volumes. Similar ensembles are grouped based on the similarity between arrangements of their video volumes and yet another codebook is formed. In this example, two codewords are assigned to each pixel, one from the low-level and the other from the high-level codebook. By examining pairs of sequential video frames, the matching codewords for each video pixel are transitively linked into distinct tracks, whose total number is unknown a priori and are referred to herein as linklets. The linking process is separately performed for both codebooks, done under a hard constraint that no two linklets share the same pixel at the same time. The end result is two sets of independent linklets obtained from the low-level and high-level codebooks.
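By way of illustration only, assigning a codeword to a descriptor at either level can be sketched as a nearest-centroid lookup over a learned codebook; the cited system builds its codebooks online and probabilistically, so this is a deliberate simplification with assumed data layouts:

```python
def assign_codeword(descriptor, codebook):
    """Return the index of the codebook entry closest to `descriptor`.

    `codebook` is a list of centroid feature vectors (one per codeword)
    and `descriptor` is the feature vector of an STV (low level) or an
    ensemble of volumes (high level). Squared Euclidean distance is an
    assumed similarity measure for this sketch.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: sq_dist(codebook[k], descriptor))
```

Applying the same lookup with the low-level and high-level codebooks yields the two codewords assigned to each pixel in this example.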
[0034] The system 10 described herein is capable of creating an over-segmented representation of video or images based on appearance and/or motion patterns at one or more hierarchical levels [11]. The system 10 uses the information produced to detect and track all moving objects in the scene, and it is also capable of incorporating additional prior knowledge into the tracking system 12, when available.
[0035] In general, a video segmentation system is an on-line framework which produces one (or more) sets of codebooks in real-time and assigns labels to local spatio-temporal video volumes or every pixel in each frame based on their similarity, while also considering their spatio-temporal relationships [8].
[0036] An example of such system is described in reference [11], in which the low level codebook uses local visual context while the higher level one uses the global visual context and hence, the codebook size is decreased as the process goes up toward the higher levels of the hierarchy.
[0037] Considering that there are N levels of hierarchy, the codebook at level n is referred to as Cⁿ. Depending on the number of codebooks, multiple codewords are assigned to each pixel p(x,y) at time t in the video. Therefore, in a video sequence of temporal length T, a particular pixel p(x,y) is represented by N sequences of assigned codewords (where ← symbolizes value assignment):
[0038] p(x,y) = {p(x,y) ← cₜ : ∀ t ∈ T, cₜ ∈ Cⁿ}     (1)
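By way of illustration only, the representation of Equation (1) may be realized as follows, assuming the per-level codeword assignments are stored as one pixel-to-codeword mapping per frame (an assumed storage layout, not the patented implementation):

```python
def pixel_representation(assignments, x, y):
    """Return the N codeword sequences for pixel (x, y), per Eq. (1).

    `assignments[n][t]` is a dict mapping (x, y) -> the codeword
    assigned by codebook C^n at time t. The result is one sequence of
    length T per hierarchical level.
    """
    return [[frame[(x, y)] for frame in level] for level in assignments]
```

For a two-level hierarchy this yields exactly the pair of codeword sequences per pixel used in the discussion that follows.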
[0039] In order to simplify the present discussion and for ease of illustration, the number of hierarchical codebooks is exemplified herein as being equal to two. A sample video frame from the CAVIAR dataset [12] and the assigned codewords are illustrated in FIG. 4.
Given the assigned codewords (labels) for each pixel and the over-segmented representation of the video (see again FIG. 4), each segment represents a set of pixels that are similar in terms of local motion patterns or appearance. Therefore, a short trajectory for each pixel is created by examining the temporal coherence of its assigned codewords. Two responses are conservatively associated only if they are in consecutive frames and are close enough in space and similar enough according to their assigned codewords. Depending on the number of codebooks, different sets of trajectories are obtained. Given the current representation of two codebooks, the two sets of trajectories are called X^L and X^H. FIG. 5 illustrates the created linklet sets for two hierarchical levels.
[0040] It can be observed that the number of linklets is generally more than the number of objects in the scene and that many trajectories may belong to a single object.
In addition, it is noted that the number of short tracklets created by a single object is much smaller in the higher level set X^2 than in X^1. Ideally, one is interested in obtaining a single trajectory for an object. Thus, the short tracklets belonging to the same object should be merged in order to create a single representative track that describes the motion of the object. Here the idea of clustering trajectories is used to create a representative object trajectory, e.g., see references [13, 14].
[0041] It can be appreciated that non-informative short tracklets are removed before constructing clusters of trajectories. These are taken to be the relatively motionless tracklets, or those that carry little information about the motion; they are mainly related to the background or to static objects. One possibility is to analyze the short tracklets within a temporal window of length T and remove those trajectories with a small variance:
[0042] $X^l \leftarrow \{\, x \in X^l : \mathrm{var}\{x\} \ge \epsilon^l \,\}$  (2)
[0043] where $\epsilon^l$ is a threshold for the short tracklet set X^l. This kind of filtering can be replaced by any other trajectory pruning algorithm and hence, other methods can be used to remove uninformative codewords, such as the one presented in reference [10].
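A minimal sketch of the variance-based filtering of equation (2) follows (Python; the representation of tracklets as lists of (x, y) points and the function name are illustrative assumptions):

```python
from statistics import pvariance

def prune_tracklets(tracklets, eps, T=10):
    """Drop near-motionless short tracklets, as in equation (2): keep a
    tracklet only if its positional variance over a temporal window of
    length T meets the threshold eps."""
    kept = []
    for track in tracklets:
        window = track[-T:]                       # last T observations
        if len(window) < 2:
            continue                              # too short to assess
        var = (pvariance(p[0] for p in window)
               + pvariance(p[1] for p in window)) # x-variance + y-variance
        if var >= eps:
            kept.append(track)
    return kept
```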
[0044] In order to create representative tracklets, similar short tracklets are grouped based on their similarity and proximity in space and time. One possibility is to adopt the pairwise affinities between all trajectories as a similarity measurement, e.g., see reference [14].
[0045] The distance between two trajectories x and y, D(x, y), is defined as:

[0046] $D^2(x, y) = \max_t \{ d_t^2(x, y) \}$, where $d_t^2(x, y)$ is the distance between the two trajectories x and y at time t, defined as follows:
[0047] $d_t^2(x, y) = \bar{d}_{sp}(x, y)\,\dfrac{(u_t^x - u_t^y)^2 + (v_t^x - v_t^y)^2}{\sigma_t^2}$  (3)
[0048] The first factor on the right-hand-side of equation (3) is the average spatial Euclidean distance between the two trajectories. The second factor characterizes the motion of a point aggregated over a number of frames at time t. The normalization term, a, accounts for the local variation in the motion, e.g., see reference [14]. Given the above distance measurement between two trajectories, clustering can be performed using the k-means algorithm. Here iterative clustering is invoked to determine the optimal number of clusters. In order to perform the merging, the Jensen-Shannon divergence measure can be used to compute the actual difference between the resulting clusters. It should be noted that the tracking system 12 is independent of the choice of the clustering algorithm and similarity measurement method between trajectories. Accordingly, the example algorithms described herein are for illustrative purposes and similarly operable algorithms could instead be used.
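The distance of equations (3) and $D^2 = \max_t d_t^2$ may be sketched as follows (Python; the aggregation window `win` and the pooled-variance normalization stand in for the local variation term of reference [14] and are assumptions for illustration, not the exact formulation):

```python
import math

def traj_distance(x, y, win=5):
    """D^2(x, y) = max_t d_t^2(x, y) for two trajectories observed over
    the same frames; x and y are equal-length lists of (col, row) points.

    d_t^2 multiplies the average spatial distance by the squared motion
    difference aggregated over `win` frames, normalized by sigma^2 (here
    a simple pooled variance of the aggregated motions).
    """
    n = len(x)
    sp = sum(math.dist(a, b) for a, b in zip(x, y)) / n  # avg spatial dist
    # motion aggregated over `win` frames for each trajectory
    mx = [(x[t + win][0] - x[t][0], x[t + win][1] - x[t][1])
          for t in range(n - win)]
    my = [(y[t + win][0] - y[t][0], y[t + win][1] - y[t][1])
          for t in range(n - win)]
    moves = mx + my
    mu = sum(m[0] for m in moves) / len(moves)
    mv = sum(m[1] for m in moves) / len(moves)
    sigma2 = sum((m[0] - mu) ** 2 + (m[1] - mv) ** 2
                 for m in moves) / len(moves) + 1e-6     # avoid div by 0
    d2 = [sp * ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) / sigma2
          for a, b in zip(mx, my)]
    return max(d2)
```

Trajectories that run parallel (same motion, constant offset) score zero, so they cluster together, which is the intended behavior for points on one rigid object. The resulting pairwise distances can then feed any clustering step, such as the iterative k-means described above.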
[0049] Eventually, clustering produces N sets of tracklets, which are referred to as {T^n}, n = 1, ..., N.
[0050] The tracklets obtained after clustering are not quite reliable for long term object tracking, but do a relatively good job of encoding the moving object motions in the short term (see FIG. 5). The main advantage of constructing the tracklets based on the hierarchical codebooks is that no target detection is required. Although a set of representative trajectories is created for all moving objects in the video, there is no guarantee that an object would be represented by a single trajectory. Moreover, in crowded scenes, the representative trajectories may correspond to more than one object. However, if the motion pattern changes, then the trajectories would separate.
[0051] Given the resulting tracklets, high-level trajectories can be generated by linking them in space and time. This can be achieved by formulating the data association as a maximum a posteriori (MAP) problem and solving it with the Markov Chain Monte Carlo Data Association (MCMCDA) algorithm.
[0052] The observations are taken to be the constructed tracklets, O = {T^n}, n = 1, ..., N. Let Γ be a tracklet association result, which is a set of trajectories, Γ_k ∈ Γ. Γ_k is defined as a set of connected observations, which is a subset of all observations, Γ_k ⊆ O.
[0053] The goal is to find the most probable set of object trajectories, F, which is formulated as a MAP problem:
[0054] $\Gamma^* = \arg\max_{\Gamma} P(\Gamma \mid O) = \arg\max_{\Gamma} P(O \mid \Gamma)\,P(\Gamma)$  (4)
[0055] The likelihood, P(O | Γ), indicates how well a set of trajectories matches the observations, and the prior, P(Γ), indicates how correct the data association is.
[0056] By assuming that the likelihoods of the tracklets are conditionally independent, the likelihood, P(O | Γ), in equation (4) can be rewritten as follows:
[0057] $P(O \mid \Gamma)\,P(\Gamma) = \prod_{T_m^n \in \{T^n\}} P(T_m^n \mid \Gamma)\;\prod_{\Gamma_k \in \Gamma} P(\Gamma_k)$  (5)
[0058] First one can consider the encoding of the likelihood of tracklets in (5). The observations, that is, the tracklets, can be either true or false trajectories of the object.
Therefore, the likelihood of a tracklet, given the set of trajectories Γ, can be modeled by a Bernoulli distribution:

[0059] $P(T \mid \Gamma) = \mathrm{Bern}(p) = \begin{cases} p^{|T|} & : T \in \Gamma_k,\ \Gamma_k \in \Gamma \\ (1-p)^{|T|} & : T \notin \Gamma_k,\ \Gamma_k \in \Gamma \end{cases}$  (6)
[0060] where |T| denotes how good a tracklet is. Since the tracklets are taken to be clusters of the small trajectories constructed previously, |T| is defined as the size of the cluster. It is assumed that the N sets of tracklets, {T^n}, n = 1, ..., N, are independent (the independence assumption is valid here because the consistency between tracklets and observations, i.e., the suitability of the tracklets, is independent of the relationship between trajectories).
Therefore, the likelihood in equation (5) can be written as follows:
[0061] $P(\{T^1, T^2, \dots, T^N\} \mid \Gamma) = \prod_{n=1}^{N} P(T^n \mid \Gamma)$  (7)
[0062] where P(T^n | Γ) ~ Bern(p^n) as described in equation (6). This formulation makes it possible to exclude some tracklets from the final data association by assuming that any tracklet can belong to at most one trajectory in the data association process. This is achieved simply by rejecting them as false object tracklets.
[0063] Next, one can consider the encoding of the prior of the tracklets in equation (5), P(Γ_k).
These priors can be modeled as a Markov chain:
[0064] $P(\Gamma_k) = \prod_{t} P(\Gamma_k^t \mid \Gamma_k^{t-1}) = P_i(\Gamma_k^0)\,P_l(\Gamma_k^1 \mid \Gamma_k^0)\cdots P_l(\Gamma_k^n \mid \Gamma_k^{n-1})\,P_t(\Gamma_k^n)$  (8)
[0065] where Γ_k^t is the trajectory of the object at time instant t. The chain includes an initialization term, P_i, a probability to link the tracklets, P_l, and a termination probability, P_t, to terminate the trajectory.
[0066] It is assumed in this example that a trajectory can only be initialized or terminated using the tracklets obtained from the highest level codebook, T^N. Therefore, the probabilities of initializing and terminating a trajectory are written as follows:
[0067] $P_i(\Gamma_k^0) = P_i(T_j^N)$  (9)

[0068] $P_t(\Gamma_k^n) = P_t(T_j^N)$  (10)
[0069] Any prior information such as entry and exit zones can be incorporated into the trajectory initialization and termination probabilities. The probability of linking two tracklets can be written as:
$P_l(\Gamma_k^t \mid \Gamma_k^{t-1}) = P(T_{j_t}^{N} \mid T_{j_{t-1}}^{N}, \dots, T_{j_{t-1}}^{1})\,P(T_{j_t}^{N-1} \mid T_{j_{t-1}}^{N}, \dots, T_{j_{t-1}}^{1}) \cdots P(T_{j_t}^{1} \mid T_{j_{t-1}}^{N}, \dots, T_{j_{t-1}}^{1})$  (11)
[0070] Two tracklets are linked if they are consistent in the time domain and show similar motion patterns. One can assume independence and decompose the probability of linking the tracklets into two probabilities. Therefore, equation (11) is rewritten as:
$P_l(\Gamma_k^t \mid \Gamma_k^{t-1}) = P_T(T_{j_t}^{N} \mid T_{j_{t-1}}^{N})\,P_M(T_{j_t}^{N} \mid T_{j_{t-1}}^{N}) \cdots P_T(T_{j_t}^{1} \mid T_{j_{t-1}}^{1})\,P_M(T_{j_t}^{1} \mid T_{j_{t-1}}^{1})$  (12)
[0071] where the temporal consistency probability, P_T, is taken to be a hyper-exponential distribution of the temporal gap between the tracklets:
[0072] $P_T(T_{j_t}^{n} \mid T_{j_{t-1}}^{n}) = \sum_{m} a_m P_m(\tau_m)$  (13)
[0073] $P_m(\tau_m) = \mathrm{Exp}(\tau_m; \lambda_m) = \begin{cases} \lambda_m e^{-\lambda_m \tau_m} & : \tau_m \ge 0 \\ 0 & : \tau_m < 0 \end{cases}$  (14)
[0074] where $\tau_m$ is the temporal distance between the end of a tracklet and the start of its immediate successor. The motion consistency probability, P_M, is modeled by assuming that the trajectories follow a constant velocity model and obey a Gaussian distribution.
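The link term decomposed in equation (12), combining the hyper-exponential temporal gap of equations (13)-(14) with a Gaussian constant-velocity motion term, can be sketched as follows (Python; the mixture weights, rates, and sigma are illustrative assumptions, and the Gaussian is used up to its normalization constant):

```python
import math

def temporal_consistency(gap, weights=(0.7, 0.3), rates=(1.0, 0.2)):
    """P_T as a hyper-exponential mixture over the temporal gap between
    the end of one tracklet and the start of the next (eqs. (13)-(14))."""
    if gap < 0:
        return 0.0                       # successor cannot start earlier
    return sum(a * lam * math.exp(-lam * gap)
               for a, lam in zip(weights, rates))

def motion_consistency(v_end, v_start, sigma=1.0):
    """P_M (up to normalization) under a constant-velocity model: the
    start velocity of the next tracklet is Gaussian around the end
    velocity of the previous one."""
    d2 = (v_end[0] - v_start[0]) ** 2 + (v_end[1] - v_start[1]) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))

def link_probability(gap, v_end, v_start):
    """One decomposed link factor of equation (12): P_T * P_M."""
    return temporal_consistency(gap) * motion_consistency(v_end, v_start)
```

Short temporal gaps and similar velocities yield high link scores, so tracklets that continue each other smoothly are preferred during association.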
[0075] The combinatorial solution space of Γ in equation (4) is extremely large and finding good tracklet associations can be extremely challenging. Here the MCMCDA sampling approach is followed to simultaneously estimate the model parameters and Γ*.
[0076] FIG. 6 shows how the two low- and high-level tracklets can be used for constructing long trajectories in a data association framework. Formulating the likelihood as described in equation (7) makes it possible to reject some trajectories by considering them as false positives.
Here, one of the high-level tracklets is rejected as a false positive, and a lower level tracklet is used to connect the two remaining high-level tracklets based on motion consistency and temporal continuity.
[0077] MCMC is a general method for generating samples from a distribution, p, by constructing a Markov chain in which the states are Γ. At any state Γ, a new proposal Γ′ is introduced using the distribution q(Γ′ | Γ). One can consider three types of association moves as a result of the sampling process.
[0078] The first randomly selects one tracklet and one trajectory. This affects the current state of the tracklet by associating it to the selected trajectory.
[0079] The second, called swapping, proposes that all tracklets constituting the two trajectories be swapped at a randomly chosen time.
[0080] Finally, the third proposes a change of trajectory type. One can decide whether a proposal Γ′ should be accepted by employing the Metropolis-Hastings acceptance function, e.g., see reference [15], which is defined by:
[0081] $A(\Gamma, \Gamma') = \min\left\{ \dfrac{P(\Gamma')\,q(\Gamma \mid \Gamma')}{P(\Gamma)\,q(\Gamma' \mid \Gamma)},\ 1 \right\}$  (15)
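One accept/reject step of the sampler, using the acceptance function of equation (15), may be sketched as follows (Python; the scalar posterior and proposal densities are placeholders for the actual association scores):

```python
import random

def mh_accept(p_cur, p_prop, q_fwd, q_rev, rng=random.random):
    """Metropolis-Hastings acceptance of equation (15):
    A = min{ [P(G') q(G | G')] / [P(G) q(G' | G)], 1 }.

    p_cur/p_prop: posterior scores of the current and proposed states;
    q_fwd/q_rev: proposal densities q(G'|G) and q(G|G').
    Returns True if the proposed association G' is accepted.
    """
    ratio = (p_prop * q_rev) / (p_cur * q_fwd)
    return rng() < min(ratio, 1.0)
```

A proposal that improves the posterior (ratio ≥ 1) is always accepted; worse proposals are accepted with probability equal to the ratio, which lets the chain escape local optima of the association space.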
[0082] In addition, in order to estimate the model parameters, we use MCMCDA sampling followed by an additional Metropolis-Hastings update for the parameters.
[0083] By employing two codebooks for video segmentation, the algorithm has been tested using the TUD [16] and CAVIAR [12] datasets. All parameters have been set experimentally, but most have remained identical for all sequences. In all cases, the suggested parameters in reference [10] have been used for codebook construction. Quantitative comparisons with state-of-the-art methods have also been performed, as well as visual results of the approach described herein.
[0084] One can follow the same evaluation metrics as those in references [7, 17-19].
These are: Mostly Tracked (MT), the percentage of the trajectories covered by the tracker output more than 80% of the time; Mostly Lost (ML), the percentage of the trajectories covered by the tracker output less than 20% of the time; ID Switch (ID), the number of times that a trajectory changes its matched ground truth identity; Fragments (FRAG), the number of times that a ground truth trajectory is interrupted (i.e., each time it is lost by the current hypothesis); and average False Alarms per Frame (FAF). The current system, without employing object detection, achieved the following results for a representative two-level example:

Dataset    MT     ML    ID   FRAG   FAF
CAVIAR     84.3   6.4   18   16     0.237
TUD        60     10    1    4      0.281
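The MT/ML metrics above can be computed from per-trajectory coverage ratios, for example as follows (Python; the 80%/20% thresholds follow the definitions above, while the coverage input and function name are illustrative assumptions):

```python
def mostly_tracked_lost(coverage):
    """MT / ML from per-ground-truth coverage ratios, i.e., the fraction
    of each ground truth trajectory's frames covered by tracker output.

    Returns (MT, ML) as percentages: MT counts trajectories covered more
    than 80% of the time, ML those covered less than 20% of the time.
    """
    n = len(coverage)
    mt = 100.0 * sum(c > 0.8 for c in coverage) / n
    ml = 100.0 * sum(c < 0.2 for c in coverage) / n
    return mt, ml
```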
[0085] The results indicate that although the correct detections obtained with the present algorithm may be comparable to other approaches, they include more false positives. This can be expected, at least in some cases, since no object detection is employed. As noted above, the scene observations that are used are motion descriptors and do not incorporate object appearance, as do object-centric trackers.
[0086] Accordingly, there is provided a method for tracking objects in a scene from a sequence of images which are captured by an imaging device such as a camera.
The method is based upon a process comprising the steps of: receiving a plurality of over-segmented regions on one image or a plurality of images, generating a hierarchical representation of the targets' trajectories in terms of local and global motion patterns, and generating a probabilistic framework to encode hierarchical trajectory creation.
[0087] The method can also include generating at least one set of short trajectories over time for every region (or pixel) in the videos based on local and global motion patterns, referred to herein as short tracklets or linklets. Also, representative trajectories for regions with coherent motion patterns can be generated, referred to herein as representative tracklets or simply tracklets.
[0088] The method and system described herein can be used for generating long term trajectories for each target by linking tracklets over time, and a hierarchical representation of short tracklets of all moving objects can also be generated.
[0089] The method can also incorporate a hierarchical data association mechanism for formulating the data association problem as a MAP estimation, connecting tracklets and generating long term trajectories of every target, filling gaps between disconnected trajectories, and removing irrelevant trajectories in a probabilistic fashion by considering them as false trajectories.
[0090] The method can also incorporate a target detection system in the hierarchical trajectory creation.
[0091] The method can also include the use of low-level tracklets as supportive information for filling the gaps between high-level tracklets, to produce smooth trajectories.
[0092] The method can also incorporate prior knowledge to increase accuracy, such as the number of targets to be tracked, regions of interest for the targets, camera parameters and/or calibration data, ground plane location, entry/exit zones of the targets in the scene, and appearance and/or shape representation of the targets.
[0093] The system performing the above described method does not require any prior information about the objects or the scene. In other words, object detection is not required.
[0094] The system can be operable to work with motion patterns of all objects in the scene and in its basic form it does not require any appearance or shape representation of the targets.
[0095] A method for multi-object tracking in videos is also provided, which exploits an event descriptor obtained from an event description system. A method for multi-object tracking in videos exploiting local and global motion patterns of every pixel in the video is also provided.
[0096] List of references:
[0097] 1. Kratz, L. and K. Nishino, Tracking Pedestrians Using Local Spatio-Temporal Motion Patterns in Extremely Crowded Scenes. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2012. 34(5): p. 987-1002.
[0098] 2. Javan Roshtkhari, M. and Levine, M. Multiple Object Tracking Using Local Motion Patterns. in Proceedings of the British Machine Vision Conference. 2014. BMVA Press.
[0099] 3. Roshan Zamir, A., A. Dehghan, and M. Shah, GMCP-Tracker: Global Multi-object Tracking Using Generalized Minimum Clique Graphs, in Computer Vision - ECCV 2012. 2012, Springer Berlin Heidelberg. p. 343-356.
[00100] 4. Lu, Z. and L. van der Maaten, Structure Preserving Object Tracking, in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. 2013. p. 1838-1845.
[00101] 5. Jingchen, L., et al., Tracking Sports Players with Context-Conditioned Motion Models, in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. 2013. p. 1830-1837.
[00102] 6. Chang, H., L. Yuan, and R. Nevatia, Multiple Target Tracking by Learning-Based Hierarchical Association of Detection Responses. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2013. 35(4): p. 898-910.
[00103] 7. Yang, B. and R. Nevatia, Multi-Target Tracking by Online Learning a CRF Model of Appearance and Motion Patterns. International Journal of Computer Vision, 2014. 107(2): p. 203-217.

CPST Doc: 374425.1 Date Recue/Date Received 2021-08-25
[00104] 8. Roshtkhari, M.J. and M.D. Levine, Online Dominant and Anomalous Behavior Detection in Videos, in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. 2013. p. 2609-2616.
[00105] 9. Roshtkhari, M.J. and M.D. Levine, An on-line, real-time learning method for detecting anomalies in videos using spatio-temporal compositions. Computer Vision and Image Understanding, 2013. 117(10): p. 1436-1452.
[00106] 10. Roshtkhari, M.J. and M.D. Levine, Human activity recognition in videos using a single example. Image and Vision Computing, 2013. 31(11): p. 864-876.
[00107] 11. Roshtkhari, M.J. and M.D. Levine, METHODS AND SYSTEMS RELATING TO ACTIVITY ANALYSIS. 2014.
[00108] 12. CAVIAR Dataset. http://homepages.inf.ed.ac.uk/rbf/CAVIARDATA1/.
[00109] 13. Shandong, W., B.E. Moore, and M. Shah, Chaotic invariants of Lagrangian particle trajectories for anomaly detection in crowded scenes, in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. 2010. p. 2054-2060.
[00110] 14. Brox, T. and J. Malik, Object Segmentation by Long Term Analysis of Point Trajectories, in Computer Vision - ECCV 2010. 2010, Springer Berlin Heidelberg. p. 282-295.
[00111] 15. Gilks, W.R., S. Richardson, and D.J. Spiegelhalter, Markov chain Monte Carlo in practice. 1998: Chapman & Hall.
[00112] 16. Andriyenko, A. and K. Schindler, Multi-target tracking by continuous energy minimization, in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. 2011. p. 1265-1272.
[00113] 17. Song, B., et al., A stochastic graph evolution framework for robust multi-target tracking, in Computer Vision - ECCV 2010. 2010, Springer-Verlag. p. 605-619.
[00114] 18. Li, Z., L. Yuan, and R. Nevatia, Global data association for multi-object tracking using network flows, in Computer Vision and Pattern Recognition (CVPR), 2008 IEEE Conference on. 2008. p. 1-8.
[00115] 19. Yuan, L., H. Chang, and R. Nevatia, Learning to associate: HybridBoosted multi-target tracker for crowded scene, in Computer Vision and Pattern Recognition (CVPR), 2009 IEEE Conference on. 2009. p. 2953-2960.
[00116] For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
In addition, numerous specific details are set forth in order to provide a thorough understanding of the examples described herein. However, it will be understood by those of ordinary skill in the art that the examples described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the examples described herein. Also, the description is not to be considered as limiting the scope of the examples described herein.
[00117] It will be appreciated that the examples and corresponding diagrams used herein are for illustrative purposes only. Different configurations and terminology can be used without departing from the principles expressed herein. For instance, components and modules can be added, deleted, modified, or arranged with differing connections without departing from these principles.
[00118] It will also be appreciated that any module or component exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the system 10, any component of or related to the system 10, etc., or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media.
[00119] The steps or operations in the flow charts and diagrams described herein are just for example. There may be many variations to these steps or operations without departing from the principles discussed above. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified.
[00120] Although the above principles have been described with reference to certain specific examples, various modifications thereof will be apparent to those skilled in the art as outlined in the appended claims.
Claims (27)
1. A method of tracking objects in a scene from a sequence of images captured by an imaging device, the method comprising:
processing the sequence of images to generate sequential images at a plurality of hierarchical levels to generate a set of regions of interest;
at each of the hierarchical levels:
examining pairs of sequential images to transitively link pixels in the images into short tracklets, wherein a plurality of independent short tracklets can be associated with a same object and wherein a total number of short tracklets can be unknown a priori; and
grouping short tracklets that indicate similar motion patterns and proximity in both space and time, to generate representative tracklets; and
grouping the representative tracklets to generate a tracking result for at least one object.
2. The method of claim 1, wherein the sequence of images are segmented at the plurality of hierarchical levels according to local and global shapes and motion patterns.
3. The method of claim 1 or claim 2, wherein the processing comprises assigning a codeword to each pixel in each of the sequence of images using a codebook for each of the hierarchical levels.
4. The method of claim 3, wherein a low level codebook uses local visual context and a higher level codebook uses a global visual content.
5. The method of any one of claims 1 to 4, further comprising rejecting at least one representative tracklet as a false positive.
6. The method of any one of claims 1 to 5, wherein a pair of high level representative tracklets having a discontinuity are linked using a low level representative tracklet.
7. The method of any one of claims 1 to 6, wherein grouping the representative tracklets comprises using a maximum a posteriori (MAP) estimation for performing data association.
CPST Doc: 374427.1 Date Recue/Date Received 2021-08-25
8. The method of any one of claims 1 to 7, further comprising performing target detection using prior knowledge, and using detection responses in generating the representative tracklets.
9. The method of claim 8, wherein the prior knowledge comprises any one or more of: a number of targets to be tracked, one or more regions of interest for the targets, at least one parameter of the imaging device, calibration data for the imaging device, ground plane location, entry or exit zones of the targets, and appearance or shape representation of the targets.
10. A computer readable storage medium comprising computer executable instructions for tracking objects in a scene from a sequence of images captured by an imaging device, the computer executable instructions comprising instructions for:
processing the sequence of images to generate sequential images at a plurality of hierarchical levels to generate a set of regions of interest;
at each of the hierarchical levels:
examining pairs of sequential images to transitively link pixels in the images into short tracklets, wherein a plurality of independent short tracklets can be associated with a same object and wherein a total number of short tracklets can be unknown a priori; and
grouping short tracklets that indicate similar motion patterns and proximity in both space and time, to generate representative tracklets; and
grouping the representative tracklets to generate a tracking result for at least one object.
11. The computer readable storage medium of claim 10, wherein the sequence of images are segmented at the plurality of hierarchical levels according to local and global shapes and motion patterns.
12. The computer readable storage medium of claim 10 or claim 11, wherein the processing comprises assigning a codeword to each pixel in each of the sequence of images using a codebook for each of the hierarchical levels.
13. The computer readable storage medium of claim 12, wherein a low level codebook uses local visual context and a higher level codebook uses a global visual content.
14. The computer readable storage medium of any one of claims 10 to 13, further comprising instructions for rejecting at least one representative tracklet as a false positive.
15. The computer readable storage medium of any one of claims 10 to 14, wherein a pair of high level representative tracklets having a discontinuity are linked using a low level representative tracklet.
16. The computer readable storage medium of any one of claims 10 to 15, wherein grouping the representative tracklets comprises using a maximum a posteriori (MAP) estimation for performing data association.
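One common way to realize MAP data association over representative tracklets (claim 16) is to score each one-to-one link hypothesis between tracklet end states and tracklet start states by its negative log posterior and pick the assignment minimising the total. The Gaussian motion likelihood, Bernoulli link prior, and exhaustive enumeration below are illustrative assumptions, feasible only for small numbers of tracklets:

```python
import itertools
import numpy as np

def map_associate(end_states, start_states, sigma=2.0, p_link=0.9):
    """Exact MAP data association for small problems: enumerate one-to-one
    link hypotheses between tracklet end states and tracklet start states
    and keep the hypothesis maximising the posterior (equivalently,
    minimising summed negative log-probabilities)."""
    end = np.asarray(end_states, float)
    start = np.asarray(start_states, float)
    d2 = ((end[:, None, :] - start[None, :, :]) ** 2).sum(axis=2)
    cost = d2 / (2 * sigma ** 2) - np.log(p_link)    # -log of link likelihood * prior
    no_link = -np.log(1 - p_link)                    # -log of "no link" hypothesis
    best, best_links = float("inf"), []
    n, m = cost.shape
    for perm in itertools.permutations(range(m), min(n, m)):
        links, total = [], 0.0
        for i, j in enumerate(perm):
            if cost[i, j] <= no_link:                # link beats "no link"
                links.append((i, j))
                total += cost[i, j]
            else:
                total += no_link
        if total < best:
            best, best_links = total, links
    return best_links
```

For larger problems the same objective is typically solved with a min-cost assignment algorithm rather than enumeration.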
17. The computer readable storage medium of any one of claims 10 to 16, further comprising instructions for performing target detection using prior knowledge, and using detection responses in generating the representative tracklets.
18. The computer readable storage medium of claim 17, wherein the prior knowledge comprises any one or more of: a number of targets to be tracked, one or more regions of interest for the targets, at least one parameter of the imaging device, calibration data for the imaging device, ground plane location, entry or exit zones of the targets, and appearance or shape representation of the targets.
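The prior knowledge enumerated in claim 18 amounts to a tracker configuration. A hypothetical container might look like the following; every field name here is an illustrative assumption, not terminology from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TrackingPriors:
    """Optional prior knowledge usable for target detection (cf. claim 18)."""
    num_targets: Optional[int] = None                  # number of targets to track
    regions_of_interest: List[Tuple[int, int, int, int]] = field(default_factory=list)
    camera_params: Optional[dict] = None               # imaging device parameters
    calibration: Optional[dict] = None                 # calibration data
    ground_plane: Optional[Tuple[float, float, float, float]] = None
    entry_exit_zones: List[Tuple[int, int, int, int]] = field(default_factory=list)
    appearance_model: Optional[object] = None          # appearance/shape representation
```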
19. A tracking system comprising a processor and memory, the memory comprising computer executable instructions for causing the processor to track objects in a scene from a sequence of images captured by an imaging device, the computer executable instructions comprising instructions for:
processing the sequence of images to generate sequential images at a plurality of hierarchical levels to generate a set of regions of interest;
at each of the hierarchical levels:
examining pairs of sequential images to transitively link pixels in the images into short tracklets, wherein a plurality of independent short tracklets can be associated with a same object and wherein a total number of short tracklets can be unknown a priori; and grouping short tracklets that indicate similar motion patterns and proximity in both space and time, to generate representative tracklets; and grouping the representative tracklets to generate a tracking result for at least one object.
20. The system of claim 19, wherein the sequence of images are segmented at the plurality of hierarchical levels according to local and global shapes and motion patterns.
21. The system of claim 19 or claim 20, wherein the processing comprises assigning a codeword to each pixel in each of the sequence of images using a codebook for each of the hierarchical levels.
22. The system of claim 21, wherein a low level codebook uses local visual context and a higher level codebook uses a global visual content.
23. The system of any one of claims 19 to 22, further comprising instructions for rejecting at least one representative tracklet as a false positive.
24. The system of any one of claims 19 to 23, wherein a pair of high level representative tracklets having a discontinuity are linked using a low level representative tracklet.
25. The system of any one of claims 19 to 24, wherein grouping the representative tracklets comprises using a maximum a posteriori (MAP) estimation for performing data association.
26. The system of any one of claims 19 to 25, further comprising instructions for performing target detection using prior knowledge, and using detection responses in generating the representative tracklets.
27. The system of claim 26, wherein the prior knowledge comprises any one or more of: a number of targets to be tracked, one or more regions of interest for the targets, at least one parameter of the imaging device, calibration data for the imaging device, ground plane location, entry or exit zones of the targets, and appearance or shape representation of the targets.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2891836A CA2891836C (en) | 2015-05-15 | 2015-05-15 | System and method for tracking moving objects in videos |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2891836A CA2891836C (en) | 2015-05-15 | 2015-05-15 | System and method for tracking moving objects in videos |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2891836A1 CA2891836A1 (en) | 2016-11-15 |
CA2891836C true CA2891836C (en) | 2022-07-26 |
Family
ID=57298821
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA2891836A Active CA2891836C (en) | 2015-05-15 | 2015-05-15 | System and method for tracking moving objects in videos |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA2891836C (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SG11202000855VA (en) * | 2017-08-17 | 2020-02-27 | Nat Univ Singapore | Video visual relation detection methods and systems |
CN115099343B (en) * | 2022-06-24 | 2024-03-29 | 江南大学 | Distributed tag Bernoulli fusion tracking method under limited field of view |
2015
- 2015-05-15 CA CA2891836A patent/CA2891836C/en active Active
Also Published As
Publication number | Publication date |
---|---|
CA2891836A1 (en) | 2016-11-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9824281B2 (en) | System and method for tracking moving objects in videos | |
Fan et al. | Video anomaly detection and localization via gaussian mixture fully convolutional variational autoencoder | |
Sindagi et al. | Cnn-based cascaded multi-task learning of high-level prior and density estimation for crowd counting | |
CA2953394C (en) | System and method for visual event description and event analysis | |
Benezeth et al. | Abnormal events detection based on spatio-temporal co-occurences | |
Zhang et al. | Multi-target tracking by learning local-to-global trajectory models | |
Tran et al. | Activity analysis in crowded environments using social cues for group discovery and human interaction modeling | |
Joo et al. | A multiple-hypothesis approach for multiobject visual tracking | |
Tran et al. | Social cues in group formation and local interactions for collective activity analysis | |
Tesfaye et al. | Multi‐object tracking using dominant sets | |
Gan et al. | Online CNN-based multiple object tracking with enhanced model updates and identity association | |
Luo et al. | Real-time people counting for indoor scenes | |
Vu et al. | Energy-based models for video anomaly detection | |
Miller et al. | Detection theory for graphs | |
Heili et al. | Detection-based multi-human tracking using a CRF model | |
Cheng et al. | Entropy guided attention network for weakly-supervised action localization | |
Flaborea et al. | Contracting skeletal kinematics for human-related video anomaly detection | |
CA2891836C (en) | System and method for tracking moving objects in videos | |
Huang et al. | Multi-object tracking via discriminative appearance modeling | |
Zhu et al. | Human behavior clustering for anomaly detection | |
Ullah et al. | AD‐Graph: Weakly Supervised Anomaly Detection Graph Neural Network | |
Alzahrani et al. | Anomaly detection in crowds by fusion of novel feature descriptors | |
Yasir et al. | Comparative analysis of GMM, KNN, and ViBe background subtraction algorithms applied in dynamic background scenes of video surveillance system | |
Mahmood et al. | Anomaly event detection and localization of video clips using global and local outliers | |
Zhang et al. | A new visual object tracking algorithm using Bayesian Kalman filter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request |
Effective date: 20200330 |