US20210166072A1 - Learning highlights using event detection - Google Patents
- Publication number
- US20210166072A1 (U.S. application Ser. No. 17/120,581)
- Authority
- US
- United States
- Prior art keywords
- media content
- content item
- highlight
- event
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06K9/6256
- G06N20/00—Machine learning
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06K9/00536; G06K9/00724; G06K9/46; G06K9/4647
- G06V10/40—Extraction of image or video features
- G06V10/507—Summing image-intensity values; Histogram projection analysis
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of sport video content
- G06F2218/12—Classification; Matching
- G06K2009/00738
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06V20/44—Event detection
Definitions
- the present disclosure generally relates to automatically identifying video highlights.
- Sports videos are punctuated by moments of excitement. For many sports, the exciting moments are scattered throughout a full video of the game, which typically consists primarily of uninteresting material. For every home run there are balls and strikes; for every touchdown or interception, there are unproductive running plays and incomplete passes. Soccer and hockey can have entire games with just a few goals. Most viewers want just these interesting and exciting portions (herein “sports video highlights”), without having to watch an entire game.
- a highlight learning module trains highlight classifiers to identify highlights in videos based on event vectors which characterize the videos according to detected events.
- To identify the events, features are extracted from the videos on a per-frame basis. The features are used to identify events within the video using event models trained in an unsupervised manner to identify recurring events within the videos.
- the training videos are transcribed into a series of events, and event vectors are constructed for the training videos to train a classifier according to the event vectors. Since the event framework is developed with an unsupervised assessment of the low-level features, the only supervision needed in this technique is to designate, at a high level, each video in the training sets as highlight or non-highlight.
- the low-level feature and event detection framework enables a system applicable to a wide variety of sports videos.
- the highlight learning module is used to classify video clips using the trained classifiers.
- the highlight learning module receives a video, or portion of a video to be classified.
- the highlight learning module extracts features from the video or portion thereof to match the features used to train the event models.
- the extracted features from the video clips are transcribed by the event models.
- An event vector is created for the transcribed events, and the video is classified using the event vector applied to the highlight classifier to determine if the video is a highlight according to this highlight classifier.
- the same event vector for a video can be classified using several highlight classifiers which can determine whether the video belongs to any of the highlight types.
- FIG. 1 is a block diagram of a video hosting service in which highlight learning can be employed according to an embodiment.
- FIG. 2 illustrates the various components of a highlight learning module used in the video hosting service according to an embodiment.
- FIG. 3 is a detailed view of the event modeling components according to an embodiment.
- FIG. 4 is a data flow diagram showing iterative refinement of the event models.
- FIG. 5 presents an overview of highlight detection using event modeling according to an embodiment.
- FIG. 1 is a block diagram of a video hosting service 100 in which highlight learning with event modeling can be employed, according to one embodiment.
- the video hosting service 100 represents a system such as that of YOUTUBE that stores and provides videos to clients such as the client device 135 .
- the video hosting site 100 communicates with a plurality of content providers 130 and client devices 135 via a network 140 to facilitate sharing of video content between users. Note that in FIG. 1 , for the sake of clarity only one instance of content provider 130 and client 135 is shown, though there could be any number of each.
- the video hosting service 100 additionally includes a front end interface 102 , a video serving module 104 , a video search module 106 , an upload server 108 , a user database 114 , and a video repository 116 .
- video hosting site 100 can be implemented as single or multiple components of software or hardware.
- functions described in one embodiment as being performed by one component can also be performed by other components in other embodiments, or by a combination of components.
- functions described in one embodiment as being performed by components of the video hosting website 100 can also be performed by one or more client devices 135 in other embodiments if appropriate.
- Client devices 135 are computing devices that execute client software, e.g., a web browser or built-in client application, to connect to the front end interface 102 of the video hosting service 100 via a network 140 and to display videos.
- the client device 135 might be, for example, a personal computer, a personal digital assistant, a cellular, mobile, or smart phone, or a laptop computer.
- the network 140 is typically the Internet, but may be any network, including but not limited to a LAN, a MAN, a WAN, a mobile wired or wireless network, a cloud computing network, a private network, or a virtual private network.
- Client device 135 may comprise a personal computer or other network-capable device such as a personal digital assistant (PDA), a mobile telephone, a pager, a television “set-top box,” and the like.
- the content provider 130 provides video content to the video hosting service 100 and the client 135 views that content.
- content providers may also be content viewers.
- the content provider 130 may be the same entity that operates the video hosting site 100 .
- the content provider 130 operates a client device to perform various content provider functions.
- Content provider functions may include, for example, uploading a video file to the video hosting website 100 , editing a video file stored by the video hosting website 100 , or editing content provider preferences associated with a video file.
- the client 135 is a device operating to view video content stored by the video hosting site 100 .
- Client 135 may also be used to configure viewer preferences related to video content.
- the client 135 includes an embedded video player such as, for example, the FLASH player from Adobe Systems, Inc. or any other player adapted for the video file formats used in the video hosting website 100 .
- client and content provider as used herein may refer to software providing both client and content-providing functionality, or to the hardware on which the software executes.
- a “content provider” also includes the entities operating the software and/or hardware, as is apparent from the context in which the terms are used.
- the upload server 108 of the video hosting service 100 receives video content from client devices 135 . Received content is stored in the video repository 116 .
- a video serving module 104 provides video data from the video repository 116 to the clients 135 .
- Client devices 135 may also search for videos of interest stored in the video repository 116 using a video search module 106 , such as by entering textual queries containing keywords of interest.
- Front end interface 102 provides the interface between client 135 and the various components of the video hosting site 100 .
- the user database 114 is responsible for maintaining a record of all registered users of the video hosting server 100 .
- Registered users include content providers 130 and/or users who simply view videos on the video hosting website 100 .
- Each content provider 130 and/or individual user registers account information including login name, electronic mail (e-mail) address and password with the video hosting server 100 , and is provided with a unique user ID. This account information is stored in the user database 114 .
- the video repository 116 contains a set of videos 117 submitted by users.
- the video repository 116 can contain any number of videos 117 , such as tens of thousands or hundreds of millions.
- Each of the videos 117 has a unique video identifier that distinguishes it from each of the other videos, such as a textual name (e.g., the string “a91qrx8”), an integer, or any other way of uniquely naming a video.
- the videos 117 can be packaged in various containers such as AVI, MP4, or MOV, and can be encoded using video codecs such as MPEG-2, MPEG-4, WebM, WMV, H.263, H.264, and the like.
- the videos 117 further have associated metadata 117 A, e.g., textual metadata such as a title, description, and/or tags.
- the video hosting service 100 further comprises a highlight learning module 119 that trains accurate video classifiers for a set of highlights. The trained classifiers can then be applied to a given video to automatically determine whether the video is a highlight.
- the highlight learning module 119 can separate a longer video into component parts and identify the portion (or portions) that contain a highlight. If the portion is identified as a highlight, the portion can be made an individual video alone. For example, a user may submit a new video, and the highlight learning module 119 can automatically determine if the video or some portion(s) thereof is a highlight.
- the highlight status of a video can be used to update a video's metadata 117 A.
- the highlight learning module 119 is now described in greater detail.
- FIG. 2 illustrates the various components of the highlight learning module 119 , according to one embodiment.
- the highlight learning module 119 comprises various modules to model video events, derive video features, train classifiers for highlights, and the like.
- the highlight learning module 119 is incorporated into an existing video hosting service 100 , such as YOUTUBE.
- the highlight learning module 119 has access to the video repository 116 of the video hosting service 100 .
- the highlight learning module 119 additionally comprises a features repository 205 that stores, for videos of the video repository 116 , associated sets of features that characterize the videos with respect to one or more types of visual or audio information, such as color, motion, and audio information.
- the features of a video 117 are distinct from the raw content of the video itself and are derived from it by a feature extraction module 230 .
- the features are stored as a vector of values, the vector having the same dimensions for each of the videos 117 for purposes of consistency.
- the features repository 205 maintains features for the videos identified as pertaining to sports.
- the highlight learning module 119 further comprises highlight identifiers 250 that describe various types of highlights which can be classified by the highlight learning module 119 .
- different highlight videos can be learned based on different types of sports, or can be learned based on different definitions of a “highlight” for a sport.
- the highlight training can be performed by marking a training set which includes the goal clips as positive highlight examples.
- the highlight identifiers 250 comprise an event model as well as highlight classifiers.
- the event models are used to detect events within a video clip. These events may or may not be semantically meaningful at a high level, but represent coherent patterns of feature data in the video clips.
- the events identified in a particular video clip are chosen from a universe of event models.
- the event models are determined and refined by an event modeler 220 .
- Each event type can appear several times or not at all within a particular video clip.
- conceptually a baseball event model may have an event of a batter preparing, and another event of a pitcher throwing a ball, which are associated with characteristic patterns of features over time.
- An individual baseball clip may have several batter-preparation events, several pitcher-throwing events, and no “hit” events.
- the events are learned in an unsupervised manner, and therefore are not necessarily semantically meaningful. For example, a detected event type could merely be a momentary close-up on a player.
- the highlight identifiers 250 also include highlight classifiers.
- the highlight classifiers are used to classify video clips according to a highlight using an event vector constructed from the events detected in the video clip.
- the highlight classifier is a function that outputs a score representing a degree to which the events associated with the video indicate that the particular highlight applies to the video, thus serving as a measure indicating whether the video is a highlight video.
- the highlight classifiers may output a confidence range for the highlight determination, or may output a boolean value for the highlight.
- Each highlight classifier can be used to identify a different type of highlight.
- the classifiers can be trained according to different definitions of what constitutes a highlight.
- one highlight for a baseball game could be defined as any hit, another as any run, and another as an out. This can be accomplished by changing which videos are included in a positive highlight training set. As a result, different portions of a baseball game may be classified as a “highlight” depending on which highlight classifier is used.
- the features extracted using the feature extraction module 230 in one embodiment are visual low-level frame-based features.
- one embodiment uses a color histogram and one embodiment uses histogram of oriented gradients to extract features from frames in a video, though other frame-based features can be used.
- the features extracted are collected on a per-frame basis and could comprise other frame-based features such as an identified number of faces or a histogram of oriented optical flow, and may comprise a combination of extracted features.
- Other suitable features include a Laplacian-of-Gaussian (LoG) or Scale Invariant Feature Transform (SIFT) feature extractor, a color histogram computed using hue and saturation in HSV color space, motion rigidity features, texture features, filter responses (e.g. derived from Gabor wavelets), including 3D filter responses, edge features using edges detected by a Canny edge detector, gradient location and orientation histogram (GLOH), local energy-based shape histogram (LESH), or speeded-up robust features (SURF).
- Additional audio features can also be used, such as volume, an audio spectrogram, or a stabilized auditory image.
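- As an illustration of the frame-based visual features above, the following is a minimal sketch of a per-frame hue/saturation color histogram. The 8×8 synthetic RGB frame and the 8-bin layout are stand-ins for illustration, not the patent's actual parameters.

```python
# Sketch: a per-frame color histogram over hue and saturation in HSV
# color space, one of the low-level features named in the text. The
# frame here is a tiny synthetic RGB image.
import colorsys
import numpy as np

rng = np.random.default_rng(4)
frame = rng.random((8, 8, 3))  # 8x8 RGB frame, channel values in [0, 1]

# Convert each pixel to HSV and keep hue and saturation.
hs = np.array([colorsys.rgb_to_hsv(*px)[:2] for px in frame.reshape(-1, 3)])
hist_h, _ = np.histogram(hs[:, 0], bins=8, range=(0, 1))
hist_s, _ = np.histogram(hs[:, 1], bins=8, range=(0, 1))
feature = np.concatenate([hist_h, hist_s])  # 16-dim per-frame feature
print(feature.sum())  # 128 = two histograms over 64 pixels
```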
- the features are reduced.
- the feature reduction is performed in one embodiment using a learned linear projection using principal component analysis to reduce the dimensionality of the feature vectors to 50, or some other suitable number less than 100.
- Other embodiments can use additional techniques to reduce the number of dimensions in the feature vectors.
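- The dimensionality reduction described above can be sketched as follows, assuming scikit-learn's PCA as one possible implementation of a learned linear projection. The feature values are random stand-ins for real color-histogram or HOG features.

```python
# Sketch: reducing per-frame feature vectors to 50 dimensions with a
# learned linear projection (principal component analysis), per the text.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frame_features = rng.normal(size=(1000, 256))  # 1000 frames, 256 raw dims

pca = PCA(n_components=50)  # target dimensionality from the text
reduced = pca.fit_transform(frame_features)
print(reduced.shape)  # (1000, 50)
```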
- Partitioning module 235 partitions the videos 117 into different sets used for performing training of the classifiers 212 . More specifically, the partitioning module 235 determines training and validation sets from the videos 117 , where the training set is used for training the highlight identifiers 250 and the validation set is used to test the accuracy of the trained/learned identifiers 250 . The partitioning module 235 also divides the videos of the training and validation sets into “positive” examples representative of a video highlight and “negative” examples which are not representative. In one embodiment, the designation of highlight or no-highlight is determined manually for the training and validation sets.
- the training sets can also be determined on a per-classifier basis. That is, different training sets may be used for each different highlight classifier.
- the training set as a whole can be the same across highlights, while the divisions into positive and negative training sets differ according to the highlight used. For example, a positive training set comprising “hits” in baseball will train a classifier to identify hits, while a positive training set comprising “walks” will train a classifier to identify walks.
- a positive training set can also be designated to include several such “types” of highlights (e.g. hits and walks).
- the highlight learning module 119 additionally comprises a classifier training module 240 that iteratively learns the highlight classifier based on the events from the positive and negative training sets identified by the event models.
- the highlight classifier is a linear support vector machine (LSVM) which is trained to classify an input video event vector as either highlight or non-highlight.
- Other classification methods can be used, such as logistic regression or a boosting algorithm.
- A detailed view of the event modeling framework is shown in FIG. 3 , according to an embodiment.
- This figure illustrates the conceptual framework for event modeling as well as the initialization for the initial event models.
- the initial event modeling and the event model refinement is performed on an unsupervised basis, and can be determined based on the extracted features alone.
- a diarization process is used to initialize the event models, such that the videos are first separated into shots, then segmented into small sequential chunks of video, and the chunks are then clustered into initial event types.
- the event types themselves are composed of multiple possible states. This organization is shown in FIG. 3 .
- the video 300 is separated into several shots 310 .
- Each shot 310 is an individual sequence of frames captured without interruption.
- the transition between one video shot to another shot can be determined by a shot boundary detection algorithm, for example using color histogram, edge changes, changes in pixel intensities, motion differences, or other features.
- the shot is separated into a selection of segments according to time.
- the segments are chunks of 500 milliseconds of video. Since the segments are small discrete chunks within a single shot, each segment can be assumed to correspond to an individual event.
- each training video is broken down into individual segments, each presumed to belong to an individual event.
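- The segmentation step above can be sketched as follows; the 30 fps frame rate is an assumption for illustration (giving 15 frames per 500 ms segment), not a value specified in the text.

```python
# Sketch: splitting a shot's frames into ~500 ms segments, assuming a
# hypothetical frame rate of 30 fps.
def segment_shot(frames, fps=30, segment_ms=500):
    frames_per_segment = max(1, int(fps * segment_ms / 1000))
    return [frames[i:i + frames_per_segment]
            for i in range(0, len(frames), frames_per_segment)]

shot = list(range(100))   # stand-in for 100 frame feature vectors
segments = segment_shot(shot)
print(len(segments))      # 7 segments (6 full, 1 partial)
```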
- the segments in the entire training set are clustered using the feature vectors associated with each segment.
- the clustering in one embodiment is performed using a bottom-up hierarchical agglomerative clustering algorithm with Ward's linkage function. This function specifies the distance between two clusters, computed as the increase in the error sum of squares after fusing the two clusters into a single cluster.
- the clustering can also be performed using a K-means clustering algorithm.
- the membership in each cluster is used to assign an event to the associated segments, and therefore the final number of clusters determines the number of events in the set of events.
- the number of clusters can be chosen by the designer or modified to fit the number of events which optimizes the highlight classification results. Generally, the number of clusters is within a range of 30-100.
- each cluster is assigned an event designation (e.g. E 1 , E 2 , . . . E n ), and the associated segments are assigned to an event 330 .
- the clusters are used to identify initial event designations for the video clips. These event designations are used to develop event models to identify event sequences within video clips.
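- The clustering of segment features into initial event designations can be sketched as below, assuming scikit-learn's AgglomerativeClustering with Ward linkage as one implementation of the bottom-up algorithm described above. The segment features are random stand-ins.

```python
# Sketch: Ward-linkage agglomerative clustering of segment feature
# vectors; each cluster label becomes an initial event designation.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)
segment_features = rng.normal(size=(500, 50))  # 500 segments, 50-dim features

n_events = 30  # chosen cluster count, within the 30-100 range in the text
clustering = AgglomerativeClustering(n_clusters=n_events, linkage="ward")
event_designations = clustering.fit_predict(segment_features)
print(event_designations.shape, event_designations.max() + 1)
```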
- the event models can be developed using two different methods, Hidden Markov Modeling and Mixture Modeling.
- each event is individually modeled as a group of states 330 using a hidden Markov model (HMM).
- this instance of E 2 comprises several states which may repeat within the state model.
- the event models are a set of HMMs M, each M i associated with an event type E i .
- the set of models M are jointly trained using an embedded HMM training technique.
- the embedded HMM training technique uses frequency and transition information for each HMM. These statistics are calculated by determining the frequency of each event and a transition matrix between the events, measured globally across the set of events and across all event training videos. That is, the statistics determine the global frequency of occurrence of each event E 1 -E n .
- the transition matrix quantifies the frequency of transitions from each event to each other event. For each event pair (E i , E j ), the transition matrix records the number of times the first event immediately precedes the second within the group of event training designations. Event order is relevant; for example, the transition (E 1 , E 5 ) is distinct from the transition (E 5 , E 1 ).
- the embedded HMM technique does not train each HMM in isolation (i.e. based only on the video frames/segments associated with the event). Rather, the HMMs are trained on the videos themselves. Each video clip is treated as a sequence of distinct events to be modeled by the group of HMMs.
- Each HMM is a “super-state” of a “super-HMM,” where the super-HMM models the video clips and thereby identifies a sequence of event HMMs within each video clip. In this way, the event HMMs are interconnected and trained simultaneously.
- the initial event designations, along with the event frequencies and the transition matrix, are used according to known embedded HMM training techniques.
- the embedded HMMs trained by this method result in an initial set of event models.
- an expectation maximization model is used to identify the most likely sequence of events in the video clip.
- One embodiment uses the Viterbi algorithm to identify the maximum-likelihood path and thereby identify the events associated with the video clip features. Other expectation-maximization techniques can also be used.
- the event models are developed using a mixture model, such as a Gaussian Mixture Model (GMM).
- the event state models 340 are not used. Instead, a Gaussian mixture model is trained using the features and the associated event designations 330 .
- the initial event designations are used to train a GMM classifier to assign probable event membership to the features of a frame.
- the GMM classifies the features for a frame by assigning a probability P i for each possible event E i .
- the probabilities can be calculated as a distance from the center of the cluster of features comprising each event. In this way, the probabilities can also be thought of as a distance “cost.”
- the GMM assignment of features to an event can also be adjusted by using a cost matrix to model the costs of transitioning from one state to another.
- the cost matrix C indicates the costs to transition from each event to each other event.
- the cost of transitioning from a state to itself is set to zero.
- the cost matrix can be derived from a probability matrix as discussed with respect to the HMM method above.
- the cost is set inversely to the probability that one state transitions to another (i.e. it is more costly to transition to a state which is more infrequent, and less costly to transition to a state which is more frequent).
- An example cost matrix is shown in Table 1:
- Each column in Table 1 indicates a cost function for a different source event, while the rows indicate the destination event.
- the third column for E 3 would be the event transition costs C 3 from E 3 to each event E 1 -E 4 .
- the cost matrix C shown in Table 1 is used to adjust the frame-based probabilities P according to the likelihood of the associated transition. For example, if the prior frame was E 3 , the probability vector P calculated from the next frame's features is adjusted by the transition cost vector C 3 .
- the model F is used to identify events within a video clip using a GMM. Since this model includes a notion of state (the values of C depend on the previously identified event, introducing an element of sequential dependence), the identification of events in a video clip can be performed using an expectation-maximization model to determine the most likely sequence of events. This describes the calculation of the event models and their identification of events within a video clip.
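- The GMM path can be sketched as below, assuming scikit-learn's GaussianMixture for the per-frame event probabilities and a hypothetical cost matrix applied in a greedy pass (a full decoder would use an expectation-maximization or Viterbi-style search instead). The 1-D features and cost values are toy stand-ins.

```python
# Sketch: per-frame event probabilities from a Gaussian mixture model,
# adjusted by a transition cost matrix that penalizes switching events.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Two synthetic "events": features near 0 and features near 5.
train = np.concatenate([rng.normal(0, 0.5, 200),
                        rng.normal(5, 0.5, 200)]).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(train)

frames = np.array([[0.1], [0.2], [4.9], [5.1]])
probs = gmm.predict_proba(frames)  # per-frame event probabilities

# Hypothetical cost matrix: zero cost to stay in the same event,
# a fixed penalty (subtracted from the score) to switch events.
cost = np.array([[0.0, 0.2],
                 [0.2, 0.0]])

events = [int(probs[0].argmax())]
for p in probs[1:]:
    prev = events[-1]
    events.append(int((p - cost[prev]).argmax()))
print(events)  # two frames in one event, two in the other
```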
- the event models developed above were an initial pass based on clustering the events. The event models can be further refined with reference to FIG. 4 .
- FIG. 4 illustrates a data flow diagram showing iterative refinement of the event models, according to an embodiment.
- the system obtains a set of extracted features 410 as described above.
- the extracted features are clustered to develop initial event designations 420 .
- the initial event designations are used to develop event models 430 .
- the features extracted 410 from the training video clips 400 are input to re-designate the training set video clips into an event sequence by evaluating the most likely event sequence.
- the event models in one embodiment are embedded HMMs, and the re-designation of the training set clips into an event sequence uses the Viterbi algorithm.
- the event model is a GMM and the events are designated using an expectation maximization (EM) model.
- the event models are then refined using the event designations 440 . That is, the event models are recalculated using the model calculation strategy according to the event model chosen. In addition, the event modeling can take into account a cost function of transitioning from one event to another in order to smooth event transition frequency.
- the refined event models 430 can now be used as the final event models, or the event designations 440 are used to iteratively refine the event models 430 , optionally to convergence.
- events are identified and transcribed within the video clips. The identified events are used as inputs to train highlight classifiers and as an input to classify the video using a trained highlight classifier.
- One set of event models can be used for event classification for a particular type of video, or several event models can be developed according to the training videos or the highlight classification needs.
- Referring to FIG. 5 , an overview of highlight classifier training for a video clip using event modeling is shown, according to an embodiment.
- This classification training can be performed by the highlight learning module 119 .
- the video clip features 500 have been extracted from the video clip.
- the video clip features 500 are the sequentially ordered frames in the video, F 1 through F i .
- events associated with those video clip features are discovered using the event models. This produces for the video clip a specific sequence of events, E 1 through E Fi .
- the event models detect the video clip events 510 according to a set of events E 1 through E n .
- the events are detected using an event model as described above.
- the size of the event model set varies.
- the event set comprises thirty events.
- the detected video events are assigned to each of the frames in the video using the event detection of the event models.
- the video clip events 510 include a list of the events corresponding to the video clip features. In this way, the video clip events 510 represent the video clip in an event space which indicates the video clip's composition of events.
- the video clip events 510 can be in any detected order, and may repeat sequentially or throughout the whole of the video clip events 510 .
- the histogram vector includes a unigram event histogram and a bigram event histogram.
- the unigram event histogram tallies the frequency each individual event occurs within the video clip events 510 . That is, the unigram event histogram calculates E 1 by tallying the number of times E 1 occurs in the video clip events 510 .
- the bigram statistics are also calculated to determine the frequency of event-to-event transitions. For each event pair (E i , E j ) the bigram statistics determine the number of times the first state immediately precedes the second state.
- E 1 followed by E 4 in the video clip events 510 would increment the tally of (E 1 , E 4 ) transitions in the event bigram histogram.
- the event vector statistics are calculated with reference to the individual video being characterized. This is distinct from the statistics calculated with respect to event modeling calculation and refinement, which were calculated across all event occurrence and transition information for the training set.
- the event bigrams include the transition states from an event to itself, for example (E 1 , E 1 ) and more generally (E n , E n ).
- in another embodiment, the transition states for an event transitioning to the same event are not included. These transitions may be excluded to reduce the dimensionality of the histogram vector 520 , particularly because similar information can be determined from the unigram event histogram.
- the histograms are combined into the histogram vector 520 .
- the histogram vector 520 serves as the input to the classifier that makes a video clip highlight determination 530 .
- the histogram vector 520 is also used for training of the highlight classifier.
- the histogram vectors are determined for the training set of videos by assessing the events within each training video and then forming a histogram vector for each video in the training set.
- the highlight classifier is next trained on the histogram vectors for the training set of videos to learn the histogram vectors identified with highlight videos and non-highlight videos.
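The classifier training just described can be illustrated with a minimal from-scratch linear SVM fit by stochastic sub-gradient descent on the hinge loss. This is a sketch for illustration only (a real system would typically use an off-the-shelf LSVM implementation), and the toy histogram vectors and function names are invented:

```python
import random

def train_linear_svm(X, y, epochs=200, lr=0.01, lam=0.01):
    """Fit a linear SVM on histogram vectors; y[i] is +1 (highlight) or -1."""
    dim = len(X[0])
    w = [0.0] * dim
    b = 0.0
    rng = random.Random(0)
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # Sub-gradient step: always regularize, add the hinge term only
            # when the example is inside the margin.
            for j in range(dim):
                grad = lam * w[j] - (y[i] * X[i][j] if margin < 1 else 0.0)
                w[j] -= lr * grad
            if margin < 1:
                b += lr * y[i]
    return w, b

def classify(w, b, x):
    """Sign of the decision function: +1 highlight, -1 non-highlight."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy histogram vectors: highlights concentrate mass on event 2,
# non-highlights on event 0.
X = [[5, 0, 1], [4, 1, 0], [0, 1, 5], [1, 0, 4]]
y = [-1, -1, 1, 1]
w, b = train_linear_svm(X, y)
```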
- the determination of what constitutes a “highlight” can be made by a supervised designation or, for example, by automatic means.
- a highlight in one embodiment is a specific type of event in a game, such as a “hit” or an “out” in baseball or a play of over 5 yards in football.
- a supervised designation could be made by manual assessment of video clips.
- Automatic detection of a highlight status is performed in one embodiment by associating statistics recorded in time with the video clips, such as a change in score.
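The automatic designation by in-time statistics can be sketched as follows; the helper name, clip boundaries, and score-change timestamps are all hypothetical:

```python
def label_clips_by_score(clip_bounds, score_change_times):
    """Mark a clip as a highlight when an in-game score change falls inside it.

    clip_bounds: (start_sec, end_sec) per clip; score_change_times: seconds
    at which the recorded score changed.
    """
    return [any(start <= t < end for t in score_change_times)
            for start, end in clip_bounds]

# Three one-minute clips; the score changed at 75 s and 130 s.
labels = label_clips_by_score([(0, 60), (60, 120), (120, 180)], [75, 130])
```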
- the event detection framework using embedded hidden Markov models enables the detection of complex events including the possibility of varying states within an event.
- the classifier is successfully applied to calculated event vectors based on event statistics.
- Video highlights are typically presented as a portion of a much longer video for an entire sporting event. For example, an entire video of a game can be assessed to determine which portions are the highlights. To achieve this, those skilled in the art will realize that the entire video can be broken into individual portions of the video, and the individual portions can be identified as highlights or non-highlights using the trained classifiers, as described above.
- the portions are determined by splitting the sports video into portions using a static determination, such as on a temporal basis (every 5 or 10 minutes) or on the basis of detected shots (every 5 or 10 shots). Alternatively, the portions are determined by using a “rolling” portion determination.
- a “rolling” determination could use a window of a determined length, and use the window to capture portions of the video. For example, a window of a length of 5 minutes could capture a first portion comprising the first 5 minutes, and a second portion comprising minutes 2-6. The identified highlight portions from the video could then be used to identify highlights for a user, or may be concatenated to form a “highlights only” video clip.
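A minimal sketch of the “rolling” portion determination, assuming a 5-minute window advanced in 1-minute steps (the function name and step size are illustrative, not stated in the text):

```python
def rolling_portions(video_len_sec, window_sec=300, step_sec=60):
    """Enumerate overlapping candidate portions of a long game video.

    A 5-minute window advanced by 1 minute yields portions covering
    minutes 0-5, 1-6, 2-7, and so on, as described above.
    """
    portions = []
    start = 0
    while start + window_sec <= video_len_sec:
        portions.append((start, start + window_sec))
        start += step_sec
    return portions

p = rolling_portions(600)  # a 10-minute video
```

Each portion would then be classified by the trained highlight classifier, and the portions marked as highlights concatenated into a “highlights only” clip.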
- While this disclosure relates to methods of identifying highlights in sports videos, the use of low-level feature extraction, event modeling, and event vectors can be used to identify events and classify according to a chosen property for any type of video with a set of recurring events. Since the features are low-level, the techniques do not require timely re-modeling for individual applications. For example, these techniques could be applied to traffic cameras, security cameras, and other types of video to identify recurring events of interest.
- Certain aspects of the present disclosure include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.
- the present disclosure also relates to an apparatus for performing the operations herein.
- This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer.
- a computer program may be stored in a computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of non-transient computer-readable storage medium suitable for storing electronic instructions.
- the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
- the present disclosure is well suited to a wide variety of computer network systems over numerous topologies.
- the configuration and management of large networks comprise storage devices and computers that are communicatively coupled to dissimilar computers and storage devices over a network, such as the Internet.
Description
- The application is a continuation of U.S. patent application Ser. No. 15/656,566, filed Jul. 21, 2017, which is a continuation of U.S. patent application Ser. No. 14/585,075, filed Dec. 29, 2014, which is a continuation of U.S. patent application Ser. No. 13/314,837, filed Dec. 8, 2011, which claims the benefit of U.S. Provisional Application No. 61/421,145, filed Dec. 8, 2010, each of which is hereby incorporated by reference herein in its entirety.
- The present disclosure generally relates to automatically identifying video highlights.
- Sports videos are punctuated by moments of excitement. For many sports, the exciting moments are scattered throughout a full video of the game, which typically includes primarily uninteresting material. For every home run there are balls and strikes. For every touchdown or interception, there are unproductive running plays and incomplete passes. Soccer and hockey can have entire games with just a few goals. Most viewers just want to see these interesting and exciting portions (herein “sports video highlights”) without having to watch an entire game.
- A highlight learning module trains highlight classifiers to identify highlights in videos based on event vectors which characterize the videos according to detected events. To identify the events, features are extracted from the videos on a frame basis. The features are used to identify events within the video using event models trained in an unsupervised manner to identify recurring events within the videos. Using the event framework, the training videos are transcribed into a series of events and event vectors are constructed for the training videos to train a classifier according to the event vectors. Since the event framework is developed with an unsupervised assessment of the low-level features, the only supervision which need be used in this technique is to designate the video at a high level for the training sets as highlight or non-highlight. Moreover, the low-level feature and event detection framework enables a system applicable to a wide variety of sports videos.
- The highlight learning module is used to classify video clips using the trained classifiers. The highlight learning module receives a video, or portion of a video to be classified. The highlight learning module extracts features from the video or portion thereof to match the features used to train the event models. The extracted features from the video clips are transcribed by the event models. An event vector is created for the transcribed events, and the video is classified using the event vector applied to the highlight classifier to determine if the video is a highlight according to this highlight classifier. The same event vector for a video can be classified using several highlight classifiers which can determine whether the video belongs to any of the highlight types.
- The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.
-
FIG. 1 is a block diagram of a video hosting service in which highlight learning can be employed according to an embodiment. -
FIG. 2 illustrates the various components of a highlight learning module used in the video hosting service according to an embodiment. -
FIG. 3 is a detailed view of the event modeling components according to an embodiment. -
FIG. 4 is a data flow diagram showing iterative refinement of the event models. -
FIG. 5 presents an overview of highlight detection using event modeling according to an embodiment. - The figures depict embodiments of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
-
FIG. 1 is a block diagram of a video hosting service 100 in which highlight learning with event modeling can be employed, according to one embodiment. The video hosting service 100 represents a system such as that of YOUTUBE that stores and provides videos to clients such as the client device 135. The video hosting site 100 communicates with a plurality of content providers 130 and client devices 135 via a network 140 to facilitate sharing of video content between users. Note that in FIG. 1, for the sake of clarity only one instance of content provider 130 and client 135 is shown, though there could be any number of each. The video hosting service 100 additionally includes a front end interface 102, a video serving module 104, a video search module 106, an upload server 108, a user database 114, and a video repository 116. Other conventional features, such as firewalls, load balancers, authentication servers, application servers, failover servers, site management tools, and so forth are not shown so as to more clearly illustrate the features of the video hosting site 100. One example of a suitable site 100 is the YOUTUBE website, found at www.youtube.com. Other video hosting sites can be adapted to operate according to the teachings disclosed herein. The illustrated components of the video hosting website 100 can be implemented as single or multiple components of software or hardware. In general, functions described in one embodiment as being performed by one component can also be performed by other components in other embodiments, or by a combination of components. Furthermore, functions described in one embodiment as being performed by components of the video hosting website 100 can also be performed by one or more client devices 135 in other embodiments if appropriate. -
Client devices 135 are computing devices that execute client software, e.g., a web browser or built-in client application, to connect to the front end interface 102 of the video hosting service 100 via a network 140 and to display videos. The client device 135 might be, for example, a personal computer, a personal digital assistant, a cellular, mobile, or smart phone, or a laptop computer. - The
network 140 is typically the Internet, but may be any network, including but not limited to a LAN, a MAN, a WAN, a mobile wired or wireless network, a cloud computing network, a private network, or a virtual private network. Client device 135 may comprise a personal computer or other network-capable device such as a personal digital assistant (PDA), a mobile telephone, a pager, a television “set-top box,” and the like. - Conceptually, the
content provider 130 provides video content to the video hosting service 100 and the client 135 views that content. In practice, content providers may also be content viewers. Additionally, the content provider 130 may be the same entity that operates the video hosting site 100. - The
content provider 130 operates a client device to perform various content provider functions. Content provider functions may include, for example, uploading a video file to the video hosting website 100, editing a video file stored by the video hosting website 100, or editing content provider preferences associated with a video file. - The
client 135 is a device operating to view video content stored by the video hosting site 100. Client 135 may also be used to configure viewer preferences related to video content. In some embodiments, the client 135 includes an embedded video player such as, for example, the FLASH player from Adobe Systems, Inc. or any other player adapted for the video file formats used in the video hosting website 100. Note that the terms “client” and “content provider” as used herein may refer to software providing both client and content providing functionality, or to hardware on which the software executes. A “content provider” also includes the entities operating the software and/or hardware, as is apparent from the context in which the terms are used. - The
upload server 108 of the video hosting service 100 receives video content from a client device 135. Received content is stored in the video repository 116. In response to requests from client devices 135, a video serving module 104 provides video data from the video repository 116 to the clients 135. Client devices 135 may also search for videos of interest stored in the video repository 116 using a video search module 106, such as by entering textual queries containing keywords of interest. Front end interface 102 provides the interface between client 135 and the various components of the video hosting site 100. - In some embodiments, the
user database 114 is responsible for maintaining a record of all registered users of the video hosting server 100. Registered users include content providers 130 and/or users who simply view videos on the video hosting website 100. Each content provider 130 and/or individual user registers account information including login name, electronic mail (e-mail) address and password with the video hosting server 100, and is provided with a unique user ID. This account information is stored in the user database 114. - The
video repository 116 contains a set of videos 117 submitted by users. The video repository 116 can contain any number of videos 117, such as tens of thousands or hundreds of millions. Each of the videos 117 has a unique video identifier that distinguishes it from each of the other videos, such as a textual name (e.g., the string “a91qrx8”), an integer, or any other way of uniquely naming a video. The videos 117 can be packaged in various containers such as AVI, MP4, or MOV, and can be encoded using video codecs such as MPEG-2, MPEG-4, WebM, WMV, H.263, H.264, and the like. In addition to their audiovisual content, the videos 117 further have associated metadata 117A, e.g., textual metadata such as a title, description, and/or tags. - The
video hosting service 100 further comprises a highlight learning module 119 that trains accurate video classifiers for a set of highlights. The trained classifiers can then be applied to a given video to automatically determine whether the video is a highlight. In addition, the highlight learning module 119 can separate a longer video into component parts and identify the portion (or portions) that contain a highlight. If a portion is identified as a highlight, it can be made into an individual video of its own. For example, a user may submit a new video, and the highlight learning module 119 can automatically determine if the video or some portion(s) thereof is a highlight. The highlight status of a video can be used to update a video's metadata 117A. The highlight learning module 119 is now described in greater detail. -
FIG. 2 illustrates the various components of the highlight learning module 119, according to one embodiment. The highlight learning module 119 comprises various modules to model video events, derive video features, train classifiers for highlights, and the like. In one embodiment, the highlight learning module 119 is incorporated into an existing video hosting service 100, such as YOUTUBE. - The
highlight learning module 119 has access to the video repository 116 of the video hosting service 100. The highlight learning module 119 additionally comprises a features repository 205 that stores, for videos of the video repository 116, associated sets of features that characterize the videos with respect to one or more types of visual or audio information, such as color, motion, and audio information. The features of a video 117 are distinct from the raw content of the video itself and are derived from it by a feature extraction module 230. In one embodiment, the features are stored as a vector of values, the vector having the same dimensions for each of the videos 117 for purposes of consistency. In one embodiment, the features repository 205 maintains features for the videos identified as pertaining to sports. - The
highlight learning module 119 further comprises highlight identifiers 250 that describe various types of highlights which can be classified by the highlight learning module 119. For example, different highlight videos can be learned based on different types of sports, or can be learned based on different definitions of a “highlight” for a sport. For example, if someone is interested exclusively in soccer goals, the highlight training can be performed by marking a training set which includes the goal clips as positive highlight examples. The highlight identifiers 250 comprise an event model as well as highlight classifiers. - The event models are used to detect events within a video clip. These events may or may not be semantically meaningful at a high level, but represent coherent patterns of feature data in the video clips. The events identified in a particular video clip are chosen from a universe of event models. The event models are determined and refined by an
event modeler 220. Each event type can appear several times or not at all within a particular video clip. For example, conceptually a baseball event model may have an event of a batter preparing, and another event of a pitcher throwing a ball, which are associated with characteristic patterns of features over time. An individual baseball clip may have several batter preparation events, several pitcher throwing a ball events, and no “hit” events. The events are learned in an unsupervised manner, and therefore are not necessarily semantically meaningful. For example, a detected event type could merely be a momentary close-up on a player. - The
highlight identifiers 250 also include highlight classifiers. The highlight classifiers are used to classify video clips according to a highlight type using an event vector constructed from the events detected in the video clip. The highlight classifier is a function that outputs a score representing a degree to which the events associated with the video indicate that the particular highlight applies to the video, thus serving as a measure indicating whether the video is a highlight video. The highlight classifiers may output a confidence range for the highlight determination, or may output a boolean value for the highlight. Each highlight classifier can be used to identify a different type of highlight. The classifiers can be trained according to different definitions of what constitutes a highlight. For example, one highlight for a baseball game could be defined as any hit, another as any run, and another as an out. This can be accomplished by changing which videos are included in a positive highlight training set. As a result, different portions of a baseball game may be classified as a “highlight” depending on which highlight classifier is used. - The features extracted using the
feature extraction module 230 in one embodiment are visual low-level frame-based features. For example, one embodiment uses a color histogram and one embodiment uses a histogram of oriented gradients to extract features from frames in a video, though other frame-based features can be used. The features are collected on a per-frame basis and could comprise other frame-based features such as an identified number of faces or a histogram of oriented optical flow, and may comprise a combination of extracted features. Further features are extracted in other embodiments, such as a Laplacian-of-Gaussian (LoG) or Scale Invariant Feature Transform (SIFT) feature extractor, a color histogram computed using hue and saturation in HSV color space, motion rigidity features, texture features, filter responses (e.g. derived from Gabor wavelets), including 3D filter responses, edge features using edges detected by a Canny edge detector, gradient location and orientation histogram (GLOH), local energy-based shape histogram (LESH), or speeded-up robust features (SURF). Additional audio features can also be used, such as volume, an audio spectrogram, or a stabilized auditory image. In order to reduce the dimensionality of these features while maintaining their discriminating aspects, the features are reduced. The feature reduction is performed in one embodiment using a learned linear projection using principal component analysis to reduce the dimensionality of the feature vectors to 50, or some other suitable number less than 100. Other embodiments can use additional techniques to reduce the number of dimensions in the feature vectors. -
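As an illustration of the per-frame color histogram features and the PCA-based reduction described above, here is a minimal numpy sketch; the bin count, toy frame sizes, and function names are assumptions for illustration, not the patent's parameters:

```python
import numpy as np

def frame_color_histogram(frame, bins=16):
    """Per-channel intensity histogram for one frame (H x W x 3, uint8)."""
    return np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)

def pca_reduce(features, dim=50):
    """Learned linear projection keeping the top `dim` principal components."""
    centered = features - features.mean(axis=0)
    # SVD of the centered feature matrix yields the principal axes in vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[: min(dim, vt.shape[0])].T

# Toy video: 20 random 8 x 8 RGB frames.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(20, 8, 8, 3), dtype=np.uint8)
feats = np.stack([frame_color_histogram(f) for f in frames])  # 20 frames x 48 bins
reduced = pca_reduce(feats, dim=10)
```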
Partitioning module 235 partitions the videos 117 into different sets used for performing training of the classifiers 212. More specifically, the partitioning module 235 determines training and validation sets from the videos 117, where the training set is used for training the highlight identifiers 250 and the validation set is used to test the accuracy of the trained/learned identifiers 250. The partitioning module 235 also divides the videos of the training and validation sets into “positive” examples representative of a video highlight and “negative” examples which are not representative. In one embodiment, the designation of highlight or no-highlight is determined manually for the training and validation sets. - The training sets can also be determined on a per-classifier basis. That is, different training sets may be used for each different highlight classifier. In addition, the training set as a whole can be the same across highlight types, while divisions among positive and negative training sets differ according to the highlight used. For example, a positive training set comprising a “hit” in baseball will train a classifier to identify hits, while a positive training set comprising a “walk” will train a classifier to identify walks. A positive training set can also be designated to include several such “types” of highlights (e.g. hits and walks).
- The
highlight learning module 119 additionally comprises a classifier training module 240 that iteratively learns the highlight classifier based on the events from the positive and negative training sets identified by the event models. In one embodiment, the highlight classifier is a linear support vector machine (LSVM) which is trained to classify an input video event vector as either highlight or non-highlight. Other classification methods can be used, such as logistic regression or a boosting algorithm. - A detailed view of the event modeling framework is shown in
FIG. 3 according to an embodiment. This figure illustrates the conceptual framework for event modeling as well as the initialization of the initial event models. The initial event modeling and the event model refinement are performed on an unsupervised basis, and can be determined based on the extracted features alone. - A diarization process is used to initialize the event models, such that the videos are first separated into shots, then segmented into small sequential chunks of video, and the chunks are then clustered into initial event types. The event types themselves are composed of multiple possible states. This organization is shown in
FIG. 3 . - The
video 300 is separated into several shots 310. Each shot 310 is an individual sequence of frames captured without interruption. The transition from one shot to another can be determined by a shot boundary detection algorithm, for example using color histograms, edge changes, changes in pixel intensities, motion differences, or other features. Next, each shot is separated into a selection of segments according to time. In one embodiment, the segments are chunks of 500 milliseconds of video. Since the segments are small discrete chunks within a single shot, each segment can be assumed to correspond to an individual event. - Therefore, to develop a set of event designations, each training video is broken down into individual segments, each presumed to belong to an individual event. To develop an assignment of events to the individual segments, the segments in the entire training set are clustered using the feature vectors associated with each segment. The clustering in one embodiment is performed using a bottom-up hierarchical agglomerative clustering algorithm with Ward's linkage function. This function specifies the distance between two clusters and is computed as the increase in the error sum of squares after fusing two clusters into a single cluster. The clustering can also be performed using a K-means clustering algorithm. The membership in each cluster is used to assign an event to the associated segments, and therefore the final number of clusters determines the number of events in the set of events. For any clustering algorithm, the number of clusters can be chosen by the designer or modified to fit the number of events which optimizes the highlight classification results. Generally, the number of clusters is within a range of 30-100. Using the clustered segments, each cluster is assigned an event designation (e.g. E1, E2, . . . En), and the associated segments are assigned to an
event 330. These identified events represent identifiable patterns of features within the video content itself; some of these events may correspond to semantically meaningful action within the video (e.g., the motions of a batter swinging) while others may have no specific high level semantic content. The clusters are used to identify initial event designations for the video clips. These event designations are used to develop event models to identify event sequences within video clips. - The event models can be developed using two different methods, hidden Markov modeling and mixture modeling.
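The segment clustering step above can be sketched as follows. Ward-linkage agglomerative clustering is the primary embodiment; this sketch uses the K-means alternative the text notes, with invented toy data and a deterministic initialization:

```python
import numpy as np

def kmeans_events(segment_feats, k, iters=20):
    """Cluster fixed-length segment feature vectors into k initial event types."""
    # Deterministic init: spread the initial centers across the segment list.
    init = np.linspace(0, len(segment_feats) - 1, k).astype(int)
    centers = segment_feats[init].astype(float)
    for _ in range(iters):
        # Assign each segment to its nearest center; the index is its event id.
        dists = np.linalg.norm(segment_feats[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = segment_feats[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic clusters of 500 ms segment feature vectors (4-D).
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.1, (10, 4)), rng.normal(5.0, 0.1, (10, 4))])
labels, centers = kmeans_events(feats, k=2)
```

In a real system k would fall in the 30-100 range stated above, and cluster membership would supply the initial event designation for each segment.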
- In one embodiment, since the events are evolving patterns over time, each event is individually modeled as a group of
states 340 using a hidden Markov model (HMM). As shown in this figure, this instance of E2 comprises several states which may repeat within the state model. As a result, the event models are a set of HMMs M, each Mi associated with an event type Ei. Using the event designations 330, the set of models M are jointly trained using an embedded HMM training technique. - The embedded HMM training technique uses frequency and transition information for each HMM to train HMM identification. These statistics are calculated for each event by determining the frequency of each event and a transition matrix between the events. These statistics are measured globally across the set of events and across all event training videos. That is, the statistics determine the frequency, globally, of the occurrence of each event E1-En. The transition matrix quantifies the transition frequency of each event to another event. For each event pair (Ei, Ej) the transition matrix determines the number of times the first state immediately precedes the second state within the group of event training designations. State order is relevant; for example, the transition (E1, E5) is distinct from the transition (E5, E1).
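The global frequency and transition statistics just described can be sketched as follows; the function name and toy event sequences are illustrative:

```python
def event_statistics(event_sequences, num_events):
    """Global event frequencies and transition matrix across all training clips.

    Counts are pooled over every clip; transitions are ordered, so
    (E1, E5) is tallied separately from (E5, E1).
    """
    freq = [0] * num_events
    trans = [[0] * num_events for _ in range(num_events)]
    for seq in event_sequences:
        for e in seq:
            freq[e] += 1
        for a, b in zip(seq, seq[1:]):
            trans[a][b] += 1
    return freq, trans

# Two toy clips transcribed as event ids 0..2.
freq, trans = event_statistics([[0, 1, 1, 2], [2, 0, 1]], num_events=3)
```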
- The embedded HMM technique does not train each HMM in isolation (i.e. based only on the video frames/segments associated with the event). Rather, the HMMs are trained on the videos themselves. Each video clip is treated as a sequence of distinct events to be modeled by the group of HMMs. One method of conceptualizing this is that each HMM is a “super-state” for a “super-HMM,” where the super-HMM models the video clips and therefore identifies a sequence of event HMMs within the video clip. In this way, the event HMMs are inter-connected and trained simultaneously. To accomplish this training, the initial event designations along with the event frequency and transition matrix are used according to known embedded HMM training techniques. The embedded HMMs trained by this method result in an initial set of event models.
- Once the HMMs are trained, in order to characterize the features of a video clip as a series of events using the event models, an expectation-maximization model is used to identify the most likely sequence of events in the video clip. One embodiment uses the Viterbi algorithm to identify the maximum-likelihood path and thereby identify the events associated with the video clip features. Other expectation-maximization techniques can also be used.
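A minimal Viterbi decoder over per-frame log-likelihoods, as one way to realize the maximum-likelihood path search described above; the toy two-state model and probabilities are invented for illustration:

```python
import math

def viterbi(obs_loglik, log_trans, log_init):
    """Most likely state path; obs_loglik[t][s] = log P(frame t | state s)."""
    n = len(log_init)
    score = [log_init[s] + obs_loglik[0][s] for s in range(n)]
    back = []
    for t in range(1, len(obs_loglik)):
        prev = score[:]
        ptr = [0] * n
        for s in range(n):
            best = max(range(n), key=lambda a: prev[a] + log_trans[a][s])
            ptr[s] = best
            score[s] = prev[best] + log_trans[best][s] + obs_loglik[t][s]
        back.append(ptr)
    # Trace the maximum-likelihood path backwards through the pointers.
    state = max(range(n), key=lambda s: score[s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Toy two-state model: observations favor state 0 twice, then state 1 twice.
log_init = [math.log(0.5)] * 2
log_trans = [[math.log(0.9), math.log(0.1)], [math.log(0.1), math.log(0.9)]]
obs = [[math.log(0.9), math.log(0.1)], [math.log(0.9), math.log(0.1)],
       [math.log(0.1), math.log(0.9)], [math.log(0.1), math.log(0.9)]]
path = viterbi(obs, log_trans, log_init)
```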
- In another embodiment, the event models are developed using a mixture model, such as a Gaussian Mixture Model (GMM). Using a mixture model, the
event state models 340 are not used. Instead, a Gaussian mixture model is trained using the features and the associated event designations 330. Stated another way, the initial event designations are used to train a GMM classifier to assign probable event membership to the features of a frame. Hence, the GMM classifies the features for a frame by assigning a probability Pi for each possible event. The probabilities can be calculated as a distance from the center of the cluster of features comprising each event. In this way, the probabilities can also be thought of as a distance “cost.” The collection of probabilities associated with each possible event is stored as a vector: P={P1 . . . Pn}. - The GMM assignment of features to an event can also be adjusted by using a cost matrix to model the costs of transitioning from one state to another. The cost matrix C indicates the cost to transition from each event to each other event. The cost matrix includes for each event an associated cost model Ci for transitioning to each other event, such that the cost matrix C is the set of all event transition costs: C={C1 . . . Cn}. The cost model for each event Ci is a vector representing the transition to each other event: Ci={Ci,1 . . . Ci,n}. In order to smooth the event assignments and to reflect the high frequency of same-event transitions, in one embodiment the cost of transitioning from a state to itself is set to zero. The cost matrix can be derived from a probability matrix as discussed with respect to the HMM method above. The cost is set inversely to the probability that one state transitions to another (i.e. it is more costly to transition to a state which is more infrequent, and less costly to transition to a state which is more frequent). An example cost matrix is shown in Table 1:
TABLE 1

| Destination \ Source | E1 | E2 | E3 | E4 |
|---|---|---|---|---|
| E1 | 0 | 1 | 5 | 3 |
| E2 | 7 | 0 | 3 | 2 |
| E3 | 1 | 2 | 0 | 1 |
| E4 | 1 | 3 | 6 | 0 |

- Each column in Table 1 indicates a cost function for a different source event, while the rows indicate the destination event. For example, the third column for E3 gives the event transition costs C3 from E3 to each event E1-E4. The cost matrix C shown in Table 1 is used to adjust the frame-based probabilities P according to the likelihood of the associated transition. For example, if the prior frame was E3, the probability vector P calculated from the next frame's features is adjusted by the transition cost vector C3. In an embodiment where the probabilities P are treated as costs, the final classification calculation for the GMM is F, where F=P+Ci={P1+Ci,1, P2+Ci,2, . . . Pn+Ci,n}. In this way, the most likely next event in the sequence is the event with the lowest final cost.
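The final-cost computation F = P + Ci can be sketched as below, using the Table 1 values with columns indexing the source event as the text describes; the probe vector `P` and the function name are illustrative:

```python
import numpy as np

# Table 1 costs: columns index the source event, rows the destination event.
C = np.array([
    [0, 1, 5, 3],
    [7, 0, 3, 2],
    [1, 2, 0, 1],
    [1, 3, 6, 0],
], dtype=float)

def next_event(P, prev_event, C):
    """F = P + C_prev: add the transition-cost column for the previous
    event to the frame's GMM distance costs; the lowest final cost wins."""
    F = P + C[:, prev_event]
    return int(np.argmin(F)), F
```

For a frame whose GMM cost vector is P = [2.0, 0.5, 4.0, 3.0] and whose prior event was E3, the adjusted costs are P + C3 = [7.0, 3.5, 4.0, 9.0], so the selected next event is E2.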
- The model F is used to identify events within a video clip using a GMM. Since this model includes a notion of state (the value of Ci depends on the previously-identified event and therefore introduces an element of sequential dependence), the identification of events in a video clip can be captured using an expectation-maximization model to determine the most-likely sequence of events. This completes the calculation of the event models and their use to identify events within a video clip. The event models developed above were an initial pass based on clustering the events. The event models can be further refined with reference to
FIG. 4 . -
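The refinement that FIG. 4 illustrates (train event models from the current designations, re-designate the training clips, repeat) can be sketched as below. As a simplification, a nearest-centroid assignment stands in for the HMM/GMM event models: fitting centroids plays the role of training the models, and re-assigning frames to the nearest centroid plays the role of re-designation; `refine_event_models` and its arguments are illustrative:

```python
import numpy as np

def refine_event_models(features, init_labels, n_events, max_iter=20):
    """Iterative refinement sketch: (1) fit models from the current event
    designations, (2) re-designate every frame with the fitted models,
    (3) repeat until the designations stop changing (convergence)."""
    labels = init_labels.copy()
    for _ in range(max_iter):
        # "Train" an event model per event: the centroid of its frames.
        centroids = np.array([features[labels == e].mean(axis=0)
                              for e in range(n_events)])
        # Re-designate each frame to its most likely (nearest) event.
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                            # designations converged
        labels = new_labels
    return centroids, labels
```

A real implementation would replace the centroid fit and nearest-centroid assignment with HMM training plus Viterbi decoding, or GMM training plus EM decoding, as described in the text.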
FIG. 4 illustrates a data flow diagram showing iterative refinement of the event models, according to an embodiment. Using training video clips 400, the system obtains a set of extracted features 410 as described above. Next, the extracted features are clustered to develop initial event designations 420. The initial event designations are used to develop event models 430. - Using the
event models 430, the features 410 extracted from the training video clips 400 are input to re-designate the training set video clips into event sequences by evaluating the most likely sequence of events. As described, the event models in one embodiment are embedded HMMs, and the re-designation of the training set clips into an event sequence uses the Viterbi algorithm. In another embodiment, the event model is a GMM and the events are designated using an expectation maximization (EM) model. Using the training methods for the event model as described above, the current iteration of training set event designations is used to train a new set of event models. These new event models are applied to the training set to determine the next iteration of event designations. - The event models are then refined using the
event designations 440. That is, the event models are recalculated using the model calculation strategy for the chosen event model. In addition, the event modeling can take into account a cost function of transitioning from one event to another in order to smooth event transition frequency. The refined event models 430 can now be used as the final event models, or the event designations 440 are used to iteratively refine the event models 430, optionally to convergence. Using the event models, events are identified and transcribed within the video clips. The identified events are used as inputs to train highlight classifiers and as inputs to classify video using a trained highlight classifier. One set of event models can be used for event classification for a particular type of video, or several sets of event models can be developed according to the training videos or the highlight classification needs. - Referring now to
FIG. 5 , an overview of highlight classifier training for a video clip using event modeling is shown according to an embodiment. This classification training can be performed by the highlight learning module 119. As a preliminary step it is assumed that the video clip features 500 have been extracted from the video clip. The video clip features 500 are the sequentially ordered frames in the video, F1−Fi. Using the video clip features, the events associated with the video clip are identified using the event models, producing for the video clip a specific sequence of events E1−EFi. The event models detect the video clip events 510 according to a set of events E1−En, using an event model as described above. Depending on the embodiment, the size of the event set varies; in one embodiment, the event set comprises thirty events. - As illustrated in
FIG. 5 , the detected video events are assigned to each of the frames in the video using the event detection of the event models. The video clip events 510 include a list of the events corresponding to the video clip features. In this way, the video clip events 510 represent the video clip in an event space which indicates the video clip's composition of events. The video clip events 510 can be in any detected order, and events may repeat sequentially or throughout the whole of the video clip events 510. - Using the detected
video clip events 510, statistical analysis is performed to create an event vector 520 characterizing the frequency and occurrence of events within the video clip. In one embodiment, the histogram vector includes a unigram event histogram and a bigram event histogram. The unigram event histogram tallies the frequency with which each individual event occurs within the video clip events 510. That is, the unigram event histogram calculates E1 by tallying the number of times E1 occurs in the video clip events 510. The bigram statistics are also calculated to determine the frequency of event-to-event transitions. For each event pair (Ei, Ej), the bigram statistics determine the number of times the first event immediately precedes the second event. For example, E1 followed by E4 in the video clip events 510 would increment the tally of (E1, E4) transitions in the event bigram histogram. The event vector statistics are calculated with reference to the individual video being characterized. This is distinct from the statistics calculated for event model calculation and refinement, which were calculated across all event occurrence and transition information for the training set. As shown in this embodiment, the event bigrams include the transition states from an event to itself, for example (E1, E1) and more generally (En, En). In another embodiment, the transition states for an event transition to the same event are not included. These transitions may be excluded in order to reduce the dimensionality of the histogram vector 520, particularly because similar information can be determined from the unigram event histogram. The histograms are combined into the histogram vector 520. - The
histogram vector 520 serves as the input to the classifier that makes a video clip highlight determination 530. The histogram vector 520 is also used for training the highlight classifier. The histogram vectors are determined for the training set of videos by assessing the events within each training video and forming a histogram vector for it. The highlight classifier is then trained on the histogram vectors for the training set to learn the histogram vectors identified with highlight videos and non-highlight videos. The determination of what constitutes a “highlight” can be made by a supervised designation or by other means. A highlight in one embodiment is a specific type of event in a game, such as a “hit” or an “out” in baseball or a play of over 5 yards in football. A supervised designation could be made by manual assessment of video clips. Automatic detection of highlight status is performed in one embodiment by associating statistics recorded in time with the video clips, such as a change in score. - Using the concept of event decoding within sports videos, the identification of highlight videos is improved. The event detection framework using embedded hidden Markov models enables the detection of complex events, including the possibility of varying states within an event. By using these detected events, the classifier is successfully applied to event vectors calculated from event statistics.
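The histogram-vector construction and classifier training described above can be sketched together as below. The patent does not fix a classifier type, so a minimal logistic-regression stand-in is used; `event_vector`, the training sequences, and the labels are illustrative, with events represented as integer indices 0..n−1:

```python
from collections import Counter
import numpy as np

def event_vector(events, n_events):
    """Unigram + bigram histograms for one clip's event sequence,
    concatenated; bigrams include same-event pairs (Ei, Ei)."""
    uni = Counter(events)
    bi = Counter(zip(events, events[1:]))
    unigram = [uni[e] for e in range(n_events)]
    bigram = [bi[(i, j)] for i in range(n_events) for j in range(n_events)]
    return unigram + bigram

def train_highlight_classifier(X, y, lr=0.1, epochs=500):
    """Logistic-regression stand-in: X holds one histogram vector per
    training clip, y is 1 for highlight clips and 0 otherwise."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted highlight prob.
        g = p - y                                # log-loss gradient on logits
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def is_highlight(events, n_events, w, b):
    """Apply the trained classifier to a new clip's event sequence."""
    v = np.asarray(event_vector(events, n_events), float)
    return float(v @ w + b) > 0.0
```

For example, the sequence [E1, E1, E2, E1] over two events yields the vector [3, 1] (unigrams) followed by [1, 1, 1, 0] (bigrams (E1,E1), (E1,E2), (E2,E1), (E2,E2)).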
- The above-described process, and the classifiers obtained therefrom, have a number of valuable applications.
- 1) Highlight Labeling of Videos: Using the highlight classifiers trained above, a new video can be assessed according to its component events and classified to identify whether it is a highlight video. For videos stored in the video repository, it is often useful to allow users to search for and identify videos of interest. As such, the determination that a video is a sports highlight is useful for identifying a searchable keyword for the video, and can be used to update a tag associated with the video or to place the video in a category for sports highlights.
- 2) Identifying Highlight Portions of a Video: Though to this point the length of the videos as a whole has not been considered, video highlights are typically presented as a portion of a much longer video of an entire sporting event. For example, an entire video of a game can be assessed to determine which portions are the highlights. To achieve this, those skilled in the art will realize that the entire video can be broken into individual portions, and the individual portions can be identified as highlights or non-highlights using the trained classifiers, as described above. The portions can be determined by splitting the sports video on a static basis, such as temporally (e.g., every 5 or 10 minutes) or by detected shots (e.g., every 5 or 10 shots). Alternatively, the portions are determined using a "rolling" portion determination. That is, a "rolling" determination could use a window of a determined length and slide the window to capture portions of the video. For example, a window 5 minutes long could capture a first portion comprising the first 5 minutes, and a second portion comprising minutes 2-6. The identified highlight portions from the video could then be used to identify highlights for a user, or may be concatenated to form a "highlights only" video clip.
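The rolling-window splitting described above can be sketched as follows. The 5-minute window matches the example in the text; the one-minute step size and the function name are assumptions for illustration:

```python
def video_portions(duration_min, window_min=5, step_min=1):
    """Rolling-window split: a fixed-length window advances by a fixed
    step, yielding (start, end) minute ranges to classify individually."""
    starts = range(0, max(duration_min - window_min, 0) + 1, step_min)
    return [(s, s + window_min) for s in starts]
```

Each returned (start, end) range is one candidate portion to run through the trained highlight classifier; overlapping highlight portions can then be merged or concatenated into a "highlights only" clip.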
- While this disclosure relates to methods of identifying highlights in sports videos, the use of low-level feature extraction, event modeling, and event vectors can identify events and classify according to a chosen property for any type of video with a set of recurring events. Since the features are low-level, the techniques do not require time-consuming re-modeling for individual applications. For example, these techniques could be applied to traffic cameras, security cameras, and other types of video to identify recurring events of interest.
- The present disclosure has been described in particular detail with respect to one possible embodiment. Those of skill in the art will appreciate that the disclosure may be practiced in other embodiments. First, the particular naming of the components and variables, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the disclosure or its features may have different names, formats, or protocols. Also, the particular division of functionality between the various system components described herein is merely for purposes of example, and is not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component.
- Some portions of the above description present the features of the present disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. These operations, while described functionally or logically, are understood to be implemented by computer programs. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules or by functional names, without loss of generality.
- Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
- Certain aspects of the present disclosure include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.
- The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer-readable medium that can be accessed by the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of non-transient computer-readable storage medium suitable for storing electronic instructions. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
- The algorithms and operations presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will be apparent to those of skill in the art, along with equivalent variations. In addition, the present disclosure is not described with reference to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein, and any references to specific languages are provided for disclosure of enablement and best mode of the present disclosure.
- The present disclosure is well suited to a wide variety of computer network systems over numerous topologies. Within this field, the configuration and management of large networks comprise storage devices and computers that are communicatively coupled to dissimilar computers and storage devices over a network, such as the Internet.
- Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the present disclosure is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/120,581 US11556743B2 (en) | 2010-12-08 | 2020-12-14 | Learning highlights using event detection |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US42114510P | 2010-12-08 | 2010-12-08 | |
US13/314,837 US8923607B1 (en) | 2010-12-08 | 2011-12-08 | Learning sports highlights using event detection |
US14/585,075 US9715641B1 (en) | 2010-12-08 | 2014-12-29 | Learning highlights using event detection |
US15/656,566 US10867212B2 (en) | 2010-12-08 | 2017-07-21 | Learning highlights using event detection |
US17/120,581 US11556743B2 (en) | 2010-12-08 | 2020-12-14 | Learning highlights using event detection |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/656,566 Continuation US10867212B2 (en) | 2010-12-08 | 2017-07-21 | Learning highlights using event detection |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210166072A1 true US20210166072A1 (en) | 2021-06-03 |
US11556743B2 US11556743B2 (en) | 2023-01-17 |
Family
ID=52112560
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/314,837 Active 2033-03-26 US8923607B1 (en) | 2010-12-08 | 2011-12-08 | Learning sports highlights using event detection |
US14/585,075 Active 2031-12-28 US9715641B1 (en) | 2010-12-08 | 2014-12-29 | Learning highlights using event detection |
US15/656,566 Active 2032-04-19 US10867212B2 (en) | 2010-12-08 | 2017-07-21 | Learning highlights using event detection |
US17/120,581 Active US11556743B2 (en) | 2010-12-08 | 2020-12-14 | Learning highlights using event detection |
Country Status (1)
Country | Link |
---|---|
US (4) | US8923607B1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11556743B2 (en) * | 2010-12-08 | 2023-01-17 | Google Llc | Learning highlights using event detection |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9767845B2 (en) | 2013-02-05 | 2017-09-19 | Alc Holdings, Inc. | Activating a video based on location in screen |
US9275306B2 (en) * | 2013-11-13 | 2016-03-01 | Canon Kabushiki Kaisha | Devices, systems, and methods for learning a discriminant image representation |
CN104679779B (en) * | 2013-11-29 | 2019-02-01 | 华为技术有限公司 | The method and apparatus of visual classification |
US10129608B2 (en) * | 2015-02-24 | 2018-11-13 | Zepp Labs, Inc. | Detect sports video highlights based on voice recognition |
US9449248B1 (en) | 2015-03-12 | 2016-09-20 | Adobe Systems Incorporated | Generation of salient contours using live video |
US10572735B2 (en) * | 2015-03-31 | 2020-02-25 | Beijing Shunyuan Kaihua Technology Limited | Detect sports video highlights for mobile computing devices |
US10025986B1 (en) * | 2015-04-27 | 2018-07-17 | Agile Sports Technologies, Inc. | Method and apparatus for automatically detecting and replaying notable moments of a performance |
US9704020B2 (en) * | 2015-06-16 | 2017-07-11 | Microsoft Technology Licensing, Llc | Automatic recognition of entities in media-captured events |
US20170109584A1 (en) * | 2015-10-20 | 2017-04-20 | Microsoft Technology Licensing, Llc | Video Highlight Detection with Pairwise Deep Ranking |
US10229324B2 (en) * | 2015-12-24 | 2019-03-12 | Intel Corporation | Video summarization using semantic information |
CN105809198B (en) * | 2016-03-10 | 2019-01-08 | 西安电子科技大学 | SAR image target recognition method based on depth confidence network |
US10970554B2 (en) * | 2016-06-20 | 2021-04-06 | Pixellot Ltd. | Method and system for automatically producing video highlights |
US10163041B2 (en) * | 2016-06-30 | 2018-12-25 | Oath Inc. | Automatic canonical digital image selection method and apparatus |
US10671852B1 (en) | 2017-03-01 | 2020-06-02 | Matroid, Inc. | Machine learning in video classification |
IT201700053345A1 (en) * | 2017-05-17 | 2018-11-17 | Metaliquid S R L | METHOD AND EQUIPMENT FOR THE ANALYSIS OF VIDEO CONTENTS IN DIGITAL FORMAT |
US20190028766A1 (en) * | 2017-07-18 | 2019-01-24 | Audible Magic Corporation | Media classification for media identification and licensing |
CN107871120B (en) * | 2017-11-02 | 2022-04-19 | 汕头市同行网络科技有限公司 | Sports event understanding system and method based on machine learning |
US20190296933A1 (en) * | 2018-03-20 | 2019-09-26 | Microsoft Technology Licensing, Llc | Controlling Devices Based on Sequence Prediction |
US11373404B2 (en) * | 2018-05-18 | 2022-06-28 | Stats Llc | Machine learning for recognizing and interpreting embedded information card content |
US11025985B2 (en) | 2018-06-05 | 2021-06-01 | Stats Llc | Audio processing for detecting occurrences of crowd noise in sporting event television programming |
US11264048B1 (en) | 2018-06-05 | 2022-03-01 | Stats Llc | Audio processing for detecting occurrences of loud sound characterized by brief audio bursts |
US10839224B2 (en) | 2018-10-19 | 2020-11-17 | International Business Machines Corporation | Multivariate probability distribution based sports highlight detection |
WO2020168434A1 (en) * | 2019-02-22 | 2020-08-27 | Sportlogiq Inc. | System and method for model-driven video summarization |
US10897658B1 (en) * | 2019-04-25 | 2021-01-19 | Amazon Technologies, Inc. | Techniques for annotating media content |
CN110232357A (en) * | 2019-06-17 | 2019-09-13 | 深圳航天科技创新研究院 | A kind of video lens dividing method and system |
CN113286194B (en) * | 2020-02-20 | 2024-10-15 | 北京三星通信技术研究有限公司 | Video processing method, device, electronic equipment and readable storage medium |
US20220374515A1 (en) * | 2021-04-23 | 2022-11-24 | Ut-Battelle, Llc | Universally applicable signal-based controller area network (can) intrusion detection system |
KR102411081B1 (en) * | 2021-08-05 | 2022-06-22 | 주식회사 와이즈넛 | System for recommending related data based on similarity and method thereof |
Family Cites Families (81)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7076102B2 (en) * | 2001-09-27 | 2006-07-11 | Koninklijke Philips Electronics N.V. | Video monitoring system employing hierarchical hidden markov model (HMM) event learning and classification |
US5828809A (en) * | 1996-10-01 | 1998-10-27 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for extracting indexing information from digital video data |
US6072542A (en) * | 1997-11-25 | 2000-06-06 | Fuji Xerox Co., Ltd. | Automatic video segmentation using hidden markov model |
US5956026A (en) * | 1997-12-19 | 1999-09-21 | Sharp Laboratories Of America, Inc. | Method for hierarchical summarization and browsing of digital video |
EP1067800A4 (en) | 1999-01-29 | 2005-07-27 | Sony Corp | Signal processing method and video/voice processing device |
US6774917B1 (en) * | 1999-03-11 | 2004-08-10 | Fuji Xerox Co., Ltd. | Methods and apparatuses for interactive similarity searching, retrieval, and browsing of video |
US7028325B1 (en) * | 1999-09-13 | 2006-04-11 | Microsoft Corporation | Annotating programs for automatic summary generation |
US6754389B1 (en) * | 1999-12-01 | 2004-06-22 | Koninklijke Philips Electronics N.V. | Program classification using object tracking |
US6813313B2 (en) * | 2000-07-06 | 2004-11-02 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for high-level structure analysis and event detection in domain specific videos |
US6763069B1 (en) * | 2000-07-06 | 2004-07-13 | Mitsubishi Electric Research Laboratories, Inc | Extraction of high-level features from low-level features of multimedia content |
US6931595B2 (en) * | 2000-11-02 | 2005-08-16 | Sharp Laboratories Of America, Inc. | Method for automatic extraction of semantically significant events from video |
US6892193B2 (en) * | 2001-05-10 | 2005-05-10 | International Business Machines Corporation | Method and apparatus for inducing classifiers for multimedia based on unified representation of features reflecting disparate modalities |
US7296231B2 (en) * | 2001-08-09 | 2007-11-13 | Eastman Kodak Company | Video structuring by probabilistic merging of video segments |
US7474698B2 (en) * | 2001-10-19 | 2009-01-06 | Sharp Laboratories Of America, Inc. | Identification of replay segments |
US6865226B2 (en) * | 2001-12-05 | 2005-03-08 | Mitsubishi Electric Research Laboratories, Inc. | Structural analysis of videos with hidden markov models and dynamic programming |
US20040205482A1 (en) * | 2002-01-24 | 2004-10-14 | International Business Machines Corporation | Method and apparatus for active annotation of multimedia content |
US7657836B2 (en) * | 2002-07-25 | 2010-02-02 | Sharp Laboratories Of America, Inc. | Summarization of soccer video content |
AU2003265318A1 (en) * | 2002-08-02 | 2004-02-23 | University Of Rochester | Automatic soccer video analysis and summarization |
JP4036328B2 (en) * | 2002-09-30 | 2008-01-23 | 株式会社Kddi研究所 | Scene classification apparatus for moving image data |
US20040113933A1 (en) * | 2002-10-08 | 2004-06-17 | Northrop Grumman Corporation | Split and merge behavior analysis and understanding using Hidden Markov Models |
US7164798B2 (en) * | 2003-02-18 | 2007-01-16 | Microsoft Corporation | Learning-based automatic commercial content detection |
US20040167767A1 (en) * | 2003-02-25 | 2004-08-26 | Ziyou Xiong | Method and system for extracting sports highlights from audio signals |
US7327885B2 (en) * | 2003-06-30 | 2008-02-05 | Mitsubishi Electric Research Laboratories, Inc. | Method for detecting short term unusual events in videos |
WO2005031609A1 (en) * | 2003-09-30 | 2005-04-07 | Koninklijke Philips Electronics, N.V. | Method and apparatus for identifying the high level structure of a program |
US20050125223A1 (en) * | 2003-12-05 | 2005-06-09 | Ajay Divakaran | Audio-visual highlights detection using coupled hidden markov models |
US7313269B2 (en) * | 2003-12-12 | 2007-12-25 | Mitsubishi Electric Research Laboratories, Inc. | Unsupervised learning of video structures in videos using hierarchical statistical models to detect events |
GB2429597B (en) * | 2004-02-06 | 2009-09-23 | Agency Science Tech & Res | Automatic video event detection and indexing |
US7403664B2 (en) * | 2004-02-26 | 2008-07-22 | Mitsubishi Electric Research Laboratories, Inc. | Traffic event detection in compressed videos |
US7302451B2 (en) * | 2004-05-07 | 2007-11-27 | Mitsubishi Electric Research Laboratories, Inc. | Feature identification of events in multimedia |
US7409407B2 (en) * | 2004-05-07 | 2008-08-05 | Mitsubishi Electric Research Laboratories, Inc. | Multimedia event detection and summarization |
US7802188B2 (en) * | 2004-05-13 | 2010-09-21 | Hewlett-Packard Development Company, L.P. | Method and apparatus for identifying selected portions of a video stream |
US7426301B2 (en) * | 2004-06-28 | 2008-09-16 | Mitsubishi Electric Research Laboratories, Inc. | Usual event detection in a video using object and frame features |
US20050285937A1 (en) * | 2004-06-28 | 2005-12-29 | Porikli Fatih M | Unusual event detection in a video using object and frame features |
US20080138029A1 (en) * | 2004-07-23 | 2008-06-12 | Changsheng Xu | System and Method For Replay Generation For Broadcast Video |
US20060059120A1 (en) * | 2004-08-27 | 2006-03-16 | Ziyou Xiong | Identifying video highlights using audio-visual objects |
US7606425B2 (en) * | 2004-09-09 | 2009-10-20 | Honeywell International Inc. | Unsupervised learning of events in a video sequence |
US20060149693A1 (en) * | 2005-01-04 | 2006-07-06 | Isao Otsuka | Enhanced classification using training data refinement and classifier updating |
US7843491B2 (en) * | 2005-04-05 | 2010-11-30 | 3Vr Security, Inc. | Monitoring and presenting video surveillance data |
US20070010998A1 (en) * | 2005-07-08 | 2007-01-11 | Regunathan Radhakrishnan | Dynamic generative process modeling, tracking and analyzing |
US8233708B2 (en) * | 2005-08-17 | 2012-07-31 | Panasonic Corporation | Video scene classification device and video scene classification method |
US7545954B2 (en) * | 2005-08-22 | 2009-06-09 | General Electric Company | System for recognizing events |
US7773813B2 (en) * | 2005-10-31 | 2010-08-10 | Microsoft Corporation | Capture-intention detection for video content analysis |
US20100005485A1 (en) * | 2005-12-19 | 2010-01-07 | Agency For Science, Technology And Research | Annotation of video footage and personalised video generation |
US7558809B2 (en) * | 2006-01-06 | 2009-07-07 | Mitsubishi Electric Research Laboratories, Inc. | Task specific audio classification for identifying video highlights |
US7359836B2 (en) * | 2006-01-27 | 2008-04-15 | Mitsubishi Electric Research Laboratories, Inc. | Hierarchical processing in scalable and portable sensor networks for activity recognition |
US20070255755A1 (en) * | 2006-05-01 | 2007-11-01 | Yahoo! Inc. | Video search engine using joint categorization of video clips and queries based on multiple modalities |
US8009193B2 (en) * | 2006-06-05 | 2011-08-30 | Fuji Xerox Co., Ltd. | Unusual event detection via collaborative video mining |
KR100785076B1 (en) * | 2006-06-15 | 2007-12-12 | 삼성전자주식회사 | Method for detecting real time event of sport moving picture and apparatus thereof |
US7945142B2 (en) * | 2006-06-15 | 2011-05-17 | Microsoft Corporation | Audio/visual editing tool |
US7756338B2 (en) * | 2007-02-14 | 2010-07-13 | Mitsubishi Electric Research Laboratories, Inc. | Method for detecting scene boundaries in genre independent videos |
US20080215318A1 (en) * | 2007-03-01 | 2008-09-04 | Microsoft Corporation | Event recognition |
US9177209B2 (en) * | 2007-12-17 | 2015-11-03 | Sinoeast Concept Limited | Temporal segment based extraction and robust matching of video fingerprints |
US7991715B2 (en) * | 2007-12-27 | 2011-08-02 | Arbor Labs, Inc. | System and method for image classification |
US8881191B2 (en) * | 2008-03-31 | 2014-11-04 | Microsoft Corporation | Personalized event notification using real-time video analysis |
US8358856B2 (en) * | 2008-06-02 | 2013-01-22 | Eastman Kodak Company | Semantic event detection for digital content records |
US9633275B2 (en) * | 2008-09-11 | 2017-04-25 | Wesley Kenneth Cobb | Pixel-level based micro-feature extraction |
US8284258B1 (en) * | 2008-09-18 | 2012-10-09 | Grandeye, Ltd. | Unusual event detection in wide-angle video (based on moving object trajectories) |
US9141859B2 (en) * | 2008-11-17 | 2015-09-22 | Liveclips Llc | Method and system for segmenting and transmitting on-demand live-action video in real-time |
US8611677B2 (en) * | 2008-11-19 | 2013-12-17 | Intellectual Ventures Fund 83 Llc | Method for event-based semantic classification |
WO2010083238A1 (en) * | 2009-01-13 | 2010-07-22 | Futurewei Technologies, Inc. | Method and system for image processing to classify an object in an image |
US8213725B2 (en) * | 2009-03-20 | 2012-07-03 | Eastman Kodak Company | Semantic event detection using cross-domain knowledge |
US8559720B2 (en) * | 2009-03-30 | 2013-10-15 | Thomson Licensing S.A. | Using a video processing and text extraction method to identify video segments of interest |
US8683521B1 (en) * | 2009-03-31 | 2014-03-25 | Google Inc. | Feature-based video suggestions |
US8503770B2 (en) * | 2009-04-30 | 2013-08-06 | Sony Corporation | Information processing apparatus and method, and program |
US8254671B1 (en) * | 2009-05-14 | 2012-08-28 | Adobe Systems Incorporated | System and method for shot boundary detection in video clips |
US8396286B1 (en) * | 2009-06-25 | 2013-03-12 | Google Inc. | Learning concepts for video annotation |
US20120109901A1 (en) * | 2009-07-01 | 2012-05-03 | Nec Corporation | Content classification apparatus, content classification method, and content classification program |
US8345990B2 (en) * | 2009-08-03 | 2013-01-01 | Indian Institute Of Technology Bombay | System for creating a capsule representation of an instructional video |
US20110047163A1 (en) * | 2009-08-24 | 2011-02-24 | Google Inc. | Relevance-Based Image Selection |
US8797405B2 (en) * | 2009-08-31 | 2014-08-05 | Behavioral Recognition Systems, Inc. | Visualizing and updating classifications in a video surveillance system |
US20110099195A1 (en) * | 2009-10-22 | 2011-04-28 | Chintamani Patwardhan | Method and Apparatus for Video Search and Delivery |
US8533134B1 (en) * | 2009-11-17 | 2013-09-10 | Google Inc. | Graph-based fusion for video classification |
US8452763B1 (en) * | 2009-11-19 | 2013-05-28 | Google Inc. | Extracting and scoring class-instance pairs |
US8452778B1 (en) * | 2009-11-19 | 2013-05-28 | Google Inc. | Training of adapted classifiers for video categorization |
JP2011228918A (en) * | 2010-04-20 | 2011-11-10 | Sony Corp | Information processing apparatus, information processing method, and program |
US20120087588A1 (en) * | 2010-10-08 | 2012-04-12 | Gerald Carter | System and method for customized viewing of visual media |
US8923607B1 (en) * | 2010-12-08 | 2014-12-30 | Google Inc. | Learning sports highlights using event detection |
US9367745B2 (en) * | 2012-04-24 | 2016-06-14 | Liveclips Llc | System for annotating media content for automatic content understanding |
US20180350131A1 (en) * | 2013-12-31 | 2018-12-06 | Google Inc. | Vector representation for video segmentation |
US9805268B2 (en) * | 2014-07-14 | 2017-10-31 | Carnegie Mellon University | System and method for processing a video stream to extract highlights |
CN107077595A (en) * | 2014-09-08 | 2017-08-18 | Google Inc. | Selecting and presenting representative frames for video previews |
- 2011-12-08 US US13/314,837 patent/US8923607B1/en Active
- 2014-12-29 US US14/585,075 patent/US9715641B1/en Active
- 2017-07-21 US US15/656,566 patent/US10867212B2/en Active
- 2020-12-14 US US17/120,581 patent/US11556743B2/en Active
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11556743B2 (en) * | 2010-12-08 | 2023-01-17 | Google Llc | Learning highlights using event detection |
Also Published As
Publication number | Publication date |
---|---|
US9715641B1 (en) | 2017-07-25 |
US10867212B2 (en) | 2020-12-15 |
US8923607B1 (en) | 2014-12-30 |
US20170323178A1 (en) | 2017-11-09 |
US11556743B2 (en) | 2023-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11556743B2 (en) | 2023-01-17 | Learning highlights using event detection |
US8930288B2 (en) | | Learning tags for video annotation using latent subtags |
US10140575B2 (en) | | Sports formation retrieval |
US9087297B1 (en) | | Accurate video concept recognition via classifier combination |
US8396286B1 (en) | 2013-03-12 | Learning concepts for video annotation |
CN102549603B (en) | | Relevance-based image selection |
US11023523B2 (en) | | Video content retrieval system |
US8819024B1 (en) | | Learning category classifiers for a video corpus |
US9373040B2 (en) | | Image matching using motion manifolds |
US8358837B2 (en) | | Apparatus and methods for detecting adult videos |
US8983192B2 (en) | | High-confidence labeling of video volumes in a video sharing service |
US9177208B2 (en) | | Determining feature vectors for video volumes |
US9098807B1 (en) | | Video content claiming classifier |
Sang et al. | | Robust face-name graph matching for movie character identification |
WO2012071696A1 (en) | | Method and system for pushing individual advertisement based on user interest learning |
Shen et al. | | Modality mixture projections for semantic video event detection |
WO2016038522A1 (en) | | Selecting and presenting representative frames for video previews |
Chen et al. | | Name-face association in web videos: A large-scale dataset, baselines, and open issues |
Ulges et al. | | A system that learns to tag videos by watching YouTube |
Pang et al. | | Unsupervised celebrity face naming in web videos |
Lu et al. | | Temporal segmentation and assignment of successive actions in a long-term video |
CN115376054A | | Target detection method, device, equipment and storage medium |
Gao et al. | | Cast2face: assigning character names onto faces in movie with actor-character correspondence |
Sun et al. | | A new segmentation method for broadcast sports video |
Guillemot et al. | | Algorithms for video structuring |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STPP | Information on status: patent application and granting procedure in general | Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |