WO2008063615A2 - Apparatus for and method of performing a weight-based search - Google Patents
- Publication number: WO2008063615A2 (PCT/US2007/024198)
- Authority: WIPO (PCT)
- Prior art keywords: content, objects, tags, computer, files
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7847—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
- G06F16/786—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using motion, e.g. object motion or camera motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Library & Information Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Television Signal Processing For Recording (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
An apparatus assigns tags to and descriptive of content. Assigned to the tags are respective weights with respect to the content. The tags and associated weights may be stored in a memory. The weights may be indicative of an importance of the tags to respective portions of the content. The content may be any of a wide range of content and/or file types including, but not limited to, video, audio, text and signal files. Highlights corresponding to selected portions of the files may be identified and provided for user review. The stored information may be searched based on search terms associated with tags together with the weights to be associated with each tag, the weights indicative of an importance of items identified by corresponding tags with respect to the identified content.
Description
Title
Apparatus for and Method of Performing A Weight-Based Search
Cross Reference to Related Applications
This application claims priority of U.S. Provisional Application Nos. 60/869,271 and 60/869,279 filed December 8, 2006 and 60/866,552 filed November 20, 2006; U.S. Patent Application Serial No. 11/687,290 entitled Apparatus for Performing a Weight-Based Search; Serial No. 11/687,300 entitled Method of Performing a Weight-Based Search and Serial No. 11/387,326 entitled Computer Program Implementing a Weight-Based Search; Serial No. 11/687,261 entitled Method of Performing Motion-Based Object Extraction and Tracking in Video; and Serial No. 11/687,341 entitled Computer Program and Apparatus for Motion-Based Object Extraction and Tracking in Video, all of the previously cited provisional and non-provisional applications being incorporated herein by reference in their entireties.
Field of the Invention
The invention is directed to searching content including video and multimedia and, more particularly, to a weight-based search of content.
Background
The prior art includes various searching methods and systems directed to identifying and retrieving content based on key words found in the file name, tags on associated web pages, transcripts, text of hyperlinks pointing to the content, etc. Such search methods rely on Boolean operators indicative of the presence or absence of search terms. However, a more robust search method is required to identify content satisfying search requirements.
Summary of the Invention
The invention is directed to a method, robust search software and apparatus for providing enhanced searching of content that takes into consideration not only the existence (or absence) of certain characteristics (as might be indicated by corresponding "tags" attached to the content or portions thereof, e.g., files), but also the importance of those characteristics with respect to the content. Tags may name or describe a feature or quality of the content (e.g., a video file) and/or of objects appearing in the content (e.g., an object appearing within a video file and/or associated with one or more objects appearing in the video file).
Search results, whether or not based on search criteria specifying importance values, may include importance values for the tags that were searched for and identified within the content. Additional tags (e.g., tags not part of the preceding queried search terms) may also be provided and displayed to the user including, for example, tags for other characteristics suggested by the preceding search and/or suggested tags that might be useful as part of a subsequent search. Suggested tags may be based in part on past search histories, user profile information, etc. and/or may be directed to related products and/or services suggested by the prior search or search results.
Results of searches may further include a display of thumbnails corresponding and linking to content most closely satisfying search criteria, the thumbnails arranged in order of match quality with the size of the thumbnail indicative of its match quality (e.g., best matching video files indicated by large thumbnail images, next best by intermediate size thumbnails, etc.) A user may click on and/or hover over a thumbnail to enlarge the thumbnail, be presented with a preview of the content (e.g., a video clip most relevant to the search terms and criteria) and/or to retrieve or otherwise access the content.
While the following description of a preferred embodiment of the invention uses an example based on indexing and searching of video content, e.g., video files, visual objects, etc., embodiments of the invention are equally applicable to processing, organizing, storing and searching a wide range of content types including video, audio, text and signal files. Thus, an audio embodiment may be used to provide a searchable database of and search audio files for speech, music, or other audio types for desired characteristics of specified importance. Likewise, embodiments may be directed to content in the form of or represented by text, signals, etc.
It is further noted that the use of the term "engine" in describing embodiments and features of the invention is not intended to be limiting of any particular implementation for accomplishing and/or performing the actions, steps, processes, etc. attributable to the engine. An engine may be, but is not limited to, software, hardware and/or firmware or any combination thereof that performs the specified functions, including, but not limited to, implementations using a general and/or specialized processor. Software may be stored in or using a suitable machine-readable medium such as, but not limited to, random access memory (RAM) and other forms of electronic storage, data storage media such as hard drives, removable media such as CDs and DVDs, etc. Further, any name associated with a particular engine is, unless otherwise specified, for purposes of convenience of reference and not intended to be limiting to a specific implementation. Additionally, any functionality attributed to an engine may be equally performed by multiple engines, incorporated into the functionality of another or different engine, or distributed across one or more engines of various configurations.
According to an aspect of the invention, a method comprises the steps of assigning tags to and descriptive of content, assigning, to the tags, respective weights with respect to the content, and storing the tags and associated weights in a memory. The step of assigning respective weights may include determining an importance of the tags to respective portions of the content. The content may comprise a plurality of video, audio, text and/or signal files, at least one of the tags being assigned to each of the files.
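By way of a non-limiting illustration, the tag-and-weight association just described might be realized with a simple in-memory structure; the names below (Tag, ContentItem, store, index) are assumptions for illustration only, not taken from the disclosure:

```python
from dataclasses import dataclass, field

# Hypothetical data model: each tag names a characteristic of the content
# and carries a weight expressing how important that characteristic is.
@dataclass
class Tag:
    name: str        # e.g. "soccer player"
    weight: float    # importance of this characteristic, e.g. 0.0-1.0

@dataclass
class ContentItem:
    uri: str                     # location from which the content can be retrieved
    media_type: str              # "video", "audio", "text", "signal"
    tags: list[Tag] = field(default_factory=list)

# In-memory "database" keyed by content URI.
index: dict[str, ContentItem] = {}

def store(item: ContentItem) -> None:
    """Store the tags and associated weights for later weighted search."""
    index[item.uri] = item

store(ContentItem(
    uri="http://example.com/match.mp4",
    media_type="video",
    tags=[Tag("soccer player", 0.9), Tag("soccer shoe", 0.2)],
))
```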
According to a feature of the invention, a highlight segment may be identified within the content.
According to another feature of the invention, a clickable thumbnail representing and linking to the content may be created.
According to another feature of the invention, information may be identified and stored (i) for retrieving the content, (ii) identifying objects within the content, and (iii) weights for each of the objects associated with the content.
According to another feature of the invention, metadata associated with and characterizing the content may be identified and stored.
According to another feature of the invention, the tags may include information including, but not limited to, content (i) type, (ii) location, (iii) title, (iv) description, (v) author, (vi) creation date, (vii) duration, (viii) quality, (ix) size, and/or (x) format.
According to another feature of the invention, the content may be segmented so as to extract objects that may then be tracked through the content and/or assigned tags and associated weights. Assigning tags may include recognizing at least one of the objects and, in response, assigning one of the tags to the object.
According to another feature of the invention, a time-space thread may be created for each of the objects, the objects being tracked and/or recognized throughout the content (e.g., within a contiguous file).
According to another aspect of the invention, an apparatus includes an engine (for convenience of reference, a "tagging" engine) operating to assign tags to and descriptive of content; a "weighting" engine operating to assign, to the tags, respective weights with respect to the content; and a memory storing the tags and associated weights. The weighting engine (or another engine) may further determine an importance of the tags to respective portions of the content. The content may comprise a plurality of video, audio, text and/or signal files, at least one of the tags being assigned to each of the files.
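As a non-limiting sketch (all identifiers assumed for illustration), the tagging and weighting engines might expose interfaces such as the following; per the preceding paragraph, either engine could equally be realized in software, hardware and/or firmware:

```python
from typing import Protocol

# Hypothetical engine interfaces mirroring the apparatus described above.
class TaggingEngine(Protocol):
    def assign_tags(self, content_uri: str) -> list[str]:
        """Return tags naming characteristics of the content."""
        ...

class WeightingEngine(Protocol):
    def assign_weights(self, content_uri: str, tags: list[str]) -> dict[str, float]:
        """Return a weight for each tag with respect to the content."""
        ...

def index_content(content_uri: str,
                  tagger: TaggingEngine,
                  weighter: WeightingEngine,
                  memory: dict[str, dict[str, float]]) -> None:
    # Assign tags, weight them, and store the result in the memory.
    tags = tagger.assign_tags(content_uri)
    memory[content_uri] = weighter.assign_weights(content_uri, tags)
```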
According to a feature of the invention, an engine may operate to identify a highlight segment within the content, whether in response to a received input, automatically, or otherwise.
According to another feature of the invention, an engine may operate to create a clickable thumbnail representing and linking to the content.
According to another feature of the invention, one or more engines may operate to identify and/or store information (i) for retrieving the content, (ii) identifying objects within the content, and/or (iii) weights for each of the objects associated with the content.
According to another feature of the invention, an engine may operate to identify and/or store metadata associated with and characterizing the content.
According to another feature of the invention, the tags may include information including, but not limited to, content (i) type, (ii) location, (iii) title, (iv) description, (v) author, (vi) creation date, (vii) duration, (viii) quality, (ix) size, and/or (x) format.
According to another feature of the invention, an engine may segment the content and extract objects. An engine may track the objects through the content and/or assign tags and associated weights to the objects. Assigning tags may include recognizing at least one of the objects and, in response, assigning one of the tags to the object.
According to another feature of the invention, an engine may create or identify a time- space thread for each of the objects, the objects being tracked and/or recognized throughout the content (e.g., within a contiguous file).
According to another feature of the invention, assigning weights to each of the tags may include identification of relative features of the objects within the content including, but not limited to, (i) object duration, (ii) size, (iii) dominant motion, (iv) photometric features, (v) focus, (vi) screen position, (vii) shape, and/or (viii) texture.
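The following non-limiting sketch shows how such relative features might be folded into a single tag weight; the particular features selected and the linear coefficients are assumptions for illustration, not the disclosed method:

```python
# Hypothetical automatic weighting: combine relative features of an object
# (duration on screen, size, motion, screen position, ...) into one weight.
def object_weight(duration_frac: float,   # fraction of the video the object appears in
                  area_frac: float,       # average fraction of the frame it occupies
                  motion_score: float,    # 0..1, e.g. how much the camera tracks/centers it
                  center_score: float     # 0..1, closeness to the screen centre
                  ) -> float:
    # Simple weighted sum; coefficients are illustrative only.
    w = (0.4 * duration_frac +
         0.3 * area_frac +
         0.2 * motion_score +
         0.1 * center_score)
    return max(0.0, min(1.0, w))

# A soccer player tracked for most of the clip vs. a briefly visible shoe:
print(object_weight(0.9, 0.25, 0.8, 0.7))   # large weight
print(object_weight(0.1, 0.02, 0.1, 0.3))   # small weight
```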
According to another aspect of the invention, a computer program includes a computer usable medium having computer readable program code embodied therein for implementing a weight-based database. The computer readable program code may include computer readable program code for causing the computer to assign tags to and descriptive of content, assign, to the tags, respective weights with respect to the content, and store the tags and associated weights in a memory. The program code for assigning respective weights may include code for determining an importance of the tags to respective portions of the content. The content may comprise a plurality of video, audio, text and/or signal files, at least one of the tags being assigned to each of the files.
According to a feature of the invention, a highlight segment may be identified within the content.
According to another feature of the invention, a clickable thumbnail representing and linking to the content may be created.
According to another feature of the invention, information may be identified and stored (i) for retrieving the content, (ii) identifying objects within the content, and (iii) weights for each of the objects associated with the content.
According to another feature of the invention, metadata associated with and characterizing the content may be identified and stored.
According to another feature of the invention, the tags may include information including, but not limited to, content (i) type, (ii) location, (iii) title, (iv) description, (v) author, (vi) creation date, (vii) duration, (viii) quality, (ix) size, and/or (x) format.
According to another feature of the invention, the content may be segmented so as to extract objects that may then be tracked through the content and/or assigned tags and associated weights. Assigning tags may include recognizing at least one of the objects and, in response, assigning one of the tags to the object.
According to another feature of the invention, a time-space thread may be created for each of the objects, the objects being tracked and/or recognized throughout the content (e.g., within a contiguous file).
Additional objects, advantages and novel features of the invention will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
Brief Description of the Drawings
The drawing figures depict preferred embodiments of the present invention by way of example, not by way of limitations. In the figures, like reference numerals refer to the same or similar elements.
Figure 1 is a flow chart of a method of processing content to segment, tag, and associate weights with the content and various components thereof;
Figure 2 is a flow chart of a method searching for and retrieving content based on weighted search terms;
Figure 3 is a screen shot of a user interface used to identify a video to be processed and indexed;
Figure 4 is a screen shot of a user interface displaying a video that has been uploaded for processing and providing input fields for receiving descriptive information about the video;
Figure 5 is a screen shot of a user interface used to designate an object appearing in a video;
Figure 6 is a screen shot of a user interface used to enter information about a designated object;
Figure 7 is a screen shot of a user interface used to assign and/or adjust weights associated with respective object tags and to associate links and open text with the object;
Figure 8 is a screen shot of a user interface used to add a highlight for a video;
Figure 9 is a screen shot of an interface allowing a user to view thumbnails of the first and last frames of a highlight and provide a name for the highlight;
Figure 10 is a screen shot of a user interface depicting a recently added highlight;
Figure 11 is a screen shot of a user interface displaying an array of popular searches and providing a text box for a user to enter search terms for conducting a search of available video content;
Figure 12 is a screen shot of a user interface displaying video thumbnails resulting from a search together with initial weights associated with each search term and suggested (associative) terms;
Figure 13 is a screen shot of a user interface displaying video thumbnails of a revised set of videos resulting from user adjustment of weighting values assigned to the various search terms;
Figure 14 is a screen shot of a user interface displaying designation of a video by a user "rolling over" an associated thumbnail;
Figure 15 is a screen shot of a user interface displaying a revised set of videos resulting from user deletion of one of the search results;
Figure 16 is a screen shot of a simplified user interface used to input search terms and adjust search parameters; and
Figure 17 is a block diagram of a computer platform for executing computer program code implementing processes and steps according to various embodiments of the invention.
Detailed Description of the Preferred Embodiments
While the following preferred embodiment of the invention uses an example based on indexing and searching of video content, e.g., video files, visual objects, etc., embodiments of the invention are equally applicable to processing, organizing, storing and searching a wide range of content types including video, audio, text and signal files. Thus, an audio embodiment may be used to provide a searchable database of and search audio files for speech, music, etc. Likewise, embodiments may be directed to content in the form of or represented by text, signals, etc.
Embodiments of the invention include, among other things, methods and apparatus for processing content represented in a wide range of formats including, for example, video, audio, waveforms, etc., so as to identify objects present in the content, tag the content and the objects identified, identify weights indicating an importance of the tag and/or related object within the context of the content, and provide a searchable database used to identify and retrieve content satisfying specified search criteria. Further embodiments of the invention provide methods and apparatus for supporting and/or performing a weighted search of such a database.
With reference to Figure 1 of the drawings, an embodiment of the invention directed to a method of processing content in the form of videos will be described including segmentation, tagging and associating weights with the content and various components thereof. Thus, at step 101 content to be processed is identified and acquired. For example, with reference to Figure 3, a user interface may be provided allowing a user to select a video file and/or identify a link pointing to a video file (e.g., a URL or Uniform Resource Locator). At step 102 information about the video can be provided using, for example, the user interface illustrated in Figure 4. Descriptive information may include video metadata such as the Title of the video, a narrative description, author, location and shoot date of the video, and any tags (and associated weights) to be associated with the video. The interface may include a viewer for displaying the video as processed.
Objects within the content being or to be processed may be identified at step 103. Object identification may be initiated automatically or manually by a user designating a region of interest. Once a region of interest has been designated, step 104 segments frames of the video while step 105 creates time-space threads or "tubes" that track objects across multiple frames. Thus, as shown in Figure 5, various objects have been identified as represented by the corresponding thumbnails shown on the right portion of the display screen, either automatically or upon user initiation. Using the "Add Object" button, a user may designate a region of interest using the viewer and a graphic input device (e.g., a mouse) to delineate or "fence" an area of the image. The region of interest is then processed to identify an object within the region, and a tube to represent the region is created. The newly created tube can be merged with other tubes or be a part of another tube.
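For illustration only (identifiers assumed, not from the disclosure), such a "tube" can be pictured as a sequence of per-frame object regions, with a naive merge operation for tubes judged to follow the same object:

```python
from dataclasses import dataclass

@dataclass
class Region:
    frame: int                       # frame index within the video
    bbox: tuple[int, int, int, int]  # (x, y, width, height) of the object

@dataclass
class Tube:
    """A time-space thread tracking one object across frames."""
    object_id: str
    regions: list[Region]

    def overlaps(self, other: "Tube") -> bool:
        # Naive check: do the two threads share any frame indices?
        frames = {r.frame for r in self.regions}
        return any(r.frame in frames for r in other.regions)

def merge(a: Tube, b: Tube) -> Tube:
    """Merge two tubes judged to follow the same object."""
    regions = sorted(a.regions + b.regions, key=lambda r: r.frame)
    return Tube(object_id=a.object_id, regions=regions)
```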
Once objects appear in the thumbnails, suggested tags, weights and/or alternative thumbnail images may be associated with an object as provided by step 107. This information may be provided automatically or, at step 108, the user may modify or manually designate this information. User intervention may be provided by use of the "Tag Me Now" buttons shown in Figure 5, which may cause a popup window to appear. The popup window may include a thumbnail of the object and text fields for the entry and/or display of metadata associated with the object such as the name of the associated tag, links, object caption, free or open text description of the object, etc. As tags are designated and associated with the object, the tags may appear in the popup window as shown in Figure 7. Adjacent to each tag designation, a slider may indicate an initial importance or weight value associated with each tag and further provide for user adjustment of the weight value. Weight values may correspond to the importance attributed to a tag and/or the associated object within the context of the video. For example, in the context of a video clip about a soccer player, a name tag associated with the soccer player "object" (i.e., the image of the soccer player) as depicted in the video may be regarded as highly important and be given a large weight value. Alternatively, an object corresponding to a soccer shoe may be a relatively minor part of the video and be assigned a low weight value. These weight values may be automatically determined by criteria such as the length of time the object (in this case, image(s) of the soccer player and shoe) appears in the video, relative motion of the object indicating, for example, visual tracking of and/or centering on the object, the amount of space within the image occupied by the object, etc. Once determined, the calculated, default or manually designated weight value may be represented by the position of the slider depicted in Figure 7. A user may then adjust the weight value(s) using the sliders as appropriate.
Steps 109 - 111 provide for the creation of Highlights as supported by, for example, the user interfaces of Figures 8 - 10. Referring to Figure 1, at step 109 processing is performed to suggest one or more highlights to be associated with the content, e.g., video segments representative of the video as a whole and/or of particular objects appearing in the video. This process may be manually initiated by the user via an "Add Highlight" button as shown in Figure 8. The user may designate start and end frames by setting corresponding arrows on a slider at the bottom of the video player. Once the start and end points are designated, a popup window displays thumbnails corresponding to the start and end frames and provides a text entry field to input the name of the highlight as shown in Figure 9. Pushing the "Done" button results in the highlight being added as shown in Figure 10. As with videos and objects within the video, thumbnails, tags and weights may be associated with each highlight as provided by step 110. Step 111 provides for user acceptance and/or modification of the highlights, tags, weights and/or thumbnails.
Step 112 creates a preview of the content. The preview may correspond to a designated highlight. At step 113 processing continues to generate descriptive metadata associated with the content (e.g., video) including, for example, designation of objects and their associated tags and weights, highlights, duration of time during which an object appears, etc. The content or link to the content and the associated metadata and other information generated and/or collected during the previous steps may then be stored in a searchable database at step 114.
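One non-limiting way to picture the searchable database of step 114 is sketched below using SQLite; the two-table schema and all names are assumptions for illustration, not the disclosed design:

```python
import sqlite3

# Hypothetical schema: one row per content item, one row per
# (content, tag, weight) association produced during steps 101-113.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE content (
        id         INTEGER PRIMARY KEY,
        uri        TEXT NOT NULL,   -- link for retrieving the content
        title      TEXT,
        duration_s REAL
    );
    CREATE TABLE tag (
        content_id INTEGER REFERENCES content(id),
        name       TEXT NOT NULL,   -- tag for an object, highlight, etc.
        weight     REAL NOT NULL    -- importance of the tag within the content
    );
""")
conn.execute("INSERT INTO content (id, uri, title, duration_s) "
             "VALUES (1, 'http://example.com/match.mp4', 'Cup final', 5400)")
conn.executemany("INSERT INTO tag VALUES (1, ?, ?)",
                 [("soccer player", 0.9), ("soccer shoe", 0.2)])
conn.commit()
```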
A method of searching for and retrieving content is depicted by the flow chart of Figure 2. At step 201 a user inputs search terms associated with content to be located. An example of a suitable interface is shown in the screen shot of Figure 11, including a text entry field for inputting search terms. The interface may include other features such as, for example, popular searches that may be of interest to the user, as depicted by the three groups of rotating thumbnail images in the middle of the screen with the associated tag identifiers listed below each group of thumbnails. At step 202 the system and/or user may identify weights, i.e., an importance level, for each of the search terms. Step 203 identifies content satisfying the search criteria, that is, content responsive to the search terms and, if provided, to weight values for tags associated with the search terms; the identified content is displayed at step 204. For example, with reference to Figure 12, a number of thumbnails corresponding to videos identified by the search may be displayed to the user on a portion of a video display. The thumbnails may be arranged in order of match quality, with the largest thumbnails corresponding to best matches, content of lower match confidence levels being displayed afterwards and with smaller thumbnails, etc. Tags associated with the videos may be identified and displayed to the user (step 205) together with their corresponding weights (e.g., as present in the videos identified, calculated to be responsive to the search terms entered, or otherwise identified). The weights may be associated with means to adjust the weights, such as by use of respective slider controls as depicted in the upper left portion of Figure 12. In addition to tags corresponding to the entered search terms, additional and/or alternate tags may be identified and made available for inclusion in adjusting and/or refining the search, as also shown in Figure 12 (see "Or add one of these"). As the user deletes, adds and/or modifies the weights associated with the tags, the system updates the search and resulting thumbnails as shown in Figure 13.
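As a rough, non-limiting sketch of steps 202-205, content might be ranked by comparing user-supplied term weights against the stored tag weights; the scoring formula here is an assumption for illustration, not the disclosed method:

```python
# Hypothetical ranking for a weighted search: score each item by how well its
# stored tag weights agree with the user-specified term weights.
def score(query: dict[str, float], item_tags: dict[str, float]) -> float:
    s = 0.0
    for term, q_weight in query.items():
        t_weight = item_tags.get(term, 0.0)   # absent tag contributes nothing
        # Reward presence in proportion to both weights, penalise mismatch.
        s += q_weight * t_weight - 0.1 * abs(q_weight - t_weight)
    return s

catalog = {
    "match.mp4":   {"soccer player": 0.9, "soccer shoe": 0.2},
    "shoe_ad.mp4": {"soccer shoe": 0.95},
}
query = {"soccer player": 0.8, "soccer shoe": 0.3}

# Largest score first -> largest thumbnail first.
for uri, tags in sorted(catalog.items(), key=lambda kv: -score(query, kv[1])):
    print(uri, round(score(query, tags), 3))
```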
Step 207 provides for user selection of content. This may be accomplished by using a pointing device, such as a mouse, to designate a thumbnail corresponding to the desired content among those identified by the search. One implementation detects the cursor position so that, as the user "rolls over" a thumbnail, it becomes active as indicated by its increased size (step 208) and the display of additional options (e.g., controls to watch a clip of the video, go to a content provider to access the full video, delete the video from the search results, etc.) and information about the video (e.g., length, etc.) as shown in the screen shot of Figure 14. Step 209 provides for editing of the list of search results, including replacement of thumbnails of deleted search results with thumbnails of other, previously non-displayed, video(s).
Figure 16 is a screen shot of a simplified user interface used to input search terms and adjust search parameters. This implementation may be used when screen real estate (i.e., usable display area) is limited. In this case, a single thumbnail corresponding to a best match may be displayed together with sliders associated with weight values of the associated tags.
Figure 17 is a block diagram of a computer platform for executing computer program code implementing processes and steps according to various embodiments of the invention. Object processing and database searching may be performed by computer system 1700, in which central processing unit (CPU) 1701 is coupled to system bus 1702. CPU 1701 may be any general-purpose CPU. The present invention is not restricted by the architecture of CPU 1701 (or of other components of exemplary system 1700) as long as CPU 1701 (and the other components of system 1700) support the inventive operations as described herein. CPU 1701 may execute the various logical instructions according to embodiments of the present invention. For example, CPU 1701 may execute machine-level instructions according to the exemplary operational flows described above in conjunction with Figures 1 and 2.
Computer system 1700 also preferably includes random access memory (RAM) 1703, which may be SRAM, DRAM, SDRAM, or the like. Computer system 1700 preferably includes read-only memory (ROM) 1704, which may be PROM, EPROM, EEPROM, or the like. RAM 1703 and ROM 1704 hold/store user and system data and programs, such as a machine-readable and/or executable program of instructions for object extraction and/or video indexing according to embodiments of the present invention.
Computer system 1700 also preferably includes input/output (I/O) adapter 1705, communications adapter 1711, user interface adapter 1708, and display adapter 1709. I/O adapter 1705, user interface adapter 1708, and/or communications adapter 1711 may, in certain embodiments, enable a user to interact with computer system 1700 in order to input information.
I/O adapter 1705 preferably connects storage device(s) 1706, such as one or more of a hard drive, compact disc (CD) drive, floppy disk drive, tape drive, etc., to computer system 1700. The storage devices may be utilized when RAM 1703 is insufficient for the memory requirements associated with storing data for operations of the system (e.g., storage of videos and related information). Although RAM 1703, ROM 1704 and/or storage device(s) 1706 may include media suitable for storing a program of instructions for video processing, object extraction and/or video indexing according to embodiments of the present invention, those having removable media may also be used to load the program and/or bulk data such as large video files.
Communications adapter 1711 is preferably adapted to couple computer system 1700 to network 1712, which may enable information to be input to and/or output from system 1700 via such network 1712 (e.g., the Internet or other wide-area network, a local-area network, a public or private switched telephony network, a wireless network, or any combination of the foregoing). For instance, users identifying or otherwise supplying a video for processing may input access information or video files to system 1700 via network 1712 from a remote computer. User interface adapter 1708 couples user input devices, such as keyboard 1713, pointing device 1707, and microphone 1714, and/or output devices, such as speaker(s) 1715, to computer system 1700. Display adapter 1709 is driven by CPU 1701 to control the display on display device 1710 to, for example, display information regarding a video being processed and to provide for interaction of a local user or system operator during object extraction and/or video indexing operations.
It shall be appreciated that the present invention is not limited to the architecture of system 1700. For example, any suitable processor-based device may be utilized for implementing object extraction and video indexing, including without limitation personal
computers, laptop computers, computer workstations, and multi-processor servers. Moreover, embodiments of the present invention may be implemented on application specific integrated circuits (ASICs) or very large scale integrated (VLSI) circuits. In fact, persons of ordinary skill in the art may utilize any number of suitable structures capable of executing logical operations according to the embodiments of the present invention.
While the foregoing has described what are considered to be the best mode and/or other preferred embodiments of the invention, it is understood that various modifications may be made therein and that the invention may be implemented in various forms and embodiments, and that it may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all modifications and variations that fall within the true scope of the inventive concepts. For example, embodiments and/or implementations of the invention may include a weighted pricing and/or object bidding feature. Such a feature supports paid advertising that may be included as part of and/or incorporated into a video.
Currently, most advertisers pay the same amount for all consumers arriving via paid, cost-per-click (CPC) ads on the same property. Some variations of this method take into account the qualification of a user based on previous activities on the property and on other demographic/geographic elements. For example, if a user is found to have searched repeatedly for the same term, he/she will be considered more qualified (e.g., interested in a corresponding product or service) and advertisers will therefore be willing to pay more for that specific link. Existing applications of this method are quite limited. For example, advertisers may be limited to textual campaigns, i.e., they can only bid using text terms.
A weighted pricing and object bidding feature may use the previously described weight-based index system to capture and collect information about how important each term/element is in the content. This data can then be used to support a dynamic pricing mechanism for selling links and/or advertising to a customer (e.g., the advertiser) based on the level of importance associated with the inquiry by the user (e.g., the person initiating a search or inquiry). According to such a system, an advertiser may be able to bid different prices (for a specific term) for different relative weights of the term in the search query, the assumption being that the higher the weight of the term in the query, the more qualified the user is and the higher the CPC the advertiser is willing to pay. In addition, such a system and method may allow an advertiser to place a bid with an image/object. The advertiser is then
able to upload an image of an item/object and place a bid for his advertisement to be shown every time the item appears in a video, web page, etc.
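By way of illustration only, the sketch below shows one way such weight-dependent pricing might be computed, assuming the advertiser supplies bid tiers keyed to minimum relative term weights; the tier structure, names, and values are assumptions for this example rather than a mechanism prescribed above.

```python
def cpc_for_query(bid_tiers: list[tuple[float, float]], term_weight: float) -> float:
    # bid_tiers: (minimum relative term weight, offered CPC) pairs.  A higher
    # weight implies a more qualified user, so higher tiers carry higher CPCs;
    # the charge is the highest tier the query's term weight reaches.
    eligible = [cpc for min_weight, cpc in bid_tiers if term_weight >= min_weight]
    return max(eligible, default=0.0)

bid_tiers = [(0.2, 0.10), (0.5, 0.25), (0.8, 0.60)]   # weight threshold -> CPC
print(cpc_for_query(bid_tiers, 0.9))   # 0.6  -- heavily weighted term
print(cpc_for_query(bid_tiers, 0.3))   # 0.1  -- marginal term
```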
It should also be noted and understood that all publications, patents and patent applications mentioned in this specification are indicative of the level of skill in the art to which the invention pertains. All publications, patents and patent applications are herein incorporated by reference to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated by reference in its entirety.
Claims
1. A method comprising the steps of: assigning tags to and descriptive of content; assigning, to said tags, respective weights with respect to said content; and storing said tags and associated weights in a memory.
2. The method according to claim 1 wherein said step of assigning, to said tags, respective weights includes determining an importance of said tags to respective portions of said content.
3. The method according to claim 1 wherein said content comprises a plurality of video files and at least one of said tags is assigned to each of said video files.
4. The method according to claim 1 wherein said content comprises a plurality of audio files and at least one of said tags is assigned to each of said audio files.
5. The method according to claim 1 wherein said content comprises a plurality of text files and at least one of said tags is assigned to each of said text files.
6. The method according to claim 1 wherein said content comprises a plurality of signal files and at least one of said tags is assigned to each of said signal files.
7. The method according to claim 1 further comprising a step of identifying a highlight segment within the content.
8. The method according to claim 1 further comprising a step of creating a clickable thumbnail representing and linking to said content.
9. The method according to claim 1 further comprising a step of storing information (i) for retrieving said content, (ii) identifying objects within said content, and (iii) weights for each of said objects associated with said content.
10. The method according to claim 1 further comprising a step of storing metadata associated with and characterizing said content.
11. The method according to claim 1 wherein said tags include information selected from the set consisting of content (i) type, (ii) location, (iii) title, (iv) description, (v) author, (vi) creation date, (vii) duration, (viii) quality, (ix) size, and (x) format.
12. The method according to claim 1 further comprising the steps of: segmenting said content to extract objects; tracking said objects through the content; and assigning tags and associated weights to each of said objects.
13. The method according to claim 12 wherein said step of assigning tags includes a step of recognizing at least one of said objects and, in response, assigning one of said tags to said object.
14. The method according to claim 12 further comprising a step of creating a time-space thread for each of said objects including said step of tracking said objects and further comprising recognizing said objects through said content.
15. The method according to claim 12 wherein said step of assigning weights to each of said tags includes relative features of said objects within said content selected from said set consisting of (i) object duration, (ii) size, (iii) dominant motion, (iv) photometric features, (v) focus, (vi) screen position, (vii) shape, and (viii) texture.
16. The method according to claim 12 further comprising a step of extracting actions of the objects.
17. A method comprising the steps of: segmenting content to extract objects; tracking said objects through the content; and assigning tags and associated weights to each of said objects.
18. The method according to claim 17 wherein said step of assigning tags and associated weights includes a step of recognizing at least one of said objects and, in response, associating a corresponding tag with said object.
19. The method according to claim 17 further comprising a step of creating a time-space thread for each of said objects including said step of tracking said objects and further comprising recognizing said objects through said content.
20. The method according to claim 17 wherein said content comprises a plurality of video files and said objects each comprise a coherent video object.
21. The method according to claim 17 wherein said content comprises a plurality of audio files and said objects each comprise a coherent audio object.
22. The method according to claim 17 wherein said content comprises a plurality of text files and said objects each comprise a coherent text object.
23. The method according to claim 17 wherein said content comprises a plurality of signal files and said objects each comprise a coherent signal object.
24. The method according to claim 17 wherein said step of assigning weights to each of said objects includes relative features of said objects within said content selected from said set consisting of (i) object duration, (ii) size, (iii) dominant motion, (iv) photometric features, (v) focus, (vi) screen position, (vii) shape, and (viii) texture.
25. A method of searching content comprising the steps of: specifying search criteria including describing characteristics and associated importance values of said characteristics with respect to the content; searching a plurality of tags for said characteristics and associated weights, said weights qualitatively linking each of said tags to associated portions of said content based on an importance of said characteristic within said portion of content; and identifying at least one portion of said content most closely matching said search criteria.
26. The method according to claim 25 wherein said content comprises a plurality of video files and said portion of said content comprises at least one of said video files.
27. The method according to claim 25 further comprising a step of displaying said portion of said content.
28. The method according to claim 25 wherein said portion of said content comprises a plurality of files, said method further comprising a step of displaying representations of said files arranged in a decreasing match quality order.
29. The method according to claim 25 wherein said portion of said content comprises a plurality of files, said method further comprising a step of displaying thumbnails of said files such that a size of each of said thumbnails is representative of a quality of match of an associated one of said files.
30. The method according to claim 25 wherein said portion of said content comprises a plurality of files, said method further comprising the step of eliminating duplicate listings of said files.
31. The method according to claim 25 further comprising a step of displaying additional tags associated with said portion of said content together with importance values associated with each of said additional tags.
32. The method according to claim 25 further comprising the steps of processing user input adjusting said importance values to provide user adjusted importance values and, in response, initiating a search of said content for tags corresponding to said characteristics with said user adjusted importance values.
33. The method according to claim 25 wherein the content comprises a plurality of video files and target objects each comprise a coherent video object.
34. The method according to claim 25 wherein the content comprises a plurality of audio files and said target objects each comprise a coherent audio object.
35. The method according to claim 25 wherein the content comprises a plurality of text files and said target objects each comprise a coherent text object.
36. The method according to claim 25 wherein the content comprises a plurality of signal files and said target objects each comprise a coherent signal object.
37. A method comprising the steps of: identifying a first set of video files satisfying search criteria with respect to specified search terms; displaying a listing of tags corresponding to said first set of video files together with associated weight values associated with each of said tags; refining said search criteria by adjusting at least one of said weight values; and identifying a second set of video files satisfying said refined match.
38. The method according to claim 37 further comprising the step of: displaying thumbnails for a subset of at least one of said first and second sets of video files; deleting from the display, in response to user input, one of said thumbnails; and inserting a new thumbnail into said display.
39. The method according to claim 38 further comprising the step of displaying thumbnails of said second set of video files arranged in an order corresponding to match quality.
40. The method according to claim 39 further comprising a step of adjusting a size of said thumbnails in response to said match quality.
41. The method according to claim 37 further comprising a step of selecting ones of said tags to display.
42. The method according to claim 37 further comprising a step of, in response to said step of identifying said first set of video files, suggesting tags to be included as new search terms.
43. An apparatus comprising: a tagging engine operating to assign tags to and descriptive of content; a weighting engine operating to assign, to said tags, respective weights with respect to said content; and a memory storing said tags and associated weights.
44. The apparatus according to claim 43 wherein said weighting engine determines an importance of said tags to respective portions of said content.
45. The apparatus according to claim 43 wherein said content comprises a plurality of video files and at least one of said tags is assigned to each of said video files.
46. The apparatus according to claim 43 wherein said content comprises a plurality of audio files and at least one of said tags is assigned to each of said audio files.
47. The apparatus according to claim 43 wherein said content comprises a plurality of text files and at least one of said tags is assigned to each of said text files.
48. The apparatus according to claim 43 wherein said content comprises a plurality of signal files and at least one of said tags is assigned to each of said signal files.
49. The apparatus according to claim 43 further comprising a highlights identification engine operating to identify a highlight segment within the content.
50. The apparatus according to claim 43 further comprising a thumbnail creation engine operating to create a clickable thumbnail representing and linking to said content.
51. The apparatus according to claim 43 further comprising one or more engines operating to (i) retrieve said content, (ii) identify objects within said content, and (iii) associate weights for each of said objects with said content.
52. The apparatus according to claim 43 further comprising an engine operating to store metadata associated with and characterizing said content.
53. The apparatus according to claim 43 wherein said tags include information selected from the set consisting of content (i) type, (ii) location, (iii) title, (iv) description, (v) author, (vi) creation date, (vii) duration, (viii) quality, (ix) size, and (x) format.
54. The apparatus according to claim 43 further comprising one or more engines operating to: segment said content to extract objects; track said objects through the content; and assign tags and associated weights to each of said objects.
55. The apparatus according to claim 54 wherein said engine operating to assign tags and associated weights to each of said objects further operates to recognize at least one of said objects and, in response, assigns one of said tags to said object.
56. The apparatus according to claim 54 further including an engine operating to create a time-space thread for each of said objects including tracking said objects and further operating to recognize said objects through said content.
57. The apparatus according to claim 54 wherein said engine operating to assign tags and associated weights to each of said objects further includes relative features of said objects within said content selected from said set consisting of (i) object duration, (ii) size, (iii) dominant motion, (iv) photometric features, (v) focus, (vi) screen position, (vii) shape, and (viii) texture.
58. The apparatus according to claim 54 further comprising an engine operating to extract actions of the objects.
59. An apparatus comprising: an engine operating to segment content to extract objects; an engine operating to track said objects through the content; and an engine operating to assign tags and associated weights to each of said objects.
60. The apparatus according to claim 59 wherein said engine operating to assign tags and associated weights to each of said objects further operates to recognize at least one of said objects and, in response, associates a corresponding tag with said object.
61. The apparatus according to claim 59 further comprising an engine operating to create a time-space thread for each of said objects and an engine operating to recognize said objects through said content.
62. The apparatus according to claim 59 wherein said content comprises a plurality of video files and said objects each comprise a coherent video object.
63. The apparatus according to claim 59 wherein said content comprises a plurality of audio files and said objects each comprise a coherent audio object.
64. The apparatus according to claim 59 wherein said content comprises a plurality of text files and said objects each comprise a coherent text object.
65. The apparatus according to claim 59 wherein said content comprises a plurality of signal files and said objects each comprise a coherent signal object.
66. The apparatus according to claim 59 wherein said engine operating to assign weights to each of said objects includes relative features of said objects within said content selected from said set consisting of (i) object duration, (ii) size, (iii) dominant motion, (iv) photometric features, (v) focus, (vi) screen position, (vii) shape, and (viii) texture.
67. An apparatus for searching content comprising: an engine operating to specify search criteria and describe characteristics and associated importance values of said characteristics with respect to the content; an engine operating to search a plurality of tags for said characteristics and associated weights, said weights qualitatively linking each of said tags to associated portions of said content based on an importance of said characteristic within said portion of content; and an engine operating to identify at least one portion of said content most closely matching said search criteria.
68. The apparatus according to claim 67 wherein said content comprises a plurality of video files and said portion of said content comprises at least one of said video files.
69. The apparatus according to claim 67 further comprising an engine operating to display said portion of said content.
70. The apparatus according to claim 67 wherein said portion of said content comprises a plurality of files, said apparatus further comprising an engine operating to display representations of said files arranged in a decreasing match quality order.
71. The apparatus according to claim 67 wherein said portion of said content comprises a plurality of files, said apparatus further comprising an engine operating to display thumbnails of said files such that a size of each of said thumbnails is representative of a quality of match of an associated one of said files.
72. The apparatus according to claim 67 wherein said portion of said content comprises a plurality of files, said apparatus further comprising an engine operating to eliminate duplicate listings of said files.
73. The apparatus according to claim 67 further comprising an engine operating to display additional tags associated with said portion of said content together with importance values associated with each of said additional tags.
74. The apparatus according to claim 67 further comprising an engine operating to process user input adjusting said importance values to provide user adjusted importance values and, in response, initiate a search of said content for tags corresponding to said characteristics with said user adjusted importance values.
75. The apparatus according to claim 67 wherein the content comprises a plurality of video files and target objects each comprise a coherent video object.
76. The apparatus according to claim 67 wherein the content comprises a plurality of audio files and said target objects each comprise a coherent audio object.
77. The apparatus according to claim 67 wherein the content comprises a plurality of text files and said target objects each comprise a coherent text object.
78. The apparatus according to claim 67 wherein the content comprises a plurality of signal files and said target objects each comprise a coherent signal object.
79. An apparatus comprising: an engine operating to identify a first set of video files satisfying search criteria with respect to specified search terms; an engine operating to display a listing of tags corresponding to said first set of video files together with associated weight values associated with each of said tags; an engine operating to receive input refining said search criteria by adjusting at least one of said weight values; and an engine operating to identify a second set of video files satisfying said refined match.
80. The apparatus according to claim 79 further comprising one or more engines operating to: display thumbnails for a subset of at least one of said first and second sets of video files; delete from the display, in response to user input, one of said thumbnails; and insert a new thumbnail into said display.
81. The apparatus according to claim 80 further comprising an engine operating to display thumbnails of said second set of video files arranged in an order corresponding to match quality.
82. The apparatus according to claim 81 further comprising an engine operating to adjust a size of said thumbnails in response to said match quality.
83. The apparatus according to claim 79 further comprising an engine operating to receive an input for selecting ones of said tags to display.
84. The apparatus according to claim 79 further comprising an engine operating in response to identification of said first set of video files to suggest tags to be included as new search terms.
85. A computer program comprising: a computer usable medium having computer readable program code embodied therein, the computer readable program code including: computer readable program code for causing the computer to assign tags to and descriptive of content; computer readable program code for causing the computer to assign, to said tags, respective weights with respect to said content; and computer readable program code for causing the computer to store said tags and associated weights in a memory.
86. The computer program according to claim 85 wherein said computer readable program code for causing the computer to assign, to said tags, respective weights includes computer readable program code for causing the computer to determine an importance of said tags to respective portions of said content.
87. The computer program according to claim 85 wherein said content comprises a plurality of video files and at least one of said tags is assigned to each of said video files.
88. The computer program according to claim 85 wherein said content comprises a plurality of audio files and at least one of said tags is assigned to each of said audio files.
89. The computer program according to claim 85 wherein said content comprises a plurality of text files and at least one of said tags is assigned to each of said text files.
90. The computer program according to claim 85 wherein said content comprises a plurality of signal files and at least one of said tags is assigned to each of said signal files.
91. The computer program according to claim 85 further comprising computer readable program code for causing the computer to identify a highlight segment within the content.
92. The computer program according to claim 85 further comprising computer readable program code for causing the computer to create a clickable thumbnail representing and linking to said content.
93. The computer program according to claim 85 further comprising computer readable program code for causing the computer to store information (i) for retrieving said content, (ii) identifying objects within said content, and (iii) weights for each of said objects associated with said content.
94. The computer program according to claim 85 further comprising computer readable program code for causing the computer to store metadata associated with and characterizing said content.
95. The computer program according to claim 85 wherein said tags include information selected from the set consisting of content (i) type, (ii) location, (iii) title, (iv) description, (v) author, (vi) creation date, (vii) duration, (viii) quality, (ix) size, and (x) format.
96. The computer program according to claim 85 further comprising: computer readable program code for causing the computer to segment said content to extract objects; computer readable program code for causing the computer to track said objects through the content; and computer readable program code for causing the computer to assign tags and associated weights to each of said objects.
97. The computer program according to claim 96 wherein said computer readable program code for causing the computer to assign said tags includes computer readable program code for causing the computer to recognize at least one of said objects and, in response, assign one of said tags to said object.
98. The computer program according to claim 96 further comprising computer readable program code for causing the computer to create a time-space thread for each of said objects including said computer readable program code for tracking said objects and further comprising computer readable program code for causing the computer to recognize said objects through said content.
99. The computer program according to claim 96 wherein said computer readable program code for assigning weights to each of said tags includes relative features of said objects within said content selected from said set consisting of (i) object duration, (ii) size, (iii) dominant motion, (iv) photometric features, (v) focus, (vi) screen position, (vii) shape, and (viii) texture.
100. The computer program according to claim 96 further comprising computer readable program code for causing the computer to extract actions of the objects.
101. A computer program comprising: a computer usable medium having computer readable program code embodied therein for extracting objects from a video, the computer readable program code including: computer readable program code for causing the computer to segment content to extract objects; computer readable program code for causing the computer to track said objects through the content; and computer readable program code for causing the computer to assign tags and associated weights to each of said objects.
102. The computer program according to claim 101 wherein said computer readable program code for causing the computer to assign said tags and associated weights includes computer readable program code for causing the computer to recognize at least one of said objects and, in response, associate a corresponding tag with said object.
103. The computer program according to claim 101 further comprising computer readable program code for causing the computer to create a time-space thread for each of said objects including said computer readable program code for causing the computer to track said objects and further comprising computer readable program code for causing the computer to recognize said objects through said content.
104. The computer program according to claim 101 wherein said content comprises a plurality of video files and said objects each comprise a coherent video object.
105. The computer program according to claim 101 wherein said content comprises a plurality of audio files and said objects each comprise a coherent audio object.
106. The computer program according to claim 101 wherein said content comprises a plurality of text files and said objects each comprise a coherent text object.
107. The computer program according to claim 101 wherein said content comprises a plurality of signal files and said objects each comprise a coherent signal object.
108. The computer program according to claim 101 wherein said computer readable program code for causing the computer to assign weights to each of said objects includes relative features of said objects within said content selected from said set consisting of (i) object duration, (ii) size, (iii) dominant motion, (iv) photometric features, (v) focus, (vi) screen position, (vii) shape, and (viii) texture.
109. A computer program comprising: a computer usable medium having computer readable program code embodied therein for searching content, the computer readable program code including: computer readable program code for causing the computer to receive search criteria including characteristics and associated importance values of said characteristics with respect to the content; computer readable program code for causing the computer to search a plurality of tags for said characteristics and associated weights, said weights qualitatively linking each of said tags to associated portions of said content based on an importance of said characteristic within said portion of content; and computer readable program code for causing the computer to identify at least one portion of said content most closely matching said search criteria.
110. The computer program according to claim 109 wherein said content comprises a plurality of video files and said portion of said content comprises at least one of said video files.
111. The computer program according to claim 109 further comprising computer readable program code for causing the computer to display said portion of said content.
112. The computer program according to claim 109 wherein said portion of said content comprises a plurality of files, said computer program further comprising computer readable program code for causing the computer to display representations of said files arranged in a decreasing match quality order.
113. The computer program according to claim 109 wherein said portion of said content comprises a plurality of files, said computer program further comprising computer readable program code for causing the computer to display thumbnails of said files such that a size of each of said thumbnails is representative of a quality of match of an associated one of said files.
114. The computer program according to claim 109 wherein said portion of said content comprises a plurality of files, said computer program further comprising computer readable program code for causing the computer to eliminate duplicate listings of said files.
115. The computer program according to claim 109 further comprising computer readable program code for causing the computer to display additional tags associated with said portion of said content together with importance values associated with each of said additional tags.
116. The computer program according to claim 109 further comprising computer readable program code for causing the computer to process user input adjusting said importance values to provide user adjusted importance values and, in response, initiate a search of said content for tags corresponding to said characteristics with said user adjusted importance values.
117. The computer program according to claim 109 wherein the content comprises a plurality of video files and target objects each comprise a coherent video object.
118. The computer program according to claim 109 wherein the content comprises a plurality of audio files and said target objects each comprise a coherent audio object.
119. The computer program according to claim 109 wherein the content comprises a plurality of text files and said target objects each comprise a coherent text object.
120. The computer program according to claim 109 wherein the content comprises a plurality of signal files and said target objects each comprise a coherent signal object.
121. A computer program comprising: a computer usable medium having computer readable program code embodied therein, the computer readable program code including: computer readable program code for causing the computer to identify a first set of video files satisfying search criteria with respect to specified search terms; computer readable program code for causing the computer to display a listing of tags corresponding to said first set of video files together with associated weight values associated with each of said tags; computer readable program code for causing the computer to receive input refining said search criteria by adjusting at least one of said weight values; and computer readable program code for causing the computer to identify a second set of video files satisfying said refined match.
122. The computer program according to claim 121 further comprising computer readable program code for causing the computer to: display thumbnails for a subset of at least one of said first and second sets of video files; delete from the display, in response to user input, one of said thumbnails; and insert a new thumbnail into said display.
123. The computer program according to claim 122 further comprising computer readable program code for causing the computer to display thumbnails of said second set of video files arranged in an order corresponding to match quality.
124. The computer program according to claim 123 further comprising computer readable program code for causing the computer to adjust a size of said thumbnails in response to said match quality.
125. The computer program according to claim 121 further comprising computer readable program code for causing the computer to select ones of said tags to display.
126. The computer program according to claim 121 further comprising computer readable program code for causing the computer to respond to said computer readable program code for identifying said first set of video files so as to suggest tags to be included as new search terms.
Applications Claiming Priority (16)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US86655206P | 2006-11-20 | 2006-11-20 | |
US60/866,552 | 2006-11-20 | | |
US86927906P | 2006-12-08 | 2006-12-08 | |
US86927106P | 2006-12-08 | 2006-12-08 | |
US60/869,271 | 2006-12-08 | | |
US60/869,279 | 2006-12-08 | | |
US11/687,261 | 2007-03-16 | | |
US11/687,341 US8488839B2 (en) | 2006-11-20 | 2007-03-16 | Computer program and apparatus for motion-based object extraction and tracking in video |
US11/687,326 US20080120291A1 (en) | 2006-11-20 | 2007-03-16 | Computer Program Implementing A Weight-Based Search |
US11/687,300 | 2007-03-16 | | |
US11/687,300 US20080120328A1 (en) | 2006-11-20 | 2007-03-16 | Method of Performing a Weight-Based Search |
US11/687,290 US20080120290A1 (en) | 2006-11-20 | 2007-03-16 | Apparatus for Performing a Weight-Based Search |
US11/687,290 | 2007-03-16 | | |
US11/687,261 US8379915B2 (en) | 2006-11-20 | 2007-03-16 | Method of performing motion-based object extraction and tracking in video |
US11/687,326 | 2007-03-16 | | |
US11/687,341 | 2007-03-16 | | |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2008063615A2 (en) | 2008-05-29 |
WO2008063615A3 (en) | 2008-10-30 |
Family
ID=39430363
Family Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2007/024198 WO2008063615A2 (en) | 2006-11-20 | 2007-11-20 | Apparatus for and method of performing a weight-based search |
PCT/US2007/024197 WO2008063614A2 (en) | 2006-11-20 | 2007-11-20 | Method of and apparatus for performing motion-based object extraction and tracking in video |
PCT/US2007/024199 WO2008063616A2 (en) | 2006-11-20 | 2007-11-20 | Apparatus for and method of robust motion estimation using line averages |
Family Applications After (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2007/024197 WO2008063614A2 (en) | 2006-11-20 | 2007-11-20 | Method of and apparatus for performing motion-based object extraction and tracking in video |
PCT/US2007/024199 WO2008063616A2 (en) | 2006-11-20 | 2007-11-20 | Apparatus for and method of robust motion estimation using line averages |
Country Status (1)
Country | Link |
---|---|
WO (3) | WO2008063615A2 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110123117A1 (en) * | 2009-11-23 | 2011-05-26 | Johnson Brian D | Searching and Extracting Digital Images From Digital Video Files |
JP2011233039A (en) * | 2010-04-28 | 2011-11-17 | Sony Corp | Image processor, image processing method, imaging device, and program |
ITUA20164783A1 (en) * | 2016-06-30 | 2017-12-30 | Lacs S R L | A TACTICAL SIMULATION CONTROL SYSTEM |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5886745A (en) * | 1994-12-09 | 1999-03-23 | Matsushita Electric Industrial Co., Ltd. | Progressive scanning conversion apparatus |
US6766037B1 (en) * | 1998-10-02 | 2004-07-20 | Canon Kabushiki Kaisha | Segmenting moving objects and determining their motion |
US6643387B1 (en) * | 1999-01-28 | 2003-11-04 | Sarnoff Corporation | Apparatus and method for context-based indexing and retrieval of image sequences |
US7072398B2 (en) * | 2000-12-06 | 2006-07-04 | Kai-Kuang Ma | System and method for motion vector generation and analysis of digital video clips |
JP4612760B2 (en) * | 2000-04-25 | 2011-01-12 | キヤノン株式会社 | Image processing apparatus and method |
2007
- 2007-11-20 WO PCT/US2007/024198 patent/WO2008063615A2/en active Application Filing
- 2007-11-20 WO PCT/US2007/024197 patent/WO2008063614A2/en active Application Filing
- 2007-11-20 WO PCT/US2007/024199 patent/WO2008063616A2/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6317741B1 (en) * | 1996-08-09 | 2001-11-13 | Altavista Company | Technique for ranking records of a database |
US20030120652A1 (en) * | 1999-10-19 | 2003-06-26 | Eclipsys Corporation | Rules analyzer system and method for evaluating and ranking exact and probabilistic search rules in an enterprise database |
US7003513B2 (en) * | 2000-07-04 | 2006-02-21 | International Business Machines Corporation | Method and system of weighted context feedback for result improvement in information retrieval |
US20040123319A1 (en) * | 2002-12-13 | 2004-06-24 | Samsung Electronics Co., Ltd. | Broadcast program information search system and method |
US20050050023A1 (en) * | 2003-08-29 | 2005-03-03 | Gosse David B. | Method, device and software for querying and presenting search results |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016099228A1 (en) * | 2014-12-19 | 2016-06-23 | Samsung Electronics Co., Ltd. | Method of providing content and electronic apparatus performing the method |
CN113936015A (en) * | 2021-12-17 | 2022-01-14 | 青岛美迪康数字工程有限公司 | Method and device for extracting effective region of image |
CN113936015B (en) * | 2021-12-17 | 2022-03-25 | 青岛美迪康数字工程有限公司 | Method and device for extracting effective region of image |
Also Published As
Publication number | Publication date |
---|---|
WO2008063616A2 (en) | 2008-05-29 |
WO2008063614A3 (en) | 2008-08-14 |
WO2008063615A3 (en) | 2008-10-30 |
WO2008063614A2 (en) | 2008-05-29 |
WO2008063616A3 (en) | 2008-08-07 |
Similar Documents
Publication | Title |
---|---|
US20080120290A1 (en) | Apparatus for Performing a Weight-Based Search |
US20080120328A1 (en) | Method of Performing a Weight-Based Search |
US20080120291A1 (en) | Computer Program Implementing A Weight-Based Search |
US11606622B2 (en) | User interface for labeling, browsing, and searching semantic labels within video |
US9348935B2 (en) | Systems and methods for augmenting a keyword of a web page with video content |
US9031974B2 (en) | Apparatus and software system for and method of performing a visual-relevance-rank subsequent search |
US8364660B2 (en) | Apparatus and software system for and method of performing a visual-relevance-rank subsequent search |
US8234281B2 (en) | Method and system for matching advertising using seed |
US9619469B2 (en) | Adaptive image browsing |
US9697230B2 (en) | Methods and apparatus for dynamic presentation of advertising, factual, and informational content using enhanced metadata in search-driven media applications |
US7836040B2 (en) | Method and system for creating search result list |
KR101659097B1 (en) | Method and apparatus for searching a plurality of stored digital images |
TWI588764B (en) | Computer-storage media, method, and computerized system for feature-value attachment, re-ranking, and filtering of advertisements |
US9002895B2 (en) | Systems and methods for providing modular configurable creative units for delivery via intext advertising |
US9286611B2 (en) | Map topology for navigating a sequence of multimedia |
US20080313570A1 (en) | Method and system for media landmark identification |
US20130046749A1 (en) | Image search infrastructure supporting user feedback |
US20090254455A1 (en) | System and method for virtual canvas generation, product catalog searching, and result presentation |
US8880536B1 (en) | Providing book information in response to queries |
JP4896268B2 (en) | Information retrieval method and apparatus reflecting information value |
WO2009006234A2 (en) | Automatic video recommendation |
CN1804838A (en) | File management system employing time-line based representation of data |
US20070208621A1 (en) | Method of and system for generating list using flexible adjustment of advertising domain |
US20100169178A1 (en) | Advertising Method for Image Search |
US20070244755A1 (en) | Method and system for creating advertisement-list by value distribution |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07862128; Country of ref document: EP; Kind code of ref document: A2 |
NENP | Non-entry into the national phase | Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: COMMUNICATION UNDER RULE 112(1) EPC, EPO FORM 1205A DATED 25/08/09 |
122 | Ep: pct application non-entry in european phase | Ref document number: 07862128; Country of ref document: EP; Kind code of ref document: A2 |