WO2016117039A1 - Image search device, image search method, and information storage medium - Google Patents
- Publication number
- WO2016117039A1 (PCT/JP2015/051433; JP2015051433W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- region
- search
- feature amount
- image
- scene
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/232—Content retrieval operation locally within server, e.g. reading video streams from disk arrays
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
Definitions
- The present invention relates to an image search device, an image search method, and an information recording medium storing a program.
- Patent Document 1 discloses an "object detection method capable of detecting an object whose background is moving." Specifically, it describes approximating the background motion with a predetermined transformation model (for example, an affine or perspective transformation), estimating the background motion by estimating the model's transformation coefficients from the motion vectors of the video, and detecting only the object by taking the difference between the feature amounts of the object and of the background.
- In the technique of Patent Document 1 described above, a motion vector is first extracted for each macroblock.
- The motion vector itself contains large errors in addition to the motion to be detected, and also includes background motion caused by camera work. Patent Document 1 therefore estimates the background motion by approximating the camera-work motion with an affine transformation.
- The estimated background motion is subtracted from the actual motion vectors, and macroblocks whose resulting motion vectors are similar are merged and detected as an object.
- However, this technique cannot detect objects that move in the same way as the background, or objects that merely move differently from other objects, so searches targeting such objects are impossible. To make such objects searchable, each frame could instead be scanned with regions of various sizes, and all the obtained partial regions, together with search data corresponding to them, registered in a search database.
- In searches over surveillance video, broadcast video, and the like, however, the number of frames constituting the video is enormous, and so is the number of obtained regions; both the registration and the search processing therefore carry a heavy load and take a long time.
- As one example of the means for solving the above problem, an image search apparatus includes: an input unit to which a plurality of images are input; a first extraction unit that extracts a plurality of first regions from the plurality of images and extracts a first feature amount from each first region; a region determination unit that selects, from the distribution of the first feature amounts extracted from the plurality of images, first feature amounts with a low appearance frequency and identifies the first regions containing them as second regions; a storage unit that stores the first feature amounts extracted from the second regions, the second regions, and the images from which the second regions were extracted; and a search unit that performs a search using the first feature amount.
- Alternatively, an image search method includes: a first step in which a plurality of images are input; a second step of extracting a plurality of first regions from the plurality of images and extracting a first feature amount from each first region; a third step of selecting, from the distribution of the first feature amounts extracted from the plurality of images, first feature amounts with a low appearance frequency and identifying the first regions containing them as second regions; a fourth step of storing, in a storage unit, the first feature amounts extracted from the second regions, the second regions, and the images from which the second regions were extracted; and a fifth step of performing a search using the first feature amount.
- Alternatively, an information recording medium records a program that causes a computer to execute: a first means for receiving a plurality of images; a second means for extracting a plurality of first regions from the plurality of images and extracting a first feature amount from each first region; a third means for selecting, from the distribution of the first feature amounts extracted from the plurality of images, first feature amounts with a low appearance frequency and identifying the first regions containing them as second regions; a fourth means for storing, in a storage unit, the first feature amounts extracted from the second regions, the second regions, and the images from which the second regions were extracted; and a fifth means for performing a search using the second feature amount.
- According to the image search apparatus of the present invention, a search focusing on candidate areas in a video can be realized at high speed.
- Block diagram showing the overall system configuration
- Block diagram showing the hardware configuration
- Example configuration of the video database
- Diagram explaining the registration process of the video database
- Flowchart showing the processing flow of video database registration
- Diagram explaining the video search process
- Flowchart showing the processing flow of the video search
- Example configuration of the registration and search screen
- System-wide processing sequence
- Flowchart showing the processing flow of saliency determination based on the appearance frequency of a region
- Diagram explaining saliency determination based on region tracking
- In the video search device 104, salient areas in which a search target appears prominently are determined among the candidate areas in a scene composed of a plurality of frames (405).
- A salient area is a candidate area in which the search target is highly likely to appear prominently. For example, if a candidate area's image feature amount resembles those of few other candidate areas, the area is likely to contain some object rather than a frequent pattern such as wallpaper, so it is determined to be a salient area.
- Alternatively, if all the other candidate areas are moving to the right within the frame while only one is moving to the left, that candidate area is likely to deserve attention and is determined to be a salient area.
- FIG. 1 is a functional block diagram showing the configuration of the video search system 100 according to the first embodiment of the present invention.
- The video search system detects candidate areas that may contain an object from each frame of the input video, identifies salient areas among the candidate areas, and stores them in a database.
- Its purpose is to efficiently execute, over large-scale video data, a video search focusing on a detection target.
- the video search system 100 includes a video storage device 101, an input device 102, a display device 103, and a video search device 104.
- The video storage device 101 is a storage medium for storing video data; it can be configured using a hard disk drive built into a computer, or a storage system connected over a network, such as NAS (Network Attached Storage) or SAN (Storage Area Network).
- the video storage device 101 may be a cache memory that temporarily holds video data continuously input from a camera, for example.
- the video data stored in the video storage device 101 may be data in any format as long as time series information between images can be acquired in some form.
- the stored video data may be moving image data shot by a video camera, or a series of still image data shot by a still camera at a predetermined interval.
- The input device 102 is an input interface, such as a mouse, keyboard, or touch device, for conveying user operations to the video search device 104.
- the display device 103 is an output interface such as a liquid crystal display, and is used for displaying the recognition result of the video search device 104, interactive operation with the user, and the like.
- The video search device 104 performs a registration process, which extracts the information necessary for search from the video stored in the video storage device 101 and builds a database, and a search process, which searches that database for video similar to a search query specified by the user via the input device 102 and presents the results on the display device 103.
- To realize a search focusing on object areas within video frames, the video search device 104 detects candidate areas in each frame, identifies salient areas using the first feature amounts extracted from the candidate areas, and then extracts feature amounts suited to large-scale data retrieval from only the salient areas and registers them in the database.
- the video search device 104 includes a video input unit 105, a first feature quantity extraction unit 106, a saliency area determination unit 107, a second feature quantity extraction unit 108, a video database 109, and a video search unit 110.
- The video input unit 105 reads video data from the video storage device 101 and converts it into the data format used inside the video search device 104. Specifically, it performs video decoding, which decomposes the video (moving image data format) into frames (still image data format). The obtained frames are sent to the first feature amount extraction unit 106, and an image feature amount is also extracted from each obtained frame.
- The image feature amount is, for example, fixed-length vector data that numerically represents appearance information such as the color and shape of an image. Information on the input video and on the obtained frames is registered in the video database 109.
- The first feature quantity extraction unit 106 detects candidate areas that may contain a search target from each input frame.
- Candidate areas are detected by scanning each frame with windows of multiple sizes shifted a few pixels at a time, yielding many areas of various sizes. A rectangular area shape makes subsequent image processing easier.
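As a rough illustration of this multi-size scanning, the following sketch (a hypothetical helper, not taken from the patent) enumerates rectangular candidate windows of several sizes, shifted a fixed stride at a time:

```python
def sliding_windows(frame_w, frame_h, sizes=(64, 128, 256), stride=16):
    """Yield rectangular candidate windows of several sizes, shifted a few
    pixels at a time across the frame, as (x1, y1, x2, y2) tuples."""
    for s in sizes:
        for y in range(0, frame_h - s + 1, stride):
            for x in range(0, frame_w - s + 1, stride):
                yield (x, y, x + s, y + s)
```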
- The video search system 100 of the present invention is not limited to a specific type of object; it aims to realize a video search focusing on an arbitrary detection target designated by the user (not only objects but also symbols such as marks). Therefore, regions with a high "objectness" index value are detected in each frame as candidate regions.
- A well-known technique can be used to detect object candidate regions.
- As the index value, for example, the number of edges contained in the region, the color difference from surrounding regions, or the symmetry of the image can be used (see the sketch below).
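A minimal sketch of such an index value, assuming OpenCV and NumPy; the equal weighting of edge density and color difference is illustrative, not the patent's method:

```python
import cv2
import numpy as np

def objectness(frame_bgr, box):
    """Crude 'objectness' index for a candidate box: edge density inside
    the box plus the mean-color difference between the box and a slightly
    expanded box (a proxy for its surroundings)."""
    x1, y1, x2, y2 = box
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    edge_density = edges[y1:y2, x1:x2].mean() / 255.0

    m = 8  # expansion margin in pixels
    h, w = gray.shape
    ex1, ey1 = max(x1 - m, 0), max(y1 - m, 0)
    ex2, ey2 = min(x2 + m, w), min(y2 + m, h)
    inner = frame_bgr[y1:y2, x1:x2].reshape(-1, 3).mean(axis=0)
    outer = frame_bgr[ey1:ey2, ex1:ex2].reshape(-1, 3).mean(axis=0)
    color_diff = float(np.linalg.norm(inner - outer)) / 255.0
    return float(edge_density + color_diff)
```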
- Depending on the type of input video and the algorithm used, several tens to several thousand candidate areas are output when the object type is not restricted. By scoring candidate areas on how likely they are to contain a detection target in this way, the number of candidates can be narrowed down before salient areas are determined, reducing the processing load of the subsequent saliency determination.
- The first feature quantity extraction unit 106 extracts feature quantities from all these candidate areas and registers them in the video database 109.
- For search over large-scale data, feature quantities with a sufficiently large amount of information are needed so that different data yield different feature quantities: for example, a feature amount combining shape and color features, or one that accounts for position by dividing the region into a grid. Such feature amounts, however, not only take longer to compute but also increase the amount of registered data. Therefore, in the video search device 104 of the present invention, the candidate areas detected by the first feature quantity extraction unit 106 use lightweight feature quantities (first feature quantities) that only need to distinguish areas within a limited scene, and their database registration may consist of simply writing the data, without any clustering process.
- On the other hand, an image feature quantity carrying more information than the first feature quantity may be registered as a search feature quantity; this search feature quantity is described later as the processing of the second extraction unit.
- As the first feature amount, for example, a simple edge frequency, a representative color, or coordinate data representing motion can be used.
- If the search target is something that moves little, such as a mark or a distinctive building, appearance features such as edge frequency and representative color are suitable; if it is something that moves, such as a person or a car, coordinate data representing motion, a directed vector quantity, or the like is desirable.
- The region determination unit 107 selects, from the candidate regions detected by the first feature amount extraction unit 106, salient regions in which a search target appears prominently.
- FIG. 10 is a diagram for explaining the saliency determination in the region determination unit.
- As an index of a candidate area's saliency, one method checks whether the area's pattern appears consistently (that is, with high frequency). For example, patterns that appear frequently in images of many different compositions, such as wallpaper or sky, are unlikely to be used in an actual search even if registered as search data, and merely consume the storage unit. On the other hand, patterns that appear only in specific images, such as a person's face or a given symbol, are often used for search when registered as search data, so registering them is not wasteful.
- Therefore, the present invention judges areas with a low appearance frequency to be areas in which data useful for search appears prominently, and identifies them as salient areas. By registering only the feature amounts extracted from salient areas, the load of the registration process can be reduced. Furthermore, since only carefully selected salient areas enter the database, the search process can also be sped up.
- When the registered images are frames constituting a plurality of moving images, a scene change is first detected in each moving image.
- For scene change detection, the frame feature amount computed by the video input unit 105 is used: a scene change is judged to occur where the distance between the feature amount of the current frame and that of the previous frame (for example, the squared distance between the feature amount vectors) is at or above a predetermined value.
- Alternatively, each candidate area detected by the first feature amount extraction unit may be tracked by association across frames, and a scene change may be judged at a frame where the tracking of many candidate areas is interrupted.
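A minimal sketch of the distance-threshold variant, assuming NumPy; the feature extraction itself is outside the sketch and the threshold is an illustrative parameter:

```python
import numpy as np

def detect_scene_changes(frame_features, threshold):
    """Return indices of frames where a scene change is judged to occur.

    frame_features: (num_frames, dim) array of per-frame feature vectors.
    A change is flagged when the squared distance between consecutive
    frame feature vectors meets or exceeds `threshold`.
    """
    changes = []
    for i in range(1, len(frame_features)):
        d2 = float(np.sum((frame_features[i] - frame_features[i - 1]) ** 2))
        if d2 >= threshold:
            changes.append(i)
    return changes
```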
- A group of similar, temporally continuous images is treated as one scene; the appearance frequency outside the scene is then obtained, and salient areas are identified from the ratio between the in-scene frequency and the out-of-scene frequency.
- For example, the pattern of the candidate area in 1001 of FIG. 10 appears more frequently within the scene than that in 1002, but because its frequency outside the scene is also high, its saliency is judged to be low.
- In contrast, 1003 is judged to hold information useful for searching this scene, because its frequency outside the scene is low while its frequency within the scene is high. In this way, areas by which a specific scene can be appropriately detected can be registered as search data.
- Specifically, the first feature amounts are clustered based on their distribution in the feature amount space, and a feature amount belonging to a small cluster is judged to have a low appearance frequency, making its area a salient area. For example, a cluster whose member count is less than one part in several tens to one part in a hundred of the total number of data points in the feature amount space is judged to have a low appearance frequency and is selected; several to several tens of salient regions are extracted from one frame.
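A sketch of this cluster-size test, assuming scikit-learn's KMeans; the cluster count and the small-cluster fraction are illustrative parameters:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_salient_by_cluster_size(first_features, n_clusters=50, max_fraction=0.01):
    """Cluster first feature amounts and flag members of small clusters.

    first_features: (n, dim) array of first feature amounts for one scene.
    Returns indices of candidate areas whose cluster holds at most
    `max_fraction` of all data points (low appearance frequency).
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(first_features)
    counts = np.bincount(labels, minlength=n_clusters)
    small = counts <= max_fraction * len(first_features)
    return [i for i, lab in enumerate(labels) if small[lab]]
```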
- Alternatively, the first regions are divided into in-scene and out-of-scene sets, clustering is performed on the distribution in the feature amount space, and regions whose appearance frequency is low outside the scene but high within the scene are taken as salient areas.
- Alternatively, the similarity of each first feature amount to the other first feature amounts is computed, and when the number of highly similar first feature amounts is below a threshold, the appearance frequency is judged to be low and the region is taken as a salient area.
- Any other known technique may be adopted as long as it determines the appearance frequency.
- The registration data can be reduced further by narrowing down the salient areas determined as described above, using the following method.
- Salient areas may also be identified after tracking candidate areas across a plurality of temporally continuous images.
- In this case, candidate areas are first tracked: for example, other frames are searched using the image feature amount extracted from a candidate area, and a candidate area in another frame whose similarity is at or above a threshold is identified as a tracking result of the same object.
- Next, the movement amount of each candidate area between frames is computed and used as the first feature quantity.
- The appearance frequency is then determined from the distribution of movement amounts, and salient areas are identified. For example, among multiple candidate areas in the same frame, clustering by movement amount identifies as salient those areas belonging to a small cluster (specifically, areas whose movement amount is markedly larger or smaller than the surrounding movement amounts, or points in the opposite direction).
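A sketch of this movement-amount outlier test, assuming NumPy; the use of the median motion and a robust spread estimate is one possible realization, not the patent's prescribed formula:

```python
import numpy as np

def salient_by_motion(motions, k=3.0):
    """Flag candidate areas whose inter-frame motion deviates from the rest.

    motions: (n, 2) array of per-area (dx, dy) movement amounts in one frame.
    An area is salient if its motion vector lies far (k robust spread units)
    from the median motion of all areas, e.g. one area moving left
    while everything else moves right.
    """
    med = np.median(motions, axis=0)
    dev = np.linalg.norm(motions - med, axis=1)
    mad = np.median(dev) + 1e-9               # robust spread estimate
    return np.where(dev > k * mad)[0]         # indices of salient areas
```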
- Finally, candidate areas that are highly similar within the scene are reduced to a single salient area.
- By these methods, several to several tens of salient regions are obtained from the several thousand to several tens of thousands of candidate regions in a scene, reducing the registration data.
- The second feature quantity extraction unit 108 extracts a second feature quantity for search, suited to wide-range similar image search, from each salient area obtained by the area determination unit 107, and registers it in the video database 109.
- The second feature amount is one that can still distinguish different regions as the number of scenes and registered data grows, for example by combining color and shape and dividing the composition. An image feature amount based on the luminance gradient distribution, for instance, can be considered.
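As one concrete instance of a feature based on the luminance gradient distribution, a HOG descriptor could serve as the second feature amount. A sketch assuming scikit-image; the patch size and HOG parameters are illustrative:

```python
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.transform import resize

def second_feature(region_rgb):
    """Fixed-length search feature from a salient area (HOG as one
    example of a luminance-gradient-distribution descriptor)."""
    gray = resize(rgb2gray(region_rgb), (64, 64))  # normalize region size
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))
```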
- In addition, by clustering the second feature amounts in advance, the search can examine only the clusters similar to the query.
- Two or more second feature values can be registered for one area, so that the user can specify and switch between them at search time; for example, a feature value emphasizing shape and a feature value emphasizing color may be extracted and registered.
- A candidate area related to a salient area within the scene (hereinafter, a related area) can also be registered in association with it. For example, when saliency is determined from pattern frequency, only one salient area is selected from a set of similar patterns, and the remaining candidate areas are registered as related areas of that salient area.
- The video database 109 manages the information on videos, frames, scenes, candidate areas, and salient areas necessary for video search.
- The video database 109 can store image feature amounts and perform similar image searches using them.
- Similar image search is a function that sorts and outputs data in order of how close their image feature amounts are to the query. Image feature amounts can be compared using, for example, the Euclidean distance between vectors. The structure of the video database 109 is detailed later with reference to FIG. 3.
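A minimal sketch of this ranking by Euclidean distance, assuming NumPy and an in-memory matrix of registered feature vectors:

```python
import numpy as np

def similar_image_search(query_vec, feature_matrix, top_k=10):
    """Rank registered feature vectors by Euclidean distance to the query.

    feature_matrix: (n, dim) array of registered image feature amounts.
    Returns (indices, distances) of the top_k closest entries.
    """
    dists = np.linalg.norm(feature_matrix - query_vec, axis=1)
    order = np.argsort(dists)[:top_k]
    return order, dists[order]
```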
- the video search unit 110 searches for video desired by the user from the video database.
- the user specifies a search query using the input device 102.
- the search query may be registration data in the video database or an image input from the outside.
- the first feature amount or the second feature amount is extracted from the image, and an image search is performed using the extracted feature amount.
- the search result is presented to the user via the display device 103.
- During a search, higher accuracy is possible by using the second feature amount (the search feature amount), which carries a large amount of information.
- For re-searching within a scene, however, the lighter first feature amount (the comparison feature amount) is quite sufficient.
- FIG. 2 is a block diagram illustrating a hardware configuration of the video search system 100 according to the first embodiment of the present invention.
- the video search device 104 can be realized by a general computer, for example.
- the video search device 104 may include a processor 201 and a storage device 202 that are connected to each other.
- the storage device 202 is configured by any type of storage medium.
- the storage device 202 may be configured by a combination of a semiconductor memory and a hard disk drive.
- The functional units shown in FIG. 1, such as the video input unit 105, the first feature amount extraction unit 106, the saliency area determination unit 107, the second feature amount extraction unit 108, the search function of the video database 109, and the video search unit 110, are realized by the processor 201 executing the processing program 203 stored in the storage device 202.
- The processing executed by each functional unit is thus actually executed by the processor 201 based on the processing program 203.
- The data of the video database 109 is held in the storage device 202.
- the video search device 104 further includes a network interface device (NIF) 204 connected to the processor.
- the video storage device 101 may be a NAS or a SAN connected to the video search device 104 via the network interface device 204. Alternatively, the video storage device 101 may be included in the storage device 202.
- FIG. 3 is an explanatory diagram showing a configuration and a data example of the video database 109 according to the first embodiment of the present invention.
- a configuration example of a table format is shown, but the data format of the video database 109 may be arbitrary.
- The video database 109 includes a video table 300, a scene table 310, a frame table 320, a candidate area table 330, and a salient area table 340.
- the table configuration in FIG. 3 and the field configuration of each table are configurations necessary for implementing the present invention, and tables and fields may be added according to the application.
- the video table 300 has a video ID field 301, a file path field 302, and a frame ID list field 303.
- the video ID field 301 holds the identification number of each video data.
- the file path field 302 holds a location on the video storage device 101.
- the frame ID list field 303 is a field for managing a list of frames extracted from the video, and holds a list of IDs managed by the frame table 320.
- the scene table 310 has a scene ID field 311 and a frame ID list field 312.
- the scene ID field 311 holds the identification number of each scene data.
- the frame ID list field 312 is a field for managing continuous frames belonging to the scene, and holds a list of IDs managed by the frame table 320.
- the frame table 320 includes a frame ID field 321, a video ID field 322, a scene ID field 323, a candidate area ID list field 324, a salient area ID list field 325, and a frame feature amount field 326.
- the frame ID field 321 holds an identification number of each frame data.
- the video ID field 322 holds a video ID of a video from which a frame is extracted.
- the scene ID field 323 holds the scene ID of the scene to which the frame belongs.
- The candidate area ID list field 324 manages the candidate areas detected from the frame, and holds a list of IDs managed by the candidate area table 330.
- The saliency area ID list field 325 manages the areas judged salient by the area determination unit 107 among the candidate areas detected from the frame, and holds a list of IDs managed by the salient area table 340.
- the frame feature amount field 326 holds image feature amounts extracted from the entire region of the frame. The image feature amount is given by, for example, fixed-length vector data.
- the candidate area table 330 has a candidate area ID field 331, a frame ID field 332, a coordinate field 333, and a first feature quantity field 334.
- the candidate area ID field 331 holds the identification number of each candidate area data.
- the frame ID field 332 holds the ID of the frame from which the candidate area is detected.
- The coordinate field 333 holds the coordinates of the candidate area in its source frame, expressed, for example, as "horizontal coordinate of the upper-left corner, vertical coordinate of the upper-left corner, horizontal coordinate of the lower-right corner, vertical coordinate of the lower-right corner" of the area rectangle.
- Although regions are described here as rectangles for ease of explanation, arbitrary region shapes may be used.
- the first feature quantity field 334 holds the feature quantity of the candidate area extracted by the first feature quantity extraction unit 106.
- the saliency area table 340 includes a saliency area ID field 341, a representative candidate area ID field 342, a related candidate area ID list field 343, and a second feature quantity field 344.
- the saliency area ID field 341 holds an identification number of each saliency area data.
- the representative candidate area ID field 342 holds the ID of the candidate area selected as the salient area.
- the related candidate area ID field 343 holds a list of IDs of candidate areas related to the salient area.
- the second feature quantity field 344 holds a feature quantity for searching a saliency area extracted by the second feature quantity extraction unit 108.
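For illustration only, the five tables could be realized in a relational store as follows. This is a hypothetical sketch using Python's sqlite3 (the patent does not prescribe a storage engine); ID lists are stored as text and feature vectors as blobs:

```python
import sqlite3

schema = """
CREATE TABLE video   (video_id INTEGER PRIMARY KEY, file_path TEXT,
                      frame_id_list TEXT);
CREATE TABLE scene   (scene_id INTEGER PRIMARY KEY, frame_id_list TEXT);
CREATE TABLE frame   (frame_id INTEGER PRIMARY KEY, video_id INTEGER,
                      scene_id INTEGER, candidate_area_id_list TEXT,
                      salient_area_id_list TEXT, frame_feature BLOB);
CREATE TABLE candidate_area (candidate_area_id INTEGER PRIMARY KEY,
                      frame_id INTEGER, coords TEXT, first_feature BLOB);
CREATE TABLE salient_area (salient_area_id INTEGER PRIMARY KEY,
                      representative_candidate_area_id INTEGER,
                      related_candidate_area_id_list TEXT,
                      second_feature BLOB);
"""
conn = sqlite3.connect("video_database.db")
conn.executescript(schema)
```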
- As described above, the image search apparatus of this embodiment includes: an input unit that receives a plurality of images; a first extraction unit that extracts a plurality of first regions from the images and a first feature amount from each first region; a region determination unit that selects, from the distribution of the extracted first feature amounts, those with a low appearance frequency and identifies the first regions containing them as second regions; a storage unit that stores the first feature amounts extracted from the second regions, the second regions, and the images from which they were extracted; and a search unit that performs a search using the first feature amount.
- By storing feature quantities extracted only from the partial areas (second areas) identified in this way and using them for search, the number of registered data entries is reduced and the search speed is improved.
- FIG. 5 is a flowchart for explaining processing in which the video search device 104 according to the first embodiment of the present invention detects a region from the video input from the video storage device 101 and registers it in the video database 109. Hereinafter, each step of FIG. 5 will be described.
- Step S501 The video input unit 105 acquires a video from the video storage device 101 and converts it into a format usable inside the system. Specifically, the video input unit 105 decodes the video and extracts frames (still images).
- Step S502 The video input unit 105 extracts an image feature amount from each frame obtained in step S501.
- Step S503 The first feature quantity extraction unit 106 detects regions highly likely to contain an object in each frame obtained in step S501 and sets them as candidate regions.
- Step S504 The first feature amount extraction unit 106 extracts from each candidate region obtained in step S503 a first feature amount intended for saliency determination.
- Step S505 The area determination unit 107 determines a scene change using the frame feature value extracted in step S502 or the first feature value of the candidate area extracted in step S504. If a scene change has occurred, step S506 and subsequent steps are executed for the data of the previous scene, and if not, the process moves to step S508.
- Step S506 The area determination unit 107 performs saliency determination on all candidate areas included in the scene.
- Step S507 The second feature quantity extraction unit 108 extracts a second feature quantity that is intended to be used for search with respect to the saliency area specified in step S506.
- Step S508 The video search apparatus 104 registers information on the video, frames, scenes, candidate areas, and salient areas in the video database 109 in association with one another. Data may be registered in the video database 109 incrementally as each preceding functional unit completes its processing, or in a batch after the series of frame processing has finished.
- Step S509 If there is a next frame in the video storage device 101, the video search apparatus 104 returns to step S501 and repeats the series of registration processes described above. If not, the video search apparatus 104 ends the registration process.
- FIG. 6 is a diagram for explaining processing in which the video search apparatus 104 searches for videos registered in the video database 109 using a query designated by the user in the video search system 100 according to the first embodiment of the present invention.
- the user inputs information as a clue to search for a desired video from the video database 109.
- In a similar image search, images with similar features can be found in the database using the features of an image given by the user.
- The user may also specify a search target area (601). Furthermore, by managing text information representing specific objects in association with images, an image to use for the similar image search can be supplied from text entered by the user.
- the second feature value is extracted from the query image given by the user in this way (602).
- a similar image search is executed for the video database 109 (604).
- Similar image search is a process of finding images with similar features; the distance between feature amount vectors can be regarded as a dissimilarity. Using the distance d, the quantity exp(−d) × 100 takes values from 0 to 100 and may be used as the similarity.
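In code, the distance-to-similarity mapping described above is a one-liner; d = 0 yields a similarity of 100, and the score decays toward 0 as d grows:

```python
import math

def similarity_from_distance(d):
    """Map a feature-vector distance d (>= 0) to a 0-100 similarity score."""
    return math.exp(-d) * 100.0
```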
- the search results 605 are rearranged in the descending order of similarity and presented to the user.
- The search processing described above uses only information on the salient areas, but since the video search apparatus 104 of the present invention also holds information on the candidate areas, it can search again using that information.
- Re-searching candidate areas within a scene can be toggled by an option (610).
- the first feature quantity 612 is re-extracted from the query designated by the user (611).
- A search is then performed over the candidate areas related to the salient areas in the search results 605 obtained with the second feature amount (613).
- Although the clustering process for speeding up searches is not applied to the first feature quantity, the candidate areas related to the search results 605 are limited in number, so this search can be executed without a heavy load.
- FIG. 7 is a flowchart for explaining processing in which the video search apparatus 104 according to the first embodiment of the present invention searches for videos registered in the video database 109 using a query designated by the user. Hereinafter, each step of FIG. 7 will be described.
- Step S701 The user designates a search query using the input device 102.
- Step S702 The video search unit 110 extracts the second feature amount from the image specified by the user.
- the second feature amount is extracted by the same processing procedure as that at the time of registration.
- Step S703 The video search unit 110 uses the second feature value obtained in step S702 to search the video database 109 for a saliency area with a close feature value.
- Step S704 If the user has instructed the re-search in the scene, the video search device 104 executes the processing from step S705 onward, and otherwise moves to step S707.
- Step S705 The video search unit 110 extracts the first feature amount from the image designated by the user in step S701.
- Step S706 The video search unit 110 uses the first feature amount obtained in step S705 to search for a region having a close feature amount, targeting candidate regions related to the salient region of the search result in step S703. This result is reflected in the search result.
- Step S707 The video search device 104 outputs the search result to the display device 103 and ends the process.
- FIG. 8 is a diagram illustrating a configuration example of an operation screen for registering video data and performing video search focusing on an object in a frame using the video search device 104 according to the first embodiment of the present invention.
- This screen is presented to the user on the display device 103.
- the user gives a processing instruction to the video search device 104 by operating the cursor 801 marked on the screen using the input device 102.
- The operation screen includes a data registration button 802, a registration option designation area 803, a query read button 804, a query image display area 805, a search option designation area 806, a search button 807, and a search result display area 808.
- When the data registration button 802 is clicked, the video search device 104 reads the video stored in the video storage device 101 and registers it in the video database 109. All data may be registered, or the user may designate the video files to register. In the registration option designation area 803, it is also possible to choose to register all data without performing saliency determination, as in conventional systems.
- When a query image is read with the query read button 804, the read image is displayed in the query image display area 805.
- the search option designation area 806 for example, the user can switch the search target to an entire frame area, a salient area, or a candidate area.
- the feature amount extracted from the query image and the search target change according to the region specified here.
- the search button 807 the video search device 104 searches for a similar video from the video database 109.
- the search result is displayed in a search result display area 808.
- The search result display area 808 can be made easier to use by showing video thumbnails, similarity scores, and playback times, and by providing operation buttons for playback and for exporting data to external applications.
- FIG. 9 is a diagram explaining the processing sequence of the video search system 100 according to the first embodiment of the present invention; specifically, it shows the processing sequence among the user 900, the video storage device 101, the computer 901, and the video database 109 during the video registration and video search processing described above. The computer 901 implements the video search device 104. Each step of FIG. 9 is described below.
- In FIG. 9, S910 represents the video registration process and S930 the video search process.
- the computer 901 acquires video data from the video storage device 101 (S912, S913).
- the subsequent processing corresponds to the series of registration processing described above with reference to FIG.
- the computer 901 cuts out a frame from the video (S914), extracts the feature amount of the frame (S915), and then extracts a large number of candidate areas from the frame (S916).
- the computer 901 extracts the first feature amount from each obtained candidate region (S917).
- the computer 901 detects a scene change (S918), and performs a saliency area determination for a candidate area in the scene (S919).
- a second feature amount is extracted for the obtained saliency area (S920), and information on the video, scene, frame, candidate area, and saliency area is associated and registered in the video database 109 (S921).
- the computer 901 notifies the user 900 of the end of the registration process (S922).
- the video search process S930 corresponds to the series of search processes described above with reference to FIG.
- The computer 901 extracts a second feature amount from the given query image (S932).
- a similar image search is performed on the video database 109 using the extracted second feature amount (S933).
- The computer 901 extracts the first feature amount from the query image (S934).
- a similar image search is performed on candidate areas related to the saliency area obtained in step S933 (S935). These results are integrated to generate a search result screen (S936), and the search result is presented to the user 900 (S937).
- FIG. 11 is a flowchart for explaining a processing flow of saliency determination in the region determination unit. Hereinafter, each step of FIG. 11 will be described.
- Step S1101 The saliency area determination unit 107 performs clustering on the candidate areas in the scene.
- A known algorithm such as K-means clustering can be applied to the clustering process.
- Step S1102 The saliency area determination unit 107 calculates a representative vector for each cluster obtained in step S1101.
- As the representative vector, for example, the average of the feature vectors belonging to the cluster can be used.
- Step S1103 The saliency area determination unit 107 performs a similar image search over the registered data using each representative vector obtained in step S1102.
- Since the first feature amount has not undergone preprocessing for speedup, computing similarity against all registered data may be impractical; therefore, for example, random sampling is performed and the representative vector is compared with only a predetermined number of registered data items.
- Since the only information this processing needs is the number of similar registered data items, the feature amount space may, for example, be partitioned in advance: when the first feature amount of a candidate area is registered, the subspace it belongs to is recorded and the count of candidate areas in that subspace is incremented. The frequency of registered data similar to the representative vector can then be obtained simply by referring to this count.
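A sketch of this subspace counting, assuming NumPy; quantizing vectors onto a coarse grid is one illustrative way to partition the feature amount space:

```python
import numpy as np
from collections import Counter

class SubspaceCounter:
    """Count registered first feature amounts per feature-space cell.

    The space is partitioned by quantizing each vector onto a coarse
    grid; the frequency of data similar to a query vector is then a
    single dictionary lookup instead of a scan over all registrations.
    """
    def __init__(self, cell_size=0.25):
        self.cell_size = cell_size
        self.counts = Counter()

    def _cell(self, vec):
        return tuple(np.floor(np.asarray(vec) / self.cell_size).astype(int))

    def register(self, vec):
        self.counts[self._cell(vec)] += 1

    def frequency(self, vec):
        return self.counts[self._cell(vec)]
```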
- Step S1104 The saliency area determination unit 107 determines saliency from the ratio between the in-scene frequency obtained in step S1101 (the number of cluster members) and the out-of-scene frequency obtained in step S1103 (the number of search results with similarity at or above a predetermined value). For each cluster whose saliency is at or above a predetermined value, the candidate area whose feature amount is closest to the representative vector is output as a salient area.
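A sketch of the ratio test of step S1104, assuming NumPy; the exact ratio form and threshold are illustrative assumptions:

```python
import numpy as np

def salient_clusters(in_scene_counts, out_scene_counts, min_ratio=5.0):
    """Judge cluster saliency from in-scene vs. out-of-scene frequency.

    in_scene_counts[i]:  members of cluster i within the current scene.
    out_scene_counts[i]: similar registered data found outside the scene.
    A cluster frequent inside the scene but rare outside is salient.
    """
    in_c = np.asarray(in_scene_counts, dtype=float)
    out_c = np.asarray(out_scene_counts, dtype=float)
    ratio = in_c / (out_c + 1.0)            # +1 avoids division by zero
    return np.where(ratio >= min_ratio)[0]  # indices of salient clusters
```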
- FIG. 12 is an explanatory diagram of saliency determination based on region tracking.
- FIG. 13 is a flowchart for explaining a processing flow of saliency determination based on region tracking. Hereinafter, each step of FIG. 13 will be described.
- Step S1301 The saliency area determination unit 107 associates candidate areas in the scene across adjacent frames. For example, the similarity of the first feature amounts or the coordinate values may be used for the association. In addition, to handle cases where tracking is interrupted by occlusion or the like, the association may tolerate a predetermined number of missing frames.
- Step S1302 The saliency area determination unit 107 calculates the duration, movement path, and movement amount of each trajectory of the candidate areas obtained in step S1301.
- Step S1303 The saliency area determination unit 107 calculates the average movement amount over all trajectories obtained in step S1302 and obtains each trajectory's saliency from its deviation from the average movement amount and from its duration.
- From trajectories whose saliency is at or above a predetermined value, one or more candidate areas are selected as salient areas; for example, a candidate area is chosen from the trajectory using information such as region size, edge strength, and low blur (a frame with little change in movement amount).
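A sketch of this trajectory scoring, assuming NumPy; combining the deviation from the average movement amount with the duration as a weighted sum is an assumption, since the patent does not fix the exact formula:

```python
import numpy as np

def trajectory_saliency(move_amounts, durations, w_dur=0.1):
    """Score trajectories by deviation from the average movement amount,
    weighted by duration (both arrays hold one entry per trajectory)."""
    move = np.asarray(move_amounts, dtype=float)
    dur = np.asarray(durations, dtype=float)
    deviation = np.abs(move - move.mean())
    return deviation + w_dur * dur  # higher score = more salient

# trajectories whose score clears a threshold yield salient areas
scores = trajectory_saliency([3.0, 3.2, -4.0, 2.9], [10, 12, 11, 9])
salient = np.where(scores >= np.percentile(scores, 75))[0]
```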
- In the frequency-based determination described above, the first feature amount is used to classify the candidate areas in a scene. The algorithm for extracting the first feature amount may therefore be changed to match the scene characteristics. For example, for video shot in a dark place, candidate areas in the scene can be classified effectively by using only shape and motion information rather than color features. Alternatively, parameters such as luminance correction may be changed per scene without changing the feature extraction algorithm itself.
- FIG. 14 is a flowchart showing switching of the first feature amount by scene determination. Hereinafter, each step of FIG. 14 will be described.
- the video input unit 105 performs scene discrimination using the image feature amount extracted from the frame.
- The scene types, their corresponding parameters, and the first feature extraction methods are configured when the system is built. For example, the process branches into first feature amount extraction emphasizing shape, color, or motion, as follows.
- the first feature amount extraction unit 106 extracts shape feature amounts from the candidate areas.
- the saliency area determination unit 107 performs a saliency area determination process focusing on the shape.
- the first feature amount extraction unit 106 extracts a color feature amount from the candidate area.
- the saliency area determination unit 107 performs saliency area determination processing focusing on color.
- the first feature quantity extraction unit 106 extracts a motion feature quantity from the candidate area.
- The saliency area determination unit 107 performs saliency determination focusing on motion. Note that when the first feature amount is switched by scene discrimination, the out-of-scene candidate areas used in the frequency-based saliency determination described for FIG. 10 are restricted to areas extracted with the same extraction method.
- The image search method described in this embodiment includes: a first step in which a plurality of images are input; a second step of extracting a plurality of first regions from the plurality of images and extracting a first feature amount from each first region; a third step of selecting, from the distribution of the extracted first feature amounts, those with a low appearance frequency and identifying the first regions containing them as second regions; a fourth step of storing, in a storage unit, the first feature amounts extracted from the second regions, the second regions, and the images from which they were extracted; and a fifth step of performing a search using the first feature amount.
- By storing feature quantities extracted only from the partial areas (second areas) identified in this way and using them for search, the number of registered data entries is reduced and the search speed is improved.
- 100: Video search system, 101: Video storage device, 102: Input device, 103: Display device, 104: Video search device, 105: Video input unit, 106: First feature amount extraction unit, 107: Area determination unit, 108: Second feature amount extraction unit, 109: Video database, 201: Processor, 202: Storage device, 203: Processing program, 204: Network interface device, 802: Data registration button, 803: Registration option designation area, 804: Query read button, 805: Query image display area, 806: Search option designation area, 807: Search button, 808: Search result display area
Abstract
Provided is an image search device, comprising: an input unit into which a plurality of images is inputted; a first extraction unit which extracts from the plurality of images a plurality of first regions (e.g., candidate regions, partial regions), and extracts a first feature value from each of the first regions; a region determination unit which selects, from a distribution of a plurality of the first feature values which is extracted from the plurality of images, the first feature values with a low frequency of occurrence, and specifies the first regions including the selected first feature values as second regions (e.g., regions of note, search regions); a storage unit which stores the first feature values which are extracted from the second regions, the second regions, and the images from which the second regions are extracted; and a search unit which carries out a search using the first feature value.
Description
The present invention relates to an image search device, an image search method, and an information recording medium storing a program.
With the digital archiving of television video and the spread of video distribution services on the Internet, the need to search and classify large-scale video data at high speed is growing. In particular, since it is difficult to attach text information manually to an enormous amount of video content, video search techniques that use feature amounts in the images themselves are required. Beyond the features of whole video frames, detailed searches focusing on objects and specific patterns contained in the video are also expected.
Patent Document 1 discloses an "object detection method capable of detecting an object whose background is moving." Specifically, it describes approximating the background motion with a predetermined transformation model (for example, an affine or perspective transformation), estimating the background motion by estimating the model's transformation coefficients from the motion vectors of the video, and detecting only the object by taking the difference between the feature amounts of the object and of the background.
In the technique of Patent Document 1 described above, a motion vector is first extracted for each macroblock. The motion vector itself contains large errors in addition to the motion to be detected, and also includes background motion caused by camera work. Patent Document 1 therefore estimates the background motion by approximating the camera-work motion with an affine transformation. The estimated background motion is subtracted from the actual motion vectors, and macroblocks whose resulting motion vectors are similar are merged and detected as an object.
However, while this technique can detect objects that move differently from the camera work, it cannot detect objects that move differently from other objects, or objects that move in the same way as the background. Consequently, it also cannot perform searches targeting such objects.
To make such objects searchable, one could, for example, scan each frame with regions of various sizes and register all the obtained partial regions, together with search data corresponding to them, in a search database. In searches over surveillance video, broadcast video, and the like, however, the number of frames constituting the video is enormous, and so is the number of obtained regions; both the registration and the search processing therefore carry a heavy load and take a long time.
To solve the above problem, for example, the configurations described in the claims are adopted. The present application includes a plurality of means for solving the problem; as one example, an image search apparatus includes: an input unit to which a plurality of images are input; a first extraction unit that extracts a plurality of first regions from the plurality of images and extracts a first feature amount from each first region; a region determination unit that selects, from the distribution of the first feature amounts extracted from the plurality of images, first feature amounts with a low appearance frequency and identifies the first regions containing them as second regions; a storage unit that stores the first feature amounts extracted from the second regions, the second regions, and the images from which the second regions were extracted; and a search unit that performs a search using the first feature amount.
Alternatively, an image search method includes: a first step in which a plurality of images are input; a second step of extracting a plurality of first regions from the plurality of images and extracting a first feature amount from each first region; a third step of selecting, from the distribution of the first feature amounts extracted from the plurality of images, first feature amounts with a low appearance frequency and identifying the first regions containing them as second regions; a fourth step of storing, in a storage unit, the first feature amounts extracted from the second regions, the second regions, and the images from which the second regions were extracted; and a fifth step of performing a search using the first feature amount.
Alternatively, an information recording medium records a program that causes a computer to execute: a first means for receiving a plurality of images; a second means for extracting a plurality of first regions from the plurality of images and extracting a first feature amount from each first region; a third means for selecting, from the distribution of the first feature amounts extracted from the plurality of images, first feature amounts with a low appearance frequency and identifying the first regions containing them as second regions; a fourth means for storing, in a storage unit, the first feature amounts extracted from the second regions, the second regions, and the images from which the second regions were extracted; and a fifth means for performing a search using the second feature amount.
According to the image search apparatus of the present invention, a search focusing on candidate areas in a video can be realized at high speed.
<Outline of the present invention>
In the video search device 104 of the present invention, salient areas in which a search target appears prominently are determined among the candidate areas in a scene composed of a plurality of frames (405). A salient area is a candidate area in which the search target is highly likely to appear prominently. For example, if a candidate area's image feature amount resembles those of few other candidate areas, the area is likely to contain some object rather than a frequent pattern such as wallpaper, and it is determined to be a salient area. Alternatively, if all the other candidate areas are moving to the right within the frame while only one is moving to the left, that candidate area is likely to deserve attention and is determined to be a salient area. By judging as salient those candidate areas whose appearance frequency is low when many candidate areas are compared with the same feature amount, data of little use, such as monochrome background areas that are never actually used in searches, is excluded. As a result, only data useful for search is carefully selected and registered in the database 109, which speeds up the search process.
Furthermore, for each obtained saliency region 406, the video search device 104 extracts a second feature amount for search (408), which carries more information than the first feature amount used for comparison and is needed for searching across all registered videos (407). The computation cost and data size of the second feature amount are much larger than those of the first feature amount, and preprocessing for efficient search (such as clustering) must also be performed. According to the present embodiment, however, the number of saliency regions can be kept small compared to the number of candidate regions, so the processing load per registration is reduced, making this computationally expensive processing feasible overall.
<System configuration>
FIG. 1 is a functional block diagram showing the configuration of the video search system 100 according to the first embodiment of the present invention.
The video search system detects, in each frame of the input video, candidate regions that may contain an object, then identifies saliency regions among the candidate regions and stores them in a database. Its purpose is to efficiently execute, over large-scale video data, video searches that focus on the detection target.
The video search system 100 includes a video storage device 101, an input device 102, a display device 103, and a video search device 104.
The video storage device 101 is a storage medium for storing video data, and can be configured using a hard disk drive built into a computer, or a storage system connected via a network such as NAS (Network Attached Storage) or SAN (Storage Area Network). The video storage device 101 may also be, for example, a cache memory that temporarily holds video data continuously input from a camera.
Note that the video data stored in the video storage device 101 may be in any format, as long as time-series information between images can be obtained in some form. For example, the stored video data may be moving image data shot with a video camera, or a series of still images shot with a still camera at predetermined intervals.
The input device 102 is an input interface, such as a mouse, keyboard, or touch device, for conveying user operations to the video search device 104. The display device 103 is an output interface such as a liquid crystal display, and is used to display the recognition results of the video search device 104, for interactive operation with the user, and so on.
<Operation of each part>
The video search device 104 performs a registration process that extracts the information needed for searching from the video stored in the video storage device 101 and stores it in a database, and a search process that uses a search query specified by the user through the input device 102 to retrieve videos similar to the query from the database and present them on the display device 103. To realize a search that focuses on object regions within video frames, the video search device 104 detects candidate regions in each frame, identifies saliency regions using the first feature amounts extracted from the candidate regions, and then extracts, from the saliency regions only, feature amounts suited to large-scale data retrieval and registers them in the database. The video search device 104 includes a video input unit 105, a first feature extraction unit 106, a saliency region determination unit 107, a second feature extraction unit 108, a video database 109, and a video search unit 110.
The video input unit 105 reads video data from the video storage device 101 and converts it into the data format used inside the video search device 104. Specifically, the video input unit 105 performs a decoding process that decomposes the video (moving image data format) into frames (still image data format). The obtained frames are sent to the first feature extraction unit 106. An image feature amount is also extracted from each obtained frame. The image feature amount is, for example, a fixed-length vector: data that numerically encodes visual information such as the color and shape of the image. Information on the input video and on the obtained frames is registered in the video database 109.
The first feature extraction unit 106 detects, in each input frame, candidate regions that may contain a search target. Candidate regions are detected by scanning each frame a few pixels at a time with windows of several sizes, which yields a number of regions of various scales. At this point, rectangular regions make the subsequent image processing easier.
This embodiment describes a method that, in order to detect only candidate regions likely to be used in searches more efficiently, detects regions in the frame where a detection target is likely to be present and treats these as the candidate regions. The video search system 100 of the present invention is not limited to particular object classes; it aims to realize video search focusing on an arbitrary detection target specified by the user (not only objects but also symbols such as marks). It therefore detects, as candidate regions, regions in the frame whose "objectness" index value is large. Known techniques can be used to detect object candidate regions. As the index value, for example, the number of edges contained in the region, the color difference from surrounding regions, or the symmetry of the image can be used. Although it depends on the type of input video and on the algorithm used, when the object class is not restricted, several tens to several thousand candidate regions are output. Evaluating candidate regions by their "objectness" in this way narrows down the number of candidates before saliency is determined, which reduces the processing load of the subsequent saliency determination.
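By way of illustration only, the following Python sketch scans a frame with multi-scale sliding windows and keeps the windows whose edge density, standing in here for the "objectness" index, exceeds a threshold. The window sizes, stride, Canny thresholds, and cutoff are assumed values for the sketch, not parameters from this specification.

```python
import cv2
import numpy as np

def candidate_regions(frame, window_sizes=((64, 64), (128, 128)),
                      stride=16, edge_threshold=0.05):
    """Scan the frame with multi-scale windows and score each window
    by edge density as a crude 'objectness' proxy (illustrative only)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200) > 0      # binary edge map
    h, w = gray.shape
    candidates = []
    for win_h, win_w in window_sizes:
        for y in range(0, h - win_h + 1, stride):
            for x in range(0, w - win_w + 1, stride):
                density = edges[y:y + win_h, x:x + win_w].mean()
                if density >= edge_threshold:
                    # (left, top, right, bottom), matching the rectangle
                    # coordinate convention used in the candidate table
                    candidates.append((x, y, x + win_w, y + win_h))
    return candidates
```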
The first feature extraction unit 106 extracts feature amounts from all of these candidate regions and registers them in the video database 109.
To search large-scale data, it is necessary to use feature amounts carrying enough information that different data items produce different feature amounts. For example, a feature amount combining shape and color features, or a feature amount that accounts for position by dividing the region into a grid, can be used. These typically not only take a long time to compute but also enlarge the registered data. Therefore, in the video search device 104 of the present invention, for the candidate regions detected by the first feature extraction unit 106, a feature amount (first feature amount) capable of discriminating between regions only within a limited scene is used, and database registration may be limited to writing the data, without performing clustering. This reduces the processing load of registering the first feature amounts in the database. In addition, as search data to be registered in the database 109, a feature amount carrying more information than the first feature amount, such as an image feature amount, may be registered as the search feature amount. This search feature amount is described later as the processing of the second extraction unit.
As the first feature amount, for example, a simple edge frequency, a representative color, or coordinate data representing motion can be used. When the target to be searched moves little, such as a mark or a distinctive building, it is preferable to use edge-frequency or color features. When the target to be searched involves movement, such as a person or a car, coordinate data representing motion, a vector quantity with direction, or the like is preferable. Selecting the first feature amount (comparison feature amount) according to the scene is described later in the explanation of FIG. 14.
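As a minimal sketch of such a comparison feature, the code below combines an edge-frequency scalar with a representative color for one candidate region; the exact feature definition is an assumption for illustration.

```python
import cv2
import numpy as np

def first_feature(frame, box):
    """Cheap comparison feature for one candidate region:
    [edge frequency, mean B, mean G, mean R] (illustrative definition)."""
    left, top, right, bottom = box
    patch = frame[top:bottom, left:right]
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    edge_freq = (cv2.Canny(gray, 100, 200) > 0).mean()
    mean_color = patch.reshape(-1, 3).mean(axis=0) / 255.0
    return np.concatenate(([edge_freq], mean_color)).astype(np.float32)
```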
The region determination unit 107 selects, from the candidate regions detected by the first feature extraction unit 106, saliency regions in which the search target appears prominently.
FIG. 10 is a diagram for explaining the saliency determination in the region determination unit. One index for the saliency of a candidate region is whether the pattern of that region appears stably (with high frequency). For example, a pattern that appears frequently in images of many different compositions, such as wallpaper or sky, is unlikely to actually be used in a search even if its region is registered as search data, so it merely consumes the storage unit. Conversely, if a pattern that appears only in specific images, such as a person's face or a particular symbol, is registered as search data, it is often actually used in searches, so it is rarely wasted. The present invention therefore judges regions with such a low appearance frequency to be regions in which data useful for searching appears prominently, and identifies them as saliency regions. Registering only the feature amounts extracted from the saliency regions reduces the load of the registration process. Furthermore, because registering saliency regions yields a carefully curated database, the speed of the search process can be improved.
Furthermore, when the registered images constitute a plurality of moving images, a scene change is first detected in each moving image. The scene change detection process can be implemented, for example, using the frame feature amounts computed by the video input unit 105, by determining where the distance between the feature amount of the current frame and that of the previous frame (for example, the squared distance between the feature vectors) reaches or exceeds a predetermined value. Alternatively, for example, each candidate region detected by the first feature extraction unit may be tracked by matching it across frames, and a frame at which the tracking of many candidate regions is interrupted may be detected and judged to be a scene change.
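A minimal sketch of the distance-threshold variant, assuming each frame's feature amount is already available as a vector and using the squared Euclidean distance with an illustrative threshold:

```python
import numpy as np

def scene_boundaries(frame_features, threshold=0.5):
    """Return indices of frames that start a new scene, judged by the
    squared distance between consecutive frame feature vectors."""
    boundaries = [0]  # the first frame always opens a scene
    for i in range(1, len(frame_features)):
        d2 = float(np.sum((frame_features[i] - frame_features[i - 1]) ** 2))
        if d2 >= threshold:
            boundaries.append(i)
    return boundaries
```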
Next, a group of similar, temporally continuous images is treated as one scene, the appearance frequency outside the scene is obtained, and saliency regions are identified by the ratio of in-scene frequency to out-of-scene frequency. For example, the candidate region pattern included in 1001 in FIG. 10 appears more frequently within the scene than 1002, but because its frequency outside the scene is also high, its saliency is judged to be low. In contrast, 1003 has a low frequency outside the scene but a high frequency within the scene, so it is judged that information useful for retrieving this scene appears there prominently. This allows regions capable of properly identifying a specific scene to be registered as search data.
Based on the above, saliency regions can be identified using the following criteria:
- Cluster the first feature amounts based on their distribution in the feature space, and judge that a small cluster has a low appearance frequency, treating its regions as saliency regions. For example, clusters whose member count is no more than roughly one part in several tens to several hundreds of the total number of data points in the feature space may be judged to have a low appearance frequency; selecting these clusters yields a few to a few dozen saliency regions per frame.
- Divide the first regions into in-scene and out-of-scene sets, cluster based on the distribution in the feature space, and treat regions with a low appearance frequency outside the scene but a high appearance frequency within the scene as saliency regions.
- For any given first feature amount, compute its similarity to the other first feature amounts, and judge that its appearance frequency is low when the number of highly similar first feature amounts is below a threshold, treating its region as a saliency region.
Besides these, any other known technique that determines whether the appearance frequency is high or low can be adopted.
Furthermore, the registered data can be reduced by narrowing down the saliency regions determined as above with the following checks:
- Evaluate whether the search target is occluded by other objects, i.e., whether the candidate region overlaps with other candidate regions.
- Evaluate whether the search target is out of focus, i.e., assess the focus of the candidate region using edges or the like.
- Evaluate whether the search target is motion-blurred, i.e., assess the magnitude of the candidate region's movement within the scene.
- Evaluate whether the search target is captured at high resolution, i.e., assess the size of the candidate region within the frame and its position in the frame.
Alternatively, saliency regions may be identified after tracking the candidate regions across a plurality of temporally continuous images. In this case, the candidate regions are first tracked: for example, other frames are searched using the image feature amount extracted from a candidate region, and a candidate region in another frame whose similarity is at or above a threshold is identified as the tracking result for the same object.
Next, using the candidate regions tracked across a plurality of frames, the amount of movement of each candidate region between frames is obtained and used as the first feature amount. Then, as described above, the appearance frequency is judged from the distribution of movement amounts and the saliency regions are identified. For example, among the candidate regions in the same frame, clustering by movement amount identifies as saliency regions those that fall into small clusters (specifically, regions whose movement is markedly larger or smaller than that of their surroundings, or whose movement is in the opposite direction to that of their surroundings).
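One possible reading of this movement-based criterion is sketched below: regions whose displacement deviates strongly from the dominant motion of the frame are flagged. The z-score rule and its cutoff are assumptions chosen for the sketch, not the specification's method.

```python
import numpy as np

def salient_by_motion(displacements, z_cutoff=2.5):
    """displacements: (N, 2) per-region (dx, dy) between two frames.
    Flags regions whose motion deviates strongly from the dominant
    motion of the frame (a simple z-score rule, assumed for illustration)."""
    mean = displacements.mean(axis=0)
    std = displacements.std(axis=0) + 1e-6      # avoid division by zero
    z = np.linalg.norm((displacements - mean) / std, axis=1)
    return np.where(z >= z_cutoff)[0]
```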
In either case, candidate regions that closely resemble one another within a scene are reduced to a single saliency region. As a result, for example, a scene that contained several thousand to several tens of thousands of candidate regions yields only a few to a few dozen saliency regions, and the registered data can be reduced.
The second feature extraction unit 108 extracts, from the saliency regions obtained by the region determination unit 107, a second feature amount for search that is suited to wide-ranging similar-image retrieval, and registers it in the video database 109. The second feature amount combines color and shape and applies compositional subdivision, so that different regions remain distinguishable even as the number of scenes and registered data items grows; for example, an image feature amount computed from the luminance gradient distribution is conceivable.
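As one concrete form such a second feature amount could take, the sketch below computes per-cell histograms of luminance gradient orientation over a grid laid on the region; the grid size and bin count are illustrative assumptions.

```python
import cv2
import numpy as np

def second_feature(frame, box, grid=4, bins=9):
    """Search feature for a saliency region: per-cell histograms of
    luminance gradient orientation over a grid (HOG-like; the grid
    size and bin count are illustrative assumptions)."""
    left, top, right, bottom = box
    gray = cv2.cvtColor(frame[top:bottom, left:right],
                        cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)          # ang in [0, 2*pi)
    h, w = gray.shape
    cells = []
    for i in range(grid):
        for j in range(grid):
            m = mag[i * h // grid:(i + 1) * h // grid,
                    j * w // grid:(j + 1) * w // grid]
            a = ang[i * h // grid:(i + 1) * h // grid,
                    j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(a, bins=bins, range=(0, 2 * np.pi),
                                   weights=m)
            cells.append(hist)
    vec = np.concatenate(cells)
    return vec / (np.linalg.norm(vec) + 1e-6)   # L2-normalized vector
```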
In a large-scale image search system, the search process can also be sped up by building a search-friendly data structure at registration time. For example, by forming clusters of similar data in advance (clustering), the search can be limited at query time to the similar clusters only.
Two or more second feature amounts can also be registered for one region, so that the user can specify and switch between them at search time. For example, a feature amount emphasizing shape and a feature amount emphasizing color may both be extracted and registered. When a saliency region is registered, the candidate regions related to it within the scene (hereinafter, related regions) can also be registered in association with it. For example, when saliency is judged by pattern frequency, only one saliency region is chosen from among similar patterns, and the remaining candidate regions are linked to it and registered as its related regions. Likewise, when saliency is judged using region tracking, only the candidate region of one frame in each track is treated as the saliency region, and the candidate regions of the other frames are linked to it and registered as related regions. Registering related regions in this way makes it easy to add queries in later search processing.
The video database 109 is a database for managing the video, frame, scene, candidate region, and saliency region information required for video search. The video database 109 stores image feature amounts and can perform similar-image search using them. Similar-image search is a function that sorts and outputs data in order of how close its image feature amount is to the query. For comparing image feature amounts, for example, the Euclidean distance between vectors can be used. The structure of the video database 109 is described in detail later with reference to FIG. 3.
The video search unit 110 retrieves the video the user wants from the video database. The user specifies a search query using the input device 102. The search query may be data registered in the video database, or an image input from outside. When an image is input from outside, the first or second feature amount is extracted from that image, and the image search is performed using the extracted feature amount. The search results are presented to the user via the display device 103. A search using the second feature amount (search feature amount), which carries more information, is more accurate, but for a coarse search the first feature amount (comparison feature amount) is quite sufficient.
FIG. 2 is a block diagram showing the hardware configuration of the video search system 100 according to the first embodiment of the present invention. The video search device 104 can be realized by, for example, a general-purpose computer. For example, the video search device 104 may include a processor 201 and a storage device 202 connected to each other. The storage device 202 is composed of any type of storage medium; for example, it may be composed of a combination of semiconductor memory and a hard disk drive.
In this example, the functional units shown in FIG. 1, such as the video input unit 105, the first feature extraction unit 106, the saliency region determination unit 107, the second feature extraction unit 108, the search function of the video database 109, and the video search unit 110, are realized by the processor 201 executing a processing program 203 stored in the storage device 202. In other words, in this example, the processing performed by each functional unit is actually executed by the processor 201 based on the processing program 203. The data of the video database 109 is held in the storage device 202.
The video search device 104 further includes a network interface device (NIF) 204 connected to the processor. The video storage device 101 may be a NAS or SAN connected to the video search device 104 via the network interface device 204. Alternatively, the video storage device 101 may be included in the storage device 202.
FIG. 3 is an explanatory diagram showing the configuration and example data of the video database 109 according to the first embodiment of the present invention. A table-format configuration example is shown here, but the data format of the video database 109 may be arbitrary.
The video database 109 consists of a video table 300, a scene table 310, a frame table 320, a candidate region table 330, and a saliency region table 340. The table configuration in FIG. 3 and the field configuration of each table are those required to practice the present invention; tables and fields may be added according to the application.
The video table 300 has a video ID field 301, a file path field 302, and a frame ID list field 303. The video ID field 301 holds the identification number of each video data item. The file path field 302 holds the location on the video storage device 101. The frame ID list field 303 is a field for managing the list of frames extracted from the video, and holds a list of IDs managed in the frame table 320.
The scene table 310 has a scene ID field 311 and a frame ID list field 312. The scene ID field 311 holds the identification number of each scene data item. The frame ID list field 312 is a field for managing the consecutive frames belonging to the scene, and holds a list of IDs managed in the frame table 320.
The frame table 320 has a frame ID field 321, a video ID field 322, a scene ID field 323, a candidate region ID list field 324, a saliency region ID list field 325, and a frame feature amount field 326. The frame ID field 321 holds the identification number of each frame data item. The video ID field 322 holds the video ID of the video from which the frame was extracted. The scene ID field 323 holds the scene ID of the scene to which the frame belongs. The candidate region ID list field 324 is a field for managing the candidate regions detected in the frame, and holds a list of IDs managed in the candidate region table 330. The saliency region ID list field 325 is a field for managing, among the candidate regions detected in the frame, the regions judged salient by the region determination unit 107, and holds a list of IDs managed in the saliency region table 340. The frame feature amount field 326 holds the image feature amount extracted from the entire frame. The image feature amount is given, for example, as fixed-length vector data.
The candidate region table 330 has a candidate region ID field 331, a frame ID field 332, a coordinate field 333, and a first feature amount field 334. The candidate region ID field 331 holds the identification number of each candidate region data item. The frame ID field 332 holds the ID of the frame in which the candidate region was detected. The coordinate field 333 holds the coordinates of the candidate region in the source frame. The coordinates are expressed, for example, in the form "horizontal coordinate of the upper-left corner, vertical coordinate of the upper-left corner, horizontal coordinate of the lower-right corner, vertical coordinate of the lower-right corner" of the region rectangle. Although the region is given as a rectangle for ease of explanation, any region representation can be used. The first feature amount field 334 holds the feature amount of the candidate region extracted by the first feature extraction unit 106.
The saliency region table 340 has a saliency region ID field 341, a representative candidate region ID field 342, a related candidate region ID list field 343, and a second feature amount field 344. The saliency region ID field 341 holds the identification number of each saliency region data item. The representative candidate region ID field 342 holds the ID of the candidate region chosen as the saliency region. The related candidate region ID list field 343 holds the list of IDs of the candidate regions related to the saliency region. The second feature amount field 344 holds the search feature amount of the saliency region extracted by the second feature extraction unit 108.
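As one way this table layout could be realized, the sketch below creates the five tables in SQLite. The column types, the comma-separated encoding of the ID list fields, and the BLOB storage of feature vectors are implementation assumptions, not part of the specification.

```python
import sqlite3

schema = """
CREATE TABLE video     (video_id INTEGER PRIMARY KEY, file_path TEXT,
                        frame_id_list TEXT);           -- comma-separated IDs
CREATE TABLE scene     (scene_id INTEGER PRIMARY KEY, frame_id_list TEXT);
CREATE TABLE frame     (frame_id INTEGER PRIMARY KEY, video_id INTEGER,
                        scene_id INTEGER, candidate_id_list TEXT,
                        salient_id_list TEXT, frame_feature BLOB);
CREATE TABLE candidate (candidate_id INTEGER PRIMARY KEY, frame_id INTEGER,
                        x1 INTEGER, y1 INTEGER,        -- upper-left corner
                        x2 INTEGER, y2 INTEGER,        -- lower-right corner
                        first_feature BLOB);
CREATE TABLE salient   (salient_id INTEGER PRIMARY KEY,
                        representative_candidate_id INTEGER,
                        related_candidate_id_list TEXT, second_feature BLOB);
"""

conn = sqlite3.connect("video_search.db")
conn.executescript(schema)
conn.commit()
```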
In light of the above, the image search device described in this embodiment comprises: an input unit to which a plurality of images are input; a first extraction unit that extracts a plurality of first regions from the plurality of images and extracts a first feature amount from each first region; a region determination unit that selects a first feature amount having a low appearance frequency from the distribution of the plurality of first feature amounts extracted from the plurality of images, and specifies the first region containing the selected first feature amount as a second region; a storage unit that stores the first feature amount extracted from the second region, the second region, and the image from which the second region was extracted; and a search unit that performs a search using the first feature amount.
By first evaluating the distribution of first feature amounts by appearance frequency, partial regions that appear frequently and would only add search noise can be excluded, and partial regions useful for search can be identified. Accumulating and searching over only the feature amounts extracted from the partial regions so identified (the second regions) reduces the number of registered data items and improves search speed.
<Processing flow>
FIG. 5 is a flowchart explaining the process in which the video search device 104 according to the first embodiment of the present invention detects regions in the video input from the video storage device 101 and registers them in the video database 109. Each step of FIG. 5 is described below.
(FIG. 5: Step S501)
The video input unit 105 acquires video from the video storage device 101 and converts it into a format usable inside the system. Specifically, the video input unit 105 decodes the video and extracts frames (still images).
(FIG. 5: Step S502)
The video input unit 105 extracts an image feature amount from each frame obtained in step S501.
(FIG. 5: Step S503)
The first feature extraction unit 106 detects, in each frame obtained in step S501, regions likely to contain an object and treats them as candidate regions.
(FIG. 5: Step S504)
The first feature extraction unit 106 extracts from each candidate region obtained in step S503 a first feature amount intended for use in saliency determination.
(FIG. 5: Step S505)
The region determination unit 107 determines whether a scene change has occurred, using the frame feature amounts extracted in step S502 or the first feature amounts of the candidate regions extracted in step S504. If a scene change has occurred, steps S506 onward are executed for the data of the scene up to that point; otherwise, the process moves to step S508.
(FIG. 5: Step S506)
The region determination unit 107 determines the saliency regions among all the candidate regions included in the scene.
(FIG. 5: Step S507)
The second feature extraction unit 108 extracts from the saliency regions identified in step S506 a second feature amount intended for use in search.
(FIG. 5: Step S508)
The video search device 104 associates the video, frame, scene, candidate region, and saliency region information and registers it in the video database 109. The data may be registered in the video database 109 incrementally, after the processing of each preceding functional unit, or all at once after the series of processing for the frame is complete.
(FIG. 5: Step S509)
If a next frame exists in the video storage device 101, the video search device 104 returns to step S501 and repeats the series of registration steps described above; otherwise, it ends the registration process.
FIG. 6 is a diagram for explaining the process in which, in the video search system 100 according to the first embodiment of the present invention, the video search device 104 searches the videos registered in the video database 109 using a query specified by the user.
The user inputs information serving as a clue in order to retrieve the desired video from the video database 109. With similar-image search, images having features similar to those of an image supplied by the user can be found in the database. When the search target is an object shown in part of an image, the user may be asked to specify the target region (601). Also, for example, by managing text information describing a specific object in association with images, an image to be used for similar-image search can be supplied from text entered by the user.
The second feature amount is extracted from the query image supplied by the user in this way (602). Using the obtained feature vector 603, a similar-image search is executed against the video database 109 (604). Similar-image search is a process of finding images with close features, and the distance between feature vectors can be regarded as the dissimilarity. Also, computing exp(-d) × 100 from the distance d yields a value between 0 and 100, which may be used as the similarity. The search results 605 are, for example, sorted in descending order of similarity and presented to the user.
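A minimal sketch of this scoring and ranking, directly following the exp(-d) × 100 definition above:

```python
import numpy as np

def similarity(query_vec, db_vec):
    """Map the Euclidean distance d between two feature vectors to a
    0-100 similarity score via exp(-d) * 100, as described above."""
    d = float(np.linalg.norm(query_vec - db_vec))
    return np.exp(-d) * 100.0

def rank_results(query_vec, db_vectors):
    """Sort database entries in descending order of similarity."""
    scores = [(i, similarity(query_vec, v)) for i, v in enumerate(db_vectors)]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```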
The search process above uses only saliency region information, but because the video search device 104 of the present invention also retains candidate region information, it can perform a re-search that exploits it. The re-search over the candidate regions within a scene can be toggled by an option (610).
To re-search within a scene, the first feature amount 612 is first re-extracted from the query specified by the user (611). Using the obtained first feature amount, a search is performed over the candidate regions related to the saliency regions in the search result 605 obtained with the second feature amount (613). The first feature amounts have not undergone the clustering preprocessing that accelerates search, but since the number of candidate regions related to the search result 605 is limited, this can be executed without heavy load. As a result, a search result 614 that also takes into account the similarity of the candidate regions computed from the first feature amounts can be presented to the user.
FIG. 7 is a flowchart explaining the process in which the video search device 104 according to the first embodiment of the present invention searches the videos registered in the video database 109 using a query specified by the user. Each step of FIG. 7 is described below.
(FIG. 7: Step S701)
The user specifies a search query using the input device 102.
(FIG. 7: Step S702)
The video search unit 110 extracts the second feature amount from the image specified by the user. The second feature amount is extracted by the same procedure as at registration time.
(FIG. 7: Step S703)
The video search unit 110 uses the second feature amount obtained in step S702 to search the video database 109 for saliency regions with close feature amounts.
(FIG. 7: Step S704)
If the user has requested an in-scene re-search, the video search device 104 executes the processing from step S705 onward; otherwise, it moves to step S707.
(FIG. 7: Step S705)
The video search unit 110 extracts the first feature amount from the image specified by the user in step S701.
(FIG. 7: Step S706)
The video search unit 110 uses the first feature amount obtained in step S705 to search, among the candidate regions related to the saliency regions in the search result of step S703, for regions with close feature amounts, and reflects the outcome in the search result.
(FIG. 7: Step S707)
The video search device 104 outputs the search result to the display device 103 and ends the process.
FIG. 8 is a diagram showing a configuration example of an operation screen for registering video data and performing a video search focusing on an object in a frame, using the video search device 104 according to the first embodiment of the present invention. This screen is presented to the user on the display device 103. The user gives processing instructions to the video search device 104 by operating a cursor 801 shown on the screen with the input device 102.
The operation screen of FIG. 8 has a data registration button 802, a registration option designation area 803, a query load button 804, a query image display area 805, a search option designation area 806, a search button 807, and a search result display area 808.
When the user clicks the data registration button, the video search device 104 reads the video stored in the video storage device 101 and registers it in the video database 109. All data may be registered, or the user may be asked to designate the video files to register. The registration option designation area 803 may also allow all data to be registered without saliency determination, as in conventional systems.
After the registration process finishes, the user clicks the query load button 804 and loads an image to serve as a search clue. The loaded image is displayed in the query image display area 805. The user selects an object region in the image as needed. Using the search option designation area 806, the user can, for example, switch the search target between the whole frame, saliency regions, and candidate regions. Depending on the region designated here, the feature amount extracted from the query image and the search target change. When the user clicks the search button 807, the video search device 104 searches the video database 109 for similar videos. The search results are displayed in the search result display area 808. The search result display area 808 can further improve the usability of the results by also providing video thumbnails, similarity scores, the time position within the video, and operation buttons for playback and for exporting data to external applications.
FIG. 9 is a diagram explaining the processing sequence of the video search system 100 according to the first embodiment of the present invention; specifically, it shows the processing sequence among the user 900, the video storage device 101, the computer 901, and the video database 109 in the video registration and video search processing of the video search system 100 described above. The computer 901 is the computer that realizes the video search device 104. Each step of FIG. 9 is described below.
In the sequence diagram of FIG. 9, S910 represents the video registration process and S930 the video search process.
In the video registration process S910, when the user 900 issues a registration start request (S911), the computer 901 acquires video data from the video storage device 101 (S912, S913). The subsequent processing corresponds to the series of registration steps described above for FIG. 5. The computer 901 cuts frames out of the video (S914), extracts the frame feature amounts (S915), and then extracts many candidate regions from each frame (S916). The computer 901 extracts the first feature amount from each obtained candidate region (S917). The computer 901 detects a scene change (S918) and performs saliency determination on the candidate regions in the scene (S919). It extracts the second feature amounts for the obtained saliency regions (S920), and registers the video, scene, frame, candidate region, and saliency region information in the video database 109 in association with one another (S921). When registration of all target videos and frames is complete, the computer 901 notifies the user 900 that the registration process has finished (S922).
The video search process S930 corresponds to the series of search steps described above for FIG. 7. When the user 900 issues a search request to the computer 901 (S931), the computer 901 extracts the second feature amount from the given query image (S932) and performs a similar-image search against the video database 109 using it (S933). If the user 900 requests an in-scene re-search, the computer 901 extracts the first feature amount from the query image (S934) and performs a similar-image search over the candidate regions related to the saliency regions obtained in step S933 (S935). These results are integrated to generate a search result screen (S936), and the search results are presented to the user 900 (S937).
FIG. 11 is a flowchart explaining the processing flow of saliency determination in the region determination unit. Each step of FIG. 11 is described below.
(FIG. 11: Step S1101)
The saliency region determination unit 107 performs clustering on the candidate regions in the scene. A known algorithm such as K-means clustering can be applied to the clustering process.
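An illustrative sketch of this step combined with the small-cluster criterion described earlier: the candidate regions of a scene are clustered with K-means, and members of unusually small clusters are treated as saliency candidates. The cluster count and the size cutoff are assumed values, not parameters from the specification.

```python
import numpy as np
from sklearn.cluster import KMeans

def salient_by_small_cluster(features, n_clusters=20, frac_cutoff=0.02):
    """features: (N, D) array of first feature amounts from one scene.
    Returns indices of regions falling into rare (small) clusters."""
    k = min(n_clusters, len(features))          # guard for small scenes
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
    counts = np.bincount(labels, minlength=k)
    cutoff = max(1, int(len(features) * frac_cutoff))
    rare = {c for c in range(k) if counts[c] <= cutoff}
    return [i for i, lab in enumerate(labels) if lab in rare]
```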
(FIG. 11: Step S1102)
The saliency area determination unit 107 calculates a representative vector for each cluster obtained in step S1101. For example, the mean of the feature vectors belonging to the cluster can be used as the representative vector.
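A minimal sketch of steps S1101–S1102, assuming the scene's first feature amounts are stacked into a NumPy matrix; with K-means, the cluster centers directly serve as the mean-based representative vectors:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_scene_candidates(f1_matrix, n_clusters=8, seed=0):
    """S1101-S1102 sketch: cluster the scene's first feature amounts and
    return (labels, representative vectors, in-scene member counts)."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    labels = km.fit_predict(f1_matrix)                   # S1101
    reps = km.cluster_centers_                           # S1102: per-cluster mean
    counts = np.bincount(labels, minlength=n_clusters)   # in-scene frequency
    return labels, reps, counts

# usage (toy data):
# f1 = np.random.rand(200, 64)
# labels, reps, counts = cluster_scene_candidates(f1)
```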
(FIG. 11: Step S1103)
The saliency area determination unit 107 performs a similar-image search against the registered data using the representative vectors obtained in step S1102. Because the first feature amounts have not been preprocessed for fast search, computing the similarity against all registered data may be impractical. One option is therefore random sampling, comparing against only a predetermined number of registered entries. Moreover, since the only information this step needs is the number of similar registered entries, another option is to partition the feature space in advance: when the first feature amount of a candidate area is registered, the subspace it belongs to is determined and a per-subspace counter of candidate areas is incremented. Simply looking up this counter then yields the frequency of registered data similar to a representative vector.
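The pre-partitioning idea can be realized with coarse vector quantization: each cell of a small codebook acts as one subspace, and registration only increments that cell's counter. A sketch under that assumption (the codebook itself is not specified by the embodiment):

```python
import numpy as np
from sklearn.cluster import KMeans

class SubspaceCounter:
    """Sketch of the per-subspace counters described for step S1103.
    The K-means codebook defining the partition is an assumption."""

    def __init__(self, training_f1, n_cells=256, seed=0):
        self.codebook = KMeans(n_clusters=n_cells, random_state=seed,
                               n_init=10).fit(training_f1)
        self.counts = np.zeros(n_cells, dtype=np.int64)

    def register(self, f1_vec):
        # Called when a candidate area's first feature amount is registered.
        cell = int(self.codebook.predict(f1_vec[None, :])[0])
        self.counts[cell] += 1

    def out_of_scene_frequency(self, rep_vec):
        # O(1) lookup replacing a similarity search over all registered data.
        cell = int(self.codebook.predict(rep_vec[None, :])[0])
        return int(self.counts[cell])
```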
(FIG. 11: Step S1104)
For each representative vector, the saliency area determination unit 107 determines saliency from the ratio of the in-scene frequency obtained in step S1101 (the number of cluster members) to the out-of-scene frequency obtained in step S1103 (the number of search results whose similarity is at least a predetermined value). For each cluster whose saliency is at least a predetermined value, the candidate area whose feature amount is closest to the representative vector is output as a saliency area.
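Step S1104 then reduces to comparing two frequencies per cluster. A sketch assuming saliency is the smoothed in-scene/out-of-scene ratio, reusing the hypothetical `SubspaceCounter` from the sketch above:

```python
import numpy as np

def select_saliency_areas(f1_matrix, labels, reps, counts, counter,
                          threshold=2.0):
    """S1104 sketch: clusters frequent in this scene but rare elsewhere
    are salient; emit the member closest to each representative vector."""
    salient = []
    for c, rep in enumerate(reps):
        out_freq = counter.out_of_scene_frequency(rep)
        saliency = counts[c] / (out_freq + 1.0)   # +1 avoids division by zero
        if saliency >= threshold:
            members = np.where(labels == c)[0]
            dists = np.linalg.norm(f1_matrix[members] - rep, axis=1)
            salient.append(int(members[np.argmin(dists)]))
    return salient  # indices of candidate areas chosen as saliency areas
```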
FIG. 12 is an explanatory diagram of saliency determination based on region tracking. Temporally consecutive frames within the same scene are likely to contain overlapping image information. Candidate areas are therefore associated across frames so that objects are tracked, and an unchanged object is not registered redundantly. In addition, the movement amount of each candidate area is computed: an object that moves little relative to the motion of the whole screen is judged to have low saliency and only a minimal number of its regions are registered, whereas an object with large relative movement is judged to have high saliency and more of its regions are registered.
FIG. 13 is a flowchart illustrating the flow of saliency determination based on region tracking. Each step of FIG. 13 is described below.
(FIG. 13: Step S1301)
The saliency area determination unit 107 associates the candidate areas in the scene across adjacent frames. For example, the similarity of the first feature amounts or the coordinate values may be used for the association. To handle cases where tracking is interrupted by occlusion or the like, the association may also tolerate a predetermined number of missing frames.
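One plausible reading of the S1301 association rule combines a coordinate gate with first-feature similarity in a greedy match; the thresholds and the scoring mix below are assumptions:

```python
import numpy as np

def associate(prev_regions, next_regions, max_dist=50.0, min_sim=0.8):
    """S1301 sketch: greedily match candidate areas of adjacent frames.
    Each region is a tuple (center_xy: np.ndarray, f1: np.ndarray)."""
    pairs, used = [], set()
    for i, (c_i, f_i) in enumerate(prev_regions):
        best, best_sim = None, min_sim
        for j, (c_j, f_j) in enumerate(next_regions):
            if j in used or np.linalg.norm(c_i - c_j) > max_dist:
                continue  # coordinate gate
            sim = float(np.dot(f_i, f_j) /
                        (np.linalg.norm(f_i) * np.linalg.norm(f_j) + 1e-9))
            if sim > best_sim:
                best, best_sim = j, sim
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs  # unmatched regions may still be linked across a frame gap
```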
(FIG. 13: Step S1302)
The saliency area determination unit 107 calculates, from each candidate-area trajectory obtained in step S1301, the duration of the trajectory, its movement path, and its movement amount.
(FIG. 13: Step S1303)
The saliency area determination unit 107 calculates the average movement amount over all trajectories obtained in step S1302, and determines saliency from each trajectory's deviation from that average together with its duration. For each trajectory whose saliency is at least a predetermined value, one or more candidate areas are selected as saliency areas. For example, a candidate area can be selected from within the trajectory using information such as region size, edge strength, and low blur (frames with little change in movement amount).
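Steps S1302–S1303 might be computed as follows; the trajectory representation and the particular saliency formula (relative deviation from the average movement, weighted by log duration) are illustrative assumptions:

```python
import numpy as np

def trajectory_saliency(trajectories, min_saliency=1.5):
    """S1302-S1303 sketch. Each trajectory is a list of center points
    (np.ndarray of shape (2,)) for one tracked candidate area."""
    moves = [np.linalg.norm(np.diff(np.stack(t), axis=0), axis=1).sum()
             for t in trajectories]                      # S1302: movement amount
    mean_move = np.mean(moves) if moves else 0.0         # proxy for screen motion
    selected = []
    for t, move in zip(trajectories, moves):
        duration = len(t)
        saliency = abs(move - mean_move) / (mean_move + 1e-9) \
                   * np.log1p(duration)                  # S1303: deviation x duration
        if saliency >= min_saliency:
            selected.append(t)
    return selected
```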
In the video search device 104 described above, the first feature amount is used to classify the candidate areas within a scene. The extraction algorithm for the first feature amount may therefore be varied according to the characteristics of the scene. For example, for video shot in a dark place, candidate areas in the scene can be classified more effectively using only shape and motion information rather than color features. Even without changing the extraction algorithm itself, parameters such as luminance correction may be adjusted per scene. FIG. 14 is a flowchart showing the switching of the first feature amount by scene discrimination. Each step of FIG. 14 is described below, followed by a small dispatch sketch.
(FIG. 14: Step S1401)
The video input unit 105 performs scene discrimination using the image feature amount extracted from the frame. The scene types, their corresponding parameters, and the first-feature extraction methods are configured when the system is built. For example, processing branches into extraction of a first feature amount emphasizing shape, color, or motion, as follows.
(FIG. 14: Steps S1411, S1412)
The first feature amount extraction unit 106 extracts a shape feature amount from each candidate area. The saliency area determination unit 107 performs saliency determination focusing on shape.
(FIG. 14: Steps S1421, S1422)
The first feature amount extraction unit 106 extracts a color feature amount from each candidate area. The saliency area determination unit 107 performs saliency determination focusing on color.
(FIG. 14: Steps S1431, S1432)
The first feature amount extraction unit 106 extracts a motion feature amount from each candidate area. The saliency area determination unit 107 performs saliency determination focusing on motion.
Note that when the first feature amount is switched by scene discrimination, the out-of-scene candidate areas used in the frequency-based saliency determination described with reference to FIG. 10 are limited to regions extracted with the same method.
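The branch S1401 → {S1411, S1421, S1431} amounts to a lookup from scene type to extractor, with the method identifier kept alongside each vector so that frequency comparisons stay within one extraction method. A hypothetical sketch with toy shape and color extractors:

```python
import numpy as np

def shape_feature(region: np.ndarray) -> np.ndarray:
    # Toy stand-in for a shape feature: edge-orientation histogram.
    gy, gx = np.gradient(region.mean(axis=2))
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=16, range=(-np.pi, np.pi))
    return hist / (hist.sum() + 1e-9)

def color_feature(region: np.ndarray) -> np.ndarray:
    # Toy stand-in for a color feature: 4x4x4 RGB histogram.
    hist, _ = np.histogramdd(region.reshape(-1, 3), bins=(4, 4, 4),
                             range=((0, 256),) * 3)
    return hist.ravel() / (hist.sum() + 1e-9)

# Hypothetical scene-type -> extractor table, configured at system build time.
EXTRACTORS = {
    "dark": shape_feature,     # dark scenes: rely on shape, not color
    "default": color_feature,
}

def first_feature(scene_type: str, region: np.ndarray):
    method = scene_type if scene_type in EXTRACTORS else "default"
    # The method id is stored with the vector so that frequency-based
    # saliency only compares regions extracted with the same method.
    return method, EXTRACTORS[method](region)
```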
Based on the above, the image search method described in this embodiment includes: a first step in which a plurality of images are input; a second step of extracting a plurality of first regions from the plurality of images and extracting a first feature amount from each first region; a third step of selecting, from the distribution of the plurality of first feature amounts extracted from the plurality of images, a first feature amount whose appearance frequency is low, and specifying the first region containing the selected first feature amount as a second region; a fifth step of storing, in a storage unit, the first feature amount extracted from the second region, the second region, and the image from which the second region was extracted; and a sixth step of performing a search using the second feature amount.
By first evaluating the distribution of the first feature amounts by appearance frequency, partial regions that appear frequently and would act as search noise can be excluded, and partial regions useful for search can be identified. Storing and searching over feature amounts extracted only from the partial regions (second regions) identified in this way reduces the number of registered entries and improves search speed.
100: video search system; 101: video storage device; 102: input device; 103: display device; 104: video search device; 105: video input unit; 106: first feature amount extraction unit; 107: region determination unit; 108: second feature amount extraction unit; 109: video database; 201: processor; 202: storage device; 203: processing program; 204: network interface device; 802: data registration button; 803: registration option designation area; 804: query read button; 805: query image display area; 806: search option designation area; 807: search button; 808: search result display area.
Claims (13)
- 1. An image search device comprising:
an input unit to which a plurality of images are input;
a first extraction unit that extracts a plurality of first regions from the plurality of images and extracts a first feature amount from each of the first regions;
a region determination unit that selects, from the distribution of the plurality of first feature amounts extracted from the plurality of images, a first feature amount whose appearance frequency is low, and specifies the first region containing the selected first feature amount as a second region;
a storage unit that stores the first feature amount extracted from the second region, the second region, and the image from which the second region was extracted; and
a search unit that performs a search using the first feature amount.
- 2. The image search device according to claim 1, further comprising a second extraction unit that extracts an image feature amount from the second region as a second feature amount,
wherein the storage unit stores the second feature amount, and
the search unit performs the search using the second feature amount instead of the first feature amount.
- 3. The image search device according to claim 2,
wherein the region determination unit includes a scene detection unit that detects a scene composed of a plurality of temporally continuous images, and
the region determination unit specifies the second region from within a first scene by comparing first scene image feature amounts, which are the plurality of first feature amounts extracted from the plurality of first regions included in the first scene, with second scene feature amounts, which are the plurality of first image feature amounts extracted from the plurality of first regions included in a second scene different from the first scene.
- 4. The image search device according to claim 2,
wherein the region determination unit includes a region tracking unit that detects a plurality of temporally continuous first regions and calculates a movement amount of the first region as a first feature amount, and
the region determination unit specifies, as the second region, a first region whose movement amount is larger than the movement amount of the whole image containing that first region.
- 5. The image search device according to claim 3,
wherein the region determination unit further includes a region tracking unit that calculates a movement amount of the first region within a scene, and
the region determination unit specifies the second region by comparing a first movement amount, which is the movement amount of the first region included in the first scene, with a second movement amount, which is the movement amount of the first region included in a second scene different from the first scene.
- 6. The image search device according to claim 5,
wherein the storage unit further stores, as a third region corresponding to the second region, a first region that was used to calculate the first movement amount but was not specified as a second region,
the search unit performs a search using a query image feature amount extracted from a query image input as a query and a third image feature amount extracted from the third region, and
the third image feature amount has a smaller amount of information than the second image feature amount.
- 7. An image search method comprising:
a first step in which a plurality of images are input;
a second step of extracting a plurality of first regions from the plurality of images and extracting a first feature amount from each of the first regions;
a third step of selecting, from the distribution of the plurality of first feature amounts extracted from the plurality of images, a first feature amount whose appearance frequency is low, and specifying the first region containing the selected first feature amount as a second region;
a fourth step of storing, in a storage unit, the first feature amount extracted from the second region, the second region, and the image from which the second region was extracted; and
a fifth step of performing a search using the first feature amount.
- 8. The image search method according to claim 7, further comprising a sixth step of extracting an image feature amount from the second region as a second feature amount,
wherein in the fourth step, the second feature amount is stored in the storage unit, and
in the fifth step, the search is performed using the second feature amount instead of the first feature amount.
- 9. The image search method according to claim 8,
wherein the processing state in the third step includes a first processing state of detecting a scene composed of a plurality of temporally continuous images, and
in the third step, the second region is specified from within a first scene by comparing first scene image feature amounts, which are the plurality of first feature amounts extracted from the plurality of first regions included in the first scene, with second scene feature amounts, which are the plurality of first image feature amounts extracted from the plurality of first regions included in a second scene different from the first scene.
- 10. The image search method according to claim 8,
wherein the processing state in the third step includes a second processing state of detecting a plurality of temporally continuous first regions and calculating a movement amount of the first region as a first feature amount, and
in the third step, a first region whose movement amount is larger than the movement amount of the whole image containing that first region is specified as the second region.
- 11. The image search method according to claim 9,
wherein the processing state in the third step further includes a first processing state of calculating a movement amount of the first region within a scene, and
in the third step, the second region is specified by comparing a first movement amount, which is the movement amount of the first region included in the first scene, with a second movement amount, which is the movement amount of the first region included in a second scene different from the first scene.
- 12. The image search method according to claim 11,
wherein in the fourth step, a first region that was used to calculate the first movement amount but was not specified as a second region is further stored in the storage unit as a third region corresponding to the second region, and
in the fifth step, a search is performed using a query image feature amount extracted from a query image input as a query and a third image feature amount extracted from the third region.
- 13. An information recording medium on which a program is recorded, the program causing a computer to execute:
first means for receiving a plurality of images;
second means for extracting a plurality of first regions from the plurality of images and extracting a first feature amount from each of the first regions;
third means for selecting, from the distribution of the plurality of first feature amounts extracted from the plurality of images, a first feature amount whose appearance frequency is low, and specifying the first region containing the selected first feature amount as a second region;
fourth means for storing, in a storage unit, the first feature amount extracted from the second region, the second region, and the image from which the second region was extracted; and
fifth means for performing a search using the second feature amount.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
PCT/JP2015/051433 | 2015-01-21 | 2015-01-21 | Image search device, image search method, and information storage medium

Publications (1)
Publication Number | Publication Date
---|---
WO2016117039A1 | 2016-07-28
Family ID: 56416605
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304506A (en) * | 2018-01-18 | 2018-07-20 | 腾讯科技(深圳)有限公司 | Search method, device and equipment |
CN108304506B (en) * | 2018-01-18 | 2022-08-26 | 腾讯科技(深圳)有限公司 | Retrieval method, device and equipment |
WO2019156043A1 (en) * | 2018-02-06 | 2019-08-15 | 日本電信電話株式会社 | Content determination device, content determination method, and program |
JP2019139326A (en) * | 2018-02-06 | 2019-08-22 | 日本電信電話株式会社 | Content determination device, content determination method, and program |
WO2021131343A1 (en) * | 2019-12-26 | 2021-07-01 | 株式会社ドワンゴ | Content distribution system, content distribution method, and content distribution program |
JP2021106324A (en) * | 2019-12-26 | 2021-07-26 | 株式会社ドワンゴ | Content distribution system, content distribution method, and content distribution program |
JP2021106378A (en) * | 2019-12-26 | 2021-07-26 | 株式会社ドワンゴ | Content distribution system, content distribution method, and content distribution program |
JP7408506B2 (en) | 2019-12-26 | 2024-01-05 | 株式会社ドワンゴ | Content distribution system, content distribution method, and content distribution program |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 15878734; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: PCT application non-entry in European phase | Ref document number: 15878734; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: JP