CN117743633A - Target retrieval method and device - Google Patents
Target retrieval method and device
- Publication number: CN117743633A
- Application number: CN202311792281.4A
- Authority: CN (China)
- Legal status: Pending
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The embodiment of the application provides a target retrieval method and device. All target information is stored in advance, classified by target category. During video playback, any target information that satisfies a post-retrieval condition entered by the user can be retrieved, so even if a retrieval rule was set earlier, the retrieval condition can be changed at any time to run a new search under the new condition. Because all target information is stored in advance, target information satisfying the post-retrieval condition can be found without traversing all video recordings. After the target information is obtained, whether a target belonging to rule-matching target information appears is marked on the same progress bar using different display modes, so the rule-matching video data reviewed by the user is continuous and the user need not spend effort checking the time order of scattered video clips; the user can also skip, speed up, or slow down a given video clip as needed. This reduces search time, improves target retrieval efficiency, and improves user experience.
Description
Technical Field
The present disclosure relates to the field of video monitoring technologies, and in particular, to a target retrieval method and apparatus.
Background
In the related art, there is a type of video search scenario: for recorded video data, the user may set region rules to find targets that appear in a region, for example to find a lost object. Patent document CN116821412A, for instance, discloses a technical solution that receives a rule region set by a user, obtains pre-stored target activity detection events of a monitored area, and determines whether the target track information in each target activity detection event intersects the rule region, so as to obtain the related target activity detection events and extract the target feature information in them as the search target; that is, the solution still needs to locate the features of the targets matching the rule region and then reuse those target features for searching. That patent document does not describe how targets are subsequently retrieved and presented based on the target features. To the applicant's knowledge of the related art, the retrieved video frames containing targets are typically displayed as an array of pictures or short video clips, for example a 4×6 or 5×5 array. This presentation is fragmented: the user has to click the video clips or pictures one by one to view their content, which is very inconvenient.
Disclosure of Invention
The embodiment of the application aims to provide a target retrieval method and device, so as to reduce target retrieval time, improve target retrieval efficiency and improve user experience. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a target retrieval method, where a video captured by an image acquisition device and target information for each video frame of the video in which a target appears are obtained in advance, the target information including: an identifier of the target, the category of the target, the position of the target in the video frame, and the shooting time of the video frame; each piece of target information is stored according to the category of the target it includes;
the method comprises the following steps:
displaying a playback window with a first time progress bar and playing the video through the playback window;
receiving a rule configuration operation input for a video frame displayed by the playback window, and identifying a post-screening condition indicated by the rule configuration operation;
searching target information meeting the post-screening condition in target information stored in advance in the searching equipment, and acquiring target identifiers contained in the searched target information as screening identifiers;
searching target information comprising the screening identification, and acquiring shooting time corresponding to the searched target information as target time;
Displaying the target time by using a first display mode and displaying other times except the target time by using a second display mode on the first time progress bar, wherein the first display mode is different from the second display mode;
under the condition that a first configuration operation is received, responding to the first configuration operation, obtaining input playing configuration information, and playing video according to the playing configuration information; wherein the play configuration information includes at least one of: accelerating and playing video clips conforming to rules according to specified multiples, decelerating and playing video clips conforming to rules according to specified multiples, accelerating and playing video clips not conforming to rules according to specified multiples, decelerating and playing video clips not conforming to rules according to specified multiples, skipping video clips not conforming to rules, and not skipping video clips not conforming to rules, wherein the video clips conforming to rules are video clips shot at the target time, and the video clips not conforming to rules are video clips shot at other times except the target time; and/or under the condition that a first trigger operation is received, responding to the first trigger operation, and switching the current playing picture to play from the first time corresponding to the first trigger operation.
In a second aspect, an embodiment of the present application provides a target retrieval device, where a video captured by an image acquisition device and target information for each video frame of the video in which a target appears are obtained in advance, the target information including: an identifier of the target, the category of the target, the position of the target in the video frame, and the shooting time of the video frame; each piece of target information is stored according to the category of the target it includes;
the device comprises:
the playback module is used for displaying a playback window with a first time progress bar and playing the video through the playback window;
the identification module is used for receiving rule configuration operation input for the video frames displayed by the playback window and identifying post-screening conditions indicated by the rule configuration operation;
the screening module is used for searching target information meeting the post-screening conditions in target information stored in the searching equipment in advance, and acquiring target identifiers contained in the searched target information to serve as screening identifiers;
the retrieval module is used for searching the target information comprising the screening identification, and acquiring shooting time corresponding to the searched target information as target time;
The first display module is used for displaying the target time in a first display mode and displaying other times except the target time in a second display mode on the first time progress bar, wherein the first display mode is different from the second display mode;
the first playing module is used for responding to the first configuration operation under the condition of receiving the first configuration operation, obtaining input playing configuration information and playing video according to the playing configuration information; wherein the play configuration information includes at least one of: accelerating and playing video clips conforming to rules according to specified multiples, decelerating and playing video clips conforming to rules according to specified multiples, accelerating and playing video clips not conforming to rules according to specified multiples, decelerating and playing video clips not conforming to rules according to specified multiples, skipping video clips not conforming to rules, and not skipping video clips not conforming to rules, wherein the video clips conforming to rules are video clips shot at the target time, and the video clips not conforming to rules are video clips shot at other times except the target time; and/or under the condition that a first trigger operation is received, responding to the first trigger operation, and switching the current playing picture to play from the first time corresponding to the first trigger operation.
In one possible implementation, the rule configuration operation includes a line drawing operation;
the identification module comprises:
a first identification sub-module, configured to obtain the post-screening condition based on the contour line of the region of interest indicated by the line drawing operation, the post-screening condition being: the track formed by the positions and shooting times included in the pieces of target information of a target intersects the contour line;
the screening module comprises:
and a first screening sub-module, configured to search, in the target information stored in advance in the retrieval device, for target information whose track formed by the included positions and shooting times intersects the contour line, and to obtain the target identifiers included in the found target information as screening identifiers.
In one possible implementation, the rule configuration operation further includes a category selection operation;
the screening first sub-module comprises:
a screening first unit, configured to identify a category indicated by the category selection operation as a target category;
a screening second unit, configured to search target information stored in advance corresponding to the target category with the target category as an index, as candidate target information;
And a third screening unit, configured to search, in the candidate target information, for target information whose track formed by the included positions and shooting times intersects the contour line, and to obtain the target identifiers included in the found target information as screening identifiers.
In one possible implementation manner, the first timeline is a timeline for indicating a duration in which the video has been played, and the first display module includes:
a first display sub-module, configured to determine, as a first duration, the length of time required for the playback window to play to the video clip shot at the target time;
a second display sub-module, configured to determine the sub-progress bar that represents the first duration on the first time progress bar as a first sub-progress bar;
and a third display sub-module, configured to display, on the first time progress bar, the first sub-progress bar using a first display mode and a second sub-progress bar using a second display mode, where the second sub-progress bar is the sub-progress bars on the first time progress bar other than the first sub-progress bar.
In one possible embodiment, the apparatus further comprises:
The second display module is used for displaying a second time progress bar, wherein the time displayed by the second time progress bar only comprises the target time and does not comprise the time beyond the target time;
the second playing module is used for responding to the second triggering operation under the condition that the second triggering operation aiming at the second time progress bar is received, and only video clips meeting the rule are played in the playback window; and/or under the condition that a second configuration operation aiming at the second time progress bar is received, responding to the second configuration operation, obtaining input playing configuration information, and playing video according to the playing configuration information; and/or synchronizing the current playing time on the first time progress bar to the second time progress bar, or synchronizing the current playing time on the second time progress bar to the first time progress bar, so that the icons used for indicating the current playing time on the two time progress bars indicate the same time.
In one possible embodiment, the apparatus further comprises:
the sequencing module is used for sequencing the target time according to the time sequence to obtain a time sequence;
A combination module, configured to combine adjacent times in the time sequence to obtain a plurality of continuous time periods;
the merging module is used for merging the overlapped time periods in the continuous time periods to obtain a target time period;
the first display module includes:
and a fourth display sub-module, configured to display the target time period using a first display mode on the first time progress bar of the playback window.
In one possible embodiment, the apparatus further comprises:
a positioning module, configured to obtain the position in the video frame of the target to which the target information satisfying the post-screening condition belongs, and the target time;
and a frame selection module, configured to box the target in the video frame with a target frame based on the position of the target in the video frame corresponding to the target time.
In one possible embodiment, the apparatus further comprises:
the first storage module is used for storing all target information existing in the video if the full analysis mode is started;
and the second storage module is used for only storing target information meeting a pre-screening condition in the video if the full analysis mode is not started, wherein the pre-screening condition is a screening condition pre-configured before the target information is acquired.
In a possible implementation manner, the rule configuration operation further comprises a search type configuration operation, wherein the search type configuration operation is used for indicating to start an alarm search mode or a track search mode, and the search module comprises:
a fourth retrieval sub-module, configured to search, among the target information satisfying the pre-screening condition, for target information satisfying the post-screening condition if the retrieval type configuration operation indicates that the alarm retrieval mode is started;
and a fifth retrieval sub-module, configured to search all the target information for target information satisfying the post-screening condition if the retrieval type configuration operation indicates that the track retrieval mode is started.
In one possible implementation, when a target in a video frame cannot be classified, the category included in that video frame's target information is unclassified, and the retrieval module includes:
a sixth sub-module for searching for target information meeting the post-screening condition in first target information if the search mode is non-full type search, wherein the first target information is target information stored in advance by the search device and does not include target information of unclassified targets;
And a seventh sub-module, configured to search for target information that satisfies the post-screening condition in second target information if the search mode is full-type search, where the second target information is target information stored in advance in the search device and includes the first target information and target information of an unclassified target.
In one possible embodiment, the apparatus further comprises:
a classification module, configured to, if the target to which the target information belongs is an unclassified target, search for a video frame including the unclassified target, classify the unclassified target based on the found video frame to obtain a first post-classification category, and update the category included in the target information of the unclassified target from unclassified to the first post-classification category;
and/or,
the image acquisition equipment is used for shooting to obtain video, the video contains a plurality of video frames, and each time the video frames are shot, if the load of the image acquisition equipment is smaller than a preset load threshold value, the targets in the shot video frames are classified to obtain the categories of the targets; if the load of the image acquisition equipment is not smaller than the preset load threshold, determining the category of the target existing in the shot video frame as unclassified; the image acquisition equipment is also used for periodically polling the targets in the video frames, if the types of the polled targets are unclassified, acquiring the video frames with the polled targets, classifying the polled targets based on the acquired video frames to obtain a second post-classification type, and transmitting the second post-classification type and a target identifier for representing the polled targets to the retrieval equipment; the method further comprises the steps of: identifying a target represented by a target identifier as a target to be updated in response to a second post-classification category and the target identifier sent by the image acquisition equipment; and searching target information stored corresponding to the target to be updated by taking the target to be updated as an index, and updating the category included in the searched target information into the second post-classification category.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing a computer program;
and the processor is used for realizing any one of the target retrieval methods when executing the program stored in the memory.
The beneficial effects of the embodiment of the application are that:
According to the target retrieval method provided by the embodiment of the application, each piece of target information in the video is stored in advance, classified by category. When a user views playback video, any target information satisfying a post-retrieval condition entered during playback can be retrieved, so even if a retrieval rule was set earlier, the retrieval condition can be changed at any time to run a new search under the new condition. In addition, because the target information of all targets is stored in advance, target information satisfying the post-retrieval condition can be found without traversing all recordings. After the target information is found, whether a target belonging to rule-matching target information appears is marked on the same progress bar using different display modes, so the rule-matching video data reviewed by the user is continuous, the user need not spend effort checking the time order of scattered video clips, and the user can skip, speed up, or slow down a given video clip as needed. This shortens the search time, improves target retrieval efficiency, and improves user experience.
Of course, not all of the above-described advantages need be achieved simultaneously in practicing any one of the products or methods of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art may obtain other drawings from them.
Fig. 1 is a flow chart of a search method in the prior art according to an embodiment of the present application;
FIG. 2 is a schematic diagram of target alarm information storage of a prior art search method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a target retrieval method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of video recording and target information storage of the target retrieval method according to the embodiment of the present application;
FIG. 5 is a flow chart of a new search method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a target retrieval operation interface according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a target retrieval configuration interface provided in an embodiment of the present application;
FIG. 8 is a schematic flow chart of a cyclic search target according to an embodiment of the present disclosure;
fig. 9 is a first schematic diagram of a search result display manner provided in the embodiment of the present application;
fig. 10 is a second schematic diagram of a search result display manner provided in the embodiment of the present application;
fig. 11 is a third schematic diagram of a search result display manner provided in the embodiment of the present application;
fig. 12 is a schematic structural diagram of a target retrieval device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein fall within the protection scope of this disclosure.
For a clearer description of the target retrieval method provided in the present application, the following will explain the related terms referred to herein:
Post-search: the information of targets appearing in video frames is stored in advance, and the targets are subsequently retrieved by specifying a region.
Motion track: the chronological information of a target's appearances within a specific range.
Target classification: detected targets are classified along a certain dimension, for example as person, vehicle, cat, etc.
Retrieval device: the retrieval device herein may be a network video recorder (NVR), an analog video recorder, or another video recorder with the same function.
Image acquisition device: the image acquisition device herein may be a network camera or another video recording device with the same function.
With the spread of intelligent algorithms, a large amount of video is generated every day, most of which contains no targets and is of little interest to users; users care more about the video recorded around an event. Taking a perimeter detection algorithm as an example, the algorithm supports outputting target information of people and vehicles and records the alarm information (target information) that triggers the preset alarm rules. The user can afterwards select a specific area to search the video for targets that triggered the alarm rules, so as to obtain the useful video information before and after the event occurred.
Fig. 1 and fig. 2 show a video searching scheme in the prior art: fig. 1 is a schematic flow chart of a prior-art search method provided by an embodiment of the present application, and fig. 2 is a schematic diagram of target alarm information storage in the prior-art search method provided by an embodiment of the present application. As shown in fig. 1, after the video is acquired, the intelligent algorithm detects the targets that trigger the preset alarm rules, and after media processing, the target code stream containing the alarm information is stored in the hard disk space. The storage format is shown in fig. 2: the recording of the alarm target and the alarm information (i.e. the target information) are stored separately, the recording in the form of time and code stream, and the target information in the form of time and coordinates. When a user plays back a recording and needs to search within a newly defined area, all recordings must be traversed and the found recordings played back. The main disadvantage of this method is that when a new area is defined later, the target information triggering the alarm rules in the new area cannot be found quickly; every search has to traverse all recordings, so the search is slow.
In addition, after the target information satisfying the post-retrieval condition is retrieved, the corresponding video frames are obtained and displayed as an array of pictures or short video clips, for example a 4×6 or 5×5 array. This presentation is fragmented: the user has to click the video clips or pictures one by one, which is very inconvenient.
Based on the above, the present application provides a target retrieval method, in which a video shot by an image acquisition device and target information for each video frame of the video in which a target appears are acquired in advance, the target information including: an identifier of the target, the category of the target, the position of the target in the video frame, and the shooting time of the video frame; each piece of target information is stored according to the category of the target it includes;
as shown in fig. 3, fig. 3 is a flow chart of a target retrieval method provided in an embodiment of the present application, where the method includes:
s301, displaying a playback window with a first time progress bar and playing the video through the playback window.
S302, receiving rule configuration operation input for video frames displayed on a playback window, and identifying post-screening conditions indicated by the rule configuration operation.
S303, searching target information meeting the post-screening condition in target information stored in advance in the searching equipment, and acquiring target identifiers contained in the searched target information as screening identifiers.
S304, searching target information comprising the screening identification, and acquiring shooting time corresponding to the searched target information as target time.
S305, displaying the target time by using a first display mode and displaying other times except the target time by using a second display mode on the first time progress bar.
Wherein the first display mode is different from the second display mode.
S306, under the condition that the first configuration operation is received, responding to the first configuration operation, obtaining input playing configuration information, and playing video according to the playing configuration information; wherein the play configuration information includes at least one of: accelerating and playing video clips conforming to rules according to specified multiples, decelerating and playing video clips conforming to rules according to specified multiples, accelerating and playing video clips not conforming to rules according to specified multiples, decelerating and playing video clips not conforming to rules according to specified multiples, skipping video clips not conforming to rules, and not skipping video clips not conforming to rules.
The video clips which accord with the rules are video clips shot at the target time, and the video clips which do not accord with the rules are video clips shot at other times except the target time;
and/or,
under the condition that the first trigger operation is received, responding to the first trigger operation, and switching the current playing picture to play from the first time corresponding to the first trigger operation.
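For illustration only, the following sketch shows one way such play configuration information could drive playback over a timeline split into rule-matching and non-matching segments; the segment representation, configuration keys and speed values are assumptions of this sketch, not part of the claimed scheme.

```python
def playback_plan(segments, config):
    """segments: list of (start, end, matches_rule) tuples; returns one (start, end, action) per segment.

    config keys (illustrative): 'matching_speed', 'non_matching_speed', 'skip_non_matching'.
    """
    plan = []
    for start, end, matches_rule in segments:
        if matches_rule:
            # Rule-matching clip: play at the configured (possibly accelerated or decelerated) speed.
            plan.append((start, end, f"play at {config.get('matching_speed', 1.0)}x"))
        elif config.get("skip_non_matching", False):
            # Non-matching clip: skip entirely if the user configured skipping.
            plan.append((start, end, "skip"))
        else:
            plan.append((start, end, f"play at {config.get('non_matching_speed', 1.0)}x"))
    return plan
```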
With the target retrieval method provided by the application, each piece of target information in the video is stored in advance, classified by category. When a user views playback video, any target information satisfying a post-retrieval condition entered during playback can be retrieved, so even if a retrieval rule was set earlier, the retrieval condition can be changed at any time to run a new search under the new condition. In addition, because the target information of all targets is stored in advance, target information satisfying the post-retrieval condition can be found without traversing all recordings. After the target information is found, whether a target belonging to rule-matching target information appears is marked on the same progress bar using different display modes, so the rule-matching video data reviewed by the user is continuous, the user need not spend effort checking the time order of scattered video clips, and the user can skip, speed up, or slow down a given video clip as needed. This shortens the search time, improves target retrieval efficiency, and improves user experience.
For convenience of explanation of the foregoing S301 to S306, first, explanation will be given of the origin of the target information based on the target search method provided in the present application:
the image acquisition equipment is in communication connection with the retrieval equipment, after the image acquisition equipment shoots and obtains the video, the video is sent to the retrieval equipment, and the retrieval equipment detects targets appearing in video frames of the video to obtain target information of the targets. The detection can be performed by using a neural network or a traditional algorithm model.
In one possible embodiment, the targets may simply be detected and their target information obtained, where the target information includes the target identifier of the target present in the video frame, the category of the target, the position of the target in the video frame, and the shooting time of the video frame. The target may be any object that appears in the video frame, such as a person, a vehicle, an animal, etc. The target identifier may be generated according to a predetermined naming rule; for example, the target identifier of target A may be ID1 and that of target B may be ID2. A target may appear once or many times, and when there are multiple positions across multiple occurrences, the retrieval device may store those positions separately. The position of a target in a video frame may be represented by coordinates, for example the coordinates of the four vertices of the target's minimum bounding box, or the coordinates of the center of that bounding box.
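As an illustration only, a piece of target information as described above might be represented roughly as follows; the field names and types are assumptions of this sketch.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Tuple

@dataclass
class TargetInfo:
    """One record per target per video frame, as described above (field names are illustrative)."""
    target_id: str                       # e.g. "ID1", assigned by a predetermined naming rule
    category: str                        # e.g. "person", "vehicle", or "unclassified"
    position: Tuple[int, int, int, int]  # minimum bounding box (x1, y1, x2, y2) in the frame
    shot_time: datetime                  # shooting time of the video frame
```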
The objects may be classified by the image capturing device, or by the retrieving device, or by the video analyzing device, and how the objects are classified is described in detail below:
In one possible embodiment, the targets are classified by the image acquisition device, where the image acquisition device is configured to capture a video containing a plurality of video frames. When a video frame is shot, if the load of the image acquisition device is below a preset load threshold, the targets present in the shot frame are classified to obtain their categories, such as person, vehicle, animal, etc. If the load of the image acquisition device is not below the preset load threshold, the category of the targets present in the shot frame is set to unclassified. The preset load threshold is set by a technician according to working experience; for example, if the image acquisition device can classify the targets in at most 1000 video frames, the targets in the 1001st frame are marked as unclassified.
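A minimal sketch of this load-based decision follows; the classifier callable, the detection representation, and the way load is counted are assumptions made for illustration.

```python
PRESET_LOAD_THRESHOLD = 1000  # illustrative value: classify targets from at most 1000 frames per cycle

def categories_for_frame(frame, detections, frames_classified_so_far, classifier):
    """Classify the detected targets while the device load allows it; otherwise mark them unclassified."""
    if frames_classified_so_far < PRESET_LOAD_THRESHOLD:
        return [classifier(frame, d) for d in detections]    # below threshold: obtain real categories
    return ["unclassified" for _ in detections]               # at/over threshold: defer classification
```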
In addition to the above, when idle the image acquisition device may also periodically poll the targets present in the captured video frames. If a polled target's category is unclassified, the device obtains a video frame in which that target appears, classifies the target based on the obtained frame to obtain a second post-classification category, and sends the second post-classification category together with the target identifier representing the polled target to the retrieval device. In response to the second post-classification category and the target identifier sent by the image acquisition device, the retrieval device identifies the target represented by the target identifier as a target to be updated, then searches, using the target to be updated as an index, for the target information stored for it, and updates the category contained in the found target information to the second post-classification category. The polling period can be set by a technician according to working experience or relevant rules.
In another possible embodiment, the targets may also be classified by the retrieval device: during post-search, if a retrieved target is unclassified, the video frames containing that target are searched. The found video frame should be a clear frame that reflects the target's feature information. The retrieval device may run a classification neural network on that frame to obtain the category of the target in the frame as a first post-classification category, and update the category contained in the pre-stored target information of the unclassified target from unclassified to the first post-classification category.
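A rough sketch of this retrieval-side re-classification, assuming the TargetInfo records sketched above and a hypothetical classifier callable:

```python
def reclassify_at_retrieval(records_of_target, clear_frame, classifier):
    """If a retrieved target is still unclassified, classify it from a clear frame and update its records."""
    if all(r.category == "unclassified" for r in records_of_target):
        first_post_category = classifier(clear_frame)   # e.g. a neural-network classifier (assumed)
        for r in records_of_target:
            r.category = first_post_category            # update every stored record of this target
    return records_of_target
```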
By applying the embodiment, the image acquisition equipment and the retrieval equipment can classify the targets in the video frames, and the image acquisition equipment can also determine whether each target in the video frames is classified or not through regular round inspection, so that the classification efficiency is improved, the condition that more unclassified targets exist is avoided, and the subsequent retrieval efficiency is further improved.
In the process of detecting and classifying video frames, data unrelated to targets in the frames can be discarded and only target-related data retained. What counts as target-related or target-unrelated data in a video frame can be defined according to the application scene. For example, in traffic or community management, the target-unrelated data in a frame may be trees, trash cans, or street lamps, while the target-related data may be people, vehicles, or animals. In express parcel sorting, the target-unrelated data may be people or shadows, while the target-related data may be parcels or other items on the sorting table.
In one possible embodiment, each target information may be stored separately, but this storage may make subsequent retrieval less efficient.
Based on this, in another possible embodiment, each target information may be stored correspondingly according to a category of the target, where one target category corresponds to a set of target information.
The search device stores a plurality of target information, each target information corresponds to one target and one video frame (i.e. video recording), so that when the target information is stored, the video frame corresponding to the target information is also stored, i.e. the video recording is stored, the storage mode is shown in fig. 4, and fig. 4 is a schematic diagram of the video recording and the target information storage of the target search method provided by the embodiment of the application. And storing each code stream according to the time of shooting video. The target information comprises a target identification of the target, a category of the target, a position (coordinate) of the target in the video frame and shooting time of the video frame.
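The category-keyed storage described here might be sketched as follows; this is an in-memory illustration only, whereas the actual device stores records and code streams on disk as in fig. 4.

```python
from collections import defaultdict

class TargetInfoStore:
    """Stores target information grouped by target category so that a category can be used as an index."""
    def __init__(self):
        self._by_category = defaultdict(list)   # category -> list of TargetInfo records

    def add(self, info):
        self._by_category[info.category].append(info)

    def by_category(self, category):
        return list(self._by_category.get(category, []))

    def all_records(self):
        return [r for records in self._by_category.values() for r in records]
```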
In order to facilitate post-search and improve the search efficiency of post-search, when storing target information, a full-analysis mode may be set, which may be configured by a professional technician before the user performs the search, and may be in a closed or open state by default. When the full analysis mode is on, all target information existing in the video is stored. And if the full analysis mode is not started, only storing target information meeting the prior screening condition in the video. Wherein the pre-screening condition is a screening condition that is pre-configured before the target information is acquired.
After the target information is stored, the processing flow of the search method according to the present application is shown in fig. 5 based on the storage manner, and fig. 5 is a schematic flow diagram of the new search method according to the embodiment of the present application. After the video is acquired, an intelligent algorithm detects a target triggering a preset alarm rule (pre-screening condition), and after the track is recorded, the track (a plurality of position point information), time, category and code stream after media processing of the alarm target are stored in the hard disk space. During searching, a user demarcates a searching area and the category of a searching target to search, after target information meeting post-screening conditions is obtained, shooting time corresponding to the target information is found, and video is directly positioned according to the shooting time.
The specific search flow is as described in the foregoing S301 to S306, and the following description will describe the foregoing S301 to S306, respectively:
in S301, a user plays back a video through a playback window with a first timeline in a target search interface, referring to fig. 6, fig. 6 is a schematic diagram of a target search operation interface provided in an embodiment of the present application. Where the playback window may include video playback controls for controlling video playback, such as controls that control video "start", "pause", "forward", "reverse", "speed-doubling", and so forth. The control capable of switching interfaces such as preview, playback, backup, configuration and the like is arranged above the playback window, the calendar in the lower left corner can support a user to select and play back video of any date, and the IPC in the upper left corner can support the user to select and play back video shot by the image acquisition equipment interested by the user.
In S302, while viewing the playback video the user may become interested in one or more targets appearing in a certain area, which creates a retrieval need for those targets. At this time, the recording being played back can be paused and retrieval performed on the image.
In response to a pause operation for the playback window, the retrieval device pauses the video being played; the pause operation may be any operation that can pause the video played in the playback window, for example a click on a "pause" control or a double click on the video picture. The user then inputs a rule configuration operation on the video frame currently displayed in the playback window, and the retrieval device identifies the post-screening condition indicated by the rule configuration operation input by the user.
In S303, since the target information of each target existing in the video frame is stored in the search device in advance, it can be determined whether the currently acquired target information satisfies the post-screening condition according to the pre-stored target information including the position of the target in the video frame and the shooting time of the video frame, and after the target information satisfying the post-screening condition is found, the target identifier included in the found target information is acquired as the screening identifier.
In a possible embodiment, to enable the user to perform a post-search based on a search condition entered while viewing the playback video and to find the target information satisfying the post-search condition, the rule configuration operation includes a line drawing operation, and the foregoing S302 includes:
s3021, obtaining post-screening conditions based on the contour lines of the region of interest indicated by the line drawing operation, wherein the post-screening conditions are as follows: the locus formed by the position and the shooting time included in each piece of target information of the target intersects with the contour line.
The step S303 includes:
s3031, searching, in the target information stored in advance in the retrieval device, for target information whose track formed by the included positions and shooting times intersects the contour line, and obtaining the target identifiers included in the found target information as screening identifiers.
The dashed box in fig. 6 is a contour line of the region of interest obtained based on a line of the region of interest drawn by the user on the playback window, and it is determined for each target whether a trajectory formed by a position and shooting time included in the target information of the target intersects the dashed box.
It should be noted that the line drawing operation may be drawing one or more lines, and these lines may be intersecting or may be disjoint; the drawing may be performed by drawing a closed line, such as a regular line frame or an irregularly shaped line frame, and the present application is not limited to the shape, number, and other factors of the line drawn by the drawing operation. Correspondingly, the contour lines of the region of interest are the contour lines represented by the lines or the wire frames.
For example, suppose the user does not want any target to approach the storage cabinet on the right side of the playback picture from the sofa side. The user can draw a line in the area between the sofa and the storage cabinet in the playback window (i.e. to the left of the cabinet). The retrieval device takes the line drawn by the user on the playback window as the contour line (the dashed line) of the region of interest. When a target is found to cross the dashed line (i.e. the track formed by the positions and shooting times included in the target's target information intersects the dashed line), the target can be considered to have approached the cabinet from the sofa side and entered the user's region of interest; it is therefore a target the user wants to retrieve, and its target information satisfies the post-screening condition. In the scenario shown in fig. 6, the user's region of interest can be taken to be the area where the cabinet is located, and anything crossing the line on its left side can be considered to enter that area, so the line can be regarded as the contour line of the region.
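A minimal sketch of the intersection test implied here: the target's track is treated as a polyline through its stored positions ordered by shooting time, and the post-screening condition is satisfied if any track segment crosses any segment of the drawn contour line. The helper below uses a standard orientation test, ignores collinear edge cases, and is only an illustration.

```python
def _ccw(a, b, c):
    # Orientation of the triple (a, b, c): True if counter-clockwise.
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 crosses segment q1-q2 (ignoring collinear edge cases)."""
    return (_ccw(p1, q1, q2) != _ccw(p2, q1, q2)) and (_ccw(p1, p2, q1) != _ccw(p1, p2, q2))

def track_intersects_contour(track_points, contour_points):
    """track_points: target positions ordered by shooting time; contour_points: vertices of the drawn line."""
    for i in range(len(track_points) - 1):
        for j in range(len(contour_points) - 1):
            if segments_intersect(track_points[i], track_points[i + 1],
                                  contour_points[j], contour_points[j + 1]):
                return True
    return False
```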
By applying the embodiment, the post-screening condition can be obtained based on the contour line of the region of interest input by the user during playback video recording, and any target information meeting the post-searching condition is searched, so that even if a certain searching rule is set before, the searching condition can be changed at any time to perform new searching meeting the new condition, and the searching experience of the user is improved.
When searching for target information satisfying the post-screening condition, each piece of target information could be checked in turn against the condition, but this involves a large amount of computation. Based on this, in a possible embodiment, the targets may first be screened preliminarily on some conditions, and the preliminarily screened targets then screened further to determine the target information satisfying the post-screening condition, which reduces the amount of computation. Illustratively, the rule configuration operation further includes a category selection operation,
the step S3031 includes:
s30311, identifying the category indicated by the category selection operation as a target category;
s30312, searching target information stored in advance corresponding to the target category by taking the target category as an index, and taking the target information as candidate target information;
s30313, searching, in the candidate target information, for target information whose track formed by the included positions and shooting times intersects the contour line, and obtaining the target identifiers included in the found target information as screening identifiers.
As shown in fig. 7, fig. 7 is a schematic diagram of a target retrieval configuration interface provided in an embodiment of the present application, where the configuration interface includes a plurality of configuration controls, including a configuration target type control, a configuration retrieval type control, and a full target analysis control, and a user may select a configuration type by means of touch screen or mouse click according to his own needs. The configuration interface in fig. 7 is only an example, and in other possible embodiments, the configuration interface may be of other styles. For example, only the target type control, the configuration search type control, the target type control, and the like may be included. The type selection operation corresponds to a configuration target type in the drawing, for example, a user may select a person or a car.
As described above, each piece of target information is stored according to a different category of a target to which the piece of target information belongs, for example, according to a person, a vehicle, or an unclassified category, and at the time of searching, the piece of target information included in the category can be searched according to the category of the target to be searched by the user.
By applying this embodiment, because the retrieval device stores target information grouped by target category, candidate target information can be determined at search time according to the category indicated by the category selection operation, and target information whose track formed by the included positions and shooting times intersects the contour line is then searched only within those candidates. The category step performs a preliminary screening, so the positions and shooting times in all stored target information need not be compared; the amount of computation is small and retrieval efficiency is further improved.
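Combining the two stages just described, a sketch of S30311 to S30313 might look like the following; it reuses the storage and intersection helpers sketched earlier, and the bounding-box-center track points are an assumption of this sketch.

```python
from collections import defaultdict

def post_screen(store, target_category, contour_points):
    """Preliminary screening by category, then contour intersection on the candidates only."""
    screening_ids = set()
    candidates = store.by_category(target_category)       # S30312: the category is used as an index
    tracks = defaultdict(list)
    for info in candidates:                                # group candidate records per target
        tracks[info.target_id].append(info)
    for target_id, records in tracks.items():              # S30313: intersection test per target
        records.sort(key=lambda r: r.shot_time)
        points = [((r.position[0] + r.position[2]) / 2,    # track point: bounding-box center
                   (r.position[1] + r.position[3]) / 2) for r in records]
        if track_intersects_contour(points, contour_points):
            screening_ids.add(target_id)                   # target identifier becomes a screening identifier
    return screening_ids
```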
The foregoing S30313 may retrieve only part of the target information or all of it; to retrieve as much of the qualifying target information as possible, all stored target information can be traversed. Based on this, an embodiment of the present application provides a method for retrieving target information, as shown in fig. 8; fig. 8 is a schematic flow chart of cyclically retrieving target information provided in an embodiment of the present application, where the foregoing S30313 includes:
executing steps S1 to S4 in a loop until all target information has been traversed:
step S1, obtaining target information which is not traversed;
step S2, determining whether the acquired target information comprises a screening identifier, if so, turning to step S1, and if not, turning to step S3;
step S3, determining whether a track formed by the position and shooting time included in the acquired target information is intersected with a contour line, if so, turning to step S4, and if not, turning to step S1;
and S4, acquiring a target identifier contained in the searched target information as a screening identifier.
By applying this embodiment, contour line matching is performed for every piece of target information, which avoids target information being missed in the matching, so the search result is more accurate and search accuracy is improved.
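One reasonable reading of the loop in steps S1 to S4 is sketched below, reusing the intersection helper from the earlier sketch: records already covered by a screening identifier are skipped, and the others are matched against the contour line. Grouping the stored records per target is an assumption of this sketch.

```python
def traverse_all_target_info(records_by_target, contour_points):
    """Loop S1-S4: visit every stored target's records once, skipping targets already marked."""
    screening_ids = set()
    for target_id, records in records_by_target.items():        # S1: take target info not yet traversed
        if target_id in screening_ids:                           # S2: already a screening identifier -> skip
            continue
        ordered = sorted(records, key=lambda r: r.shot_time)
        points = [((r.position[0] + r.position[2]) / 2,
                   (r.position[1] + r.position[3]) / 2) for r in ordered]
        if track_intersects_contour(points, contour_points):     # S3: track vs. contour line
            screening_ids.add(target_id)                          # S4: record the screening identifier
    return screening_ids
```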
To improve retrieval efficiency, the rule configuration operation further includes a retrieval type configuration operation, which is used to indicate that the alarm retrieval mode or the track retrieval mode is started. Still as shown in fig. 7, the configured retrieval type in the figure corresponds to the retrieval type configuration operation and includes alarm and track: if the user selects alarm, the alarm retrieval mode is started; if the user selects track, the track retrieval mode is started. The retrieval device may have the full analysis mode configured as described above, and the foregoing S303 includes:
S3032, if the search type configuration operation indicates to start the alarm search mode, searching the target information meeting the post-screening condition from the target information meeting the pre-screening condition.
S3033, if the search type configuration operation indicates to start the track search mode, searching all target information for the target information meeting the post-screening condition.
By applying the embodiment, when the alarm search is selected, the target information meeting the post-search condition can be searched only in the target information meeting the pre-screening condition, so that the search efficiency is improved. When track retrieval is selected, the target information meeting the post-retrieval conditions is retrieved from all the target information stored in the retrieval equipment, so that more target information can be retrieved, and the retrieval result is enriched.
When the image acquisition device was overloaded at classification time and the retrieval device has not yet classified the unclassified targets, the category contained in their target information is unclassified. In this case retrieval may be performed as follows, and the foregoing S303 includes:
s3034, if the search mode is non-full type search, searching target information meeting the post-screening condition in first target information, wherein the first target information is target information stored in advance by the search equipment and does not comprise target information of unclassified targets;
S3035, if the search mode is full-type search, searching target information meeting the post-screening condition in second target information, wherein the second target information is target information stored in the search equipment in advance and comprises the first target information and target information of unclassified targets.
That is, when retrieving target information, a search mode may be configured to decide whether to search all target information, including that of unclassified targets. For example, the user may select non-full-type search when looking for target information of a known target category appearing in area A, and full-type search when looking for all target information appearing in area A.
By applying the embodiment, a user can configure any retrieval mode according to own retrieval requirements, so that the retrieval efficiency is improved, and the user experience is improved.
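For illustration, one way the source set of target information could be chosen from the configured modes is sketched below; the record lists, mode names and flag are assumptions made for this sketch.

```python
def records_to_search(pre_screened_records, all_records, retrieval_type, full_type_search):
    """Choose which stored target information the post-screening condition is evaluated against."""
    # Alarm retrieval: only records that already satisfied the pre-screening condition;
    # track retrieval: all stored records.
    records = pre_screened_records if retrieval_type == "alarm" else all_records
    if not full_type_search:
        # Non-full-type search: exclude target information of targets still marked unclassified.
        records = [r for r in records if r.category != "unclassified"]
    return records
```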
In S304, the target information whose target identifier is a screening identifier is searched for among all the target information; all such target information can be considered to satisfy the post-screening condition. After it is found, the shooting time corresponding to each piece of target information is obtained and used as a target time.
In S305, after obtaining the corresponding target time according to the target information meeting the post-search condition, displaying the target time by using a first display mode and then displaying other times except the target time by using a second display mode on the first time progress bar of the playback window, wherein the first display mode is different from the second display mode.
The first display mode and the second display mode may be different in color, different in texture, different in text marks, and the like, and in other possible embodiments, the first display mode and the second display mode may be different in other aspects.
In a possible embodiment, each time point on the first time progress bar is the same as the shooting time of its corresponding video frame. In this case, the target times may be sorted in chronological order to obtain a time sequence; adjacent times in the time sequence are combined to obtain a plurality of continuous time periods; and overlapping time periods among the plurality of continuous time periods are merged to obtain the target time periods. Step S305 then includes:
S3051, displaying the target time period using the first display mode on the first time progress bar of the playback window.
What counts as adjacent times may be set in advance by a technician according to working experience, and may be, for example, 30 minutes, 40 minutes, or 1 hour; it is not specifically limited here.
For example, suppose the target times are 7:00, 8:00, 12:00, 8:30, 7:30, 11:00 and 9:00, and the time sequence obtained after sorting is 7:00, 7:30, 8:00, 8:30, 9:00, 11:00, 12:00. With the adjacency threshold preset to 1 hour, the continuous time periods obtained are 7:00-8:00, 7:30-8:30, 8:00-9:00 and 11:00-12:00. The overlapping time periods among these continuous time periods are merged to obtain the target time periods 7:00-9:00 and 11:00-12:00. The target time periods are then displayed on the first time progress bar of the playback window using the first display mode.
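A minimal sketch of this computation is shown below. For brevity it collapses the two steps (combining adjacent times into continuous periods, then merging overlapping periods) into a single pass over the sorted target times; the one-hour adjacency threshold and the example times are those used in the text.

```python
from datetime import datetime, timedelta

def target_periods(target_times, adjacency=timedelta(hours=1)):
    """Return merged (start, end) target time periods from individual target times."""
    periods = []
    for t in sorted(target_times):
        if periods and t - periods[-1][1] <= adjacency:
            periods[-1][1] = t          # extend the current period
        else:
            periods.append([t, t])      # start a new period
    return [(start, end) for start, end in periods]

day = "2023-12-22 "
times = [datetime.fromisoformat(day + h) for h in
         ("07:00", "08:00", "12:00", "08:30", "07:30", "11:00", "09:00")]
print(target_periods(times))   # two periods: 07:00-09:00 and 11:00-12:00
```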
By applying the embodiment, the target time and other times except the target time can be displayed on the same progress bar, so that the user can visually distinguish the target time and the other times, and the user does not need to spend energy to check the time sequence of scattered video clips, thereby improving the retrieval experience of the user.
In another possible embodiment, the first timeline may be a timeline for indicating a duration of a video that has been played, that is, a progress of a currently playing video segment in all video segments, where step S305 includes:
S3052, determining, as a first duration, the duration required for the playback window to play up to the video clip shot at the target time.
And S3053, determining a sub-progress bar used for representing the first duration on the first time progress bar as a first sub-progress bar.
S3054, on the first time progress bar, a first sub-progress bar is displayed by using a first display mode, and a second sub-progress bar is displayed by using a second display mode, wherein the second sub-progress bar is other sub-progress bars except the first sub-progress bar on the first time progress bar.
The sub-progress bar herein refers to any progress bar segment on the first time progress bar, and the second sub-progress bar refers to all progress bar segments except the first sub-progress bar.
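The sketch below illustrates one way to split the first time progress bar into first and second sub-progress bars. The target periods are given as (start, end) offsets in seconds from the start of the recording, and the style strings stand for the two display modes (for example, black and white in fig. 9); these names are illustrative assumptions.

```python
def progress_bar_segments(total_seconds, target_periods):
    """Yield (start_fraction, end_fraction, style) tuples covering the whole bar."""
    segments, cursor = [], 0.0
    for start, end in sorted(target_periods):
        if start > cursor:                       # gap before a rule-conforming clip
            segments.append((cursor / total_seconds, start / total_seconds, "second"))
        segments.append((start / total_seconds, end / total_seconds, "first"))
        cursor = end
    if cursor < total_seconds:                   # trailing non-target portion
        segments.append((cursor / total_seconds, 1.0, "second"))
    return segments

# e.g. a 1-hour recording with targets appearing during minutes 10-20 and 40-45
print(progress_bar_segments(3600, [(600, 1200), (2400, 2700)]))
```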
As shown in fig. 9, fig. 9 is a first schematic diagram of a search result display manner provided in the embodiment of the present application, in fig. 9, a black portion 901 of a first time progress bar may be a first sub progress bar, a white portion 902 of the first time progress bar may be a second sub progress bar, or a black portion of the first time progress bar may be a second sub progress bar, and a white portion of the first time progress bar may be a first sub progress bar. For example, different textures may be used to distinguish the first sub-progress bar from the second sub-progress bar, or a text mark may be added above the first sub-progress bar and the second sub-progress bar to distinguish the first sub-progress bar from the second sub-progress bar, or the like.
By applying the embodiment, the first sub-progress bar and the second sub-progress bar can be displayed on the same progress bar in the mode of the progress bar segments, so that a user can visually distinguish the first sub-progress bar and the second sub-progress bar, and the user does not need to spend energy to check the time sequence of scattered video clips, thereby improving the search experience of the user.
In order to enable the user to view the video clips conforming to the rule and those not conforming to the rule more intuitively, and to improve the retrieval experience of the user, in a possible embodiment the method further includes:
S401, displaying a second time progress bar, where the time displayed by the second time progress bar only includes the target time and does not include times other than the target time.
S402, under the condition that a second trigger operation aiming at a second time progress bar is received, responding to the second trigger operation, and only playing video clips conforming to rules in a playback window;
and/or under the condition that a second configuration operation aiming at a second time progress bar is received, responding to the second configuration operation, obtaining input playing configuration information, and playing the video according to the playing configuration information;
and/or synchronizing the current playing time on the first time progress bar to the second time progress bar, or synchronizing the current playing time on the second time progress bar to the first time progress bar, so that the icons used for indicating the current playing time on the two time progress bars indicate the same time.
In the above step S401, the second time progress bar is displayed in the playback window after the search is finished and the search result is obtained. The time displayed by the second time progress bar only includes the target time and does not include times other than the target time; that is, it represents the duration of the video clips conforming to the rule.
As shown in fig. 10, fig. 10 is a second schematic diagram of a search result display manner provided in the embodiment of the present application, where 1001 is the first time progress bar and 1002 is the second time progress bar; the second time progress bar may be displayed above or below the first time progress bar, which is not limited herein. In fig. 10, the duration represented by the second time progress bar is the sum of the durations of the first sub-progress bars in the first time progress bar, and its displayed length is stretched to match the first time progress bar so that the interface looks tidy, although the duration it represents is unchanged. In other cases the second time progress bar may not be stretched. As shown in fig. 11, fig. 11 is a third schematic diagram of a search result display manner provided in the embodiment of the present application, where 1101 is the first time progress bar and 1102 is the second time progress bar; similarly, the second time progress bar may be displayed above or below the first time progress bar, which is not specifically limited herein. In fig. 11, the duration of the second time progress bar is likewise the sum of the durations of the first sub-progress bars in the first time progress bar. The display manners of the first and second time progress bars in fig. 10 and fig. 11 are merely two examples; in other possible embodiments, other display manners may be used.
In S402, "in response to the second trigger operation in the case where the second trigger operation for the second timeline is received, only the video clips that meet the rule are played in the playback window" is denoted as play mode one;
in the case that a second configuration operation for the second time progress bar is received, responding to the second configuration operation, obtaining the input playing configuration information and playing video according to the playing configuration information is denoted as play mode two;
synchronizing the current playing time on the first time progress bar to the second time progress bar is denoted as synchronization mode one;
and synchronizing the current playing time on the second time progress bar to the first time progress bar, so that the icons used for indicating the current playing time on the two time progress bars indicate the same time, is denoted as synchronization mode two.
Based on the above "play mode one", "play mode two", "synchronization mode one" and "synchronization mode two", the display interface of the search result may adopt any of the following display modes:
Display mode one: displaying "play mode one";
Display mode two: displaying "play mode one" and "play mode two";
Display mode three: displaying "play mode one" and "synchronization mode one";
Display mode four: displaying "play mode one" and "synchronization mode two";
Display mode five: displaying "play mode one", "play mode two" and "synchronization mode one";
Display mode six: displaying "play mode one", "play mode two" and "synchronization mode two";
Display mode seven: displaying "play mode two";
Display mode eight: displaying "play mode two" and "synchronization mode one";
Display mode nine: displaying "play mode two" and "synchronization mode two";
Display mode ten: displaying "synchronization mode one";
Display mode eleven: displaying "synchronization mode two".
"Play mode one" means that, in response to the second trigger operation of the user clicking the second time progress bar, only the retrieved video clips conforming to the rule are played in the playback window. In "play mode two", when the user performs the second configuration operation on the second time progress bar, the complete video can be played in the playback window, that is, the video clips not conforming to the rule are not skipped.
"Synchronization mode one" means that, while the complete video is being played and a video clip conforming to the rule is being played, the cursor on the first time progress bar points to the time of the currently played video frame and, at the same time, the cursor on the second time progress bar points to that same time.
"Synchronization mode two" means that, while a video clip conforming to the rule is being played, the cursor on the second time progress bar points to the time of the currently played video frame and, at the same time, the cursor on the first time progress bar points to that same time.
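The sketch below illustrates the conversion underlying the two synchronization modes: mapping the current play time between the first (full) time progress bar and the second (condensed) bar that contains only the rule-conforming periods. The periods are (start, end) offsets in seconds, times falling outside every period are clamped to the nearest boundary, and all names are illustrative assumptions.

```python
def full_to_condensed(t, periods):
    """Synchronization mode one: time on the first bar -> position on the second bar."""
    offset = 0.0
    for start, end in sorted(periods):
        if t < start:
            return offset                 # before this period: clamp to its start
        if t <= end:
            return offset + (t - start)   # inside a rule-conforming clip
        offset += end - start
    return offset                         # after the last period

def condensed_to_full(t, periods):
    """Synchronization mode two: position on the second bar -> time on the first bar."""
    for start, end in sorted(periods):
        length = end - start
        if t <= length:
            return start + t
        t -= length
    return sorted(periods)[-1][1]         # past the end: clamp to the last period

periods = [(600, 1200), (2400, 2700)]     # e.g. minutes 10-20 and 40-45
print(full_to_condensed(900, periods))    # 300.0 -> 5 minutes into the second bar
print(condensed_to_full(700, periods))    # 2500  -> 41:40 on the first bar
```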
By applying this embodiment, a time progress bar for the video clips conforming to the rule is displayed separately in the playback window, and the device can respond to the user's operations so that the playback window plays only the video clips conforming to the rule or plays the complete video. When a video clip conforming to the rule is played, the icons used for indicating the current playing time on the two time progress bars indicate the same time, so while watching a rule-conforming clip the user can intuitively see where the currently played frame lies in the complete video and conveniently view the video at any time before or after that frame. This offers higher flexibility and improves the retrieval experience of the user.
In S306, the retrieving device may receive and respond to the first configuration operation, and play the video according to the play configuration information input by the user. For example, as shown in fig. 9, the black portion is a first sub-progress bar, the corresponding video clip is a video clip conforming to the rule, the white portion is a second sub-progress bar, and the corresponding video clip is a video clip not conforming to the rule. The user can select the playing speed of the video clip corresponding to the black part or the video clip corresponding to the white part, skip the video clip corresponding to the white part, and the like by clicking the preview button on the window. Specifically, when the user selects to accelerate playing of the video clip corresponding to the black portion, the video clip corresponding to the white portion may be accelerated or decelerated or skipped. When the user selects to play the video clip corresponding to the black part in a decelerating manner, the video clip corresponding to the white part can be accelerated or decelerated or skipped. The acceleration or deceleration multiple can be any multiple selected by a user according to the requirement of the user, the acceleration and deceleration multiple can be the same or different, and when the acceleration or deceleration multiple is 1, the video clip is played at the same speed as the original video.
In particular, when the user needs to watch the rule-conforming video clips closely, look for details, or analyze the behaviour of the targets in those clips, the video clips corresponding to the black portion can be played at reduced speed. Further, to save viewing time and improve retrieval efficiency, since the video clips corresponding to the white portion do not conform to the rule and contain no targets of interest to the user, the user can choose to accelerate or skip them.
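A minimal sketch of applying such play configuration information is given below. Each segment of the recording is assigned a speed factor, with 0 meaning "skip"; the segment boundaries come from the target periods, and the default speed values and field names are illustrative assumptions rather than values fixed by the text.

```python
def playback_plan(segments, conforming_speed=0.5, other_speed=4.0, skip_others=False):
    """segments: list of (start, end, conforms_to_rule) tuples in seconds."""
    plan = []
    for start, end, conforms in segments:
        if conforms:
            plan.append((start, end, conforming_speed))   # e.g. slow down to inspect details
        elif skip_others:
            plan.append((start, end, 0.0))                # skip the non-conforming clip
        else:
            plan.append((start, end, other_speed))        # e.g. fast-forward through it
    return plan

segments = [(0, 600, False), (600, 1200, True), (1200, 2400, False)]
print(playback_plan(segments, skip_others=True))
```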
The retrieval device may also receive and respond to the first trigger operation to switch the currently playing picture in the playback window so that playing starts from the first time corresponding to the first trigger operation. For example, as also shown in fig. 9, when the user clicks the first black portion of the first time progress bar, the playback window switches the picture to the video frame at the first time corresponding to that black portion.
After the target information meeting the post-screening condition is found, the position in the video frame and the target time of the target to which that information belongs can be obtained, and the video frame corresponding to the target time can be located according to the target time. In the video frame corresponding to the target time, the target is then frame-selected in the video with a target frame based on its position in the video frame.
The target frame may be any frame, for example, a solid frame, a dashed frame, a square frame, or a curved frame, and may also be selected according to a target category, which is not specifically limited herein. Some data may also be marked on the target frame, such as the category of the target, the target identification of the target, etc. As also shown in fig. 9, a solid line box or a dashed line box may be used to frame the object.
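The sketch below shows one way to draw such a target frame with its category and identifier on a video frame. OpenCV is just one possible drawing library, and the (x, y, w, h) position format, colour, and label content are illustrative assumptions.

```python
import cv2

def draw_target_box(frame, position, category, target_id):
    """Frame-select a retrieved target in the video frame at the target time."""
    x, y, w, h = position
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # solid-line box
    label = f"{category} #{target_id}"                              # data marked on the box
    cv2.putText(frame, label, (x, max(0, y - 8)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame
```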
By applying this embodiment, when the target information of a target meets the post-screening rule, the target is frame-selected in the video frame with a target frame, so that when viewing the video of the retrieval result the user can intuitively see which target is the retrieved target conforming to the post-screening rule and can see the whole path of the target, which ensures the continuity of the target and improves the retrieval experience of the user.
Corresponding to the above method embodiment, an embodiment of the present application provides a target retrieval device. As shown in fig. 12, fig. 12 is a schematic structural diagram of the target retrieval device provided in the embodiment of the present application. Target information is obtained in advance for the targets in the video frames of the video captured by the image acquisition device, where the target information includes: the identifier of the target, the category of the target, the position of the target in the video frame, and the shooting time of the video frame; each piece of target information is stored correspondingly according to the category of the target it includes;
The device comprises:
a playback module 1201, configured to display a playback window with a first timeline and play the video through the playback window;
an identifying module 1202, configured to receive a rule configuration operation input for a video frame presented in the playback window, and identify a post-screening condition indicated by the rule configuration operation;
a screening module 1203, configured to search target information that satisfies the post-screening condition in target information stored in advance in the search device, and obtain a target identifier included in the searched target information as a screening identifier;
the retrieving module 1204 is configured to find target information including the screening identifier, and obtain a shooting time corresponding to the found target information as a target time;
a first display module 1205, configured to display the target time on the first timeline in a first display manner and display other times except the target time in a second display manner, where the first display manner is different from the second display manner;
a first playing module 1206, configured to, in the case that a first configuration operation is received, obtain input playing configuration information in response to the first configuration operation and play video according to the playing configuration information, where the play configuration information includes at least one of: accelerating playing of video clips conforming to the rule by a specified multiple, decelerating playing of video clips conforming to the rule by a specified multiple, accelerating playing of video clips not conforming to the rule by a specified multiple, decelerating playing of video clips not conforming to the rule by a specified multiple, skipping video clips not conforming to the rule, and not skipping video clips not conforming to the rule, where the video clips conforming to the rule are video clips shot at the target time and the video clips not conforming to the rule are video clips shot at times other than the target time; and/or, in the case that a first trigger operation is received, switch, in response to the first trigger operation, the current playing picture to play from the first time corresponding to the first trigger operation.
With the target retrieval device provided by this application, each piece of target information in the video is stored in advance by category, so that when the user views the playback video, any target information meeting a post-retrieval condition can be retrieved according to the post-retrieval condition the user inputs during playback; even if a certain retrieval rule was configured beforehand, the retrieval condition can be changed at any time to perform a new retrieval meeting the new condition. In addition, because the target information of all targets is stored in advance, the target information meeting the post-retrieval condition can be found without traversing all video recordings. After the target information is found, whether targets whose target information meets the rule appear is marked on the same progress bar using different display modes, so that the video data meeting the rule consulted by the user is coherent and the user does not need to spend effort checking the time order of scattered video clips; the user can also skip, accelerate or decelerate a certain video clip as needed, which shortens the search time, improves the retrieval efficiency of targets, and improves the user experience.
In one possible implementation, the rule configuration operation includes a line drawing operation;
The identification module comprises:
an identification first sub-module, configured to obtain the post-screening condition based on the contour line of the region of interest indicated by the line drawing operation, where the post-screening condition is: a track formed by the positions and shooting times included in the pieces of target information of a target intersects the contour line;
the screening module comprises:
a screening first sub-module, configured to search, in the target information stored in advance in the retrieval device, for target information for which the track formed by the included position and shooting time intersects the contour line, and obtain the target identifier included in the found target information as the screening identifier.
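The sketch below illustrates this post-screening condition: testing whether the track formed by a target's successive positions intersects the contour line drawn by the user. Points are (x, y) tuples in frame coordinates and the contour is treated as a closed polyline; this is an illustrative geometric check (degenerate, exactly-touching cases are ignored for brevity), not an algorithm prescribed by the text.

```python
def _orient(a, b, c):
    """Signed area test: >0 counter-clockwise, <0 clockwise, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p1, p2, q1, q2):
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)   # proper crossings only

def track_intersects_contour(track, contour):
    """track: target positions ordered by shooting time; contour: user-drawn line."""
    closed = list(contour) + [contour[0]]
    for p1, p2 in zip(track, track[1:]):
        for q1, q2 in zip(closed, closed[1:]):
            if _segments_cross(p1, p2, q1, q2):
                return True
    return False

track = [(10, 10), (50, 50), (90, 90)]
contour = [(40, 80), (80, 40), (100, 100)]
print(track_intersects_contour(track, contour))   # True
```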
In one possible implementation, the rule configuration operation further includes a category selection operation;
the screening first sub-module comprises:
a screening first unit, configured to identify a category indicated by the category selection operation as a target category;
a screening second unit, configured to search target information stored in advance corresponding to the target category with the target category as an index, as candidate target information;
a screening third unit, configured to search the candidate target information for target information for which the track formed by the included position and shooting time intersects the contour line, and obtain the target identifier included in the found target information as the screening identifier.
In one possible implementation manner, the first timeline is a timeline for indicating a duration in which the video has been played, and the first display module includes:
a display first sub-module, configured to determine, as a first duration, the duration required for the playback window to play up to the video clip shot at the target time;
a display second sub-module, configured to determine, as a first sub-progress bar, the sub-progress bar used for representing the first duration on the first time progress bar;
and a display third sub-module, configured to display, on the first time progress bar, the first sub-progress bar using the first display mode and the second sub-progress bar using the second display mode, where the second sub-progress bar is the sub-progress bars other than the first sub-progress bar on the first time progress bar.
In one possible embodiment, the apparatus further comprises:
the second display module is used for displaying a second time progress bar, wherein the time displayed by the second time progress bar only comprises the target time and does not comprise the time beyond the target time;
the second playing module is used for responding to the second triggering operation under the condition that the second triggering operation aiming at the second time progress bar is received, and only video clips meeting the rule are played in the playback window; and/or under the condition that a second configuration operation aiming at the second time progress bar is received, responding to the second configuration operation, obtaining input playing configuration information, and playing video according to the playing configuration information; and/or synchronizing the current playing time on the first time progress bar to the second time progress bar, or synchronizing the current playing time on the second time progress bar to the first time progress bar, so that the icons used for indicating the current playing time on the two time progress bars indicate the same time.
In one possible embodiment, the apparatus further comprises:
the sequencing module is used for sequencing the target time according to the time sequence to obtain a time sequence;
a combination module, configured to combine adjacent times in the time sequence to obtain a plurality of continuous time periods;
the merging module is used for merging the overlapped time periods in the continuous time periods to obtain a target time period;
the first display module includes:
a display fourth sub-module, configured to display the target time period using the first display mode on the first time progress bar of the playback window.
In one possible embodiment, the apparatus further comprises:
a positioning module, configured to obtain the position in the video frame of the target to which the target information meeting the post-screening condition belongs, and the target time;
and a frame-selection module, configured to frame-select the target in the video with a target frame, in the video frame corresponding to the target time, based on the position of the target in the video frame.
In one possible embodiment, the apparatus further comprises:
the first storage module is used for storing all target information existing in the video if the full analysis mode is started;
And the second storage module is used for only storing target information meeting a pre-screening condition in the video if the full analysis mode is not started, wherein the pre-screening condition is a screening condition pre-configured before the target information is acquired.
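A minimal sketch of these two storage modes follows. In full-analysis mode every piece of target information extracted from the video is stored; otherwise only records passing the pre-screening condition (configured before acquisition) are kept. The names "store" and "pre_screen" are illustrative placeholders.

```python
def ingest(target_info_stream, store, full_analysis, pre_screen=lambda rec: True):
    """Persist target information according to whether full-analysis mode is on."""
    for rec in target_info_stream:
        if full_analysis or pre_screen(rec):
            store(rec)
```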
In a possible implementation manner, the rule configuration operation further comprises a search type configuration operation, wherein the search type configuration operation is used for indicating to start an alarm search mode or a track search mode, and the search module comprises:
a fourth sub-module for searching for target information meeting the post-screening condition from the target information meeting the pre-screening condition if the search type configuration operation indicates to start the alarm search mode;
and a fifth searching sub-module, configured to search all target information for the target information meeting the post-screening condition if the search type configuration operation indicates to start the track search mode.
In one possible implementation manner, when a target in a video frame cannot be classified, the category of the target included in the target information of the video frame is unclassified, and the search module includes:
A sixth sub-module for searching for target information meeting the post-screening condition in first target information if the search mode is non-full type search, wherein the first target information is target information stored in advance by the search device and does not include target information of unclassified targets;
and a seventh sub-module, configured to search for target information that satisfies the post-screening condition in second target information if the search mode is full-type search, where the second target information is target information stored in advance in the search device and includes the first target information and target information of an unclassified target.
In one possible embodiment, the apparatus further comprises:
the classification module is used for searching a video frame comprising the unclassified target if the target to which the target information belongs is the unclassified target; classifying the unclassified targets based on the searched video frames to obtain first post-classification categories, and updating the categories included in the target information of the unclassified targets from unclassified targets to the first post-classification categories;
and/or the number of the groups of groups,
the image acquisition equipment is used for shooting to obtain video, the video contains a plurality of video frames, and each time the video frames are shot, if the load of the image acquisition equipment is smaller than a preset load threshold value, the targets in the shot video frames are classified to obtain the categories of the targets; if the load of the image acquisition equipment is not smaller than the preset load threshold, determining the category of the target existing in the shot video frame as unclassified; the image acquisition equipment is also used for periodically polling the targets in the video frames, if the types of the polled targets are unclassified, acquiring the video frames with the polled targets, classifying the polled targets based on the acquired video frames to obtain a second post-classification type, and transmitting the second post-classification type and a target identifier for representing the polled targets to the retrieval equipment; the method further comprises the steps of: identifying a target represented by a target identifier as a target to be updated in response to a second post-classification category and the target identifier sent by the image acquisition equipment; and searching target information stored corresponding to the target to be updated by taking the target to be updated as an index, and updating the category included in the searched target information into the second post-classification category.
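The sketch below illustrates, on the image-acquisition side, the load-aware classification and the periodic polling of unclassified targets described above. The load measure, the classifier, the 60-second polling interval, and the send call are all illustrative assumptions; only the overall behaviour follows the text.

```python
import time

def on_frame_captured(frame, detected_target_ids, current_load, load_threshold,
                      classify, records):
    """Classify detected targets only when the device load allows it."""
    for tid in detected_target_ids:
        category = classify(frame, tid) if current_load < load_threshold else "unclassified"
        records.append({"target_id": tid, "frame": frame, "category": category})

def poll_unclassified(records, classify, send_to_retrieval_device, interval=60):
    """Periodically re-classify targets left unclassified and push the result."""
    while True:
        for rec in records:
            if rec["category"] == "unclassified":
                rec["category"] = classify(rec["frame"], rec["target_id"])   # second post-classification category
                send_to_retrieval_device(rec["target_id"], rec["category"])
        time.sleep(interval)
```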
Corresponding to the above method embodiment, the embodiment of the present application further provides an electronic device, as shown in fig. 13, and fig. 13 is a schematic structural diagram of an electronic device provided in the embodiment of the present application, including a memory 1301 and a processor 1302, where the memory 1301 is used to store a computer program; the processor 1302 is configured to implement any of the above-described target retrieval methods when executing a computer program stored on a memory.
And the electronic device may further include a communication bus and/or a communication interface, where the processor 1302, the communication interface, and the memory 1301 may communicate with each other via the communication bus.
The communication bus mentioned above for the electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, etc. The communication bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In yet another embodiment provided herein, there is also provided a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements any of the target retrieval methods of the above embodiments.
In yet another embodiment provided herein, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform the method of any of the above embodiments.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted from one computer-readable storage medium to another, for example, by wired (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)), or wireless (e.g., infrared, wireless, microwave, etc.) means from one website, computer, server, or data center. Computer readable storage media can be any available media that can be accessed by a computer or data storage devices, such as servers, data centers, etc., that contain an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, tape), an optical medium (e.g., DVD), or a Solid State Disk (SSD), for example.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and principles of the present application are intended to be included within the scope of the present application.
Claims (12)
1. A target retrieval method, characterized in that a video shot by an image acquisition device and target information of the video frames in the video in which targets appear are acquired in advance, wherein the target information comprises: the identifier of the target, the category of the target, the position of the target in the video frame, and the shooting time of the video frame; and each piece of target information is stored correspondingly according to the category of the target it includes;
the method comprises the following steps:
displaying a playback window with a first time progress bar and playing the video through the playback window;
receiving a rule configuration operation input for a video frame displayed by the playback window, and identifying a post-screening condition indicated by the rule configuration operation;
searching target information meeting the post-screening condition in target information stored in advance in the searching equipment, and acquiring target identifiers contained in the searched target information as screening identifiers;
Searching target information comprising the screening identification, and acquiring shooting time corresponding to the searched target information as target time;
displaying the target time by using a first display mode and displaying other times except the target time by using a second display mode on the first time progress bar, wherein the first display mode is different from the second display mode;
under the condition that a first configuration operation is received, responding to the first configuration operation, obtaining input playing configuration information, and playing video according to the playing configuration information; wherein the play configuration information includes at least one of: accelerating and playing video clips conforming to rules according to specified multiples, decelerating and playing video clips conforming to rules according to specified multiples, accelerating and playing video clips not conforming to rules according to specified multiples, decelerating and playing video clips not conforming to rules according to specified multiples, skipping video clips not conforming to rules, and not skipping video clips not conforming to rules, wherein the video clips conforming to rules are video clips shot at the target time, and the video clips not conforming to rules are video clips shot at other times except the target time; and/or under the condition that a first trigger operation is received, responding to the first trigger operation, and switching the current playing picture to play from the first time corresponding to the first trigger operation.
2. The method of claim 1, wherein the rule configuration operation comprises a line drawing operation;
the identifying post-screening conditions indicated by the rule configuration operation includes:
and obtaining post-screening conditions based on the contour lines of the region of interest indicated by the line drawing operation, wherein the post-screening conditions are as follows: a track formed by the position and shooting time included in each piece of target information of the target is intersected with the contour line;
searching target information meeting the post-screening condition in target information stored in the searching equipment in advance, and acquiring target identifiers contained in the searched target information as screening identifiers, wherein the method comprises the following steps:
and searching target information, which is formed by the included position and shooting time and is intersected with the contour line, in target information stored in advance in the searching equipment, and acquiring a target identifier included in the searched target information as a screening identifier.
3. The method of claim 2, wherein the rule configuration operation further comprises a category selection operation;
searching target information of intersecting the profile line with a track formed by the included position and shooting time in target information stored in advance in the searching equipment, and acquiring target identifiers included in the searched target information as screening identifiers, wherein the method comprises the following steps:
Identifying the category indicated by the category selection operation as a target category;
searching target information stored in advance corresponding to the target category by taking the target category as an index, and taking the target information as candidate target information;
and searching target information, which is formed by the included position and shooting time and is intersected with the contour line, in the candidate target information, and acquiring a target identifier included in the searched target information as a screening identifier.
4. The method of claim 1, wherein the first timeline is a timeline for indicating a duration in which the video has been played, wherein displaying the target time using a first display mode and displaying other times than the target time using a second display mode on the first timeline comprises:
determining, as a first duration, the duration required for the playback window to play up to the video clip shot at the target time;
determining a sub-progress bar used for representing the first duration on the first time progress bar as a first sub-progress bar;
and displaying the first sub-progress bar by using a first display mode and displaying a second sub-progress bar by using a second display mode on the first time progress bar, wherein the second sub-progress bar is other sub-progress bars except the first sub-progress bar on the first time progress bar.
5. The method according to claim 1, wherein the method further comprises:
displaying a second time progress bar, wherein the time displayed by the second time progress bar only comprises the target time and does not comprise the time beyond the target time;
in case of receiving a second trigger operation for the second timeline, playing only regular video clips in the playback window in response to the second trigger operation;
and/or under the condition that a second configuration operation aiming at the second time progress bar is received, responding to the second configuration operation, obtaining input playing configuration information, and playing video according to the playing configuration information;
and/or synchronizing the current playing time on the first time progress bar to the second time progress bar, or synchronizing the current playing time on the second time progress bar to the first time progress bar, so that the icons used for indicating the current playing time on the two time progress bars indicate the same time.
6. The method according to claim 1, wherein the method further comprises:
sequencing the target time according to the time sequence to obtain a time sequence;
Combining adjacent times in the time series to obtain a plurality of continuous time periods;
combining overlapping time periods in the plurality of continuous time periods to obtain a target time period;
the displaying the target time by using a first display mode on the first time progress bar comprises the following steps:
and displaying the target time period by using a first display mode on a first time progress bar of the playback window.
7. The method according to claim 1, wherein the method further comprises:
acquiring the position of a target in a video frame, to which target information meeting the post-screening condition belongs, and the target time;
and in the video frame corresponding to the target time, frame-selecting the target in the video with a target frame based on the position of the target in the video frame.
8. The method according to claim 1, wherein the method further comprises:
if the full analysis mode is started, storing all target information existing in the video;
if the full analysis mode is not started, only storing target information meeting a pre-screening condition in the video, wherein the pre-screening condition is a screening condition pre-configured before the target information is acquired.
9. The method of claim 8, wherein the rule configuration operation further comprises a search type configuration operation, wherein the search type configuration operation is used to instruct to turn on an alarm search mode or a track search mode, and the searching for target information meeting the post-screening condition in target information pre-stored in a search device comprises:
if the search type configuration operation indicates to start the alarm search mode, searching target information meeting the post-screening condition in the target information meeting the pre-screening condition;
if the search type configuration operation indicates to start the track search mode, searching all target information for the target information meeting the post-screening condition.
10. The method according to claim 1, wherein when a target in a video frame cannot be classified, the category of the target included in the target information of the video frame is unclassified, and the searching for the target information satisfying the post-screening condition in the target information stored in advance in the retrieval device comprises:
if the search mode is non-full type search, searching target information meeting the post-screening condition in first target information, wherein the first target information is target information stored in advance by search equipment and does not comprise target information of unclassified targets;
If the search mode is full-type search, searching target information meeting the post-screening condition in second target information, wherein the second target information is target information stored in advance by search equipment and comprises the first target information and target information of unclassified targets.
11. The method according to claim 10, wherein the method further comprises:
if the target to which the target information belongs is an unclassified target, searching a video frame comprising the unclassified target; classifying the unclassified targets based on the searched video frames to obtain first post-classification categories, and updating the categories included in the target information of the unclassified targets from unclassified targets to the first post-classification categories;
and/or the number of the groups of groups,
the image acquisition equipment is used for shooting to obtain video, the video contains a plurality of video frames, and each time the video frames are shot, if the load of the image acquisition equipment is smaller than a preset load threshold value, the targets in the shot video frames are classified to obtain the categories of the targets; if the load of the image acquisition equipment is not smaller than the preset load threshold, determining the category of the target existing in the shot video frame as unclassified; the image acquisition equipment is also used for periodically polling the targets in the video frames, if the types of the polled targets are unclassified, acquiring the video frames with the polled targets, classifying the polled targets based on the acquired video frames to obtain a second post-classification type, and transmitting the second post-classification type and a target identifier for representing the polled targets to the retrieval equipment; the method further comprises the steps of: identifying a target represented by a target identifier as a target to be updated in response to a second post-classification category and the target identifier sent by the image acquisition equipment; and searching target information stored corresponding to the target to be updated by taking the target to be updated as an index, and updating the category included in the searched target information into the second post-classification category.
12. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method of any of claims 1-11 when executing a program stored on a memory.