CN108932509A - Cross-scene object retrieval method and apparatus based on video tracking - Google Patents
Cross-scene object retrieval method and apparatus based on video tracking
- Publication number
- CN108932509A (application CN201810937002.1A)
- Authority
- CN
- China
- Prior art keywords
- target
- track
- tracking
- scene
- scene objects
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
A cross-scene object retrieval method and apparatus based on video tracking. The method comprises: performing video tracking across multiple associated cameras to obtain target trajectories, and using the trajectories as training samples; training a target classification model on the training samples; building a target feature database with the classification model; and extracting features of a video object to be retrieved, computing similarity measures against the target features already stored in the target feature database, and taking the feature with the highest similarity, or the top-N features, as retrieval features for cross-scene retrieval of the target. The invention improves the accuracy and robustness of target retrieval by allowing a target to contribute multi-frame trajectory information during retrieval and comparison, thereby improving retrieval performance.
Description
Technical field
The present invention relates to the field of video surveillance, and more particularly to cross-scene retrieval of targets appearing in surveillance scenes covered by multiple associated cameras.
Background technique
Given a target image from one surveillance scene, retrieving the occurrences of that target under other cameras has broad applications in fields such as intelligent video surveillance and intelligent security. However, even within a single scene, a target undergoes viewpoint changes as it moves or as the scene changes; representing the target with a single frame cannot express all of its appearances in that scene, and a poorly chosen frame often causes the retrieval to miss the desired target or to rank the correct target with low similarity.
Most features currently used for target retrieval are hand-designed, such as color, texture, or histograms of oriented gradients. When the target changes pose or scale, or under complex conditions such as occlusion or illumination change, these hand-designed features characterize the target poorly and retrieval easily fails.
Retrieval methods that train deep neural network models perform better, but training a deep model requires large amounts of data, including annotating the targets in videos and matching the same target across different videos; this workload is enormous and consumes considerable manpower and material resources.
Summary of the invention
In view of this, this patent proposes a cross-scene object retrieval method and apparatus based on video tracking. By tracking targets across associated scenes, the data needed for target retrieval become available and training samples can be generated for model training. During retrieval and comparison, a target can contribute multi-frame trajectory information, which improves retrieval performance.
To achieve the above objectives, the technical solution of the present invention is realized as follows:
The present invention provides a cross-scene object retrieval method based on video tracking, characterized in that:
video tracking is performed on multiple associated cameras to obtain target trajectories, and the trajectories of the targets are used as training samples;
a target classification model is trained on the training samples;
a target feature database is built using the target classification model; and
features of a video object to be retrieved are extracted and compared, by similarity measurement, against the target features already stored in the target feature database; the feature with the highest similarity, or the top-N features, are taken as retrieval features to perform cross-scene retrieval of the target.
Preferably, generating training samples by video tracking comprises: performing detection and tracking of targets under each camera; filtering the target trajectories to remove distractors in the tracks; fusing the trajectories of all targets in the videos from the multiple cameras; and obtaining a target trajectory sample set.
Preferably, training the model on the generated training samples comprises: obtaining labeled training samples; preprocessing the sample data; and using a convolutional neural network, extracting a feature representation layer and producing classification output through a classification layer to complete model training.
Preferably, building the target feature database comprises: performing detection and tracking of targets under each camera to obtain target trajectories; selecting one frame or multiple frames of image data from each target trajectory as target representation frames; extracting each target's features with the trained model; and storing the extracted target features and their corresponding indices in a database to obtain the target feature database.
Preferably, extracting the features of the video object to be retrieved and computing similarity measures against the target features already stored in the target feature database comprises computing the similarity between the target and each target in the database by L2 norm or cosine distance.
The present invention also provides a cross-scene object retrieval apparatus based on video tracking, characterized by comprising:
a training sample generation means for performing video tracking on multiple associated cameras to obtain target trajectories and using the target trajectories as training samples;
a model training means for training a target classification model on the training samples;
a target feature database construction means for building a target feature database using the target classification model; and
a cross-scene retrieval means for extracting features of a video object to be retrieved, computing similarity measures against the target features already stored in the target feature database, and taking the feature with the highest similarity, or the top-N features, as retrieval features to perform cross-scene retrieval of the target.
Preferably, the training sample generation means comprises: a detection and tracking means for performing detection and tracking of targets under each camera; a filtering means for filtering the target trajectories to remove distractors in the tracks; a fusion means for fusing the trajectories of all targets in the videos from the multiple cameras; and a target trajectory sample means for obtaining a target trajectory sample set.
Preferably, training the model on the generated training samples comprises: obtaining labeled training samples; preprocessing the sample data; and using a convolutional neural network, extracting a feature representation layer and producing classification output through a classification layer to complete model training.
Preferably, the target feature database construction means comprises: a detection and tracking means for performing detection and tracking of targets under each camera to obtain target trajectories; a representation frame selection means for selecting one frame or multiple frames of image data from each target trajectory as target representation frames; an extraction means for extracting each target's features with the trained model; and a storage means for storing the extracted target features and their corresponding indices in a database to obtain the target feature database.
Preferably, extracting the features of the video object to be retrieved and computing similarity measures against the target features already stored in the target feature database comprises computing the similarity between the target and each target in the database by L2 norm or cosine distance.
The present invention also provides a computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method steps of the above cross-scene object retrieval method are implemented.
The present invention also provides a cross-scene object retrieval system based on video tracking, characterized by comprising a processor and a memory storing executable instructions, where the executable instructions, when executed by the processor, implement the cross-scene object retrieval method of any one of claims 1 to 5.
In the cross-scene object retrieval method based on video tracking proposed by the present invention, training samples are generated by video tracking; a model is trained on the generated training samples; a target feature database is built; and the features of the video object to be retrieved are extracted and compared, by similarity measurement, against the target features in the target feature database, with the highest-similarity or top-N features taken as retrieval features for cross-scene retrieval of the target. This improves the accuracy and robustness of target retrieval and allows a target to contribute multi-frame trajectory information during retrieval and comparison, improving retrieval performance.
Detailed description of the invention
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of generating training samples by video tracking according to an embodiment of the present invention.
Fig. 2 is a model training framework diagram according to an embodiment of the present invention.
Fig. 3 is a flow chart of building the target feature database according to an embodiment of the present invention.
Fig. 4 is a flow chart of cross-scene target retrieval according to an embodiment of the present invention.
Specific embodiment
The technical solution of the present invention is described clearly and completely below with reference to the accompanying drawings and the specific embodiments of the specification.
The present invention proposes a cross-scene object retrieval method based on video tracking. By tracking targets across associated scenes, the data needed for target retrieval become available and samples can be generated for model training. During retrieval and comparison, a target can contribute multi-frame trajectory information, improving retrieval performance.
Fig. 1 shows the flow chart of generating training samples by video tracking.
Training samples are generated by video tracking: detection and tracking are performed on the targets under each camera; the target trajectories are filtered to remove distractors in the tracks; the trajectories of all targets in the videos from the multiple cameras are fused; and a target trajectory sample set is obtained.
ROI (Region of Interest). In practical surveillance, people usually pay attention only to certain specific regions of the monitored picture, such as license plates or faces, and ignore the background sky, lawn, and so on. These specific regions are called regions of interest. ROI coding is one of the main highlights of JPEG2000, the new-generation still-image compression standard formulated by ISO in 2000, and remains a research hotspot in image coding. The technique applies low-compression-ratio lossless or near-lossless compression to the regions of interest in an image and high-compression-ratio lossy compression to the background. In this way, with the code stream unchanged, the important information is preserved while the amount of data is effectively compressed, resolving the contradiction between compression ratio and image quality.
First, target detection is performed on each video using a deep detection method such as Faster R-CNN, SSD, or YOLO, and each target is extracted. Fast DSST, a single-target tracking method based on discriminative scale space, is selected; by constructing a scale space it estimates the scale changes of the target in the video sequence, allowing the target to be tracked accurately. By assigning one tracker to each target, the method is extended to multi-target tracking.
The trajectories of the targets in the videos are obtained by tracking and then screened with manual intervention, checking whether a track contains image ROIs that do not belong to the target; if so, they are deleted. Each track is then fused with all tracks containing the same target in the other videos. Specifically, the tracks of the same target from different video sources are grouped into one set, so that each set represents one class of object instance. Thousands of object-instance classes can be obtained quickly in this way, much faster than manual annotation and with considerable savings in manpower.
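As a minimal sketch of this step, the Python code below groups per-camera tracks into instance sets; `detect_targets`, `track_target`, and `is_valid_roi` are hypothetical helpers standing in for the Faster R-CNN/SSD/YOLO detector, the per-target Fast DSST tracker, and the screening step, and the global `target_id` field stands in for the cross-video association described above.

```python
from collections import defaultdict

def build_instance_sets(videos, detect_targets, track_target, is_valid_roi):
    """Group tracks of the same target from different cameras into instance sets."""
    instance_sets = defaultdict(list)            # global target ID -> list of ROI crops
    for cam_id, frames in videos.items():
        for det in detect_targets(frames[0]):    # detected targets; one tracker per target
            track = track_target(frames, det)    # ROI crops along this target's trajectory
            # Screening step: drop ROIs that do not belong to this target.
            track = [roi for roi in track if is_valid_roi(det["target_id"], roi)]
            # Fusion step: tracks of the same target across cameras share one set.
            instance_sets[det["target_id"]].extend(track)
    return instance_sets                         # each set is one training class (object instance)
```

Each resulting set plays the role of one labeled class for the classification model training described next.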
Fig. 2 shows the model training framework diagram.
Training the model on the generated training samples comprises: obtaining labeled training samples; preprocessing the sample data; and using a convolutional neural network, extracting a feature representation layer and producing classification output through a classification layer to complete model training.
After the training samples are obtained, model training can be carried out by designing a recognition network. Specifically, DenseNet is used as the backbone network, the number of instance classes obtained above is used as the number of classes, and SoftmaxWithLoss is chosen as the classification loss function. After the trained model passes verification, the penultimate-layer feature, or a multi-scale feature fusion, can serve as a fine-grained feature sufficient to distinguish the target classes. To reduce subsequent storage, the feature can be set to 128 or 256 dimensions.
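A minimal PyTorch-style sketch of such a recognition network is given below, assuming a recent torchvision DenseNet-121 as the backbone and standard cross-entropy in place of the SoftmaxWithLoss layer named above; `num_instance_classes` and `feat_dim` are placeholders for the instance-class count and the 128- or 256-dimensional feature size.

```python
import torch
import torch.nn as nn
from torchvision import models

class RetrievalNet(nn.Module):
    """DenseNet backbone with a low-dimensional feature layer and a classification head."""
    def __init__(self, num_instance_classes: int, feat_dim: int = 128):
        super().__init__()
        backbone = models.densenet121(weights=None)
        self.features = backbone.features                     # convolutional trunk
        in_dim = backbone.classifier.in_features
        self.embed = nn.Linear(in_dim, feat_dim)               # 128- or 256-dim feature layer
        self.classifier = nn.Linear(feat_dim, num_instance_classes)

    def forward(self, x):
        x = self.features(x)
        x = nn.functional.adaptive_avg_pool2d(torch.relu(x), 1).flatten(1)
        feat = self.embed(x)                                   # fine-grained retrieval feature
        logits = self.classifier(feat)                         # classification output
        return feat, logits

# Training-loop sketch: cross-entropy plays the role of the softmax classification loss.
model = RetrievalNet(num_instance_classes=1000)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# for images, labels in loader:
#     feat, logits = model(images)
#     loss = criterion(logits, labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```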
Fig. 3 shows the flow chart of building the target feature database.
Building the target feature database comprises: performing detection and tracking of targets under each camera to obtain target trajectories; selecting one frame or multiple frames of image data from each target trajectory as target representation frames; extracting each target's features with the trained model; and storing the extracted target features and their corresponding indices in a database to obtain the target feature database.
To build the target feature database, the new video data captured by the associated multiple cameras are processed with the same target detection, extraction, and tracking as before. After the target trajectories are obtained, one representative frame or multiple frames of image data are selected from each trajectory, and the features of the target ROI in each image are extracted with the trained model; for a multi-frame target ROI, the features extracted from the individual frames can be fused by weighting.
The extracted target features and their corresponding indices are stored in the database. Specifically, the ID and feature of each target are placed in a list as one row, and the rows are arranged in order in the database for subsequent comparison and retrieval.
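A minimal sketch of this database-building step is shown below; `extract_feature` is assumed to be any callable that maps one ROI crop to a fixed-length feature vector (for instance the feature output of the network sketched above), and uniform weights are used as the default weighted fusion.

```python
import numpy as np

def fuse_track_features(extract_feature, rois, weights=None):
    """Extract a feature for each representative ROI and fuse them by weighting."""
    feats = np.stack([extract_feature(roi) for roi in rois])   # (n_frames, feat_dim)
    if weights is None:
        weights = np.full(len(rois), 1.0 / len(rois))          # uniform weights by default
    return weights @ feats                                     # weighted fusion -> (feat_dim,)

def build_feature_database(extract_feature, tracks):
    """Build the feature database; `tracks` maps a target ID to its representative ROI crops."""
    database = []                                              # each row: (target ID, fused feature)
    for target_id, rois in tracks.items():
        database.append((target_id, fuse_track_features(extract_feature, rois)))
    return database
```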
Fig. 4 shows the flow chart of cross-scene target retrieval.
In the cross-scene object retrieval method based on video tracking, training samples are generated by video tracking; the model is trained on the generated training samples; the target feature database is built; and the features of the video object to be retrieved are extracted and compared, by similarity measurement, against the target features in the target feature database, with the highest-similarity or top-N features taken as retrieval features for cross-scene retrieval of the target.
The ROI of the target to be retrieved is fed into the same network in a unified format, and its feature is extracted with the trained model. Similarity measures against the target features in the database are then computed; the similarity between the target and each target in the database can be calculated by L2 norm or cosine distance. The measurement results are sorted by magnitude, and the target with the highest similarity, or the top-N targets, are output as the retrieval result, thereby achieving cross-scene retrieval of the target.
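A minimal sketch of this ranking step, operating on the (ID, feature) rows produced above, is given below; both the cosine-distance and L2-norm options mentioned in the text are shown, and `top_n` is a placeholder for the number of results returned.

```python
import numpy as np

def retrieve(query_feat, database, top_n=5, metric="cosine"):
    """Rank database targets by similarity to the query feature and return the top-N."""
    ids = [target_id for target_id, _ in database]
    feats = np.stack([feat for _, feat in database])
    if metric == "cosine":
        q = query_feat / np.linalg.norm(query_feat)
        f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        similarity = f @ q                                          # cosine similarity, larger is better
    else:
        similarity = -np.linalg.norm(feats - query_feat, axis=1)    # negative L2 distance
    order = np.argsort(similarity)[::-1]                            # sort by similarity, descending
    return [(ids[i], float(similarity[i])) for i in order[:top_n]]
```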
The track of the same target under one scene is obtained by tracking and the redundant samples in the track are removed, yielding a sample set of the target's different poses and scales under that scene. The samples are then normalized to the same size, the feature of each image in the sample set is computed, and max pooling or average pooling (as used in image classification with convolutional neural networks) is applied to these features; the resulting pooled feature (e.g., the average feature) is used as the retrieval feature of the target. This approach improves the accuracy and robustness of the target in retrieval.
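A minimal sketch of this pooling step is given below, assuming the per-frame features of one track have already been extracted (for instance with the network sketched above); `mode` selects between the max pooling and average pooling options mentioned in the text.

```python
import numpy as np

def pool_track_features(frame_feats, mode="average"):
    """Pool the per-frame features of one track into a single retrieval feature."""
    feats = np.stack(frame_feats)                  # (n_frames, feat_dim)
    if mode == "max":
        return feats.max(axis=0)                   # max pooling over the track
    return feats.mean(axis=0)                      # average pooling over the track
```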
By tracking targets across associated scenes, the data needed for target retrieval become available and samples can be generated for model training. During retrieval and comparison, a target can contribute multi-frame trajectory information, improving retrieval performance.
Corresponding to the above cross-scene object retrieval method based on video tracking, the present invention also provides a cross-scene object retrieval apparatus based on video tracking. It comprises: a training sample generation means for generating training samples by video tracking; a model training means for training a model on the generated training samples; a target feature database construction means for building the target feature database; and a cross-scene retrieval means for extracting the features of the video object to be retrieved, computing similarity measures against the target features in the target feature database, and taking the highest-similarity or top-N features as retrieval features for cross-scene retrieval of the target.
First, target detection is performed on each video using a deep detection method such as Faster R-CNN, SSD, or YOLO, and each target is extracted. Fast DSST, a single-target tracking method based on discriminative scale space, is selected; by constructing a scale space it estimates the scale changes of the target in the video sequence, allowing the target to be tracked accurately. By assigning one tracker to each target, the method is extended to multi-target tracking.
The trajectories of the targets in the videos are obtained by tracking and then screened with manual intervention, checking whether a track contains image ROIs that do not belong to the target; if so, they are deleted. Each track is then fused with all tracks containing the same target in the other videos. Specifically, the tracks of the same target from different video sources are grouped into one set, so that each set represents one class of object instance. Thousands of object-instance classes can be obtained quickly in this way, much faster than manual annotation and with considerable savings in manpower.
The training sample generation means comprises: a detection and tracking means for performing detection and tracking of targets under each camera; a filtering means for filtering the target trajectories to remove distractors in the tracks; a fusion means for fusing the trajectories of all targets in the videos from the multiple cameras; and a target trajectory sample means for obtaining a target trajectory sample set.
After the training samples are obtained, model training can be carried out by designing a recognition network. Specifically, DenseNet is used as the backbone network, the number of instance classes obtained above is used as the number of classes, and SoftmaxWithLoss is chosen as the classification loss function. After the trained model passes verification, the penultimate-layer feature, or a multi-scale feature fusion, can serve as a fine-grained feature sufficient to distinguish the target classes. To reduce subsequent storage, the feature can be set to 128 or 256 dimensions.
Training the model on the generated training samples comprises: obtaining labeled training samples; preprocessing the sample data; and using a convolutional neural network, extracting a feature representation layer and producing classification output through a classification layer to complete model training.
To build the target feature database, the new video data captured by the associated multiple cameras are processed with the same target detection, extraction, and tracking as before. After the target trajectories are obtained, one representative frame or multiple frames of image data are selected from each trajectory, and the features of the target ROI in each image are extracted with the trained model; for a multi-frame target ROI, the features extracted from the individual frames can be fused by weighting.
The extracted target features and their corresponding indices are stored in the database. Specifically, the ID and feature of each target are placed in a list as one row, and the rows are arranged in order in the database for subsequent comparison and retrieval.
The target feature database construction means comprises: a detection and tracking means for performing detection and tracking of targets under each camera to obtain target trajectories; a representation frame selection means for selecting one frame or multiple frames of image data from each target trajectory as target representation frames; an extraction means for extracting each target's features with the trained model; and a storage means for storing the extracted target features and their corresponding indices in a database to obtain the target feature database.
Extracting the features of the video object to be retrieved and computing similarity measures against the target features in the target feature database comprises computing the similarity between the target and each target in the database by L2 norm or cosine distance.
The ROI of the target to be retrieved is fed into the same network in a unified format, and its feature is extracted with the trained model. Similarity measures against the target features in the database are then computed; the similarity between the target and each target in the database can be calculated by L2 norm or cosine distance. The measurement results are sorted by magnitude, and the target with the highest similarity, or the top-N targets, are output as the retrieval result, thereby achieving cross-scene retrieval of the target.
The track of the same target under one scene is obtained by tracking and the redundant samples in the track are removed, yielding a sample set of the target's different poses and scales under that scene. The samples are then normalized to the same size, the feature of each image in the sample set is computed, and max pooling or average pooling (as used in image classification with convolutional neural networks) is applied to these features; the resulting pooled feature (e.g., the average feature) is used as the retrieval feature of the target. This approach improves the accuracy and robustness of the target in retrieval.
The present invention also provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the method steps of the above cross-scene object retrieval method.
The present invention also provides a cross-scene object retrieval system based on video tracking, comprising a processor and a memory storing executable instructions; when executed by the processor, the executable instructions implement the above cross-scene object retrieval method.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The above are only specific examples of the present invention and do not limit its scope of protection in any way; all technical solutions formed by equivalent transformation or equivalent replacement fall within the scope of protection of the present invention.
Claims (12)
1. A cross-scene object retrieval method based on video tracking, characterized in that:
video tracking is performed on multiple associated cameras to obtain target trajectories, and the trajectories of the targets are used as training samples;
a target classification model is trained on the training samples;
a target feature database is built using the target classification model; and
features of a video object to be retrieved are extracted and compared, by similarity measurement, against the target features already stored in the target feature database; the feature with the highest similarity, or the top-N features, are taken as retrieval features to perform cross-scene retrieval of the target.
2. The cross-scene object retrieval method of claim 1, characterized in that obtaining target trajectories by video tracking comprises:
performing detection and tracking of targets under each camera;
filtering the target trajectories to remove distractors in the tracks;
fusing the trajectories of all targets in the videos from the multiple cameras; and
obtaining a target trajectory sample set.
3. The cross-scene object retrieval method of claim 1, characterized in that training the target classification model on the training samples comprises:
obtaining labeled training samples;
preprocessing the sample data; and
using a convolutional neural network, extracting a feature representation layer and producing classification output through a classification layer to complete model training.
4. The cross-scene object retrieval method of claim 1, characterized in that building the target feature database comprises:
performing detection and tracking of targets under each camera to obtain target trajectories;
selecting one frame or multiple frames of image data from each target trajectory as target representation frames;
extracting each target's features with the classification model; and
storing the extracted target features and their corresponding indices in a database to obtain the target feature database.
5. The cross-scene object retrieval method of claim 1, characterized in that extracting the features of the video object to be retrieved and computing similarity measures against the target features already stored in the target feature database comprises:
computing the similarity between the target and each target in the database by L2 norm or cosine distance.
6. A cross-scene object retrieval apparatus based on video tracking, characterized by comprising:
a training sample generation means for performing video tracking on multiple associated cameras to obtain target trajectories and using the target trajectories as training samples;
a model training means for training a target classification model on the training samples;
a target feature database construction means for building a target feature database using the target classification model; and
a cross-scene retrieval means for extracting features of a video object to be retrieved, computing similarity measures against the target features already stored in the target feature database, and taking the feature with the highest similarity, or the top-N features, as retrieval features to perform cross-scene retrieval of the target.
7. The cross-scene object retrieval apparatus of claim 6, characterized in that the training sample generation means comprises:
a detection and tracking means for performing detection and tracking of targets under each camera;
a filtering means for filtering the target trajectories to remove distractors in the tracks;
a fusion means for fusing the trajectories of all targets in the videos from the multiple cameras; and
a target trajectory sample means for obtaining a target trajectory sample set.
8. The cross-scene object retrieval apparatus of claim 6, characterized in that the model training means is specifically configured to:
obtain labeled training samples;
preprocess the sample data; and
use a convolutional neural network, extracting a feature representation layer and producing classification output through a classification layer to complete model training.
9. The cross-scene object retrieval apparatus of claim 6, characterized in that the target feature database construction means comprises:
a detection and tracking means for performing detection and tracking of targets under each camera to obtain target trajectories;
a representation frame selection means for selecting one frame or multiple frames of image data from each target trajectory as target representation frames;
an extraction means for extracting each target's features with the classification model; and
a storage means for storing the extracted target features and their corresponding indices in a database to obtain the target feature database.
10. The cross-scene object retrieval apparatus of claim 6, characterized in that:
the cross-scene retrieval means is further configured to compute the similarity between the target and each target in the database by L2 norm or cosine distance.
11. A computer-readable storage medium, characterized in that:
a computer program is stored in the computer-readable storage medium, and when executed by a processor, the computer program implements the method steps of any one of claims 1 to 5.
12. A cross-scene object retrieval system based on video tracking, characterized in that:
it comprises a processor and a memory storing executable instructions, and when the executable instructions are executed by the processor, the cross-scene object retrieval method of any one of claims 1 to 5 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810937002.1A CN108932509A (en) | 2018-08-16 | 2018-08-16 | Cross-scene object retrieval method and apparatus based on video tracking |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810937002.1A CN108932509A (en) | 2018-08-16 | 2018-08-16 | Cross-scene object retrieval method and apparatus based on video tracking |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108932509A true CN108932509A (en) | 2018-12-04 |
Family
ID=64445893
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810937002.1A Pending CN108932509A (en) | 2018-08-16 | 2018-08-16 | Cross-scene object retrieval method and apparatus based on video tracking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108932509A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109829936A (en) * | 2019-01-29 | 2019-05-31 | 青岛海信网络科技股份有限公司 | A kind of method and apparatus of target tracking |
CN109978918A (en) * | 2019-03-21 | 2019-07-05 | 腾讯科技(深圳)有限公司 | A kind of trajectory track method, apparatus and storage medium |
CN110490905A (en) * | 2019-08-15 | 2019-11-22 | 江西联创精密机电有限公司 | A kind of method for tracking target based on YOLOv3 and DSST algorithm |
CN110517495A (en) * | 2019-09-05 | 2019-11-29 | 四川东方网力科技有限公司 | Confirmation method, device, equipment and the storage medium of track of vehicle classification |
CN112417970A (en) * | 2020-10-22 | 2021-02-26 | 北京迈格威科技有限公司 | Target object identification method, device and electronic system |
CN112465869A (en) * | 2020-11-30 | 2021-03-09 | 杭州海康威视数字技术股份有限公司 | Track association method and device, electronic equipment and storage medium |
CN113378005A (en) * | 2021-06-03 | 2021-09-10 | 北京百度网讯科技有限公司 | Event processing method and device, electronic equipment and storage medium |
CN115439771A (en) * | 2022-07-22 | 2022-12-06 | 太原理工大学 | Improved DSST infrared laser spot tracking method |
CN118379662A (en) * | 2024-04-28 | 2024-07-23 | 北京卓鸷科技有限责任公司 | Method, system and monitoring equipment for re-identifying looking-around target |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413295A (en) * | 2013-07-12 | 2013-11-27 | 长沙理工大学 | Video multi-target long-range tracking method |
CN104318588A (en) * | 2014-11-04 | 2015-01-28 | 北京邮电大学 | Multi-video-camera target tracking method based on position perception and distinguish appearance model |
CN105224912A (en) * | 2015-08-31 | 2016-01-06 | 电子科技大学 | Based on the video pedestrian detection and tracking method of movable information and Track association |
CN106327502A (en) * | 2016-09-06 | 2017-01-11 | 山东大学 | Multi-scene multi-target recognition and tracking method in security video |
CN106599925A (en) * | 2016-12-19 | 2017-04-26 | 广东技术师范学院 | Plant leaf identification system and method based on deep learning |
CN106933861A (en) * | 2015-12-30 | 2017-07-07 | 北京大唐高鸿数据网络技术有限公司 | A kind of customized across camera lens target retrieval method of supported feature |
CN107122439A (en) * | 2017-04-21 | 2017-09-01 | 图麟信息科技(深圳)有限公司 | A kind of video segment querying method and device |
CN107169106A (en) * | 2017-05-18 | 2017-09-15 | 珠海习悦信息技术有限公司 | Video retrieval method, device, storage medium and processor |
CN107480178A (en) * | 2017-07-01 | 2017-12-15 | 广州深域信息科技有限公司 | A kind of pedestrian's recognition methods again compared based on image and video cross-module state |
CN108073933A (en) * | 2016-11-08 | 2018-05-25 | 杭州海康威视数字技术股份有限公司 | A kind of object detection method and device |
US20180157899A1 (en) * | 2016-12-07 | 2018-06-07 | Samsung Electronics Co., Ltd. | Method and apparatus detecting a target |
CN108229407A (en) * | 2018-01-11 | 2018-06-29 | 武汉米人科技有限公司 | A kind of behavioral value method and system in video analysis |
CN108229475A (en) * | 2018-01-03 | 2018-06-29 | 深圳中兴网信科技有限公司 | Wireless vehicle tracking, system, computer equipment and readable storage medium storing program for executing |
- 2018-08-16: CN CN201810937002.1A patent/CN108932509A/en, status Pending
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413295A (en) * | 2013-07-12 | 2013-11-27 | 长沙理工大学 | Video multi-target long-range tracking method |
CN104318588A (en) * | 2014-11-04 | 2015-01-28 | 北京邮电大学 | Multi-video-camera target tracking method based on position perception and distinguish appearance model |
CN105224912A (en) * | 2015-08-31 | 2016-01-06 | 电子科技大学 | Based on the video pedestrian detection and tracking method of movable information and Track association |
CN106933861A (en) * | 2015-12-30 | 2017-07-07 | 北京大唐高鸿数据网络技术有限公司 | A kind of customized across camera lens target retrieval method of supported feature |
CN106327502A (en) * | 2016-09-06 | 2017-01-11 | 山东大学 | Multi-scene multi-target recognition and tracking method in security video |
CN108073933A (en) * | 2016-11-08 | 2018-05-25 | 杭州海康威视数字技术股份有限公司 | A kind of object detection method and device |
US20180157899A1 (en) * | 2016-12-07 | 2018-06-07 | Samsung Electronics Co., Ltd. | Method and apparatus detecting a target |
CN106599925A (en) * | 2016-12-19 | 2017-04-26 | 广东技术师范学院 | Plant leaf identification system and method based on deep learning |
CN107122439A (en) * | 2017-04-21 | 2017-09-01 | 图麟信息科技(深圳)有限公司 | A kind of video segment querying method and device |
CN107169106A (en) * | 2017-05-18 | 2017-09-15 | 珠海习悦信息技术有限公司 | Video retrieval method, device, storage medium and processor |
CN107480178A (en) * | 2017-07-01 | 2017-12-15 | 广州深域信息科技有限公司 | A kind of pedestrian's recognition methods again compared based on image and video cross-module state |
CN108229475A (en) * | 2018-01-03 | 2018-06-29 | 深圳中兴网信科技有限公司 | Wireless vehicle tracking, system, computer equipment and readable storage medium storing program for executing |
CN108229407A (en) * | 2018-01-11 | 2018-06-29 | 武汉米人科技有限公司 | A kind of behavioral value method and system in video analysis |
Non-Patent Citations (1)
Title |
---|
吴迪 (Wu Di): "Identity Recognition Based on Audio-Visual Multimodal Fusion in Intelligent Environments" (《智能环境下基于音视频多模态融合的身份识别》), 31 March 2018 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109829936A (en) * | 2019-01-29 | 2019-05-31 | 青岛海信网络科技股份有限公司 | A kind of method and apparatus of target tracking |
CN109978918A (en) * | 2019-03-21 | 2019-07-05 | 腾讯科技(深圳)有限公司 | A kind of trajectory track method, apparatus and storage medium |
CN110490905A (en) * | 2019-08-15 | 2019-11-22 | 江西联创精密机电有限公司 | A kind of method for tracking target based on YOLOv3 and DSST algorithm |
CN110517495A (en) * | 2019-09-05 | 2019-11-29 | 四川东方网力科技有限公司 | Confirmation method, device, equipment and the storage medium of track of vehicle classification |
CN112417970A (en) * | 2020-10-22 | 2021-02-26 | 北京迈格威科技有限公司 | Target object identification method, device and electronic system |
CN112465869A (en) * | 2020-11-30 | 2021-03-09 | 杭州海康威视数字技术股份有限公司 | Track association method and device, electronic equipment and storage medium |
CN112465869B (en) * | 2020-11-30 | 2023-09-05 | 杭州海康威视数字技术股份有限公司 | Track association method and device, electronic equipment and storage medium |
CN113378005A (en) * | 2021-06-03 | 2021-09-10 | 北京百度网讯科技有限公司 | Event processing method and device, electronic equipment and storage medium |
CN115439771A (en) * | 2022-07-22 | 2022-12-06 | 太原理工大学 | Improved DSST infrared laser spot tracking method |
CN118379662A (en) * | 2024-04-28 | 2024-07-23 | 北京卓鸷科技有限责任公司 | Method, system and monitoring equipment for re-identifying looking-around target |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108932509A (en) | Cross-scene object retrieval method and apparatus based on video tracking | |
Zhu et al. | Detection and tracking meet drones challenge | |
CN109697435B (en) | People flow monitoring method and device, storage medium and equipment | |
Kalsotra et al. | Background subtraction for moving object detection: explorations of recent developments and challenges | |
CN109344736B (en) | Static image crowd counting method based on joint learning | |
CN109447169A (en) | The training method of image processing method and its model, device and electronic system | |
CN109190508A (en) | A kind of multi-cam data fusion method based on space coordinates | |
US20160260015A1 (en) | Sports formation retrieval | |
CN108256439A (en) | A kind of pedestrian image generation method and system based on cycle production confrontation network | |
Yadav et al. | An improved deep learning-based optimal object detection system from images | |
CN104166841A (en) | Rapid detection identification method for specified pedestrian or vehicle in video monitoring network | |
CN103345492A (en) | Method and system for video enrichment | |
CN109743547A (en) | A kind of artificial intelligence security monitoring management system | |
Ciampi et al. | Counting Vehicles with Cameras. | |
CN111723773A (en) | Remnant detection method, device, electronic equipment and readable storage medium | |
CN103853794B (en) | Pedestrian retrieval method based on part association | |
CN109271932A (en) | Pedestrian based on color-match recognition methods again | |
Luo et al. | Traffic analytics with low-frame-rate videos | |
CN113158891B (en) | Cross-camera pedestrian re-identification method based on global feature matching | |
CN109409250A (en) | A kind of across the video camera pedestrian of no overlap ken recognition methods again based on deep learning | |
CN109902550A (en) | The recognition methods of pedestrian's attribute and device | |
CN111899279A (en) | Method and device for detecting motion speed of target object | |
Fernández et al. | Robust Real‐Time Traffic Surveillance with Deep Learning | |
CN105574545A (en) | Environment image multi-view-angle meaning cutting method and device | |
Gupta et al. | Tree annotations in LiDAR data using point densities and convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181204 |
|
RJ01 | Rejection of invention patent application after publication |