CN107992826A - People flow detection method based on a deep Siamese network - Google Patents
People flow detection method based on a deep Siamese network
- Publication number
- CN107992826A CN107992826A CN201711250385.7A CN201711250385A CN107992826A CN 107992826 A CN107992826 A CN 107992826A CN 201711250385 A CN201711250385 A CN 201711250385A CN 107992826 A CN107992826 A CN 107992826A
- Authority
- CN
- China
- Prior art keywords
- detection
- target
- frame
- Siamese network
- queue
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a people flow detection method based on a deep Siamese network, comprising the following steps: (1) first create a blank log file and a queue, and perform object detection on the first frame of the video stream using the Faster R-CNN object detection algorithm; (2) obtain the class, time of appearance and class score of each detected object, add the targets to the queue to await tracking, and write each target into the log corresponding to its class; (3) track the targets in the queue using a fully-convolutional Siamese network. The invention combines detection and tracking algorithms: the object classes and positions identified by Faster R-CNN in the detection frame serve as the first frame for the tracking algorithm, and the detection boxes recognized in the detection frame are fed to the fully-convolutional Siamese network to track the objects, finally accomplishing multi-target video recognition and tracking.
Description
Technical field
The present invention relates to a detection method, and specifically to a people flow detection method based on a deep Siamese network.
Background technology
With the development of deep learning and the ever-increasing volume of data, many convolutional neural network architectures with strong fitting ability have emerged: from the relatively shallow AlexNet and VGG, which contain a large number of parameters, to the up-to-152-layer ResNet, which uses fewer parameters. Their accuracy has improved greatly, and they have been applied in industry.
Object detection mainly solves the problem of identifying the position and class of objects in an image. Traditional object detection first selects candidate regions in the image, then extracts features from these regions, and finally feeds the features into a classifier. However, the sliding-window region-selection strategy is untargeted and the windows are redundant, leading to a large computational cost. Region-proposal-based deep learning detection methods greatly reduce the computation and produce higher-quality candidate windows; representative algorithms include R-CNN and Faster R-CNN.
Video tracking must follow the same object precisely and in real time over a period of time against a complex background, under illumination change, motion blur, occlusion, background clutter, and object scale variation. Object tracking is therefore a core technology in fields such as autonomous driving, security and surveillance. Traditional video tracking mainly combines image features with machine learning, for example extracting HOG edge features and feeding them into an SVM classifier for discrimination. With the development of deep learning, methods combining correlation filtering with convolutional neural networks have appeared; although they achieve satisfying tracking accuracy, the large number of network parameters means their real-time speed is often unsatisfactory.
Summary of the invention
It is an object of the invention to provide a people flow detection method based on a deep Siamese network, to solve the problems raised in the background above.
To achieve the above object, the present invention provides the following technical solution:
A people flow detection method based on a deep Siamese network comprises the following steps: (1) first create a blank log file and a queue, and perform object detection on the first frame of the video stream using the Faster R-CNN object detection algorithm; (2) obtain the class, time of appearance and class score of each detected object, add the targets to the queue to await tracking, and write each target into the log corresponding to its class; (3) track the targets in the queue using the fully-convolutional Siamese network; (4) judge whether the time since the last detection frame exceeds a specified threshold: if it does, run detection on the current frame; otherwise continue the tracking operation, the threshold being predicted in advance by a neural network; (5) object consistency check: compute the IoU between each newly detected object box and the region box tracked in the previous frame, i.e. the intersection of the two boxes divided by their union, and if the IoU is less than 0.1, treat the object in the detection box as a new target and write it into the log and the queue; (6) judge whether any target recorded in the log file has been present longer than a threshold time, and if so, remove that object from the log file; this threshold is likewise predicted in advance by a neural network.
As a further scheme of the present invention: in step (3), the fully-convolutional Siamese object tracking algorithm uses a pair of MobileNet networks.
As a further scheme of the invention: in steps (4) and (6), the thresholds are predicted in advance by a neural network.
Compared with the prior art, the beneficial effects of the invention are as follows: the invention combines detection and tracking algorithms. The object classes and positions identified by Faster R-CNN in the detection frame serve as the first frame for the tracking algorithm, and the detection boxes recognized in the detection frame are fed to the fully-convolutional Siamese network to track the objects, finally accomplishing multi-target video recognition and tracking.
Brief description of the drawings
Fig. 1 is the video detection and tracking flow chart of the people flow detection method based on a deep Siamese network.
Fig. 2 is the structure diagram of the Faster R-CNN object detection algorithm in the people flow detection method based on a deep Siamese network.
Fig. 3 is the structure diagram of the fully-convolutional Siamese network in the people flow detection method based on a deep Siamese network.
Embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work belong to the protection scope of the present invention.
Referring to Figs. 1 to 3, in an embodiment of the present invention, a people flow detection method based on a deep Siamese network comprises the following steps: (1) first create a blank log file and a queue, and perform object detection on the first frame of the video stream using the Faster R-CNN object detection algorithm; (2) obtain the class, time of appearance and class score of each detected object, add the targets to the queue to await tracking, and write each target into the log corresponding to its class; (3) track the targets in the queue using the fully-convolutional Siamese network; (4) judge whether the time since the last detection frame exceeds a specified threshold: if it does, run detection on the current frame; otherwise continue the tracking operation, the threshold being predicted in advance by a neural network; (5) object consistency check: compute the IoU between each newly detected object box and the region box tracked in the previous frame, i.e. the intersection of the two boxes divided by their union, and if the IoU is less than 0.1, treat the object in the detection box as a new target and write it into the log and the queue; (6) judge whether any target recorded in the log file has been present longer than a threshold time, and if so, remove that object from the log file; this threshold is likewise predicted in advance by a neural network.
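The log-file and queue bookkeeping of steps (1), (2) and (6) can be sketched as follows. The class and field names are illustrative (not from the patent), and timestamps are passed explicitly so the age threshold of step (6) is easy to exercise:

```python
import time
from collections import deque

class TargetRegistry:
    """Minimal sketch of the log-and-queue bookkeeping in steps (1), (2)
    and (6). Names are illustrative, not from the patent text."""

    def __init__(self, max_age):
        self.queue = deque()   # targets awaiting tracking (step 2)
        self.log = {}          # class name -> list of logged entries
        self.max_age = max_age # threshold time from step (6)

    def add_detection(self, class_name, score, box, now=None):
        """Record a detected object: its class, score, box and time of
        appearance, in both the queue and the per-class log."""
        entry = {"class": class_name, "score": score, "box": box,
                 "time": now if now is not None else time.time()}
        self.queue.append(entry)
        self.log.setdefault(class_name, []).append(entry)

    def expire(self, now=None):
        """Drop targets older than the age threshold (step 6)."""
        now = now if now is not None else time.time()
        for entries in self.log.values():
            entries[:] = [e for e in entries if now - e["time"] <= self.max_age]
        self.queue = deque(e for e in self.queue
                           if now - e["time"] <= self.max_age)
```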
In step (3), the fully-convolutional Siamese object tracking algorithm built on a pair of MobileNet networks is used.
In steps (4) and (6), the thresholds are predicted in advance by a neural network.
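The overall schedule of steps (3) and (4) — re-detect once the time since the last detection frame exceeds the threshold, otherwise keep tracking — can be sketched with stand-in callables. Here `detect` and `track` are placeholders for Faster R-CNN and the Siamese tracker, and a fixed `detect_interval` stands in for the neural-network-predicted threshold:

```python
def run_pipeline(frames, detect, track, detect_interval):
    """Alternate detection and tracking passes over a frame sequence:
    run `detect` when the time since the last detection frame reaches
    `detect_interval`, otherwise run `track` on the previous boxes."""
    boxes, log = None, []
    last_detect = None
    for t, frame in enumerate(frames):
        if last_detect is None or t - last_detect >= detect_interval:
            boxes = detect(frame)        # full detection pass (step 4)
            last_detect = t
            log.append(("detect", t))
        else:
            boxes = track(frame, boxes)  # tracking pass (step 3)
            log.append(("track", t))
    return log
```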
The present invention combines the Faster R-CNN object detection algorithm and the fully-convolutional Siamese tracking algorithm, both of which currently achieve good results in real-time performance and precision, to build a multi-target video detection and tracking platform.
Faster R-CNN consists of two main modules: the RPN candidate-box extraction module and the Fast R-CNN detection module. The RPN uses a fast convolutional neural network, here ResNet, to directly generate region detection boxes and obtain image segments that are likely to contain objects; once such a segment is obtained, it is fed into Fast R-CNN, which detects its class and refines the detection box.
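The RPN's region proposals start from a dense grid of anchor boxes laid over the feature map. A minimal, framework-free sketch of anchor generation (the stride, scales and aspect ratios below are illustrative values, not from the patent):

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride,
                     scales=(64, 128), ratios=(0.5, 1.0, 2.0)):
    """RPN-style anchor generation: at every feature-map cell, place
    boxes of several scales and aspect ratios, centred on the cell
    projected back to image coordinates. Returns (N, 4) x1y1x2y2."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # Centre of this cell in image coordinates.
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w = s * np.sqrt(r)  # width grows with the ratio
                    h = s / np.sqrt(r)  # height shrinks, area stays ~s^2
                    anchors.append([cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2])
    return np.array(anchors)
```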
The fully-convolutional Siamese network mainly contains two MobileNet neural networks used for feature mapping: one input is the ground-truth object detection box from the first frame, and the other input is the search region in a subsequent tracking frame. Both branches are fully-convolutional networks that output two-dimensional feature maps; performing similarity convolution on the two feature maps yields a score map over the search-region space of the tracking frame. The higher the score, the higher the similarity, and finding the position in the tracking frame corresponding to the top score realizes object tracking.
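The similarity convolution described above is a cross-correlation between the exemplar's feature map and the search region's feature map. A minimal NumPy sketch (real trackers run this as a convolution layer inside the network; shapes and names here are illustrative):

```python
import numpy as np

def score_map(exemplar, search):
    """Cross-correlate an exemplar feature map against a larger search
    feature map (both H x W x C): each output cell is the similarity
    score of the exemplar with one window of the search region."""
    eh, ew, _ = exemplar.shape
    sh, sw, _ = search.shape
    out = np.empty((sh - eh + 1, sw - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = search[i:i + eh, j:j + ew]
            out[i, j] = np.sum(window * exemplar)  # similarity score
    return out

def locate(exemplar, search):
    """Return the top-left (row, col) offset of the highest-scoring
    window, i.e. the tracked position in the search region."""
    s = score_map(exemplar, search)
    return np.unravel_index(np.argmax(s), s.shape)
```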
The present invention combines detection and tracking algorithms: the object classes and positions identified by Faster R-CNN in the detection frame serve as the first frame for the tracking algorithm, and the detection boxes recognized in the detection frame are fed to the fully-convolutional Siamese network to track the objects, finally accomplishing multi-target video recognition and tracking.
It is obvious to a person skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the invention can be realized in other specific forms without departing from its spirit or essential attributes. Therefore, from every point of view, the embodiments are to be considered as illustrative and not restrictive, the scope of the invention being defined by the appended claims rather than by the foregoing description; all changes falling within the meaning and range of equivalency of the claims are therefore intended to be embraced by the invention. Any reference sign in a claim should not be construed as limiting the claim involved.
Moreover, it should be appreciated that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of narration is adopted only for clarity. Those skilled in the art should take the specification as a whole, and the technical solutions in the various embodiments may also be suitably combined to form other embodiments understandable to those skilled in the art.
Claims (3)
1. A people flow detection method based on a deep Siamese network, characterised by comprising the following steps: (1) first create a blank log file and a queue, and perform object detection on the first frame of the video stream using the Faster R-CNN object detection algorithm; (2) obtain the class, time of appearance and class score of each detected object, add the targets to the queue to await tracking, and write each target into the log corresponding to its class; (3) track the targets in the queue using a fully-convolutional Siamese network; (4) judge whether the time since the last detection frame exceeds a specified threshold: if it does, run detection on the current frame; otherwise continue the tracking operation, the threshold being predicted in advance by a neural network; (5) object consistency check: compute the IoU between each newly detected object box and the region box tracked in the previous frame, i.e. the intersection of the two boxes divided by their union, and if the IoU is less than 0.1, treat the object in the detection box as a new target and write it into the log and the queue; (6) judge whether any target recorded in the log file has been present longer than a threshold time, and if so, remove that object from the log file; this threshold is likewise predicted in advance by a neural network.
2. The people flow detection method based on a deep Siamese network according to claim 1, characterised in that in step (3) the fully-convolutional Siamese object tracking algorithm uses a pair of MobileNet networks.
3. The people flow detection method based on a deep Siamese network according to claim 1, characterised in that in steps (4) and (6) the thresholds are predicted in advance by a neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711250385.7A CN107992826A (en) | 2017-12-01 | 2017-12-01 | People flow detection method based on a deep Siamese network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711250385.7A CN107992826A (en) | 2017-12-01 | 2017-12-01 | People flow detection method based on a deep Siamese network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107992826A true CN107992826A (en) | 2018-05-04 |
Family
ID=62035112
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711250385.7A Pending CN107992826A (en) | People flow detection method based on a deep Siamese network | 2017-12-01 | 2017-12-01 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107992826A (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108734151A (en) * | 2018-06-14 | 2018-11-02 | 厦门大学 | Robust long-range method for tracking target based on correlation filtering and the twin network of depth |
CN108921099A (en) * | 2018-07-03 | 2018-11-30 | 常州大学 | Moving ship object detection method in a kind of navigation channel based on deep learning |
CN108921880A (en) * | 2018-06-11 | 2018-11-30 | 西安电子科技大学 | A kind of vision multi-object tracking method based on multiple single trackers |
CN109086648A (en) * | 2018-05-24 | 2018-12-25 | 同济大学 | A kind of method for tracking target merging target detection and characteristic matching |
CN109191491A (en) * | 2018-08-03 | 2019-01-11 | 华中科技大学 | The method for tracking target and system of the twin network of full convolution based on multilayer feature fusion |
CN109446889A (en) * | 2018-09-10 | 2019-03-08 | 北京飞搜科技有限公司 | Object tracking method and device based on twin matching network |
CN109523476A (en) * | 2018-11-02 | 2019-03-26 | 武汉烽火众智数字技术有限责任公司 | License plate for video investigation goes motion blur method |
CN109766780A (en) * | 2018-12-20 | 2019-05-17 | 武汉理工大学 | A kind of ship smog emission on-line checking and method for tracing based on deep learning |
CN109800802A (en) * | 2019-01-10 | 2019-05-24 | 深圳绿米联创科技有限公司 | Visual sensor and object detecting method and device applied to visual sensor |
CN110532886A (en) * | 2019-07-31 | 2019-12-03 | 国网江苏省电力有限公司 | A kind of algorithm of target detection based on twin neural network |
CN110634155A (en) * | 2018-06-21 | 2019-12-31 | 北京京东尚科信息技术有限公司 | Target detection method and device based on deep learning |
CN111340850A (en) * | 2020-03-20 | 2020-06-26 | 军事科学院系统工程研究院系统总体研究所 | Ground target tracking method of unmanned aerial vehicle based on twin network and central logic loss |
CN112183675A (en) * | 2020-11-10 | 2021-01-05 | 武汉工程大学 | Twin network-based tracking method for low-resolution target |
CN112581507A (en) * | 2020-12-31 | 2021-03-30 | 北京澎思科技有限公司 | Target tracking method, system and computer readable storage medium |
CN112597795A (en) * | 2020-10-28 | 2021-04-02 | 丰颂教育科技(江苏)有限公司 | Visual tracking and positioning method for motion-blurred object in real-time video stream |
CN112734737A (en) * | 2021-01-18 | 2021-04-30 | 天津大学 | Intelligent capture system for brain glioma key frames based on VGG twin network |
CN112906466A (en) * | 2021-01-15 | 2021-06-04 | 深圳云天励飞技术股份有限公司 | Image association method, system and equipment and image searching method and system |
CN116485758A (en) * | 2023-04-25 | 2023-07-25 | 什维新智医疗科技(上海)有限公司 | Method, system, electronic equipment and medium for determining number of nodules |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106875425A (en) * | 2017-01-22 | 2017-06-20 | 北京飞搜科技有限公司 | A kind of multi-target tracking system and implementation method based on deep learning |
CN107122730A (en) * | 2017-04-24 | 2017-09-01 | 乐金伟 | Free dining room automatic price method |
- 2017-12-01: CN CN201711250385.7A patent/CN107992826A/en, active, Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106875425A (en) * | 2017-01-22 | 2017-06-20 | 北京飞搜科技有限公司 | A kind of multi-target tracking system and implementation method based on deep learning |
CN107122730A (en) * | 2017-04-24 | 2017-09-01 | 乐金伟 | Free dining room automatic price method |
Non-Patent Citations (2)
Title |
---|
ANDREW G. HOWARD 等: "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", 《ARXIV:1704.04861V1》 * |
HU Shousong et al.: "Rough Decision Theory and Applications (《粗糙决策理论与应用》)", 30 April 2006, Beihang University Press * |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109086648A (en) * | 2018-05-24 | 2018-12-25 | 同济大学 | A kind of method for tracking target merging target detection and characteristic matching |
CN108921880A (en) * | 2018-06-11 | 2018-11-30 | 西安电子科技大学 | A kind of vision multi-object tracking method based on multiple single trackers |
CN108921880B (en) * | 2018-06-11 | 2022-05-03 | 西安电子科技大学 | Visual multi-target tracking method based on multiple single trackers |
CN108734151A (en) * | 2018-06-14 | 2018-11-02 | 厦门大学 | Robust long-range method for tracking target based on correlation filtering and the twin network of depth |
CN108734151B (en) * | 2018-06-14 | 2020-04-14 | 厦门大学 | Robust long-range target tracking method based on correlation filtering and depth twin network |
CN110634155A (en) * | 2018-06-21 | 2019-12-31 | 北京京东尚科信息技术有限公司 | Target detection method and device based on deep learning |
CN108921099A (en) * | 2018-07-03 | 2018-11-30 | 常州大学 | Moving ship object detection method in a kind of navigation channel based on deep learning |
CN109191491A (en) * | 2018-08-03 | 2019-01-11 | 华中科技大学 | The method for tracking target and system of the twin network of full convolution based on multilayer feature fusion |
CN109191491B (en) * | 2018-08-03 | 2020-09-08 | 华中科技大学 | Target tracking method and system of full convolution twin network based on multi-layer feature fusion |
CN109446889A (en) * | 2018-09-10 | 2019-03-08 | 北京飞搜科技有限公司 | Object tracking method and device based on twin matching network |
CN109446889B (en) * | 2018-09-10 | 2021-03-09 | 苏州飞搜科技有限公司 | Object tracking method and device based on twin matching network |
CN109523476A (en) * | 2018-11-02 | 2019-03-26 | 武汉烽火众智数字技术有限责任公司 | License plate for video investigation goes motion blur method |
CN109523476B (en) * | 2018-11-02 | 2022-04-05 | 武汉烽火众智数字技术有限责任公司 | License plate motion blur removing method for video detection |
CN109766780A (en) * | 2018-12-20 | 2019-05-17 | 武汉理工大学 | A kind of ship smog emission on-line checking and method for tracing based on deep learning |
CN109800802A (en) * | 2019-01-10 | 2019-05-24 | 深圳绿米联创科技有限公司 | Visual sensor and object detecting method and device applied to visual sensor |
CN110532886A (en) * | 2019-07-31 | 2019-12-03 | 国网江苏省电力有限公司 | A kind of algorithm of target detection based on twin neural network |
CN111340850A (en) * | 2020-03-20 | 2020-06-26 | 军事科学院系统工程研究院系统总体研究所 | Ground target tracking method of unmanned aerial vehicle based on twin network and central logic loss |
CN112597795A (en) * | 2020-10-28 | 2021-04-02 | 丰颂教育科技(江苏)有限公司 | Visual tracking and positioning method for motion-blurred object in real-time video stream |
CN112183675A (en) * | 2020-11-10 | 2021-01-05 | 武汉工程大学 | Twin network-based tracking method for low-resolution target |
CN112183675B (en) * | 2020-11-10 | 2023-09-26 | 武汉工程大学 | Tracking method for low-resolution target based on twin network |
CN112581507A (en) * | 2020-12-31 | 2021-03-30 | 北京澎思科技有限公司 | Target tracking method, system and computer readable storage medium |
CN112906466A (en) * | 2021-01-15 | 2021-06-04 | 深圳云天励飞技术股份有限公司 | Image association method, system and equipment and image searching method and system |
CN112734737A (en) * | 2021-01-18 | 2021-04-30 | 天津大学 | Intelligent capture system for brain glioma key frames based on VGG twin network |
CN116485758A (en) * | 2023-04-25 | 2023-07-25 | 什维新智医疗科技(上海)有限公司 | Method, system, electronic equipment and medium for determining number of nodules |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107992826A (en) | People flow detection method based on a deep Siamese network | |
Varadarajan et al. | Topic models for scene analysis and abnormality detection | |
CN103347167B (en) | A kind of monitor video content based on segmentation describes method | |
CN107943837A (en) | A kind of video abstraction generating method of foreground target key frame | |
CN103440668B (en) | Method and device for tracing online video target | |
CN108665481A (en) | Multilayer depth characteristic fusion it is adaptive resist block infrared object tracking method | |
CN107705324A (en) | A kind of video object detection method based on machine learning | |
CN106529477A (en) | Video human behavior recognition method based on significant trajectory and time-space evolution information | |
Yang et al. | Improved lane detection with multilevel features in branch convolutional neural networks | |
CN110298297A (en) | Flame identification method and device | |
CN111738218B (en) | Human body abnormal behavior recognition system and method | |
CN103679189A (en) | Method and device for recognizing scene | |
CN109389185A (en) | Use the video smoke recognition methods of Three dimensional convolution neural network | |
CN110082821A (en) | A kind of no label frame microseism signal detecting method and device | |
CN110222579B (en) | Video object counting method combining motion law and target detection | |
CN105184229A (en) | Online learning based real-time pedestrian detection method in dynamic scene | |
CN110532862A (en) | Fusion Features group recognition methods based on gate integrated unit | |
CN110472608A (en) | Image recognition tracking processing method and system | |
Jiang et al. | A deep learning framework for detecting and localizing abnormal pedestrian behaviors at grade crossings | |
CN108830204B (en) | Method for detecting abnormality in target-oriented surveillance video | |
Mahurkar et al. | Real-time Covid-19 face mask detection with YOLOv4 | |
Cheng et al. | Visual fire detection using deep learning: A survey | |
Zhang et al. | Complementary networks for person re-identification | |
CN114332163B (en) | High-altitude parabolic detection method and system based on semantic segmentation | |
CN108171187A (en) | A kind of abnormal behaviour automatic identifying method and device based on the extraction of bone point |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180504 |