CN106920247A - Target tracking method and device based on a comparison network - Google Patents
Target tracking method and device based on a comparison network
- Publication number
- CN106920247A (application CN201710038541.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- object candidate
- region
- frame image
- tracked target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The present invention discloses a target tracking method and device based on a comparison network. The target tracking method based on a comparison network comprises the following steps: determining a current frame image containing a tracked target; obtaining a tracked target region in the current frame image; obtaining the next frame image after the current frame; obtaining multiple target candidate regions in the next frame image; obtaining a bounding rectangle containing each target candidate region; inputting the tracked target region and the bounding rectangles into a comparison neural network model for comparison; determining a target tracking region among the multiple target candidate regions according to the comparison result; and determining the tracked target in the target tracking region. By performing a comprehensive comparison between the tracked target region and the target candidate regions with the comparison neural network model, the present invention can accurately recognize the tracked target object and rapidly complete the tracking task.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a target tracking method and device based on a comparison network.
Background art
The purpose of target tracking is to obtain the motion trajectory of a specific target in a video sequence. With the rapid spread of computer network video, target tracking has been a continuing hot topic in the field of computer vision and plays a key role in many practical vision systems. Target tracking means that, given the initial state of a tracked target in the first video frame, the exact position of the target in subsequent frames is predicted. At the same time, visual target tracking also serves as a foundation of artificial intelligence, since it can simulate the behavior of human vision.
Target tracking methods in the prior art mainly track a target object after extracting its feature information by means of a detection scheme built on a learned classification task. However, a video stream contains many kinds of image information, and because such methods cannot comprehensively compare the region of interest in the previous frame image with the target regions in the next frame image of the video stream, they must search all video images for targets whose feature information matches the tracked target. This makes the tracking complex, and the tracking task cannot be completed quickly and accurately.
Summary of the invention
Therefore, the technical problem to be solved by the embodiments of the present invention is that target tracking methods in the prior art, which track a target object after extracting its feature information by means of a detection scheme built on a learned classification task, cannot comprehensively compare the region of interest in the previous frame image of a video stream with the target regions in the next frame image, and therefore cannot accurately obtain the tracked target object.
To this end, the embodiments of the present invention provide the following technical solutions:
An embodiment of the present invention provides a target tracking method based on a comparison network, comprising the following steps:
determining a current frame image containing a tracked target;
obtaining a tracked target region in the current frame image;
obtaining the next frame image after the current frame;
obtaining multiple target candidate regions in the next frame image;
obtaining a bounding rectangle containing each target candidate region;
inputting the tracked target region and the bounding rectangles into a comparison neural network model for comparison;
determining a target tracking region among the multiple target candidate regions according to the comparison result.
Optionally, the method further includes: determining the tracked target in the target tracking region.
Optionally, obtaining multiple target candidate regions in the next frame image includes:
obtaining positive samples and negative samples of the current frame image;
detecting edge information of the next frame image;
detecting, in the current frame image, the edge information where the tracked target region coincides with the image edge information.
Optionally, inputting the tracked target region and the bounding rectangles into the comparison neural network model for comparison comprises the following steps:
mapping the bounding rectangle of each target candidate region to the convolution features at the corresponding position in a convolutional layer;
unifying the dimensions of the convolution features in a pooling layer;
feeding the combined convolution features into a fully connected layer.
Optionally, determining a target tracking region among the multiple target candidate regions according to the comparison result includes:
calculating the similarity between the tracked target region and each target candidate region;
obtaining the target candidate region with the maximum score.
An embodiment of the present invention further provides a target tracking device based on a comparison network, including the following units:
a first determining unit, configured to determine a current frame image containing a tracked target;
a first acquiring unit, configured to obtain a tracked target region in the current frame image;
a second acquiring unit, configured to obtain the next frame image after the current frame;
a third acquiring unit, configured to obtain multiple target candidate regions in the next frame image;
a fourth acquiring unit, configured to obtain a bounding rectangle containing each target candidate region;
a comparing unit, configured to input the tracked target region and the bounding rectangles into a comparison neural network model for comparison;
a second determining unit, configured to determine a target tracking region among the multiple target candidate regions according to the comparison result.
Optionally, the device further includes: a third determining unit, configured to determine the tracked target in the target tracking region.
Optionally, the third acquiring unit includes:
a first acquiring module, configured to obtain positive samples and negative samples of the current frame image;
a first detection module, configured to detect edge information of the next frame image;
a second detection module, configured to detect, in the current frame image, the edge information where the tracked target region coincides with the image edge information.
Optionally, the comparing unit includes:
a mapping module, configured to map the bounding rectangle of each target candidate region to the convolution features at the corresponding position in a convolutional layer;
a unifying module, configured to unify the dimensions of the convolution features in a pooling layer;
an input module, configured to feed the combined convolution features into a fully connected layer.
Optionally, the second determining unit includes:
a calculating module, configured to calculate the similarity between the tracked target region and each target candidate region;
a second acquiring module, configured to obtain the target candidate region with the maximum score.
The technical solutions of the embodiments of the present invention have the following advantages:
The present invention provides a target tracking method and device based on a comparison network. The target tracking method based on a comparison network comprises the following steps: determining a current frame image containing a tracked target; obtaining a tracked target region in the current frame image; obtaining the next frame image after the current frame; obtaining multiple target candidate regions in the next frame image; obtaining a bounding rectangle containing each target candidate region; inputting the tracked target region and the bounding rectangles into a comparison neural network model for comparison; determining a target tracking region among the multiple target candidate regions according to the comparison result; and determining the tracked target in the target tracking region. By performing a comprehensive comparison between the tracked target region and the target candidate regions with the comparison neural network model, the present invention can accurately recognize the tracked target object and rapidly complete the tracking task.
Brief description of the drawings
In order to explain the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed for describing the specific embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the target tracking method based on a comparison network in Embodiment 1 of the present invention;
Fig. 2 is a flowchart of obtaining edge peaks in the target tracking method based on a comparison network in Embodiment 1 of the present invention;
Fig. 3 is a flowchart of the comparison step in the target tracking method based on a comparison network in Embodiment 1 of the present invention;
Fig. 4 is a flowchart of determining the target tracking region in the target tracking method based on a comparison network in Embodiment 1 of the present invention;
Fig. 5 is a structural block diagram of the target tracking device based on a comparison network in Embodiment 2 of the present invention;
Fig. 6 is a structural block diagram of the third acquiring unit of the target tracking device based on a comparison network in Embodiment 2 of the present invention;
Fig. 7 is a structural block diagram of the comparing unit of the target tracking device based on a comparison network in Embodiment 2 of the present invention;
Fig. 8 is a structural block diagram of the second determining unit of the target tracking device based on a comparison network in Embodiment 2 of the present invention.
Specific embodiments
The technical solutions of the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that, in the description of the embodiments of the present invention, terms indicating an orientation or positional relationship, such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer", are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of the embodiments of the present invention, and do not indicate or imply that the referenced devices or elements must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present invention. In addition, the terms "first", "second" and "third" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance.
It should also be noted that, in the description of the embodiments of the present invention, unless otherwise expressly specified and limited, the terms "mounted", "connected" and "coupled" should be understood broadly: for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediary, or internal between two elements; and it may be wireless or wired. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific circumstances.
In addition, the technical features involved in the different embodiments of the present invention described below can be combined with each other as long as they do not conflict.
Embodiment 1
This embodiment provides a target tracking method based on a comparison network, which, as shown in Fig. 1, comprises the following steps:
S1, determining a current frame image containing a tracked target. Only several consecutive images input as a video stream constitute a complete video, and the images are sequentially correlated data; a specific target can therefore only be tracked by obtaining the image containing the tracked target in the current frame, and the tracked target can only be found once the position of the current frame image containing it has been determined in the video stream.
Specifically, target tracking usually means that, given the initial state of the tracked target in the first video frame, the state of the target object in subsequent frames is estimated automatically. The human eye can follow a specific target fairly easily over a period of time, but for a machine this task is far from trivial: during tracking the target may undergo drastic deformation, be occluded by other targets, or be disturbed by similar objects, among various other complicated situations. The above current frame contains the image information of the initial frame input from the video stream, or of the previous or next frame at the current time, and this information includes the position and size of the target in the current frame.
S2, obtaining a tracked target region in the current frame image. Specifically, the region of the tracked target (the current patch) is detected in the current frame image; this region is an image block composed of multiple pixels.
S3, obtaining the next frame image after the current frame. The purpose of tracking is to keep up with the object ahead, and the specific object can only be tracked by obtaining the image information frame by frame; further tracking therefore needs to be completed in the next frame, using the tracked target region obtained in the current frame.
S4, obtaining multiple target candidate regions in the next frame image. Target proposal positions are generated in a proposal-generating manner; the purpose is to produce a relatively small candidate set of selection boxes, which serve as the multiple target candidate regions.
As an implementation of the target tracking method of this embodiment, as shown in Fig. 2, step S4 of obtaining multiple target candidate regions in the next frame image includes:
S41, obtaining positive samples and negative samples of the current frame image. A positive sample contains only the target that needs to be found in the picture, i.e., it usually refers to sample information related to the tracked target; a negative sample contains no target that needs to be found, i.e., it usually refers to redundant, irrelevant sample information. Candidate regions other than the correct target region may also be used as negative samples.
S42, detecting edge information of the next frame image; several candidate regions of possible target positions are obtained from the edge information of the next frame image.
S43, detecting, in the current frame image, the edge information where the tracked target region coincides with the image edge information.
Specifically, the edge response of each pixel in the image is obtained with a structured edge detector, which yields a dense edge response map. Non-maximum suppression is then performed to find the edge peaks, which yields a sparse edge map. The edges are then grouped, on the assumption that edge pixels connected by a straight boundary are highly correlated, while edge pixels that are not directly connected, or that are connected through a curve of excessively high curvature, are less correlated. Next, a sliding-window method is used to detect, in the current frame image region, the target candidate regions whose edge information coincides more with that of the multiple target candidate regions, thereby obtaining the specific target candidate regions.
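By way of illustration only, this edge-based proposal step can be sketched with OpenCV's ximgproc contrib module, which provides a structured edge detector and an edge-grouping box generator of the kind described above. The model file path `model.yml`, the box count, and the helper name `edge_box_proposals` are assumptions of this sketch, not details fixed by the patent.

```python
import cv2
import numpy as np

def edge_box_proposals(frame_bgr, model_path="model.yml", max_boxes=50):
    """Sketch of edge-based target candidate generation (Edge Boxes style)."""
    # Structured edge detector: a dense edge response for every pixel.
    detector = cv2.ximgproc.createStructuredEdgeDetection(model_path)
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    edges = detector.detectEdges(rgb)
    # Non-maximum suppression keeps only edge peaks, giving a sparse edge map.
    orientation = detector.computeOrientation(edges)
    edges_nms = detector.edgesNms(edges, orientation)
    # Group edges and score boxes by how much edge information they enclose.
    eb = cv2.ximgproc.createEdgeBoxes()
    eb.setMaxBoxes(max_boxes)
    out = eb.getBoundingBoxes(edges_nms, orientation)
    # Recent OpenCV versions return (boxes, scores); older ones return boxes only.
    return out[0] if isinstance(out, tuple) else out  # each box is (x, y, w, h)
```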
S5, obtaining a bounding rectangle containing each target candidate region. In the obtained target candidate regions with more coinciding edge information, the maximum bounding rectangle (rectangle patch) containing each target candidate region is obtained.
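The patent leaves open how a candidate region is represented; assuming it arrives as a binary pixel mask, a minimal sketch of computing its maximum enclosing rectangle is:

```python
import numpy as np

def candidate_bounding_rect(mask: np.ndarray):
    """Maximum bounding rectangle (x, y, w, h) of a candidate region given as a mask."""
    ys, xs = np.nonzero(mask)  # coordinates of all pixels belonging to the region
    x0, y0 = int(xs.min()), int(ys.min())
    return x0, y0, int(xs.max()) - x0 + 1, int(ys.max()) - y0 + 1
```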
S6, inputting the tracked target region and the bounding rectangles into the comparison neural network model for comparison. This comparison neural network model mainly computes the similarities between the tracked target region and the target candidate regions in a single pass.
As an implementation of the target tracking method based on a comparison network of this embodiment, as shown in Fig. 3, step S6 of inputting the tracked target region and the bounding rectangles into the comparison neural network model for comparison comprises the following steps:
S61, mapping the bounding rectangle of each target candidate region to the convolution features at the corresponding position in the convolutional layer. The convolutional layer here is mainly used to extract convolution features: image blocks are extracted from the video image, features are learned from them, and these features are then used as filters that sweep over the whole image, i.e., convolution is performed row by row. The bounding rectangle of each target candidate region is mapped to its corresponding position, and at the same time the tracked target region and the target candidate regions in the bounding rectangles are compared through shared convolutions.
S62, unifying the dimensions of the convolution features in the pooling layer. The pooling layer is mainly used to unify the dimensions of the convolution features: after the convolutional layer finishes convolving row by row, the dimensions change, so they need to be reduced, which shrinks the features output by the convolutional layer while improving the result.
S63, feeding the combined convolution features into the fully connected layer; the fully connected layer is used to combine the features extracted above.
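A PyTorch sketch of one plausible reading of steps S61-S63 follows: shared convolutions over both frames, ROI pooling to unify the feature dimensions of every bounding rectangle, and fully connected layers that fuse target and candidate features into a similarity score. The layer sizes, the feature concatenation, and the class name `ComparisonNet` are assumptions; the patent does not specify a concrete architecture.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

class ComparisonNet(nn.Module):
    """Minimal comparison network: shared convolutions, ROI pooling, FC head."""

    def __init__(self, channels=64, pooled=7):
        super().__init__()
        # S61: shared convolutional layers extract features once per whole frame.
        self.conv = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # S63: fully connected layers combine target and candidate features.
        self.fc = nn.Sequential(
            nn.Linear(2 * channels * pooled * pooled, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
        self.pooled = pooled

    def forward(self, prev_frame, target_box, next_frame, candidate_boxes):
        # Frames: (1, 3, H, W); boxes: (N, 4) as (x1, y1, x2, y2) image coords.
        f_prev = self.conv(prev_frame)   # the same (shared) convolutions process
        f_next = self.conv(next_frame)   # both the current and the next frame
        scale = f_prev.shape[-1] / prev_frame.shape[-1]
        size = (self.pooled, self.pooled)
        # S62: ROI pooling unifies the feature dimensions of every rectangle.
        t = roi_pool(f_prev, [target_box], size, scale).flatten(1)       # (1, C*p*p)
        c = roi_pool(f_next, [candidate_boxes], size, scale).flatten(1)  # (N, C*p*p)
        pair = torch.cat([t.expand(c.shape[0], -1), c], dim=1)
        return self.fc(pair).squeeze(1)  # one similarity score per candidate
```

Concatenating the pooled target features with each candidate's pooled features lets a single forward pass score all candidates at once, which matches the one-pass comparison described in step S6.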
S7, determining the target tracking region among the multiple target candidate regions according to the comparison result. For example, when a certain vehicle is being tracked, all vehicles on the highway are acquired as positive samples and pedestrians as negative samples; after the similarities to the tracked vehicle are obtained, the region where all the vehicles highly similar to the tracked one are located needs attention as the target tracking region.
As an implementation of the target tracking method based on a comparison network of this embodiment, as shown in Fig. 4, determining the target tracking region among the multiple target candidate regions according to the comparison result includes:
S71, calculating the similarity between the tracked target region and each target candidate region. The similarities between the tracked target region obtained in the current frame image and the multiple target candidate regions are calculated; this can also be understood as calculating the similarity between the patch of the previous frame and each target candidate patch of the next frame, and choosing the target candidate region with the highest similarity score as the target position tracked in the current frame.
S72, obtaining the target candidate region with the maximum score; the target candidate region with the highest score is taken as the tracking result.
S8, determining the tracked target in the target tracking region. The tracked target object is determined according to the determined target tracking region, and the tracking task is completed.
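Tying steps S1-S8 together, a driver loop might look like the sketch below; `edge_box_proposals` and the `model` object are the illustrative helpers from the earlier sketches, and the routine as a whole is an assumed composition rather than code given by the patent.

```python
import cv2
import torch

def track(video_path, init_box, model):
    """Run steps S1-S8 frame by frame; init_box is (x1, y1, x2, y2) in frame one."""
    to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1).float().unsqueeze(0) / 255
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()                               # S1: current frame image
    box = torch.tensor([init_box], dtype=torch.float32)  # S2: tracked target region
    while ok:
        ok, nxt = cap.read()                             # S3: next frame image
        if not ok:
            break
        # S4/S5: edge-based candidates and their bounding rectangles.
        cands = edge_box_proposals(nxt)
        cand_boxes = torch.tensor(
            [[x, y, x + w, y + h] for (x, y, w, h) in cands], dtype=torch.float32)
        # S6: one-pass comparison of the target patch against every candidate.
        with torch.no_grad():
            scores = model(to_tensor(frame), box, to_tensor(nxt), cand_boxes)
        # S7 (S71/S72): the highest-scoring candidate becomes the tracking result.
        box = cand_boxes[scores.argmax()].unsqueeze(0)
        frame = nxt
        yield box.squeeze(0).tolist()                    # S8: tracked target location
```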
Embodiment 2
This embodiment provides a target tracking device based on a comparison network which, as shown in Fig. 5, corresponds to the target tracking method of Embodiment 1 and includes the following units:
a first determining unit 51, configured to determine a current frame image containing a tracked target;
a first acquiring unit 52, configured to obtain a tracked target region in the current frame image;
a second acquiring unit 53, configured to obtain the next frame image after the current frame;
a third acquiring unit 54, configured to obtain multiple target candidate regions in the next frame image;
a fourth acquiring unit 55, configured to obtain a bounding rectangle containing each target candidate region;
a comparing unit 56, configured to input the tracked target region and the bounding rectangles into a comparison neural network model for comparison;
a second determining unit 57, configured to determine a target tracking region among the multiple target candidate regions according to the comparison result.
As an implementation, the target tracking device based on a comparison network of this embodiment further includes: a third determining unit 58, configured to determine the tracked target in the target tracking region.
As an implementation of the target tracking device based on a comparison network of this embodiment, as shown in Fig. 6, the third acquiring unit 54 includes:
a first acquiring module 541, configured to obtain positive samples and negative samples of the current frame image;
a first detection module 542, configured to detect edge information of the next frame image;
a second detection module 543, configured to detect, in the current frame image, the edge information where the tracked target region coincides with the image edge information.
As an implementation of the target tracking device based on a comparison network of this embodiment, as shown in Fig. 7, the comparing unit 56 includes:
a mapping module 561, configured to map the bounding rectangle of each target candidate region to the convolution features at the corresponding position in the convolutional layer;
a unifying module 562, configured to unify the dimensions of the convolution features in the pooling layer;
an input module 563, configured to feed the combined convolution features into the fully connected layer.
As an implementation of the target tracking device based on a comparison network of this embodiment, as shown in Fig. 8, the second determining unit 57 includes:
a calculating module 571, configured to calculate the similarity between the tracked target region and each target candidate region;
a second acquiring module 572, configured to obtain the target candidate region with the maximum score.
Obviously, the above embodiments are merely examples given for clarity of description and are not intended to limit the implementations. Those of ordinary skill in the art can make changes or modifications in other forms on the basis of the above description; it is neither necessary nor possible to exhaustively list all implementations here, and the obvious changes or modifications derived therefrom still fall within the protection scope of the present invention.
Claims (10)
1. A target tracking method based on a comparison network, characterized by comprising the following steps:
determining a current frame image containing a tracked target;
obtaining a tracked target region in the current frame image;
obtaining the next frame image after the current frame;
obtaining multiple target candidate regions in the next frame image;
obtaining a bounding rectangle containing each target candidate region;
inputting the tracked target region and the bounding rectangles into a comparison neural network model for comparison;
determining a target tracking region among the multiple target candidate regions according to the comparison result.
2. The method according to claim 1, characterized by further comprising: determining the tracked target in the target tracking region.
3. The method according to claim 1, characterized in that obtaining multiple target candidate regions in the next frame image includes:
obtaining positive samples and negative samples of the current frame image;
detecting edge information of the next frame image;
detecting, in the current frame image, the edge information where the tracked target region coincides with the image edge information.
4. The method according to claim 1, characterized in that inputting the tracked target region and the bounding rectangles into the comparison neural network model for comparison comprises the following steps:
mapping the bounding rectangle of each target candidate region to the convolution features at the corresponding position in a convolutional layer;
unifying the dimensions of the convolution features in a pooling layer;
feeding the combined convolution features into a fully connected layer.
5. The method according to claim 1, characterized in that determining a target tracking region among the multiple target candidate regions according to the comparison result includes:
calculating the similarity between the tracked target region and each target candidate region;
obtaining the target candidate region with the maximum score.
6. A target tracking device based on a comparison network, characterized by comprising the following units:
a first determining unit, configured to determine a current frame image containing a tracked target;
a first acquiring unit, configured to obtain a tracked target region in the current frame image;
a second acquiring unit, configured to obtain the next frame image after the current frame;
a third acquiring unit, configured to obtain multiple target candidate regions in the next frame image;
a fourth acquiring unit, configured to obtain a bounding rectangle containing each target candidate region;
a comparing unit, configured to input the tracked target region and the bounding rectangles into a comparison neural network model for comparison;
a second determining unit, configured to determine a target tracking region among the multiple target candidate regions according to the comparison result.
7. The device according to claim 6, characterized by further comprising: a third determining unit, configured to determine the tracked target in the target tracking region.
8. The device according to claim 6, characterized in that the third acquiring unit includes:
a first acquiring module, configured to obtain positive samples and negative samples of the current frame image;
a first detection module, configured to detect edge information of the next frame image;
a second detection module, configured to detect, in the current frame image, the edge information where the tracked target region coincides with the image edge information.
9. The device according to claim 6, characterized in that the comparing unit includes:
a mapping module, configured to map the bounding rectangle of each target candidate region to the convolution features at the corresponding position in a convolutional layer;
a unifying module, configured to unify the dimensions of the convolution features in a pooling layer;
an input module, configured to feed the combined convolution features into a fully connected layer.
10. The device according to claim 6, characterized in that the second determining unit includes:
a calculating module, configured to calculate the similarity between the tracked target region and each target candidate region;
a second acquiring module, configured to obtain the target candidate region with the maximum score.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710038541.7A CN106920247A (en) | 2017-01-19 | 2017-01-19 | Target tracking method and device based on a comparison network
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710038541.7A CN106920247A (en) | 2017-01-19 | 2017-01-19 | Target tracking method and device based on a comparison network
Publications (1)
Publication Number | Publication Date |
---|---|
CN106920247A (en) | 2017-07-04 |
Family
ID=59454141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710038541.7A Pending CN106920247A (en) | 2017-01-19 | 2017-01-19 | A kind of method for tracking target and device based on comparison network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106920247A (en) |
2017-01-19: Application CN201710038541.7A filed in China; published as CN106920247A (status: pending)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101290681A (en) * | 2008-05-26 | 2008-10-22 | 华为技术有限公司 | Video object tracking method and device, and automatic video tracking system |
CN105224947A (en) * | 2014-06-06 | 2016-01-06 | 株式会社理光 | Classifier training method and system |
CN104200210A (en) * | 2014-08-12 | 2014-12-10 | 合肥工业大学 | License plate character segmentation method based on parts |
CN104200236A (en) * | 2014-08-22 | 2014-12-10 | 浙江生辉照明有限公司 | Quick target detection method based on DPM (deformable part model) |
CN105260997A (en) * | 2015-09-22 | 2016-01-20 | 北京好运到信息科技有限公司 | Method for automatically obtaining target image |
CN105678338A (en) * | 2016-01-13 | 2016-06-15 | 华南农业大学 | Target tracking method based on local feature learning |
CN105632186A (en) * | 2016-03-11 | 2016-06-01 | 博康智能信息技术有限公司 | Method and device for detecting vehicle queue jumping behavior |
CN105933678A (en) * | 2016-07-01 | 2016-09-07 | 湖南源信光电科技有限公司 | Multi-focal length lens linkage imaging device based on multi-target intelligent tracking |
Non-Patent Citations (3)
Title |
---|
NAM H et al.: "Learning Multi-Domain Convolutional Neural Networks for Visual Tracking", Computer Science *
ZITNICK C L et al.: "Edge Boxes: Locating Object Proposals from Edges", Computer Vision - ECCV 2014 *
KONG Jun et al.: "A compressive sensing target tracking algorithm for difference-of-Gaussian maps", Journal of Infrared and Millimeter Waves *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107609485A (en) * | 2017-08-16 | 2018-01-19 | 中国科学院自动化研究所 | Traffic sign recognition method, storage medium, and processing device |
CN107492115A (en) * | 2017-08-30 | 2017-12-19 | 北京小米移动软件有限公司 | Target object detection method and device |
CN107492115B (en) * | 2017-08-30 | 2021-01-01 | 北京小米移动软件有限公司 | Target object detection method and device |
WO2019042419A1 (en) * | 2017-09-04 | 2019-03-07 | 腾讯科技(深圳)有限公司 | Image tracking point acquisition method and device, and storage medium |
US11164323B2 (en) | 2017-09-04 | 2021-11-02 | Tencent Technology (Shenzhen) Company Limited | Method for obtaining image tracking points and device and storage medium thereof |
CN107992819A (en) * | 2017-11-29 | 2018-05-04 | 青岛海信网络科技股份有限公司 | Method and apparatus for determining structured features of vehicle attributes |
CN107992819B (en) * | 2017-11-29 | 2020-07-10 | 青岛海信网络科技股份有限公司 | Method and device for determining vehicle attribute structural features |
CN108133197A (en) * | 2018-01-05 | 2018-06-08 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating information |
CN108596957A (en) * | 2018-04-26 | 2018-09-28 | 北京小米移动软件有限公司 | Object tracking method and device |
CN108596957B (en) * | 2018-04-26 | 2022-07-22 | 北京小米移动软件有限公司 | Object tracking method and device |
CN111709978A (en) * | 2020-05-06 | 2020-09-25 | 广东康云科技有限公司 | Cross-screen target tracking method, system, device and storage medium |
CN112985263A (en) * | 2021-02-09 | 2021-06-18 | 中国科学院上海微系统与信息技术研究所 | Method, device and equipment for detecting pantograph-catenary geometric parameters |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106920247A (en) | Target tracking method and device based on a comparison network | |
JP6842520B2 (en) | Object detection methods, devices, equipment, storage media and vehicles | |
CN106909885A (en) | Target tracking method and device based on target candidates | |
CN106920248A (en) | Target tracking method and device | |
CN105260712B (en) | Method and system for detecting pedestrians in front of a vehicle | |
Cao et al. | Rapid detection of blind roads and crosswalks by using a lightweight semantic segmentation network | |
CN111027481B (en) | Behavior analysis method and device based on human body key point detection | |
KR20200040665A (en) | Systems and methods for detecting a point of interest change using a convolutional neural network | |
CN105260749B (en) | Real-time target detection method based on direction gradient binary pattern and soft cascade SVM | |
CN108447078A (en) | Interference-aware tracking algorithm based on visual saliency | |
CN107545263B (en) | Object detection method and device | |
CN104590130A (en) | Rearview mirror self-adaptive adjustment method based on image identification | |
CN110765906A (en) | Pedestrian detection algorithm based on key points | |
CN102510734A (en) | Pupil detection device and pupil detection method | |
JP7079358B2 (en) | Target detection methods and devices, computer systems and readable storage media | |
CN104574393A (en) | Three-dimensional pavement crack image generation system and method | |
CN105760846A (en) | Object detection and location method and system based on depth data | |
CN103247038B (en) | Global image information synthesis method driven by a visual cognition model | |
CN109087337B (en) | Long-time target tracking method and system based on hierarchical convolution characteristics | |
CN103455795A (en) | Method for determining area where traffic target is located based on traffic video data image | |
CN110992424A (en) | Positioning method and system based on binocular vision | |
CN104463240A (en) | Method and device for controlling list interface | |
CN104376323A (en) | Object distance determining method and device | |
CN111862511A (en) | Target intrusion detection device and method based on binocular stereo vision | |
CN103006332A (en) | Scalpel tracking method and device and digital stereoscopic microscope system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20170704 |