CN106204646A - Multi-moving-target tracking method based on BP neural network - Google Patents
Multi-moving-target tracking method based on BP neural network
- Publication number
- CN106204646A CN106204646A CN201610514056.8A CN201610514056A CN106204646A CN 106204646 A CN106204646 A CN 106204646A CN 201610514056 A CN201610514056 A CN 201610514056A CN 106204646 A CN106204646 A CN 106204646A
- Authority
- CN
- China
- Prior art keywords
- image
- target
- neural network
- multiple moving targets
- moving object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses a multi-moving-target tracking method based on a BP neural network, and relates to the technical field of computer vision. The method comprises the following steps. Step S1: detecting multiple moving targets by fusing the background difference method and the frame difference method; multi-target detection is divided into three steps: 1) initializing the background model; 2) updating the background with the frame difference method, then performing binarization; 3) performing background subtraction with the background difference method, then binarizing the image. Step S2: further denoising the binarized image and inputting it to the BP neural network for multi-target segmentation. Step S3: multi-moving-target tracking processing based on the BP neural network. The invention meets the robustness and accuracy requirements of moving target detection and tracking in video sequences, reduces the computation of a global search by the neural network, and meets the processing-speed requirement of moving target detection and tracking in video sequences.
Description
Technical field
The present invention relates to the technical field of computer vision, and in particular to a multi-moving-target tracking method based on a BP neural network.
Background art
With the advancement of policies such as China's Safe City construction, and the growing security awareness of users in industries such as transportation, education, and finance, the video surveillance market has grown strongly: the number of cameras is increasing rapidly, and video resources are growing explosively. However, monitoring is still mostly manual, which brings many problems, such as operator fatigue, frequent false alarms and missed alarms, difficult video retrieval, and large amounts of junk data, so the real-time capability of video surveillance systems cannot be used effectively.
To solve these problems, intelligent video surveillance technology has been proposed in recent years and widely studied and applied. An intelligent video surveillance system introduces computer vision techniques into the surveillance system: it automatically detects and tracks targets in the video data stream, analyzes target behavior and records it, controls the surveillance system in real time and raises alarms, so that the computer monitors in place of a human, making the surveillance system intelligent and turning passive monitoring into active monitoring. With intelligent algorithms, such a system can perform round-the-clock monitoring automatically, effectively reducing false-alarm and missed-alarm rates and the amount of junk data. It can raise an alarm before an incident occurs, prompting operators to watch the monitored scene, and afterwards it improves video content retrieval speed and enhances the usability of video information. In summary, making the large number of existing digital video surveillance systems intelligent has begun to attract the attention of researchers, governments, and businesses, and has good application prospects and research value.
The present invention addresses the multi-moving-target tracking problem in intelligent video surveillance systems and, combined with the practical demands of security, studies an accurate, stable, and real-time multi-target tracking method.
For moving target detection, on the basis of an analysis of current target detection algorithms, a fast moving-target detection algorithm combining multiple features is proposed. The algorithm first obtains the activity region of the target by fusing background subtraction and the frame difference method, then applies morphological filtering to obtain the minimum bounding rectangle of the target. Experiments show that the algorithm is simple, fast, and effective; it obtains fairly accurate motion features while greatly reducing operations such as edge detection and image recognition on the image. For moving target tracking, a tracking algorithm based on a BP neural network is proposed. Combining the proposed fast detection algorithm with the BP neural network, on the one hand, takes full advantage of the neural network's strong fault tolerance and fast computation, meeting the robustness and accuracy requirements of moving target detection and tracking in video sequences; on the other hand, the fast detection algorithm reduces the computation of a global search by the neural network, meeting the processing-speed requirement of moving target detection and tracking in video sequences.
The method of the invention is applicable not only to video surveillance but also to fields such as intelligent transportation, medical diagnosis, and intelligent industrial robots.
Summary of the invention
Aiming at the problems of multi-target tracking in current intelligent video surveillance, and in order to improve its accuracy, strengthen the stability of monitoring, and meet real-time demands, the present invention proposes a multi-moving-target tracking method based on a BP neural network.
The technical scheme adopted by the present invention is a multi-moving-target tracking method based on a BP neural network, which comprises the following steps:
1. A multi-moving-target tracking method based on a BP neural network, comprising the following steps:
Step S1: detecting multiple moving targets by fusing the background difference method and the frame difference method. Multi-target detection is divided into three steps: 1) initializing the background model; 2) updating the background with the frame difference method, then performing binarization; 3) performing background subtraction with the background difference method, then binarizing the image;
Step S2: further denoising the binarized image and inputting it to the BP neural network for multi-target segmentation;
Step S3: multi-moving-target tracking processing based on the BP neural network.
Further, step S1 of the present invention comprises the following steps:
Step S1.1: initializing the background model. The initial background image is computed with the multi-frame averaging method and denoted B(x, y, t):
B(x, y, t) = (1/N) Σ_{k=1..N} Ik(x, y)
where Ik(x, y) is the pixel value of the k-th frame at point (x, y), and N is the number of frames averaged.
Step S1.2: updating the background with the frame difference method, then performing binarization. First the current frame is differenced with the previous frame to obtain the frame difference result, i.e. the foreground target, which is binarized as follows:
F(x, y, t+1) = 1, if |I(x, y, t+1) - I(x, y, t)| > T; 0, otherwise
where I(x, y, t+1) and I(x, y, t) are the current and previous frames respectively, |I(x, y, t+1) - I(x, y, t)| is the difference image, and T is a threshold computed by the iterative method. Then F(x, y, t+1) is used to update the background:
B(x, y, t+1) = α I(x, y, t+1) + (1 - α) B(x, y, t), where F(x, y, t+1) = 0
B(x, y, t+1) = B(x, y, t), where F(x, y, t+1) = 1
where α is the update factor, which controls the speed of background updating.
Step S1.3: detecting the moving region with the background difference method. The current frame I(x, y, t+1) is differenced with the extracted background image B(x, y, t) to obtain the foreground moving region, which is binarized with the threshold T (again computed by the iterative method) to obtain D(x, y, t+1):
D(x, y, t+1) = 1, if |I(x, y, t+1) - B(x, y, t)| > T; 0, otherwise
Further, step S2 of the present invention also includes: after the moving region is detected, segmenting the moving objects within it. The binary image is processed to obtain complete moving-target regions; this process includes denoising the detection result with the basic methods of morphology. After denoising, a labeling algorithm marks each moving target in the moving region. The concrete operating steps are as follows:
Step S2.1: morphological operations. The basic morphological methods comprise four operations: erosion, dilation, opening, and closing.
Step S2.2: multi-target segmentation algorithm. After noise has been removed from the binarized foreground of the moving region with morphological operations, each moving target is segmented out and labeled, and the position of each moving target is obtained. Targets are labeled in sequential order; the steps of the segmentation algorithm are:
S2.2.1: scan the image from top-left to bottom-right;
S2.2.2: when an unlabeled foreground pixel is found, label it;
S2.2.3: if none of the 8 pixels in its neighborhood is a foreground pixel, return to S2.2.1; if there are foreground pixels, label them with the same sequence number, and repeat this step for those neighborhood pixels. After the labeling of the above steps, all moving targets are separated and recorded.
Further, step S3 of the present invention also includes the following: performing target feature selection and extraction on the targets segmented in step S2; the BP neural network classifier performs a local search in the moving region of each target, generates a matching binary image from the targets found by the search, and finally outputs the matching binary images as the result image.
Further, the BP neural network classifier of the present invention can be divided into two stages, a training stage and a detection stage. In the training stage, the system builds a discriminant function and decision rules from samples of known classes, which are then used to classify and identify samples; the detection stage is responsible for classifying samples of unknown class. The workflow of the BP neural network classifier is as follows.
The training stage includes:
(1) Inputting training samples: training samples are already identified, and form the image set used to formulate the classification discriminant function;
(2) Image preprocessing, including removing noise and interference in the image, image restoration or image enhancement, image filtering, etc.;
(3) Feature extraction: digitizing certain features of the objects to be identified;
(4) Feature selection: analyzing a batch of samples selected from the preprocessed images, and choosing from them a feature set suitable for the classification requirements;
(5) Adaptation: the system finds an effective classification rule from the training sample set.
The detection stage includes:
(1) Inputting unknown samples: unknown samples, i.e. samples to be identified, are drawn at random from the video image sequence;
(2) Image preprocessing, as in the training stage;
(3) Feature extraction, as in the training stage;
(4) Feature selection, as in the training stage;
(5) Outputting the result.
Further, the threshold T of the present invention is chosen by an adaptive iterative method, computed as follows:
Compute the gray-level statistical histogram H[i] of the image:
H[i] = n_i, i = 0, 1, ..., L-1
where L is the number of gray levels (typically L = 256) and n_i is the number of pixels with gray level i.
Obtain the maximum max and minimum min pixel gray values of the difference image from the gray-level histogram.
Set the initial threshold as the starting point of the iteration:
T_0 = (max + min) / 2
Segment the difference image with T_0, and compute the mean gray values M_b and M_f of the background region (gray levels ≤ T_0) and the foreground region (gray levels > T_0).
Compute the new iteration threshold:
T = (M_b + M_f) / 2
If T ≠ T_0, set T_0 = T and return to the segmentation step; if T = T_0, the algorithm terminates and T is the final threshold.
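The iterative threshold selection above can be sketched as follows. This is a minimal NumPy illustration; the convergence tolerance `eps` is an assumption added so the loop terminates on floating-point data.

```python
import numpy as np

def adaptive_threshold(diff_img, eps=0.5):
    """Adaptive iterative threshold selection on a grayscale difference image."""
    g = diff_img.astype(np.float64)
    # starting point: midpoint of the gray-value range, T0 = (max + min) / 2
    t0 = (g.max() + g.min()) / 2.0
    while True:
        fg = g[g > t0]                      # foreground region
        bk = g[g <= t0]                     # background region
        m_f = fg.mean() if fg.size else t0  # mean foreground gray value
        m_b = bk.mean() if bk.size else t0  # mean background gray value
        t = (m_b + m_f) / 2.0               # new iteration threshold
        if abs(t - t0) < eps:               # T == T0 (within eps): terminate
            return t
        t0 = t
```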
Further, the design steps of the BP neural network classifier of the present invention are as follows:
1) Image preprocessing, which mainly includes image noise removal, image enhancement, image binarization, and morphological filtering;
2) Analysis of target features: a video image sequence provides spatial features and temporal features; for spatial features, a target extraction method based on shape information is used; for temporal features, a target extraction method based on motion characteristics is used;
3) Establishing the nonlinear mapping relation: the input vector is built with a pixel-neighborhood approach; a 3 × 3 template is slid over the image, sampling the image with its center as the sampling point, thereby constructing a neural network with a 9-dimensional input and a 1-dimensional output. For training, the first 10 frames of any n-frame video image sequence are chosen as the training sample set, and one of those frames is then used to construct the input feature set of the BP network;
4) The BP network, which comprises the input layer, hidden layers, the output layer, the transfer functions between layers, and the choice of training method and parameters. Apart from the input and output layers, the number of layers can be set arbitrarily as the system requires; at minimum the network contains only one hidden layer. The input layer has 9 input nodes and the output layer has 1 output node. The optimal number of hidden nodes is determined by trial and error, starting from the formula:
n = sqrt(n_1 + n_0) + b
where n is the number of hidden nodes, n_1 the number of input nodes, n_0 the number of output nodes, and b a constant with value 1 to 10;
5) Training method and parameter selection. The network is trained with the additional-momentum BP algorithm, which introduces a momentum factor on the basis of the gradient descent algorithm and adjusts each correction according to the previous correction. When the previous correction was too large, the algorithm reduces the current correction so that the correction keeps moving in the direction of convergence, which damps oscillation; when the previous correction was too small, the algorithm enlarges the current correction, which accelerates the correction.
The weight update rule with the momentum term is:
Δω(k+1) = -η ∂E/∂ω + β Δω(k)
ω(k+1) = ω(k) + Δω(k+1)
where E is the system error, ω the weight being corrected, k the iteration index, η the learning step, and β the momentum (learning speed) coefficient. β and η are known coefficients that can generally be tuned to the actual system.
Compared with the prior art, the beneficial effects of the invention are:
1. The strong fault tolerance and fast computation of the neural network are fully exploited, meeting the robustness and accuracy requirements of moving target detection and tracking in video images;
2. The fast moving-target detection algorithm reduces the computation of a global search by the neural network, meeting the processing-speed requirement of moving target detection and tracking in video images.
Brief description of the drawings
Fig. 1 is an overall schematic diagram of the algorithm described in the embodiment;
Fig. 2 is a flow chart of the BP neural network system described in the embodiment;
Fig. 3 is a schematic diagram of the BP neural network classifier described in the embodiment;
Fig. 4 is a structure diagram of the BP neural network described in the embodiment.
Detailed description of the invention
With reference to Figs. 1-4, taking common multi-target tracking as an example, the details are as follows. The frame difference method, i.e. temporal differencing, uses the inter-frame difference of images that are consecutive or separated by a certain number of frames to determine the changed regions in the image, and thus to detect moving targets. Generally, the frame difference method performs a difference operation on two consecutive frames or on several frames, then binarizes and filters the difference image so that possible moving regions are detected, thereby detecting the moving targets.
The difference image obtained by the frame difference method can be expressed as:
Dk(x, y) = |Ik+1(x, y) - Ik(x, y)|   (1)
where Dk(x, y), Ik+1(x, y), and Ik(x, y) are the inter-frame difference image, the (k+1)-th frame, and the k-th frame respectively, Tk(x, y) is the target image after binarization, and T is a preset threshold. Generally, after the inter-frame difference image Dk(x, y) is computed with formula (1), formula (2) is used for binarization, finally giving the corresponding moving target region:
Tk(x, y) = 1, if Dk(x, y) > T; 0, otherwise   (2)
The background difference method differences the current frame of the video stream with a pre-built background frame; the regions that differ from the background image are the targets to be detected. The key to the background difference method is how to obtain the background image, perform background modeling, and keep it updated; only when an accurate background image is obtained are the various shortcomings of the frame difference method completely overcome. Background subtraction can be expressed as:
Dk(x, y) = |Ik(x, y) - Bk(x, y)|   (3)
where Dk(x, y), Ik(x, y), and Bk(x, y) are the foreground image, the frame image, and the background image at frame k, respectively.
Step S1: detecting multiple moving targets by fusing the background difference method and the frame difference method.
S1.1 Initializing the background model. The initial background image is computed with the multi-frame averaging method and denoted B(x, y, t):
B(x, y, t) = (1/N) Σ_{k=1..N} Ik(x, y)
where Ik(x, y) is the pixel value of the k-th frame at point (x, y) and N is the number of frames averaged.
S1.2 Updating the background with the frame difference method, then binarizing. First the current frame is differenced with the previous frame to obtain the frame difference result, i.e. the foreground target, which is binarized as follows:
F(x, y, t+1) = 1, if |I(x, y, t+1) - I(x, y, t)| > T; 0, otherwise
where I(x, y, t+1) and I(x, y, t) are the current and previous frames respectively, |I(x, y, t+1) - I(x, y, t)| is the difference image, and T is a threshold computed by the iterative method. Then F(x, y, t+1) is used to update the background:
B(x, y, t+1) = α I(x, y, t+1) + (1 - α) B(x, y, t), where F(x, y, t+1) = 0
B(x, y, t+1) = B(x, y, t), where F(x, y, t+1) = 1
where α is the update factor, which controls the speed of background updating.
If the threshold T is chosen too large, moving foreground objects are filtered into the background, causing missed detections; if it is chosen too small, large amounts of background noise are detected as moving targets, causing many false alarms. In practice, since illumination, weather, and the characteristics of the moving targets all change, setting the threshold manually is unrealistic, so the iterative method used in the present invention is an adaptive iterative method, computed as follows:
1. Compute the gray-level statistical histogram H[i] of the image:
H[i] = n_i, i = 0, 1, ..., L-1
where L is the number of gray levels (typically L = 256) and n_i is the number of pixels with gray level i.
2. Obtain the maximum max and minimum min pixel gray values of the difference image from the gray-level histogram.
3. Set the initial threshold as the starting point of the iteration:
T_0 = (max + min) / 2
4. Segment the difference image with T_0 and compute the mean gray values M_b and M_f of the background and foreground regions.
5. Compute the new iteration threshold:
T = (M_b + M_f) / 2
6. If T ≠ T_0, set T_0 = T and return to step 4; if T = T_0, the algorithm terminates and T is the final threshold. With an adaptive threshold, binarization makes effective use of the global and local information of the image and achieves good results.
S1.3 Detecting the moving region with the background difference method. The current frame I(x, y, t+1) is differenced with the extracted background image B(x, y, t) to obtain the foreground moving region, which is binarized with the threshold T (again computed by the iterative method described above) to obtain D(x, y, t+1):
D(x, y, t+1) = 1, if |I(x, y, t+1) - B(x, y, t)| > T; 0, otherwise
Step S2: further denoising the binarized image and performing multi-target segmentation.
After the moving region is detected, the moving objects within it must be segmented. Because of the interference of noise and slight background changes, the binary image obtained does not always contain the complete contours of the moving targets, so the binary image must first be processed to obtain complete moving-target regions. In the present invention, the basic methods of morphology are used to denoise the detection result. After denoising, a labeling algorithm marks each moving target in the moving region.
S2.1 Morphological operations
In multi-target detection, the present invention mainly uses the four basic morphological operations: erosion, dilation, opening, and closing.
S2.1.1 Erosion
Erosion is defined as:
A ⊖ B = { z | (B)z ⊆ A }
This formula states that the erosion of A by B is the set of all points z such that B, translated by z, is contained in A. Erosion shrinks or thins the objects in a binary image; it can be regarded as a morphological filtering operation that filters some small image details out of the image, so erosion can remove some of the interference information in the target detection image.
S2.1.2 Dilation
Dilation is defined as:
A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }
where B̂ is the reflection of B about its origin, translated by z. Unlike erosion, dilation enlarges or thickens the objects in a binary image, connecting broken pieces and filling the internal holes of the detected objects.
S2.1.3 Opening and closing: erosion shrinks or thins the objects in a binary image, while dilation enlarges or thickens them. Opening and closing are cascade operations of dilation and erosion applied to the image. Opening smooths object contours, breaks narrow connections, and eliminates thin protrusions; closing also smooths object contours, but fills narrow gaps and eliminates small holes.
The opening operation is erosion followed by dilation, defined as:
A ∘ B = (A ⊖ B) ⊕ B
The closing operation is dilation followed by erosion, defined as:
A • B = (A ⊕ B) ⊖ B
By applying opening and closing, the noise in the binary image is removed, leaving fairly accurate moving-target regions.
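The four operations above can be sketched in plain NumPy for a 3 × 3 structuring element. This is a minimal illustration of the definitions (a symmetric element, so reflection equals the element itself); production code would normally use a library routine instead.

```python
import numpy as np

SE = np.ones((3, 3), dtype=bool)  # 3x3 structuring element, symmetric about its origin

def erode(img, se=SE):
    """Erosion: keep a pixel only if the element, centered there, fits inside the foreground."""
    h, w = img.shape
    pad = np.pad(img.astype(bool), 1, constant_values=False)
    out = np.zeros((h, w), dtype=bool)
    for r in range(h):
        for c in range(w):
            out[r, c] = np.all(pad[r:r+3, c:c+3][se])
    return out

def dilate(img, se=SE):
    """Dilation: set a pixel if the (reflected) element hits any foreground pixel."""
    h, w = img.shape
    pad = np.pad(img.astype(bool), 1, constant_values=False)
    out = np.zeros((h, w), dtype=bool)
    for r in range(h):
        for c in range(w):
            out[r, c] = np.any(pad[r:r+3, c:c+3][se])
    return out

def opening(img):   # erosion then dilation: removes small specks
    return dilate(erode(img))

def closing(img):   # dilation then erosion: fills small holes
    return erode(dilate(img))
```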
S2.2 Multi-target segmentation algorithm
Multi-target segmentation means that, after noise has been removed from the binarized foreground of the moving region with morphological operations, each moving target is segmented out and labeled, and the position information of each moving target is obtained. To accomplish the multi-target tracking task described herein, it is necessary to segment and label the detected moving targets. Targets are usually labeled in sequential order; the segmentation algorithm proposed by the present invention is as follows:
S2.2.1. Scan the image from top-left to bottom-right;
S2.2.2. When an unlabeled foreground pixel is found, label it;
S2.2.3. If none of the 8 pixels in its neighborhood is a foreground pixel, return to step S2.2.1; if there are foreground pixels, label them with the same sequence number, and repeat this step for those neighborhood pixels.
After the labeling of the above steps, all moving targets are separated and recorded.
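The scan-and-grow labeling above can be sketched as follows. This is an illustrative implementation using an explicit stack for the repeated neighborhood step (S2.2.3); the bounding-box output is an assumption added to show how each target's position information might be recorded.

```python
import numpy as np

def label_targets(fg):
    """Sequentially label 8-connected foreground regions of a binary image.
    Returns the label image and {label: (rmin, cmin, rmax, cmax)} bounding boxes."""
    h, w = fg.shape
    labels = np.zeros((h, w), dtype=int)
    boxes = {}
    next_label = 0
    for r0 in range(h):                       # S2.2.1: scan top-left to bottom-right
        for c0 in range(w):
            if fg[r0, c0] and labels[r0, c0] == 0:
                next_label += 1               # S2.2.2: new unlabeled foreground pixel
                stack = [(r0, c0)]
                labels[r0, c0] = next_label
                rmin = rmax = r0; cmin = cmax = c0
                while stack:                  # S2.2.3: grow through the 8-neighborhood
                    r, c = stack.pop()
                    rmin, rmax = min(rmin, r), max(rmax, r)
                    cmin, cmax = min(cmin, c), max(cmax, c)
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < h and 0 <= cc < w
                                    and fg[rr, cc] and labels[rr, cc] == 0):
                                labels[rr, cc] = next_label
                                stack.append((rr, cc))
                boxes[next_label] = (rmin, cmin, rmax, cmax)
    return labels, boxes
```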
Step S3: multi-moving-target tracking algorithm based on the BP neural network. The fast moving-target detection algorithm that combines the improved background difference method with the frame difference method can, on the one hand, detect moving targets effectively and quickly, and on the other hand narrow the search range during tracking. The BP neural network, through its distributed information storage and large-scale adaptive parallel processing, can meet the real-time processing requirements for large volumes of target-image data, while its adaptivity, self-organization, self-learning, and error tolerance allow more accurate target matching during tracking. Combining the two, the moving-target detection and tracking system based on the BP neural network satisfies the requirements of robustness, accuracy, and speed for moving-target detection and tracking. The overall combined flow chart is shown in Figure 2. When targets are tracked with the BP neural network technique, the BP neural network classifier is its foundation. The classifier operates mainly in two stages: a training stage and a detection stage. In the training stage, the system constructs a discriminant function and decision rules from samples of known class, which are then used for classification and recognition of identified samples; the detection stage is responsible for classifying samples of unknown class. The specific flow is shown in Figure 3.
1) Training samples are samples that have already been identified, i.e. the image collection used to formulate the discriminant classification function. Unknown samples are the samples to be identified, drawn randomly from the video image sequence.
2) Preprocessing includes removing noise and interference present in the image, image restoration or image enhancement, image filtering, etc. Different preprocessing methods are selected for different models. Preprocessing is a very important step: if it is done poorly, recognition of moving targets will be inaccurate, or even impossible.
3) Feature extraction digitizes certain features (whether physical or morphological) of the objects to be identified.
4) Feature selection analyzes a batch of preprocessed sample images and chooses from them a feature set suitable for the classification requirements. Feature extraction and feature selection are the keys to pattern recognition. If the selected feature set can distinguish the patterns, the recognition task is almost complete. If, after a certain feature set has been selected and used for classification, no satisfactory result can be obtained, the feature extraction method must be improved and the features re-selected; this is repeated until a satisfactory result is obtained.
5) The adaptive part means that the system finds an effective classification rule from the training sample set. The basic procedure is: after decision rules have been formulated from the training samples according to some criterion, the training samples are checked one by one for errors; if there are errors, the decision rules are further improved, until a satisfactory result is obtained. The more samples are used, the better the performance of the classifier.
S3.1 Design and implementation of the BP neural network classifier: this mainly involves 5 key issues: 1) image preprocessing; 2) analysis of the tracked target's features; 3) establishment of the nonlinear mapping relation; 4) the specific design method of the BP neural network; 5) parameter setting after BP neural network training.
S3.1.1 Image preprocessing
The main function of image preprocessing is to highlight the target's features and make it easier for the network to extract them. Preprocessing mainly includes the noise removal, image enhancement, image binarization, and morphological filtering described in steps S1 and S2. A preprocessed image has most of the invalid data removed, which greatly reduces the image-processing workload and makes selecting samples for the neural network much simpler.
S3.1.2 Analysis of target features
Once the target to be detected and tracked has been determined, the key problem for accurate matching is finding suitable features. Feature extraction and representation are the key to target recognition: individual targets can only be recognized on the basis of the key features specific to them. Feature extraction means extracting only the information relevant to those features, and tracking recognition then distinguishes and decides on the basis of whether, or to what degree, those features are present.
Two kinds of usable features are given: spatial features and temporal features (i.e. motion features). In a video image sequence, spatial features are those closely coupled to the target within a single frame, such as the target's size and position and the dominant gray-level features of the target region; for spatial features there are target extraction methods based on shape information. Temporal features are those closely related to the tracking process, such as the target's velocity and changes in target size; these features provide dynamic information about how the target changes over time and cannot be obtained from a single frame. For temporal features there are target extraction methods based on motion characteristics. A reasonable feature extraction method is the prerequisite step of most automatic image recognition; it constitutes the input of the neural-network-based detection method.
S3.1.3 Establishing the nonlinear mapping relation
The present invention adopts a "neighborhood-to-pixel method" to process the input vector. This method improves the computation speed of the neural network and simplifies the algorithm model. The "neighborhood-to-pixel method" assumes that each pixel value in the target image is related only to the pixel values of the corresponding neighborhood (e.g. a 3 × 3 neighborhood) in the input image. Taking the 1920 × 960 video images input to the system as an example, sliding a neighborhood of 3 × 3 pixels over the image point by point in row or column order yields (1920 − 3 + 1) × (960 − 3 + 1) = 1837444 neighborhoods of 3 × 3 pixels. Taking the centers of these neighborhoods as sample points then yields 1837444 pixel values. Through the above processing, a mapping from the 1920 × 960 pixels to 1837444 pixels is established.
From the above, the nonlinear mapping relation can be established by the following method: a 3 × 3 template slides over the image and the image is sampled with the template center as the sample point, thereby constructing a neural network with a 9-dimensional input and a 1-dimensional output. During training of the neural network of the present invention, the first 10 frames of any n-frame video image sequence are chosen as the training sample set, and one of these frames is then used to construct the input feature set of the BP network.
S3.1.4 Specific design method of the BP neural network
The design of the BP network mainly involves the following aspects: the input layer, the hidden layer, the output layer, and the transfer functions between the layers.
1) Number of network layers
From the basic properties of neural networks it is known that, if the number of hidden nodes is unrestricted, a BP neural network can realize any nonlinear mapping even with only one hidden layer. Therefore, in a BP neural network, apart from the input and output layers, the number of network layers can be set arbitrarily as the system requires, with a minimum of one hidden layer.
2) Numbers of input and output nodes
The role of the input layer is to receive external input data; it acts as a buffer memory. The number of input nodes is determined by the dimension of the input vector. Because the "neighborhood-to-pixel method" of the present invention samples the image with a 3 × 3 template, the input layer has 9 input nodes, and the output layer has 1 output node.
3) Number of hidden nodes
The role of the hidden nodes is to extract the regularities contained in the samples and to store them; each hidden node owns several weights, which are the parameters that strengthen the network's mapping ability. When there are too few hidden nodes, the regularities in the training samples cannot be well summarized and embodied; when there are too many, the network memorizes the irregular content in the samples, which reduces its generalization ability, and the training time of the network also increases. The optimal number of hidden nodes is usually determined by trial and error. The following empirical formula was derived by the present invention from considerations such as past designers' training time and recognition rate:
In the formula above, n is the number of hidden nodes, n1 the number of input nodes, n0 the number of output nodes, and b a constant taking values from 1 to 10.
4) Transfer function
The transfer functions commonly used in BP networks are shown in Figure 4; different transfer functions can generally be selected from the table according to system requirements.
5) Training method and parameter selection
Because the standard BP algorithm suffers from slow convergence and limited convergence precision, the present invention trains the network with the additive-momentum BP algorithm.
The additive-momentum BP algorithm introduces a momentum factor on top of the gradient descent algorithm and then adjusts the current correction according to the result of the previous correction. When the previous correction was too large, the algorithm reduces the current correction so that the correction keeps moving in the direction of convergence, which damps oscillation; when the previous correction was too small, the algorithm enlarges the current correction, which accelerates the correction.
The weight update rule with the momentum term is:
ω(k+1) = ω(k) + Δω(k+1)
In the formula, E is the system error, ω the weight being corrected, k an arbitrary iteration index, η the learning step, and β the learning rate. β and η are known coefficients that can generally be tuned to the actual system. With the additive-momentum BP algorithm a larger learning rate can generally be used; this does not cause the learning process to diverge, and it speeds up convergence and shortens training time.
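Since the Δω formula itself is reproduced only as an image, the sketch below uses the standard additive-momentum rule with the symbols of the text, Δω(k+1) = −η·∂E/∂ω + β·Δω(k) followed by ω(k+1) = ω(k) + Δω(k+1); this is an assumed reading, not the patent's exact formula:

```python
def momentum_step(w, grad, prev_delta, eta=0.1, beta=0.9):
    """One additive-momentum BP weight update: the momentum term
    beta * prev_delta folds part of the previous correction into the
    current one, damping oscillation when the gradient reverses sign
    and reinforcing progress when it persists."""
    delta = -eta * grad + beta * prev_delta   # delta-omega(k+1)
    return w + delta, delta                   # omega(k+1) = omega(k) + delta-omega(k+1)

# Minimising E(w) = w**2 (gradient 2w): repeated momentum steps drive
# the weight toward the minimum at w = 0.
w, d = 2.0, 0.0
for _ in range(200):
    w, d = momentum_step(w, 2 * w, d)
```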
The BP neural network structure designed according to the above 5 aspects is shown in Figure 4, where P1 to P9 form the input vector, a is the output vector, ω1 and ω2 denote the network weight vectors of the first and second layers respectively, and f1 and f2 are the transfer functions of the hidden layer and the output layer. After the target-point features have been obtained and determined, precise feature selection is performed, the position of the moving target is determined, and intelligent tracking and recognition can be carried out on the basis of these feature points.
In summary, a system implementing the above algorithms can accurately extract moving-target regions and then use the BP neural network classifier to accurately detect the positions of the moving targets and perform intelligent multi-target tracking.
The method proposed in the present invention can in practice be embedded in an FPGA, allowing the development of a high-definition camera surveillance system with intelligent multi-target tracking.
The above embodiments serve only to explain the technical solution of the present invention; the scope of protection claimed by the present invention is not limited to the implementing system and concrete steps described in the above embodiments. Therefore, technical solutions that merely substitute the concrete formulas and algorithms of the above embodiments while remaining consistent in substance with the method of the present invention all belong to the scope of protection of the present invention.
Claims (7)
1. A multi-moving-target tracking method based on a BP neural network, characterized in that the method comprises the steps of:
Step S1: detecting multiple moving targets by fusing the background difference method and the frame difference method; multi-moving-target detection is mainly divided into three steps: 1) initializing the background model; 2) updating the background with the frame difference method, then performing binarization; 3) performing background subtraction with the background difference method, then binarizing the image;
Step S2: further denoising the binarized image and inputting it to the BP neural network for multi-target segmentation;
Step S3: multi-moving-target tracking processing based on the BP neural network.
2. The multi-moving-target tracking method based on a BP neural network according to claim 1, characterized in that step S1 further comprises the following steps:
Step S1.1: initializing the background model; the initial background image is computed by multi-frame averaging and denoted B(x, y, t);
In the formula, Ik(x, y) denotes the pixel value of the k-th frame image at point (x, y), and N is the number of frames averaged;
Step S1.2: updating the background with the frame difference method, then performing binarization; first the current frame is differenced with the previous frame to obtain the frame difference result, i.e. the foreground target, which is binarized according to the following formula:
where I(x, y, t+1) and I(x, y, t) are the current-frame and previous-frame images respectively, |I(x, y, t+1) − I(x, y, t)| is the difference image, and T is the threshold, computed by the iterative threshold method; then B(x, y, t+1) is used to update the background according to the following formula:
In the formula, α is the update factor, representing the speed of background updating;
Step S1.3: detecting moving regions with the background difference method; the current frame I(x, y, t+1) is differenced with the extracted background image B(x, y, t) to obtain the foreground moving region, which is then binarized with threshold T to obtain D(x, y, t+1); the threshold here is likewise computed by the said iterative method:
3. The multi-moving-target tracking method based on a BP neural network according to claim 2, characterized in that step S2 specifically further comprises: after the moving regions have been detected, segmenting them; the binary image undergoes certain processing to obtain complete moving-target regions; this process includes denoising the detection result with the basic methods of morphology; after denoising is complete, an algorithm is applied within the moving regions to label each moving target; the concrete operation steps are as follows:
Step S2.1: morphological operations; said basic methods of morphology include four kinds: erosion, dilation, opening, and closing;
Step S2.2: multi-target segmentation algorithm; after the binarized foreground image of the moving regions has been denoised with morphological operations, each moving target is segmented and labeled, yielding the position information of each moving target; labels are assigned sequentially, and the segmentation algorithm steps are as follows:
S2.2.1: scan the image from top-left to bottom-right;
S2.2.2: when an unlabeled foreground pixel is encountered, label it;
S2.2.3: if none of the 8 pixels in the neighborhood of this pixel is a foreground pixel, return to S2.2.1; if there are foreground pixels, assign the same sequence number to those neighborhood pixels and repeat this step for each of them; after labeling by the above steps, all moving targets are separated and recorded.
4. The multi-moving-target tracking method based on a BP neural network according to claim 3, characterized in that said step S3 further comprises the following: target feature selection and extraction are performed on the targets segmented in step S2; the BP neural network classifier performs a local search within the moving regions of the obtained targets, generates a matching binary image from the target search results, and finally outputs an effect image generated from the matching binary image.
Said BP neural network classifier comprises two stages: a training stage and a detection stage; in the training stage the system constructs a discriminant function and decision rules from samples of known class, which are used for classification and recognition of identified samples; the detection stage is responsible for classifying samples of unknown class.
5. The multi-moving-target tracking method based on a BP neural network according to claim 4, characterized in that said training stage comprises:
(1) inputting training samples; the training samples have already been identified and constitute the image collection used to formulate the discriminant classification function;
(2) image preprocessing, including removal of noise and interference present in the image, image restoration or image enhancement, image filtering, etc.;
(3) feature extraction, digitizing certain features of the objects to be identified;
(4) feature selection, analyzing a batch of preprocessed sample images and choosing from them a feature set suitable for the classification requirements;
(5) the adaptive part, in which the system finds an effective classification rule from the training sample set;
said detection stage comprises:
(1) inputting unknown samples; the unknown samples are the samples to be identified, drawn randomly from the video image sequence;
(2) image preprocessing, including removal of noise and interference present in the image, image restoration or image enhancement, image filtering, etc.;
(3) feature extraction, digitizing certain features of the objects to be identified;
(4) feature selection, analyzing a batch of preprocessed sample images and choosing from them a feature set suitable for the classification requirements;
(5) outputting the result.
6. The multi-moving-target tracking method based on a BP neural network according to any one of claims 2 to 4, characterized in that said threshold T is chosen by an adaptive iterative method, computed specifically as follows:
compute the gray-level statistical histogram H[i] of the image:
H[i] = ni, i = 0, 1, …, L−1
where L is the range of gray levels of the image, usually L = 256, and ni is the number of pixels with gray level i;
obtain the maximum max and the minimum min of the pixel gray values of the difference image from the gray-level histogram;
set the initial threshold with the formula below, as the starting point of the iteration:
segment the difference image with T0 and compute the average gray values Mb and Mf of the background region and the foreground region:
compute a new iteration threshold, letting:
if T ≠ T0, set T0 = T and return to step 4; if T = T0, the algorithm terminates;
that is, as long as the new threshold is not equal to T0, set T0 = T and continue iterating until the final threshold T is obtained.
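The iteration of claim 6 can be sketched as follows; the starting point T0 = (max + min)/2 and the update T = (Mb + Mf)/2 are the usual reading of the formulas, which are reproduced only as images:

```python
def iterative_threshold(pixels, eps=0.5):
    """Adaptive iterative threshold selection (assumed reading of
    claim 6): start from the midpoint of the gray range, split the
    pixels into background/foreground at the current threshold, set
    the new threshold to the mean of the two class averages Mb and Mf,
    and iterate until the threshold stops changing."""
    t = (max(pixels) + min(pixels)) / 2          # initial threshold T0
    while True:
        fg = [p for p in pixels if p > t]        # foreground region
        bg = [p for p in pixels if p <= t]       # background region
        mf = sum(fg) / len(fg) if fg else t      # Mf
        mb = sum(bg) / len(bg) if bg else t      # Mb
        t_new = (mb + mf) / 2                    # new iteration threshold
        if abs(t_new - t) < eps:                 # T == T0: terminate
            return t_new
        t = t_new
```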
7. The multi-moving-target tracking method based on a BP neural network according to claim 4, characterized in that the design steps of the BP neural network classifier are as follows:
1) image preprocessing, mainly including noise removal, image enhancement, image binarization, morphological filtering, etc.;
2) target feature analysis; spatial features and temporal features are given for the video image sequence; for spatial features, a target extraction method based on shape information is used; for temporal features, a target extraction method based on motion characteristics is used;
3) establishing the nonlinear mapping relation; the input vector is processed by the neighborhood-to-pixel method: a 3 × 3 template slides over the image and the image is sampled with the template center as the sample point, thereby constructing a neural network with a 9-dimensional input and a 1-dimensional output; during training of the neural network, the first 10 frames of any n-frame video image sequence are chosen as the training sample set, and one of these frames is then used to construct the input feature set of the BP network;
4) the BP network, including the input layer, the hidden layer, the output layer, the transfer functions between the layers, and the selection of training method and parameters; apart from the input and output layers, the number of network layers can be set arbitrarily as the system requires, with a minimum of one hidden layer; said input layer has 9 input nodes, and the output layer has 1 output node; the optimal number of hidden nodes is determined by trial and error, the formula for computing it being as follows:
where n is the number of hidden nodes, n1 the number of input nodes, n0 the number of output nodes, and b a constant taking values from 1 to 10;
5) training method and parameter selection; the network is trained with the additive-momentum BP algorithm, which introduces a momentum factor on top of the gradient descent algorithm and then adjusts the current correction according to the result of the previous correction; when the previous correction was too large, the algorithm reduces the current correction so that the correction keeps moving in the direction of convergence, which damps oscillation; when the previous correction was too small, the algorithm enlarges the current correction, which accelerates the correction;
the weight update rule with the momentum term is:
ω(k+1) = ω(k) + Δω(k+1)
where E is the system error, ω the weight being corrected, k an arbitrary iteration index, η the learning step, and β the learning rate; β and η are known coefficients that can generally be tuned to the actual system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610514056.8A CN106204646A (en) | 2016-07-01 | 2016-07-01 | Multiple mobile object tracking based on BP neutral net |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106204646A true CN106204646A (en) | 2016-12-07 |
Family
ID=57464349
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610514056.8A Pending CN106204646A (en) | 2016-07-01 | 2016-07-01 | Multiple mobile object tracking based on BP neutral net |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106204646A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101017573A (en) * | 2007-02-09 | 2007-08-15 | 南京大学 | Method for detecting and identifying moving target based on video monitoring |
CN103116746A (en) * | 2013-03-08 | 2013-05-22 | 中国科学技术大学 | Video flame detecting method based on multi-feature fusion technology |
2016-07-01: CN CN201610514056.8A patent/CN106204646A/en active Pending
Non-Patent Citations (2)
Title |
---|
ZENG, Hongliang: "Research on moving target detection and tracking technology in video images", China Master's Theses Full-text Database, Information Science and Technology Series * |
LI, Tong: "Research on multi-target tracking technology under intelligent video surveillance", China Doctoral Dissertations Full-text Database, Information Science and Technology Series * |
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107230221A (en) * | 2017-05-25 | 2017-10-03 | 武汉理工大学 | One kind is based on convolutional neural networks adaptive background modeling object detecting method |
CN107230221B (en) * | 2017-05-25 | 2019-07-09 | 武汉理工大学 | An object detection method based on convolutional neural network adaptive background modeling |
CN107492113A (en) * | 2017-06-01 | 2017-12-19 | 南京行者易智能交通科技有限公司 | A kind of moving object in video sequences position prediction model training method, position predicting method and trajectory predictions method |
CN107492113B (en) * | 2017-06-01 | 2019-11-05 | 南京行者易智能交通科技有限公司 | A kind of moving object in video sequences position prediction model training method, position predicting method and trajectory predictions method |
CN107480653A (en) * | 2017-08-30 | 2017-12-15 | 安徽理工大学 | passenger flow volume detection method based on computer vision |
CN108875900B (en) * | 2017-11-02 | 2022-05-24 | 北京旷视科技有限公司 | Video image processing method and device, neural network training method, storage medium |
CN108875900A (en) * | 2017-11-02 | 2018-11-23 | 北京旷视科技有限公司 | Method of video image processing and device, neural network training method, storage medium |
CN108010064A (en) * | 2017-11-03 | 2018-05-08 | 河海大学 | Motor cell tracking based on active profile and Kalman filter |
CN108010050A (en) * | 2017-11-27 | 2018-05-08 | 电子科技大学 | A kind of foreground detection method based on adaptive RTS threshold adjustment and selective context update |
CN108010050B (en) * | 2017-11-27 | 2022-01-25 | 电子科技大学 | Foreground detection method based on adaptive background updating and selective background updating |
CN109949332A (en) * | 2017-12-20 | 2019-06-28 | 北京京东尚科信息技术有限公司 | Method and apparatus for handling image |
CN108171723A (en) * | 2017-12-22 | 2018-06-15 | 湖南源信光电科技股份有限公司 | Based on more focal length lens of Vibe and BP neural network algorithm linkage imaging camera machine system |
CN108198207A (en) * | 2017-12-22 | 2018-06-22 | 湖南源信光电科技股份有限公司 | Multiple mobile object tracking based on improved Vibe models and BP neural network |
CN109960988A (en) * | 2017-12-26 | 2019-07-02 | 浙江宇视科技有限公司 | Image analysis method, device, electronic equipment and readable storage medium storing program for executing |
CN114511539A (en) * | 2017-12-26 | 2022-05-17 | 三星电子株式会社 | Image acquisition device and method for controlling image acquisition device |
CN114511539B (en) * | 2017-12-26 | 2025-01-21 | 三星电子株式会社 | Image acquisition device and method for controlling image acquisition device |
CN108805863B (en) * | 2018-05-02 | 2022-02-22 | 南京工程学院 | Deep Convolutional Neural Networks Combined with Morphology to Detect Image Changes |
CN108805863A (en) * | 2018-05-02 | 2018-11-13 | 南京工程学院 | The method of depth convolutional neural networks combining form detection image variation |
CN108932735A (en) * | 2018-07-10 | 2018-12-04 | 广州众聚智能科技有限公司 | A method of generating deep learning sample |
CN109190691A (en) * | 2018-08-20 | 2019-01-11 | 小黄狗环保科技有限公司 | The method of waste drinking bottles and pop can Classification and Identification based on deep neural network |
CN109285181A (en) * | 2018-09-06 | 2019-01-29 | 百度在线网络技术(北京)有限公司 | The method and apparatus of image for identification |
CN109584213A (en) * | 2018-11-07 | 2019-04-05 | 复旦大学 | A kind of selected tracking of multiple target number |
CN109584213B (en) * | 2018-11-07 | 2023-05-30 | 复旦大学 | A Tracking Method for Multi-Target Number Selection |
CN109766828A (en) * | 2019-01-08 | 2019-05-17 | 重庆同济同枥信息技术有限公司 | A kind of vehicle target dividing method, device and communication equipment |
CN110119730A (en) * | 2019-06-03 | 2019-08-13 | 齐鲁工业大学 | A kind of monitor video processing method, system, terminal and storage medium |
CN110730037A (en) * | 2019-10-21 | 2020-01-24 | 苏州大学 | Optical signal-to-noise ratio monitoring method of coherent optical communication system based on momentum gradient descent method |
CN112034198A (en) * | 2020-07-03 | 2020-12-04 | 朱建国 | High-shooting-speed bullet continuous-firing initial speed measuring method |
CN112270317A (en) * | 2020-10-16 | 2021-01-26 | 西安工程大学 | Reading recognition method for traditional digital water meters based on deep learning and the frame difference method |
CN112270317B (en) * | 2020-10-16 | 2024-06-07 | 西安工程大学 | Reading recognition method for traditional digital water meters based on deep learning and the frame difference method |
CN112330720A (en) * | 2020-11-12 | 2021-02-05 | 北京环境特性研究所 | Tracking method and device for weak and small moving targets |
CN112733676A (en) * | 2020-12-31 | 2021-04-30 | 青岛海纳云科技控股有限公司 | Method for detecting and identifying garbage in elevators based on deep learning |
CN112862854A (en) * | 2021-02-08 | 2021-05-28 | 桂林电子科技大学 | Multi-UAV tracking method based on an improved KCF algorithm |
CN113362371A (en) * | 2021-05-18 | 2021-09-07 | 北京迈格威科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN113327269A (en) * | 2021-05-21 | 2021-08-31 | 哈尔滨理工大学 | Markerless cervical vertebra motion detection method |
WO2023005760A1 (en) * | 2021-07-26 | 2023-02-02 | Huawei Technologies Co., Ltd. | Systems and methods for performing computer vision task using sequence of frames |
CN113723230A (en) * | 2021-08-17 | 2021-11-30 | 山东科技大学 | Process model extraction method for mining business processes from field procedural videos |
CN114037558A (en) * | 2021-11-05 | 2022-02-11 | 国网上海市电力公司 | Transmission line anti-collision early-warning system based on multi-dimensional feature recognition |
CN114219839A (en) * | 2021-12-24 | 2022-03-22 | 欧波同科技产业有限公司 | Frame image position calculation method based on a matrix completion algorithm |
CN114677343A (en) * | 2022-03-15 | 2022-06-28 | 华南理工大学 | Detection method for debris on freeways based on dual backgrounds |
CN115082412A (en) * | 2022-07-05 | 2022-09-20 | 中国科学院合肥物质科学研究院 | ELM filamentary structure extraction method |
CN115359094A (en) * | 2022-09-05 | 2022-11-18 | 珠海安联锐视科技股份有限公司 | Moving target detection method based on deep learning |
CN115937263A (en) * | 2023-02-27 | 2023-04-07 | 南昌理工学院 | Vision-based target tracking method, system, electronic device and storage medium |
CN115937263B (en) * | 2023-02-27 | 2023-06-09 | 南昌理工学院 | Vision-based target tracking method, system, electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106204646A (en) | Multi-moving-target tracking method based on a BP neural network | |
EP3614308B1 (en) | Joint deep learning for land cover and land use classification | |
CN112541483B (en) | Dense face detection method combining YOLO and blocking-fusion strategy | |
CN106778595B (en) | Method for detecting abnormal behaviors in crowd based on Gaussian mixture model | |
CN112734775B (en) | Image labeling, image semantic segmentation and model training methods and devices | |
CN103971386B (en) | Foreground detection method for dynamic background scenes | |
US20190122072A1 (en) | Reverse neural network for object re-identification | |
CN113065474B (en) | Behavior recognition method and device and computer equipment | |
CN110363131B (en) | Abnormal behavior detection method, system and medium based on human skeleton | |
CN108198207A (en) | Multi-moving-target tracking method based on an improved ViBe model and a BP neural network | |
CN108491766B (en) | End-to-end crowd counting method based on depth decision forest | |
CN108875624A (en) | Face detection method based on cascaded multi-scale densely connected neural networks | |
CN107945153A (en) | Road surface crack detection method based on deep learning | |
CN106529499A (en) | Gait identification method based on fused Fourier descriptor and gait energy image features | |
CN109344285A (en) | Surveillance-oriented video map construction and mining method and device | |
Yu et al. | Research on an image main-object detection algorithm based on deep learning |
CN104268594A (en) | Method and device for detecting video abnormal events | |
CN112434599B (en) | Pedestrian re-identification method based on random occlusion recovery of noise channel | |
CN107315795B (en) | Video instance search method and system jointly targeting specific persons and scenes | |
CN107230267A (en) | Smart kindergarten check-in method based on face recognition algorithms | |
CN113763424B (en) | Real-time intelligent target detection method and system based on embedded platform | |
CN104134077A (en) | View-invariant gait recognition method based on deterministic learning theory | |
CN111192294A (en) | A target tracking method and system based on target detection | |
CN105303163B (en) | Target detection method and detection device | |
CN110956158A (en) | Occluded pedestrian re-identification method based on a teacher-student learning framework | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20161207 |