CN106384359B - Motion target tracking method and TV - Google Patents
Motion target tracking method and TV
- Publication number
- CN106384359B CN201610848424.2A CN201610848424A
- Authority
- CN
- China
- Prior art keywords
- moving target
- video image
- target
- tracking
- frame video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 46
- 230000033001 locomotion Effects 0.000 title claims abstract description 42
- 238000012423 maintenance Methods 0.000 claims abstract description 23
- 230000003044 adaptive effect Effects 0.000 claims abstract description 12
- 239000011159 matrix material Substances 0.000 claims description 33
- 239000002245 particle Substances 0.000 claims description 21
- 238000005315 distribution function Methods 0.000 claims description 12
- 238000005070 sampling Methods 0.000 claims description 12
- 239000004973 liquid crystal related substance Substances 0.000 claims description 8
- 230000000903 blocking effect Effects 0.000 abstract description 7
- 230000006870 function Effects 0.000 description 42
- 238000010586 diagram Methods 0.000 description 6
- 238000012545 processing Methods 0.000 description 4
- 238000012952 Resampling Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 2
- 238000012790 confirmation Methods 0.000 description 2
- 230000001419 dependent effect Effects 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 238000000926 separation method Methods 0.000 description 2
- 230000032683 aging Effects 0.000 description 1
- 230000000386 athletic effect Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Landscapes
- Image Analysis (AREA)
Abstract
The present invention provides a motion target tracking method and a TV. The method comprises: removing the shadows of the moving targets in a video image by using a shadow removal method based on improved mixed-Gaussian background modeling; establishing model information for each moving target; adaptively updating the tracking window of each moving target according to its model information; determining the weights between the moving targets according to the model information of each moving target; determining the association probabilities between the moving targets according to those weights; and performing tracking maintenance on the moving targets under the tracking windows according to the association probabilities. Because association probabilities between the moving targets are established, moving targets that occlude or merge with one another can be separated, multiple moving targets can be told apart, and tracking maintenance can therefore be performed on the moving targets in the video image, achieving reliable motion target tracking.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a motion target tracking method and a TV.
Background art
With the development of image technology, intelligent monitoring systems have become widely used in daily life. An intelligent monitoring system applies image processing and pattern recognition techniques: useless information in the scene is filtered out by data processing, and the moving or stationary targets of interest are then rapidly examined and analysed, so that these targets can be detected, described, identified and their behaviour understood, enabling intelligent, accurate and real-time monitoring of the targets in the monitored scene. At present, in the prior art, an intelligent monitoring system is commonly used to detect moving targets: the image of a moving target is compared against a background image, and the moving target is thereby detected.
However, in the prior art, when the moving targets in a video image occlude one another or merge, the individual moving targets cannot be separated, and the goal of motion target tracking therefore cannot be achieved satisfactorily.
Summary of the invention
The present invention provides a motion target tracking method and a TV, to solve the prior-art problem that, when the moving targets in a video image occlude one another or merge, the individual moving targets cannot be separated and the goal of motion target tracking therefore cannot be achieved satisfactorily.
One aspect of the present invention provides a motion target tracking method, comprising:
removing the shadows of the moving targets in a video image by using a shadow removal method based on improved mixed-Gaussian background modeling;
establishing model information for each moving target;
adaptively updating the tracking window of each moving target according to the model information of that moving target;
determining the weights between the moving targets according to the model information of each moving target;
determining the association probabilities between the moving targets according to the weights between them;
performing tracking maintenance on the moving targets under the tracking windows according to the association probabilities between the moving targets.
Another aspect of the present invention provides a TV, characterized by comprising:
a display core, a field-programmable gate array (Field-Programmable Gate Array, FPGA for short) chip and a liquid crystal display, the FPGA chip being connected to the display core and to the liquid crystal display respectively;
wherein the FPGA chip is configured to implement any one of the motion target tracking methods described above.
The present invention has the following beneficial effects: the shadows of the moving targets in the video image are removed by using a shadow removal method based on improved mixed-Gaussian background modeling; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined from that model information; the association probabilities between the moving targets are determined from those weights; and tracking maintenance is performed on the moving targets under the tracking windows according to the association probabilities. Because association probabilities between the moving targets are established, moving targets that occlude or merge with one another can be separated, multiple moving targets can be told apart, and tracking maintenance can therefore be performed on the moving targets in the video image, achieving reliable motion target tracking.
Brief description of the drawings
Fig. 1 is a flowchart of the motion target tracking method provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of the motion target tracking method provided by Embodiment 2 of the present invention;
Fig. 3 is a structural schematic diagram of the motion target tracking device provided by Embodiment 3 of the present invention;
Fig. 4 is a structural schematic diagram of the motion target tracking device provided by Embodiment 4 of the present invention;
Fig. 5 is a structural schematic diagram of the TV provided by Embodiment 5 of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of the motion target tracking method provided by Embodiment 1 of the present invention. As shown in Fig. 1, the method provided by this embodiment comprises:
Step 101: removing the shadows of the moving targets in the video image by using a shadow removal method based on improved mixed-Gaussian background modeling.
Step 101 is specifically implemented as follows: for each moving target in the video image, the color angle α between the foreground image and the background region of the moving target is determined, where X_d is the color vector of the foreground image of the moving target at the j-th pixel, X_b is the color vector of the background region of the moving target at the j-th pixel, and j is a positive integer;
for each moving target in the video image, if α < τ, the j-th pixel of the moving target is determined to be a suspected shadow, where τ is a constant;
for each moving target in the video image, if the j-th pixel of the moving target satisfies the corresponding condition on its H, S and V components, the j-th pixel of the moving target is determined to be a shadow, where (x, y) is the coordinate of the j-th pixel, I_H(x,y), I_S(x,y), I_V(x,y) are respectively the H, S and V components of the foreground image of the moving target at the j-th pixel, and B_H(x,y), B_S(x,y), B_V(x,y) are respectively the H, S and V components of the background region of the moving target at the j-th pixel.
In this embodiment, specifically, for each frame of the video image, a background region is first established; the background region may be fixed or may be set dynamically. By comparing each frame of the video image against the background region and performing the corresponding calculations, each moving target in the video image is determined.
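As a rough illustration of this per-frame comparison, the following minimal Python sketch uses OpenCV's stock MOG2 background subtractor, which is one implementation of mixed-Gaussian background modeling; it is not the patent's own "improved" variant, and the video file name, blur size and area threshold are illustrative assumptions only.

```python
# Minimal sketch: per-frame Gaussian-mixture background subtraction to
# locate candidate moving targets. Stock MOG2 model, not the patent's
# improved variant; all thresholds are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")        # hypothetical input video
bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg_model.apply(frame)               # 255 = foreground, 127 = shadow, 0 = background
    fg_mask = cv2.medianBlur(fg_mask, 5)          # suppress isolated noise pixels
    # Each sufficiently large connected component is treated as one moving target.
    contours, _ = cv2.findContours((fg_mask == 255).astype("uint8"),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    targets = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
cap.release()
```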
First, the shadow of each moving target in the video image needs to be removed; specifically, this can be done with a shadow removal method based on improved mixed-Gaussian background modeling.
In a video image the chromaticity of a shadow changes very little, so the color angle of a shadow pixel is also very small; the size of the color angle can therefore be used to judge whether each pixel is a suspected shadow. Specifically, let the color vector of the foreground image of the moving target at the j-th pixel be X_d = [H_j, S_j, V_j], and let the color vector of the background region of the moving target at the j-th pixel be X_b = [H'_j, S'_j, V'_j]; the color angle α between the foreground image and the background region can then be computed for each moving target in the video image. A constant τ is then set; τ is a small value chosen in advance, and if α < τ, the j-th pixel is first determined to be a possible shadow.
Afterwards, whether a suspected-shadow pixel is a true shadow is determined by introducing a second decision rule: for each moving target in the video image, if the j-th pixel of the moving target satisfies the corresponding condition on its H, S and V components, the j-th pixel of the moving target is determined to be a shadow. Here, (x, y) is the coordinate of the j-th pixel, and I_H(x,y), I_S(x,y), I_V(x,y) and B_H(x,y), B_S(x,y), B_V(x,y) denote the H, S and V components of the pixel input value I(x, y) and of the background pixel value at the coordinate (x, y), respectively.
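A minimal sketch of this two-stage shadow test follows. The color angle is taken as the angle between the two color vectors, and the confirmation step uses the common HSV shadow rule; the exact inequalities and all thresholds (tau, beta1, beta2, tau_s, tau_h) are assumptions, since the patent's precise conditions are given as formulas not reproduced in this text.

```python
# Sketch of the two-stage shadow test; thresholds and the exact inequality
# form are assumptions, not the patent's own values.
import numpy as np

def is_shadow_pixel(fg_hsv, bg_hsv, tau=0.08,
                    beta1=0.4, beta2=0.95, tau_s=0.15, tau_h=0.1):
    """fg_hsv, bg_hsv: foreground/background H, S, V at one pixel, scaled to [0, 1]."""
    xd, xb = np.asarray(fg_hsv, float), np.asarray(bg_hsv, float)
    # Stage 1: color angle between foreground and background color vectors.
    cos_a = np.dot(xd, xb) / (np.linalg.norm(xd) * np.linalg.norm(xb) + 1e-9)
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))
    if alpha >= tau:                      # angle too large: not even a suspected shadow
        return False
    # Stage 2: confirm the suspected shadow with per-channel HSV constraints.
    ih, is_, iv = xd
    bh, bs, bv = xb
    return (beta1 <= iv / (bv + 1e-9) <= beta2 and
            abs(is_ - bs) <= tau_s and
            abs(ih - bh) <= tau_h)
```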
Step 102: establishing model information for each moving target.
In this embodiment, specifically, a piece of model information needs to be established for each moving target; this model information is used to establish the tracking window of the moving target.
Step 103: adaptively updating the tracking window of each moving target according to its model information.
In this embodiment, specifically, according to the model information of each moving target, the size of the tracking window is automatically adjusted to the size of the moving target, thereby adaptively updating the tracking window of the moving target.
Step 104: determining the weights between the moving targets according to the model information of each moving target.
In this embodiment, specifically, the weights between the moving targets are calculated from the model information of each moving target obtained above.
Step 105: determining the association probabilities between the moving targets according to the weights between them.
In this embodiment, specifically, the association probabilities between the moving targets are determined from the weights between them; the association probabilities here comprise the association probability between different moving targets and the association probability of the same moving target across different frames.
Step 106: performing tracking maintenance on the moving targets under the tracking windows according to the association probabilities between the moving targets.
In this embodiment, specifically, after the association probabilities between the moving targets are calculated, the state prediction value of each moving target at the current instant can be calculated, and tracking maintenance is then performed on the moving targets under the tracking windows according to those state prediction values.
In this embodiment, the shadows of the moving targets in the video image are removed by using a shadow removal method based on improved mixed-Gaussian background modeling; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined from that model information; the association probabilities between the moving targets are determined from those weights; and tracking maintenance is performed on the moving targets under the tracking windows according to the association probabilities. Because association probabilities between the moving targets are established, moving targets that occlude or merge with one another can be separated, multiple moving targets can be told apart, and tracking maintenance can therefore be performed on the moving targets in the video image, achieving reliable motion target tracking.
Fig. 2 is a flowchart of the motion target tracking method provided by Embodiment 2 of the present invention. On the basis of Embodiment 1, as shown in Fig. 2, step 102 of the method provided by this embodiment specifically comprises:
for each moving target, determining the observed-target probability distribution function of the moving target in the i-th frame image, where β is a control parameter, X_{m,i} is the observation of the m-th moving target in the i-th frame video image, and μ_{m,i} denotes the mean vector of the m-th target in the i-th frame video image;
for each moving target, determining the J background probability functions of the moving target in the i-th frame video image, where in the j-th background probability function of the m-th moving target, s is the dimension of the state space, Y_i denotes the pixel value of the pixel in the i-th frame video image, and the corresponding mean vector is that of the j-th Gaussian model in the i-th frame video image; i, j, m and J are positive integers;
for each moving target, determining the similarity function of the moving target according to its observed-target probability distribution function in the i-th frame video image and its J background probability functions in the i-th frame video image, where δ is a constant;
for each moving target, determining the information weight of the moving target in the i-th frame video image according to its similarity function, where N denotes the total number of moving targets;
for each moving target, determining the model information of the moving target according to its observed-target probability distribution function in the i-th frame video image.
In this embodiment, specifically, for each moving target, the observed-target probability distribution function of the m-th target in the i-th frame is assumed, where β is a control parameter, which can usually be taken as β = 20; X_{m,i} is the observation of the m-th moving target in the i-th frame video image, and μ_{m,i} denotes the mean vector of the m-th target in the i-th frame video image.
Meanwhile, a background model is established, where the background model is obtained by mixed-Gaussian background modeling. Then, for each moving target, the J background probability functions of the moving target in the i-th frame video image are determined. Here, in the j-th background probability function of the m-th moving target, s is the dimension of the state space, Y_i denotes the pixel value of the pixel in the i-th frame video image, and the corresponding mean vector is that of the j-th Gaussian model in the i-th frame video image; i, j, m and J are positive integers.
Because the modelling statistics based on weights can only represent the state features of the moving-target foreground and the background, the discrimination between the two cannot be depicted at this stage. In view of this problem, the present invention proposes a feature similarity function method: this similarity function divides the moving-target statistics into a positive-value set region and divides the background-related information into a negative-value set region. For each moving target, the similarity function of the moving target is determined according to its observed-target probability distribution function in the i-th frame video image and its J background probability functions in the i-th frame video image, where δ is a constant; δ is regarded as a very small value and is usually set to δ = 0.0001. The similarity function reflects the similarity of the state characteristics between the moving-target foreground image and the background region.
Then, the similarity functions falling in the positive-value set region, i.e. the similarity functions with R_m(i) > 0, are substituted into the corresponding formula, so as to obtain the information weight of the m-th target in the i-th frame image, where N denotes the total number of moving targets.
Afterwards, for each moving target, the model information of the moving target can be determined according to its observed-target probability distribution function in the i-th frame video image.
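The sketch below illustrates one possible reading of the similarity function and the information weight. The patent's exact expressions are formulas not reproduced in this text, so R_m(i) is taken here as "target probability minus the best background probability", the observed-target probability as a Gaussian-like function with β = 20, and the weight as R normalised over the targets with R > 0; all of these forms are assumptions.

```python
# Sketch of similarity function and information weight under assumed forms.
import numpy as np

def observation_prob(x, mu, beta=20.0):
    """Assumed Gaussian-like observed-target probability; beta = 20 as in the text."""
    return np.exp(-beta * np.sum((np.asarray(x, float) - np.asarray(mu, float)) ** 2))

def information_weights(target_probs, background_probs):
    """target_probs: length-N observed-target probabilities; background_probs: N x J array."""
    r = np.asarray(target_probs, float) - np.max(background_probs, axis=1)  # similarity R_m(i), assumed form
    pos = np.clip(r, 0.0, None)                                             # keep only the positive-value set
    total = pos.sum()
    return pos / total if total > 0 else np.full(len(pos), 1.0 / len(pos))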
Step 103 specifically comprises:
Step 1031: for each moving target, determining the initial tracking window of the moving target.
Step 1031 is specifically implemented as follows: if the moving target becomes larger, the length and width of the tracking window are multiplied by 1 + ζ, where ζ is a constant; if the moving target becomes smaller, the length and width of the tracking window are multiplied by 1 − ζ.
In this embodiment, specifically, the initial tracking window of each moving target is first determined, and the model information G_1, G_2, G_3, G_4, G_5 corresponding to the first five initial tracking windows of the moving target is determined.
Suppose the model information of the i-th frame video image is G_1. If the moving target becomes larger, the length and width of the tracking window are multiplied by 1 + ζ; if the moving target becomes smaller, the length and width of the tracking window are multiplied by 1 − ζ, so that the model information of the moving target becomes G_2, where ζ is a constant with 0 ≤ ζ ≤ 0.6. Then, by analogy, the model information G_3, G_4, G_5 of the subsequent frames of the video image is obtained. Each piece of model information corresponds to a tracking window, and the initial tracking window of the moving target can thus be determined.
Step 1032: for each moving target, determining the size-change ratio q of the tracking window according to the initial tracking window of the moving target, where G_1, G_2, G_3, G_4, G_5 are respectively the model information corresponding to the first five initial tracking windows of the moving target.
In this embodiment, specifically, for each moving target, the size-change ratio q of the tracking window is determined according to the initial tracking window of the moving target.
The purpose of the size-change ratio q of the tracking window is to reduce the influence of the background on the target foreground: the larger the difference between q and 1, the more complex the background region; when the target scale becomes larger, q ≥ 1, and when the scale becomes smaller, q ≤ 1.
Step 1033: for each moving target, determining the tracking-window length H_{i+1} = λ_{m,i} H_i (1 + q) and width W_{i+1} = λ_{m,i} W_i (1 + q) of the m-th target in the i-th frame video image according to the size-change ratio of the tracking window and the information weight of the moving target in the i-th frame video image, where λ_{m,i} is the information weight of moving target m in the i-th frame video image.
In this embodiment, specifically, for each moving target, the tracking-window length of the m-th target in the i-th frame video image, H_{i+1} = λ_{m,i} H_i (1 + q), and the tracking-window width, W_{i+1} = λ_{m,i} W_i (1 + q), are determined according to the size-change ratio of the tracking window and the information weight of the moving target in the i-th frame video image, where λ_{m,i} is the information weight of moving target m in the i-th frame video image.
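A minimal sketch of this adaptive window update follows. The width/height update mirrors H_{i+1} = λ·H_i·(1 + q) and W_{i+1} = λ·W_i·(1 + q) from the text; the computation of q from G_1..G_5 and the initial-window step with ζ are shown in assumed forms, since the patent's exact formulas are not reproduced here.

```python
# Sketch of the adaptive tracking-window update; scale_change_ratio is an
# assumed reading of how q is derived from G1..G5.
def scale_change_ratio(g):
    """g: model information of the first five tracking windows, G1..G5 (assumed form).
    Returns q >= 1 when the target grows, q <= 1 when it shrinks."""
    ratios = [g[k + 1] / g[k] for k in range(len(g) - 1) if g[k] != 0]
    return sum(ratios) / len(ratios) if ratios else 1.0

def update_window(h, w, lam, q):
    """lam: information weight of target m in frame i; returns the next window size."""
    return lam * h * (1.0 + q), lam * w * (1.0 + q)

def initial_window_step(h, w, grew, zeta=0.1):
    """Initial window: scale by (1 + zeta) when the target grows, (1 - zeta) when it shrinks."""
    f = (1.0 + zeta) if grew else (1.0 - zeta)
    return h * f, w * f
```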
Step 104 specifically comprises:
for each moving target, determining the observed-target model state vector X = [x, y, g, l] of the moving target, where x, y, g and l are respectively the length and width of the tracking window, the color histogram, and the model information of the moving target;
for each moving target, determining the association probability function between two adjacent frames of the moving target according to its observed-target model state vector;
determining the first association probability function between the m-th moving target and the n-th moving target with respect to size and color histogram in the i-th frame video image, and determining the second association probability function between the m-th moving target and the n-th moving target with respect to model information and LBP texture value, where i denotes the i-th frame video image, and i, m and n are positive integers;
determining the Bhattacharyya coefficient between the m-th moving target and the n-th moving target according to the first association probability function and the second association probability function;
determining the weight of the i-th frame video image according to the Bhattacharyya coefficient, where σ is the variance of the Gaussian function.
In this embodiment, specifically, the multi-moving-target models are first established. In order to increase the stability and accuracy of target tracking, the observed-target model state vector X = [x, y, g, l] is first determined for each moving target, where x, y, g and l respectively denote the length and width of the tracking window, the color histogram, and the model information of the moving target.
Then, for each moving target, the association probability function between two adjacent frames of the moving target is determined.
Next, the first association probability function between the m-th moving target and the n-th moving target with respect to size and color histogram in the i-th frame video image is set, and the second association probability function between the m-th moving target and the n-th moving target with respect to model information and LBP texture value is set, where i denotes the i-th frame video image, and i, m and n are positive integers.
It is then assumed that each hypothesis region is s^(n); for each state of s^(n), the Bhattacharyya coefficient is calculated, and the weight of the i-th frame video image is then determined from the Bhattacharyya coefficient, where σ is the variance of the Gaussian function.
This provides the association probability function between adjacent frames of the same target and the association probability function between different targets in the same frame, and thus provides a guarantee for accurate data association.
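The following sketch shows the Bhattacharyya coefficient between two color histograms and a Gaussian frame weight derived from it. The patent's exact weight expression is a formula not reproduced in this text, so the standard form w = exp(-(1 - ρ)/(2σ²)) is assumed, with σ an illustrative value.

```python
# Sketch: Bhattacharyya coefficient between histograms and the derived frame weight.
import numpy as np

def bhattacharyya(p, q):
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / (p.sum() + 1e-12)            # normalise both histograms
    q = q / (q.sum() + 1e-12)
    return float(np.sum(np.sqrt(p * q)))

def frame_weight(p, q, sigma=0.2):
    """Assumed Gaussian weighting of the frame by histogram similarity."""
    rho = bhattacharyya(p, q)
    return float(np.exp(-(1.0 - rho) / (2.0 * sigma ** 2)))
```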
Step 105 specifically comprises:
for each moving target, calculating the tracking-window matrix Ω;
for each moving target, determining the state update equation of the joint probabilistic data association of the moving target, where X_m(i) denotes the state vector of moving target m in the i-th frame video image in the data association, V_m(i+1) denotes the new joint information after particle filtering, the prediction term denotes the state-vector prediction of moving target m in the i-th frame video image, and p_m(i+1) is the association probability function between two adjacent frames of moving target m;
for each moving target, determining the association probability according to the tracking-window matrix, where ω_i is the weight of the i-th frame video image;
determining the number of unassociated observation samples in the video image.
Here, for each moving target, calculating the tracking-window matrix comprises:
for each moving target, if observation data u falls into the tracking window of moving target m, the matrix value is set to 1; if observation data u does not fall into the tracking window of moving target m, the matrix value is set to 0;
for each moving target, determining the tracking-window matrix Ω from these matrix values, where N denotes the total number of moving targets and M denotes the total number of tracking-window updates.
In this embodiment, specifically, the association matrix and the association probabilities are then determined. First, the tracking-window matrix is calculated for each moving target; it is the association matrix indicating whether a measurement falls into a tracking window. The column t = 0 indicates no target and all of its elements are 1, while t ≠ 0 indicates that the measurement falls into a tracking window. Next, the probability of each measurement being associated with each of its possible source targets is calculated. Then, the confirmation matrix is determined from the relationship between the particle-distribution positions in the association matrix and the tracking windows; once the confirmation matrix is obtained, the various association probabilities of the moving targets can be inferred. Here u denotes observation data, m denotes a moving target, N denotes the total number of moving targets, and M denotes the total number of tracking-window updates. For each moving target, if observation data u falls into the tracking window of moving target m, the matrix value is 1; if observation data u does not fall into the tracking window of moving target m, the matrix value is 0; the tracking-window matrix Ω is then determined.
Then, for each moving target, the state update equation of the joint probabilistic data association of the moving target is determined, where X_m(i) denotes the state vector of moving target m in the i-th frame video image in the data association, V_m(i+1) denotes the new joint information after particle filtering, the prediction term denotes the state-vector prediction of moving target m in the i-th frame video image, and p_m(i+1) is the association probability function between two adjacent frames of moving target m. The joint probabilistic data association algorithm takes into account the association relations between all the observation data and the moving targets.
For each moving target, the association probability is determined according to the tracking-window matrix, where ω_i is the weight of the i-th frame video image.
Then, according to the tracking-window matrix Ω, the number of unassociated observation samples in the video image is determined.
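A minimal sketch of the tracking-window (validation) matrix and the association probabilities follows. Window membership is reduced to "the observation centre falls inside the target's window", and the patent's probability expression is a formula not reproduced here, so a simple weighted column normalisation over the matrix is assumed.

```python
# Sketch of the tracking-window matrix, association probabilities and the
# count of unassociated observations; normalisation form is an assumption.
import numpy as np

def window_matrix(observations, windows):
    """observations: list of (x, y) centres; windows: list of (x, y, w, h) per target."""
    omega = np.zeros((len(observations), len(windows)), dtype=int)
    for u, (ox, oy) in enumerate(observations):
        for m, (wx, wy, ww, wh) in enumerate(windows):
            if wx <= ox <= wx + ww and wy <= oy <= wy + wh:
                omega[u, m] = 1
    return omega

def association_probabilities(omega, frame_weights):
    """frame_weights: one weight per observation (the omega_i of the text)."""
    scores = omega * np.asarray(frame_weights, float)[:, None]
    col_sums = scores.sum(axis=0, keepdims=True) + 1e-12
    return scores / col_sums       # probability that observation u belongs to target m

def unassociated_count(omega):
    """Number of observations that fall into no tracking window."""
    return int((omega.sum(axis=1) == 0).sum())
```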
Step 106 specifically comprises:
generating an initial particle sequence x^{*(1)}(m), x^{*(2)}(m), ..., x^{*(z)}(m) according to the prior probability distribution, where m denotes the m-th moving target, z denotes the total number of generated particles, and z is a positive integer;
sampling the posterior probability distribution to determine the sampled value, and calculating the acceptance probability α(X, X^{*(z)}(m)); if α(X, X^{*(z)}(m)) ≥ 1, accepting the sampled value X^{(z)}(m) = X^{*(z)}(m) and determining that the particle weight is 1/N; if α(X, X^{*(z)}(m)) < 1, ignoring the sampled value;
determining the importance weight from the measurement, where P(X^{(z)}(m) | X^{*(z)}(m)) is the association probability calculated in the data association, the proposal density denotes the known probability density function of a simple sampling, and i denotes the i-th frame video image;
determining the state estimation equation and the covariance matrix from the updated weights;
sampling the video image once every N frames, taking the M particles sampled over MN iterations as the state prediction value of the moving target at the current instant, and performing tracking maintenance on the moving targets under the tracking windows according to the state prediction values of the moving targets.
In this embodiment, specifically, tracking maintenance of the moving targets can then be carried out, by means of filtering and prediction. The present invention filters the moving targets using a particle filter method; in order to increase the diversity of the particles, an MCMC method is introduced to improve the estimation accuracy of the filtering and to adjust the number of particles after resampling.
First, the Markov chain and the MCMC filter are initialised, so that a Markov chain is constructed; the particle ageing period B during sampling and the sampling frame interval N are also set, and importance sampling is performed on the Markov chain: the initial particle sequence x^{*(1)}(m), x^{*(2)}(m), ..., x^{*(z)}(m) is generated according to the prior probability distribution, where z denotes the total number of generated particles.
The proposal distribution is defined as a normal distribution, and a sampled value is obtained after sampling the posterior probability distribution; the acceptance probability α(X, X^{*(z)}(m)) is then calculated. If α(X, X^{*(z)}(m)) ≥ 1, the sampled value X^{*(z)}(m) is accepted, i.e. X^{(z)}(m) = X^{*(z)}(m), and the particle weight is set to 1/N; if α(X, X^{*(z)}(m)) < 1, the sampled value is ignored and the original sampling point X^{(z)}(m) is kept unchanged.
Importance sampling is then carried out; the importance weight can be calculated from the measurement, where P(X^{(z)}(m) | X^{*(z)}(m)) is the association probability calculated in the data association, q(X^{(z)}(m), X^{*(z)}(m)) denotes the known probability density function of a simple sampling, and i denotes the i-th frame video image.
The state estimation equation and the covariance matrix are then calculated from the updated weights.
Finally, resampling is carried out: the video image can be sampled once every N frames, the M particles sampled over MN iterations are taken as the prediction value of the target state at the current instant, and tracking maintenance is then performed on the moving targets under the tracking windows according to the state prediction values of the moving targets.
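The sketch below illustrates the MCMC-style particle update used for tracking maintenance. Only the structure follows the description above: candidates with acceptance probability ≥ 1 replace the old particle while others are ignored, accepted particles are reset to weight 1/N, and the state prediction is the mean of the M particles. The normal proposal, the likelihood ratio used as the acceptance probability, and all constants are assumptions.

```python
# Sketch of one MCMC particle step and the state prediction; proposal,
# likelihood and constants are assumptions, not the patent's formulas.
import numpy as np

def mcmc_particle_step(particles, likelihood, proposal_std=2.0, rng=np.random):
    """particles: (M, d) state samples; likelihood(x) -> unnormalised posterior density."""
    out = particles.copy()
    for k in range(len(particles)):
        cand = particles[k] + rng.normal(0.0, proposal_std, size=particles.shape[1])
        alpha = likelihood(cand) / (likelihood(particles[k]) + 1e-12)
        if alpha >= 1.0:               # accept only when the acceptance probability is >= 1,
            out[k] = cand              # as described above; otherwise keep the old particle
    # Accepted particles are reset to equal weight 1/N (N taken here as the particle count).
    weights = np.full(len(out), 1.0 / len(out))
    return out, weights

def state_prediction(particles):
    """Predicted target state at the current instant: mean of the M sampled particles."""
    return particles.mean(axis=0)
```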
In this embodiment, the shadows of the moving targets in the video image are removed by using a shadow removal method based on improved mixed-Gaussian background modeling; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined from that model information; the association probabilities between the moving targets are determined from those weights; and tracking maintenance is performed on the moving targets under the tracking windows according to the association probabilities. Moreover, combined with the Bhattacharyya coefficient, the tracking window can be adapted to the size of the moving target as it changes, so that targets with large scale changes can be tracked effectively and the real-time performance of motion target tracking is improved. Because association probabilities between the moving targets are established, moving targets that occlude or merge with one another can be separated, multiple moving targets can be told apart, and tracking maintenance can therefore be performed on the moving targets in the video image, achieving reliable motion target tracking.
Fig. 3 is a structural schematic diagram of the motion target tracking device provided by Embodiment 3 of the present invention. As shown in Fig. 3, the device provided by this embodiment comprises:
a shadow removal module 31, configured to remove the shadows of the moving targets in the video image by using a shadow removal method based on improved mixed-Gaussian background modeling;
a model building module 32, configured to establish model information for each moving target;
a window update module 33, configured to adaptively update the tracking window of each moving target according to its model information;
a weight determination module 34, configured to determine the weights between the moving targets according to the model information of each moving target;
an association module 35, configured to determine the association probabilities between the moving targets according to the weights between them;
a tracking module 36, configured to perform tracking maintenance on the moving targets under the tracking windows according to the association probabilities between the moving targets.
The motion target tracking device of this embodiment can perform the motion target tracking method provided by Embodiment 1 of the present invention; their implementation principles are similar and are not repeated here.
In this embodiment, the shadows of the moving targets in the video image are removed by using a shadow removal method based on improved mixed-Gaussian background modeling; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined from that model information; the association probabilities between the moving targets are determined from those weights; and tracking maintenance is performed on the moving targets under the tracking windows according to the association probabilities. Because association probabilities between the moving targets are established, moving targets that occlude or merge with one another can be separated, multiple moving targets can be told apart, and tracking maintenance can therefore be performed on the moving targets in the video image, achieving reliable motion target tracking.
Fig. 4 is a structural schematic diagram of the motion target tracking device provided by Embodiment 4 of the present invention. On the basis of Embodiment 3, as shown in Fig. 4, in the device provided by this embodiment the shadow removal module 31 is specifically configured to:
for each moving target in the video image, determine the color angle α between the foreground image and the background region of the moving target, where X_d is the color vector of the foreground image of the moving target at the j-th pixel, X_b is the color vector of the background region of the moving target at the j-th pixel, and j is a positive integer;
for each moving target in the video image, if α < τ, determine that the j-th pixel of the moving target is a suspected shadow, where τ is a constant;
for each moving target in the video image, if the j-th pixel of the moving target satisfies the corresponding condition on its H, S and V components, determine that the j-th pixel of the moving target is a shadow, where (x, y) is the coordinate of the j-th pixel, I_H(x,y), I_S(x,y), I_V(x,y) are respectively the H, S and V components of the foreground image of the moving target at the j-th pixel, and B_H(x,y), B_S(x,y), B_V(x,y) are respectively the H, S and V components of the background region of the moving target at the j-th pixel.
The model building module 32 is specifically configured to:
for each moving target, determine the observed-target probability distribution function of the moving target in the i-th frame image, where β is a control parameter, X_{m,i} is the observation of the m-th moving target in the i-th frame video image, and μ_{m,i} denotes the mean vector of the m-th target in the i-th frame video image;
for each moving target, determine the J background probability functions of the moving target in the i-th frame video image, where in the j-th background probability function of the m-th moving target, s is the dimension of the state space, Y_i denotes the pixel value of the pixel in the i-th frame video image, and the corresponding mean vector is that of the j-th Gaussian model in the i-th frame video image; i, j, m and J are positive integers;
for each moving target, determine the similarity function of the moving target according to its observed-target probability distribution function in the i-th frame video image and its J background probability functions in the i-th frame video image, where δ is a constant;
for each moving target, determine the information weight of the moving target in the i-th frame video image according to its similarity function, where N denotes the total number of moving targets;
for each moving target, determine the model information of the moving target according to its observed-target probability distribution function in the i-th frame video image.
The window update module 33 comprises:
an initialisation sub-module 331, configured to determine, for each moving target, the initial tracking window of the moving target;
a ratio determination sub-module 332, configured to determine, for each moving target, the size-change ratio q of the tracking window according to the initial tracking window of the moving target, where G_1, G_2, G_3, G_4, G_5 are respectively the model information corresponding to the first five initial tracking windows of the moving target;
a window determination sub-module 333, configured to determine, for each moving target, the tracking-window length H_{i+1} = λ_{m,i} H_i (1 + q) and width W_{i+1} = λ_{m,i} W_i (1 + q) of the m-th target in the i-th frame video image according to the size-change ratio of the tracking window and the information weight of the moving target in the i-th frame video image, where λ_{m,i} is the information weight of moving target m in the i-th frame video image.
The initialisation sub-module 331 is specifically configured to:
if the moving target becomes larger, multiply the length and width of the tracking window by 1 + ζ, where ζ is a constant;
if the moving target becomes smaller, multiply the length and width of the tracking window by 1 − ζ.
The weight determination module 34 is specifically configured to:
for each moving target, determine the observed-target model state vector X = [x, y, g, l] of the moving target, where x, y, g and l are respectively the length and width of the tracking window, the color histogram, and the model information of the moving target;
for each moving target, determine the association probability function between two adjacent frames of the moving target according to its observed-target model state vector;
determine the first association probability function between the m-th moving target and the n-th moving target with respect to size and color histogram in the i-th frame video image, and determine the second association probability function between the m-th moving target and the n-th moving target with respect to model information and LBP texture value, where i denotes the i-th frame video image, and i, m and n are positive integers;
determine the Bhattacharyya coefficient between the m-th moving target and the n-th moving target according to the first association probability function and the second association probability function;
determine the weight of the i-th frame video image according to the Bhattacharyya coefficient, where σ is the variance of the Gaussian function.
The association module 35 comprises:
a calculation sub-module 351, configured to calculate, for each moving target, the tracking-window matrix Ω;
an update determination sub-module 352, configured to determine, for each moving target, the state update equation of the joint probabilistic data association of the moving target, where X_m(i) denotes the state vector of moving target m in the i-th frame video image in the data association, V_m(i+1) denotes the new joint information after particle filtering, the prediction term denotes the state-vector prediction of moving target m in the i-th frame video image, and p_m(i+1) is the association probability function between two adjacent frames of moving target m;
a probability determination sub-module 353, configured to determine, for each moving target, the association probability according to the tracking-window matrix, where ω_i is the weight of the i-th frame video image;
a sample determination sub-module 354, configured to determine the number of unassociated observation samples in the video image.
The calculation sub-module 351 is specifically configured to:
for each moving target, if observation data u falls into the tracking window of moving target m, set the matrix value to 1; if observation data u does not fall into the tracking window of moving target m, set the matrix value to 0;
for each moving target, determine the tracking-window matrix Ω from these matrix values, where N denotes the total number of moving targets and M denotes the total number of tracking-window updates.
The tracking module 36 is specifically configured to:
generate an initial particle sequence x^{*(1)}(m), x^{*(2)}(m), ..., x^{*(z)}(m) according to the prior probability distribution, where m denotes the m-th moving target, z denotes the total number of generated particles, and z is a positive integer;
sample the posterior probability distribution to determine the sampled value, and calculate the acceptance probability α(X, X^{*(z)}(m)); if α(X, X^{*(z)}(m)) ≥ 1, accept the sampled value X^{(z)}(m) = X^{*(z)}(m) and set the particle weight to 1/N; if α(X, X^{*(z)}(m)) < 1, ignore the sampled value;
determine the importance weight from the measurement, where P(X^{(z)}(m) | X^{*(z)}(m)) is the association probability calculated in the data association, the proposal density denotes the known probability density function of a simple sampling, and i denotes the i-th frame video image;
determine the state estimation equation and the covariance matrix from the updated weights;
sample the video image once every N frames, take the M particles sampled over MN iterations as the state prediction value of the moving target at the current instant, and perform tracking maintenance on the moving targets under the tracking windows according to the state prediction values of the moving targets.
The motion target tracking device of this embodiment can perform the motion target tracking methods provided by Embodiment 1 and Embodiment 2 of the present invention; their implementation principles are similar and are not repeated here.
In this embodiment, the shadows of the moving targets in the video image are removed by using a shadow removal method based on improved mixed-Gaussian background modeling; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined from that model information; the association probabilities between the moving targets are determined from those weights; and tracking maintenance is performed on the moving targets under the tracking windows according to the association probabilities. Moreover, combined with the Bhattacharyya coefficient, the tracking window can be adapted to the size of the moving target as it changes, so that targets with large scale changes can be tracked effectively and the real-time performance of motion target tracking is improved. Because association probabilities between the moving targets are established, moving targets that occlude or merge with one another can be separated, multiple moving targets can be told apart, and tracking maintenance can therefore be performed on the moving targets in the video image, achieving reliable motion target tracking.
Fig. 5 is a structural schematic diagram of the TV provided by Embodiment 5 of the present invention. As shown in Fig. 5, the TV provided by this embodiment comprises:
a display core 51, an FPGA chip 52 and a liquid crystal display 53, the FPGA chip 52 being connected to the display core 51 and to the liquid crystal display 53 respectively;
wherein the motion target tracking device provided in the above embodiments is arranged in the FPGA chip 52.
In this embodiment, specifically, the TV is provided with a display core, an FPGA chip and a liquid crystal display, and the FPGA chip is connected to the display core and to the liquid crystal display respectively. The motion target tracking device provided by Embodiment 3 or Embodiment 4 can be arranged in the FPGA chip.
The method can be implemented in hardware on an FPGA, for example with an EP2C70-based multimedia processing platform system DE2-70. The selected multimedia processing platform system has extensive, high-speed programmable logic resources; it requires a large-capacity memory resource to store high-resolution image data, requires a high-speed data transmission channel to carry the high-bandwidth code stream, and can support a variety of video input/output interfaces.
The motion target tracking device of this embodiment can be the motion target tracking device provided by Embodiment 3 or Embodiment 4 of the present invention; their implementation principles are similar and are not repeated here.
In this embodiment, the motion target tracking device provided by Embodiment 3 or Embodiment 4 is arranged on the TV. By using a shadow removal method based on improved mixed-Gaussian background modeling, the shadows of the moving targets in the video image are removed; model information is established for each moving target; the tracking window of each moving target is adaptively updated according to its model information; the weights between the moving targets are determined from that model information; the association probabilities between the moving targets are determined from those weights; and tracking maintenance is performed on the moving targets under the tracking windows according to the association probabilities. Moreover, combined with the Bhattacharyya coefficient, the tracking window can be adapted to the size of the moving target as it changes, so that targets with large scale changes can be tracked effectively and the real-time performance of motion target tracking is improved. Because association probabilities between the moving targets are established, moving targets that occlude or merge with one another can be separated, multiple moving targets can be told apart, and tracking maintenance can therefore be performed on the moving targets in the video image, achieving reliable motion target tracking.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by program instructions executed on the relevant hardware. The aforementioned program may be stored in a computer-readable storage medium; when the program is executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be replaced by equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the various embodiments of the present invention.
Claims (9)
1. A motion target tracking method, characterized by comprising:
removing the shadows of the moving targets in a video image by using a shadow removal method based on improved mixed-Gaussian background modeling;
establishing model information for each moving target;
adaptively updating the tracking window of each moving target according to the model information of that moving target;
determining the weights between the moving targets according to the model information of each moving target;
determining the association probabilities between the moving targets according to the weights between them;
performing tracking maintenance on the moving targets under the tracking windows according to the association probabilities between the moving targets;
wherein removing the shadows of the moving targets in the video image by using the shadow removal method based on improved mixed-Gaussian background modeling comprises:
for each moving target in the video image, determining the color angle α between the foreground image and the background region of the moving target, wherein X_d is the color vector of the foreground image of the moving target at the j-th pixel, X_b is the color vector of the background region of the moving target at the j-th pixel, and j is a positive integer;
for each moving target in the video image, if α < τ, determining that the j-th pixel of the moving target is a suspected shadow, wherein τ is a constant;
for each moving target in the video image, if the j-th pixel of the moving target satisfies the corresponding condition on its H, S and V components, determining that the j-th pixel of the moving target is a shadow, wherein (x, y) is the coordinate of the j-th pixel, I_H(x,y), I_S(x,y), I_V(x,y) are respectively the H, S and V components of the foreground image of the moving target at the j-th pixel, and B_H(x,y), B_S(x,y), B_V(x,y) are respectively the H, S and V components of the background region of the moving target at the j-th pixel.
2. The method according to claim 1, characterized in that establishing model information for each moving target comprises:
for each moving target, determining the observed-target probability distribution function of the moving target in the i-th frame image, wherein β is a control parameter, X_{m,i} is the observation of the m-th moving target in the i-th frame video image, and μ_{m,i} denotes the mean vector of the m-th target in the i-th frame video image;
for each moving target, determining the J background probability functions of the moving target in the i-th frame video image, wherein in the j-th background probability function of the m-th moving target, s is the dimension of the state space, Y_i denotes the pixel value of the pixel in the i-th frame video image, and the corresponding mean vector is that of the j-th Gaussian model in the i-th frame video image; i, j, m and J are positive integers;
for each moving target, determining the similarity function of the moving target according to its observed-target probability distribution function in the i-th frame video image and its J background probability functions in the i-th frame video image, wherein δ is a constant;
for each moving target, determining the information weight of the moving target in the i-th frame video image according to its similarity function, wherein N denotes the total number of moving targets;
for each moving target, determining the model information of the moving target according to its observed-target probability distribution function in the i-th frame video image.
3. The method according to claim 2, characterized in that adaptively updating the tracking window of each moving target according to its model information comprises:
for each moving target, determining the initial tracking window of the moving target;
for each moving target, determining the size-change ratio q of the tracking window according to the initial tracking window of the moving target, wherein G_1, G_2, G_3, G_4, G_5 are respectively the model information corresponding to the first five initial tracking windows of the moving target;
for each moving target, determining the tracking-window length H_i = λ_{m,i} H_{i-1} (1 + q) and width W_i = λ_{m,i} W_{i-1} (1 + q) of the m-th target in the i-th frame video image according to the size-change ratio of the tracking window and the information weight of the moving target in the i-th frame video image, wherein λ_{m,i} is the information weight of moving target m in the i-th frame video image, H_{i-1} is the tracking-window length in the (i-1)-th frame video image, W_{i-1} is the tracking-window width in the (i-1)-th frame video image, and q is the size-change ratio of the tracking window.
4. The method according to claim 3, characterized in that determining, for each moving target, the initial tracking window of the moving target comprises:
if the moving target becomes larger, determining that the length and width of the tracking window are multiplied by 1 + ζ, wherein ζ is a constant;
if the moving target becomes smaller, determining that the length and width of the tracking window are multiplied by 1 − ζ.
5. The method according to claim 1, characterized in that determining the weights between the moving targets according to the model information of each moving target comprises:
for each moving target, determining the observed-target model state vector X = [x, y, g, l] of the moving target, wherein x, y, g and l are respectively the length and width of the tracking window, the color histogram, and the model information of the moving target;
for each moving target, determining the association probability function between two adjacent frames of the moving target according to its observed-target model state vector;
determining the first association probability function between the m-th moving target and the n-th moving target with respect to size and color histogram in the i-th frame video image, and determining the second association probability function between the m-th moving target and the n-th moving target with respect to model information and LBP texture value, wherein i denotes the i-th frame video image, and i, m and n are positive integers;
determining the Bhattacharyya coefficient between the m-th moving target and the n-th moving target according to the first association probability function and the second association probability function;
determining the weight of the i-th frame video image according to the Bhattacharyya coefficient, wherein σ is the variance of the Gaussian function.
6. The method according to claim 1, characterized in that determining the association probabilities between the moving targets according to the weights between them comprises:
for each moving target, calculating the tracking-window matrix Ω;
for each moving target, determining the state update equation of the joint probabilistic data association of the moving target, wherein X_m(i) denotes the state vector of moving target m in the i-th frame video image in the data association, V_m(i+1) denotes the new joint information after particle filtering, the prediction term denotes the state-vector prediction of moving target m in the i-th frame video image, and p_m(i+1) is the association probability function between two adjacent frames of moving target m;
for each moving target, determining the association probability according to the tracking-window matrix, wherein ω_i is the weight of the i-th frame video image.
7. The method according to claim 6, wherein said calculating, for each moving target, the tracking window matrix comprises:
For each moving target, setting the matrix element to a first value if the observation data u falls within the tracking window of moving target m, and setting the matrix element to a second value if the observation data u does not fall within the tracking window of moving target m;
For each moving target, determining the tracking window matrix according to the matrix elements, where N denotes the total number of moving targets and M denotes the total number of tracking window updates.
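A sketch of the claim-7 tracking window matrix Ω, assuming the usual 0/1 convention for whether observation u falls inside the tracking window of target m; the exact element values and matrix layout in the claim are not reproduced in the text, and the window representation used below is illustrative.

```python
import numpy as np

def tracking_window_matrix(observations, windows):
    """Assumed validation-style matrix: omega[u, m] = 1 when observation
    u lies inside the tracking window of target m, 0 otherwise.
    observations: iterable of (x, y) points; windows: iterable of
    (cx, cy, h, w) tracking windows, one per target."""
    observations = list(observations)
    windows = list(windows)
    omega = np.zeros((len(observations), len(windows)))
    for u, (ox, oy) in enumerate(observations):
        for m, (cx, cy, h, w) in enumerate(windows):
            inside = abs(ox - cx) <= w / 2 and abs(oy - cy) <= h / 2
            omega[u, m] = 1.0 if inside else 0.0
    return omega
```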
8. The method according to any one of claims 1-7, wherein said performing, according to the association probability between the moving targets, tracking maintenance on the moving targets under the tracking windows comprises:
Generating an initial particle point sequence x*(1)(m), x*(2)(m), ..., x*(z)(m) according to the prior probability distribution, where m denotes the m-th moving target, z denotes the total number of generated particle points, and z is a positive integer;
Sampling the posterior probability distribution to determine a sampled value, and calculating the acceptance probability;
If α(X, X*(z)(m)) ≥ 1, accepting the sampled value X(z)(m) = X*(z)(m) and setting the particle weight to 1/N; if α(X, X*(z)(m)) < 1, discarding the sampled value;
Determining the importance weight from the measured value, where P(X(z)(m) | X*(z)(m)) is the association probability calculated in the data association, a known probability density function from which simple sampling is performed is used, i denotes the i-th frame video image, ω_{i-1} is the weight of the (i-1)-th frame video image, and β_m(i) is the association probability of moving target m;
Determining the state estimation equation and the covariance matrix from the updated weights;
After sampling the video image once every N frames, taking the M particles obtained from M·N iterations as the state prediction value of the moving target at the current time, and performing tracking maintenance on the moving targets under the tracking windows according to the state prediction values of the moving targets.
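A simplified Python sketch of the claim-8 particle step, under stated assumptions: the acceptance ratio is taken as likelihood over proposal density, accepted particles start at weight 1/N and are re-weighted using the measurement likelihood, the association probability β_m(i) and the previous frame weight ω_{i-1}, and the weighted mean and covariance serve as the state estimate used for tracking maintenance. The claimed formulas themselves are not reproduced in the text; `likelihood` and `proposal_density` are assumed callables.

```python
import numpy as np

def track_maintenance_step(prior_particles, likelihood, proposal_density,
                           beta_m, prev_frame_weight):
    """Assumed accept/re-weight step for one target, one frame."""
    accepted, weights = [], []
    n = len(prior_particles)
    for x in prior_particles:
        q = proposal_density(x)
        alpha = likelihood(x) / (q + 1e-12)          # acceptance ratio (assumed form)
        if alpha >= 1.0:
            w0 = 1.0 / n                             # initial particle weight, per the claim
            # importance weight scaled by association probability and previous frame weight
            weights.append(w0 * beta_m * prev_frame_weight * likelihood(x) / (q + 1e-12))
            accepted.append(x)
    if not accepted:
        return None, None
    particles = np.asarray(accepted, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    state_est = (w[:, None] * particles).sum(axis=0)                 # state estimate
    diff = particles - state_est
    cov = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(axis=0)  # covariance
    return state_est, cov
```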
9. A television, characterized by comprising:
a display mainboard, a field programmable gate array (FPGA) chip and a liquid crystal display, the FPGA chip being connected to the display mainboard and the liquid crystal display respectively;
wherein the FPGA chip is configured to implement the moving target tracking method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610848424.2A CN106384359B (en) | 2016-09-23 | 2016-09-23 | Motion target tracking method and TV |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106384359A CN106384359A (en) | 2017-02-08 |
CN106384359B true CN106384359B (en) | 2019-06-25 |
Family
ID=57936913
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610848424.2A Active CN106384359B (en) | 2016-09-23 | 2016-09-23 | Motion target tracking method and TV |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106384359B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10139833B1 (en) * | 2017-05-17 | 2018-11-27 | GM Global Technology Operations LLC | Six-dimensional point cloud system for a vehicle |
CN111052753A (en) * | 2017-08-30 | 2020-04-21 | Vid拓展公司 | Tracking video scaling |
CN108257148B (en) * | 2018-01-17 | 2020-09-25 | 厦门大学 | Target suggestion window generation method of specific object and application of target suggestion window generation method in target tracking |
CN108711164B (en) * | 2018-06-08 | 2020-07-31 | 广州大学 | A Motion Detection Method Based on LBP and Color Features |
CN110009665B (en) * | 2019-03-12 | 2020-12-29 | 华中科技大学 | A Target Detection and Tracking Method in Occlusion Environment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10018703B2 (en) * | 2012-09-13 | 2018-07-10 | Conduent Business Services, Llc | Method for stop sign law enforcement using motion vectors in video streams |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101355692A (en) * | 2008-07-30 | 2009-01-28 | 浙江大学 | An intelligent monitoring device for real-time tracking of moving target area |
US8401239B2 (en) * | 2009-03-30 | 2013-03-19 | Mitsubishi Electric Research Laboratories, Inc. | Object tracking with regressing particles |
CN103914853A (en) * | 2014-03-19 | 2014-07-09 | 华南理工大学 | Method for processing target adhesion and splitting conditions in multi-vehicle tracking process |
CN104299210A (en) * | 2014-09-23 | 2015-01-21 | 同济大学 | Vehicle shadow eliminating method based on multi-feature fusion |
CN105931269A (en) * | 2016-04-22 | 2016-09-07 | 海信集团有限公司 | Tracking method for target in video and tracking device thereof |
Non-Patent Citations (1)
Title |
---|
Research on Moving Target Detection and Tracking Algorithms Based on Image Sequences; Liu Xue; China Master's Theses Full-text Database, Information Science and Technology; 2007-09-15 (No. 3); main text pages 19, 30, 36-37
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106384359B (en) | Motion target tracking method and TV | |
CN107886048B (en) | Target tracking method and system, storage medium and electronic terminal | |
CN101339655B (en) | Visual Tracking Method Based on Object Features and Bayesian Filter | |
CN107408303A (en) | System and method for Object tracking | |
CN103971386A (en) | Method for foreground detection in dynamic background scenario | |
CN105550678A (en) | Human body motion feature extraction method based on global remarkable edge area | |
CN114639042A (en) | Video target detection algorithm based on improved CenterNet backbone network | |
CN105303581B (en) | A kind of moving target detecting method of auto-adaptive parameter | |
CN108647649A (en) | The detection method of abnormal behaviour in a kind of video | |
US20190180447A1 (en) | Image processing device | |
CN109191498B (en) | Object detection method and system based on dynamic memory and motion perception | |
KR102584708B1 (en) | System and Method for Crowd Risk Management by Supporting Under and Over Crowded Environments | |
CN111402237A (en) | Video image anomaly detection method and system based on spatiotemporal cascade autoencoder | |
Meng et al. | Video‐Based Vehicle Counting for Expressway: A Novel Approach Based on Vehicle Detection and Correlation‐Matched Tracking Using Image Data from PTZ Cameras | |
Li et al. | A traffic state detection tool for freeway video surveillance system | |
CN102063625B (en) | Improved particle filtering method for multi-target tracking under multiple viewing angles | |
CN110659658A (en) | Target detection method and device | |
CN105184229A (en) | Online learning based real-time pedestrian detection method in dynamic scene | |
CN103475800A (en) | Method and device for detecting foreground in image sequence | |
CN114169425A (en) | Training target tracking model and target tracking method and device | |
CN107403451A (en) | Adaptive binary feature monocular vision odometer method and computer, robot | |
CN110060278A (en) | The detection method and device of moving target based on background subtraction | |
CN106780567B (en) | Immune particle filter extension target tracking method fusing color histogram and gradient histogram | |
CN116245949B (en) | A high-precision visual SLAM method based on improved quadtree feature point extraction | |
CN105740819A (en) | Integer programming based crowd density estimation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | Address after: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218; Patentee after: Hisense Visual Technology Co., Ltd.; Address before: 266555 Qingdao economic and Technological Development Zone, Shandong, Hong Kong Road, No. 218; Patentee before: QINGDAO HISENSE ELECTRONICS Co.,Ltd. |