
CN111612818A - Novel binocular vision multi-target tracking method and system - Google Patents

Novel binocular vision multi-target tracking method and system

Info

Publication number
CN111612818A
CN111612818A
Authority
CN
China
Prior art keywords
probability
vehicle motion
target
tracking
binocular vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010384346.1A
Other languages
Chinese (zh)
Inventor
胡广地
李孝哲
黎康杰
顾丽军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Xintongda Electronic Technology Co ltd
Original Assignee
Jiangsu Xintongda Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Xintongda Electronic Technology Co., Ltd.
Priority to CN202010384346.1A
Publication of CN111612818A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00, specially adapted for navigation in a road network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior


Abstract

The invention belongs to the technical field of multi-target tracking for automatic driving, and particularly relates to a novel binocular vision multi-target tracking method and system. The method comprises the following steps: acquiring images through binocular vision; acquiring a moving target from the images; constructing a vehicle motion space and a vehicle motion model; and tracking the moving target through improved joint probabilistic data association according to the vehicle motion space and the vehicle motion model. The method realizes multi-target tracking for intelligent vehicles, greatly improves the automation and intelligence of the driving system, improves tracking precision and speed, and produces no obvious drift when tracking vehicles and no missed tracking of pedestrians.

Description

Novel binocular vision multi-target tracking method and system
Technical Field
The invention belongs to the technical field of multi-target tracking of automatic driving, and particularly relates to a novel binocular vision multi-target tracking method and system.
Background
Achieving reliable perception of the surrounding environment under a variety of uncertain conditions is a fundamental task in almost any assistive or autonomous system application, and especially with the continued rise of automated driving research, academia and various major technology companies are actively developing advanced driving assistance systems. The core technology of the driving assistance system comprises functions of self-adaptive cruise, collision avoidance, lane change assistance, traffic sign identification, parking assistance and the like, and aims to realize full automation of vehicle driving and reduce human errors causing road accidents while improving safety. In various technologies, moving object tracking is a key task of a driving assistance system, and when a vehicle can detect a dynamic object in its environment and predict its future behavior, it can greatly improve the level of intelligence of the vehicle.
Because real-time and accurate tracking of various objects must be achieved under different environmental conditions, no single sensing system can currently provide all the information required for target tracking. In view of this, driving assistance systems generally achieve accurate detection of moving targets by means of a composite sensing system comprising millimeter-wave radar, laser rangefinders, vision systems and the like. Radar can accurately measure the relative speed and distance of an object. Laser rangefinders have higher lateral resolution than radar and, in addition to accurately detecting object distances, can detect the footprint of objects and provide a detailed representation of the scene. Vision-based sensing systems can provide accurate lateral measurements and rich image information, an effective complement to road scene analysis based on ranging sensors. In particular, stereo vision sensors can provide object detection with high lateral resolution and dependable depth estimates over a limited range, while generally providing sufficient information for the identification and classification of objects.
Whichever sensor is used, the multi-target tracking problem must be solved in traffic scenarios: the state of each target must be tracked while the measurements are processed in a cluttered environment, which raises the problem of data association for the tracked targets.
Therefore, based on the above technical problems, a new binocular vision multi-target tracking method and system need to be designed.
Disclosure of Invention
The invention aims to provide a novel binocular vision multi-target tracking method and system.
In order to solve the technical problem, the invention provides a novel binocular vision multi-target tracking method, which comprises the following steps:
acquiring an image through binocular vision;
acquiring a moving target according to the image;
constructing a vehicle motion space and a vehicle motion model; and
tracking the moving target through the improved joint probabilistic data association according to the vehicle motion space and the vehicle motion model.
Further, the method for acquiring the moving target according to the image comprises the following steps:
after the image is corrected, estimating the displacement of the intermediate vehicle according to a visual stereo ranging algorithm to obtain the moving target.
Further, the method for constructing the vehicle motion space comprises the following steps:
based on the equivalence principle of the rigid-body constant-velocity motion model, constructing the vehicle motion space GL as the Cartesian product S1 × S2 of two matrix Lie groups;
wherein: S1 is the position component; S2 is the velocity component.
Further, the method for constructing the vehicle motion model comprises the following steps:
building and updating the vehicle motion model, i.e.
the vehicle motion model is:
X_{k+1} = X_k · exp(α_k + β_k);
wherein X_k ∈ GL is the motion state of the system at time k, α_k is a non-linear function, and β_k is Gaussian white noise;
when the posterior distribution of step k-1 satisfies the Gaussian distribution on the Lie group, the motion state indicated by the vehicle motion model is predicted according to X_{k+1} = X_k · exp(log(α_k)), so as to update the vehicle motion model to:
[updated matrix equation not reproduced in the source text]
wherein v_{1k}, v_{2k} and ω_k are the longitudinal, lateral and rotational speeds, respectively; β_{1k}, β_{2k} and β_{ωk} are the longitudinal, lateral and rotational components of the Gaussian noise, respectively; T is the matrix transpose symbol.
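As a numerical illustration of the propagation step, here is a minimal sketch assuming the state lives on SE(2) and that the longitudinal, lateral and rotational rates (plus their noise components) enter through the Lie-algebra exponential; the patent's updated matrix equation is not reproduced in its text, so this parameterization is an assumption:

```python
import numpy as np

def se2_exp(v1, v2, omega):
    """Closed-form exponential map from se(2) rates to an SE(2) matrix."""
    if abs(omega) < 1e-9:
        V = np.eye(2)  # small-angle limit of the left Jacobian
    else:
        s, c = np.sin(omega), np.cos(omega)
        V = np.array([[s, -(1.0 - c)],
                      [1.0 - c, s]]) / omega
    t = V @ np.array([v1, v2])
    c, s = np.cos(omega), np.sin(omega)
    return np.array([[c, -s, t[0]],
                     [s,  c, t[1]],
                     [0.0, 0.0, 1.0]])

def propagate(X_k, v1, v2, omega, beta=(0.0, 0.0, 0.0)):
    """One step X_{k+1} = X_k · exp(alpha_k + beta_k): the rates play the role
    of alpha_k, and beta is a sampled Gaussian noise draw (zero here)."""
    return X_k @ se2_exp(v1 + beta[0], v2 + beta[1], omega + beta[2])

# Pure longitudinal motion: one unit forward per step, no noise.
X1 = propagate(np.eye(3), v1=1.0, v2=0.0, omega=0.0)
```

The closed-form exponential avoids a general matrix-exponential routine and makes the small-rotation limit explicit.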
Further, the method for tracking the moving target through the improved joint probabilistic data association according to the vehicle motion space and the vehicle motion model comprises the following steps:
when the number of moving targets is n, the moving targets are represented as {T_1, ..., T_n};
Y_k represents the set of all detections at time k;
Y_{1:k} represents the history of all measurements, Y_{1:k} = {Y_1, ..., Y_k};
the posterior density of each moving target T_i (i = 1, 2, ..., n) is predicted according to joint probabilistic data association, i.e.
[posterior density formula not reproduced in the source text]
wherein its two factors are the density of the target state and the density of target existence; P is the probability; the density of the target state and the density of target existence are both conditioned on all of the measurements Y_k;
the probability of the presence of a moving target is described according to a Markov chain model, i.e.
[formula not reproduced in the source text]
wherein: ρ is the probability that a moving target existing at time k-1 still exists at time k;
from the measured values Y_k, the posterior density of moving target T_i scanned at time k is predicted through the total probability formula:
[formula not reproduced in the source text]
wherein the first term is the posterior data association probability of target existence and the second term is a probability hypothesis;
the probability that the detected target exists is:
[formula not reproduced in the source text]
when the association probability is composed of all joint events F, with each trajectory having zero or one measurement and each measurement assigned to zero or one trajectory, then
[two formulas not reproduced in the source text]
the probability that T_i exists but is not detected by the measurements within the cluster is:
[formula not reproduced in the source text]
wherein the two leading factors are the probability hypothesis for zero measurements and the probability of zero measurements;
after the P(F|Y_{1:k}) corresponding to each joint event F is calculated, a set C of targets assigned measurements and a set D of track sets assigned measurements are defined, giving:
[formula not reproduced in the source text]
wherein i indexes the i-th moving target; P_d is the probability that T_i is detected; P_q is the probability that the modified measurement lies within the threshold range of T_i; a further factor is the rotation probability; ρ_k is the clutter prior measurement density; τ is the index of the measurement assigned to T_i in joint event F;
the rotation probability is:
[formula not reproduced in the source text]
the probability that the target exists is:
[formula not reproduced in the source text]
then the joint data probability is:
[formula not reproduced in the source text]
and accurate tracking of the multiple moving targets is realized according to the joint data probability.
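The joint-event enumeration described above (each trajectory gets zero or one measurement, each measurement goes to zero or one trajectory) can be sketched as follows. This is a generic JPDA-style enumeration under an assumed detection probability `p_d` and clutter density; it is not the patent's exact weighting, whose formulas are not reproduced in the text:

```python
import numpy as np
from itertools import product

def joint_events(likelihood, p_d, clutter):
    """Enumerate feasible joint events F for a small JPDA cluster.

    likelihood[i][j]: measurement likelihood of measurement j under track i
    (a hypothetical gated Gaussian value). Returns per-track association
    weights beta[i][j], where column index n_m stands for a missed detection.
    """
    n_t, n_m = likelihood.shape
    beta = np.zeros((n_t, n_m + 1))
    # assignment[i] in {0..n_m-1} picks a measurement; n_m means 'missed'.
    for assignment in product(range(n_m + 1), repeat=n_t):
        used = [a for a in assignment if a < n_m]
        if len(used) != len(set(used)):
            continue  # a measurement shared by two tracks is infeasible
        w = 1.0
        for i, a in enumerate(assignment):
            w *= p_d * likelihood[i, a] / clutter if a < n_m else (1.0 - p_d)
        for i, a in enumerate(assignment):
            beta[i, a] += w
    return beta / beta.sum(axis=1, keepdims=True)

# Two tracks, two measurements: each track strongly prefers one measurement.
gate = np.array([[0.9, 0.1],
                 [0.1, 0.9]])
b = joint_events(gate, p_d=0.9, clutter=1.0)
```

Because every joint event assigns exactly one option (measurement or miss) to each track, normalizing each row is the same as dividing by the total event probability.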
On the other hand, the invention also provides a novel binocular vision multi-target tracking system, which comprises:
the acquisition module acquires images through binocular vision;
the moving target acquisition module acquires a moving target according to the image;
the building module is used for building a vehicle motion space and a vehicle motion model; and
the tracking module tracks the moving target through the improved joint probabilistic data association according to the vehicle motion space and the vehicle motion model.
The beneficial effects of the invention are that it acquires images through binocular vision; acquires a moving target from the images; constructs a vehicle motion space and a vehicle motion model; and tracks the moving target through improved joint probabilistic data association according to the vehicle motion space and the vehicle motion model, thereby realizing multi-target tracking for intelligent vehicles, greatly improving the automation and intelligence of the driving system, improving tracking precision and speed, and producing no obvious drift when tracking vehicles and no missed tracking of pedestrians.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a novel binocular vision multi-target tracking method according to the present invention;
FIG. 2 is a detailed flow chart of the novel binocular vision multi-target tracking method according to the present invention;
fig. 3 is a schematic block diagram of the novel binocular vision multi-target tracking system according to the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Fig. 1 is a flow chart of a novel binocular vision multi-target tracking method according to the present invention.
As shown in fig. 1, this embodiment 1 provides a novel binocular vision multi-target tracking method, including: acquiring images through binocular vision; acquiring a moving target from the images; constructing a vehicle motion space and a vehicle motion model; and tracking the moving target through improved Joint Probabilistic Data Association (JPDA) according to the vehicle motion space and the vehicle motion model, thereby realizing intelligent-vehicle multi-target tracking, greatly improving the automation and intelligence of the driving system, improving tracking precision and speed, and producing no obvious offset when tracking vehicles and no missed tracking of pedestrians.
Fig. 2 is a specific flowchart of the novel binocular vision multi-target tracking method according to the present invention.
As shown in fig. 2, in the present embodiment, the method for acquiring an image through binocular vision includes: the stereoscopic vision camera is used for collecting images and videos of vehicles and pedestrians; modeling the uncertainty of the sensor under the Lie group, and performing state filtering on the preprocessed image by adopting a Euclidean group algorithm; false detections are removed using binocular vision in areas where vehicles may be present to correct the images.
In this embodiment, the method for acquiring a moving target from the image comprises: after the image is corrected, the measurement uncertainty and the trajectory of the predicted target motion are confirmed through a Kalman filter, the displacement of the intermediate vehicle is estimated using a visual stereo ranging algorithm, and objects that do not conform to the expected characteristics are regarded as moving targets; the moving targets need to undergo stereo detection in this process, and the specific procedure is as follows:
Firstly, after the image is corrected, all feature points from the previous frame are projected into a 3D world frame through a standard pinhole camera model; the positions and the obtained motion matrix are then back-projected into the current camera frame and connected to the corresponding 3D points from the current frame to form a vector field, where each vector represents the motion of the corresponding 3D point relative to the world frame. Secondly, since measurement uncertainty in 3D space is highly anisotropic, it is difficult to accurately determine the intensity of motion along the optical-axis direction; the vectors are therefore projected onto the image plane, where the uncertainty is uniformly distributed, a threshold is applied to the motion amplitude of each point, and the remaining vectors are then connected into clusters according to translation and rotation parameters. Finally, each cluster containing at least 3 vectors corresponds to a moving object (moving target), and the moving object is described by the centroid of all its corresponding points.
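The thresholding-and-clustering step can be sketched as follows. The function name, the motion-residual test and the simple greedy linking rule are illustrative assumptions, but the sketch keeps the stated rule that a cluster needs at least 3 vectors to count as a moving object:

```python
import numpy as np

def moving_object_centroids(prev_pts, curr_pts, motion_thresh=0.5, link_dist=2.0):
    """Flag image-plane points whose motion residual exceeds a threshold,
    greedily link nearby flagged points into clusters, and keep clusters
    with at least 3 vectors, returning one centroid per moving object."""
    residual = np.linalg.norm(curr_pts - prev_pts, axis=1)
    moving = curr_pts[residual > motion_thresh]
    clusters = []
    for p in moving:
        for c in clusters:
            if np.linalg.norm(np.mean(c, axis=0) - p) < link_dist:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters if len(c) >= 3]

# Three neighboring points move together; two background points stay still.
prev = np.array([[0.0, 0.0], [10.0, 10.0], [10.5, 10.0], [10.0, 10.5], [5.0, 5.0]])
disp = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 0.0]])
centroids = moving_object_centroids(prev, prev + disp)
```

A production version would cluster on translation and rotation parameters as the text describes; distance-based linking stands in for that here to keep the sketch short.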
In this embodiment, a state uncertainty representation and a motion model are constructed based on the extended kalman filter in the Lie group, that is, a vehicle motion space and a vehicle motion model need to be constructed.
In this embodiment, the method for constructing the vehicle motion space includes: the vehicle is a typical rigid body, so its state needs to be described using a system of rigid-body motion equations; furthermore, when the speed of the vehicle is considered, higher-order state changes can be expressed by the same system of motion equations; based on the equivalence principle of the rigid-body constant-velocity motion model, the vehicle motion space GL is constructed as the Cartesian product S1 × S2 of two matrix Lie groups, wherein S1 is the position component and S2 is the velocity component.
In this embodiment, the method for constructing the vehicle motion model includes:
building and updating the vehicle motion model, i.e.
the vehicle motion model is:
X_{k+1} = X_k · exp(α_k + β_k);
wherein X_k ∈ GL is the motion state of the system at time k, α_k is a non-linear function, and β_k is Gaussian white noise;
if the posterior distribution of step k-1 satisfies the Gaussian distribution on the Lie group, then X_{k+1} = X_k · exp(log(α_k)) can be used to predict the motion state indicated by the vehicle motion model, so as to update the vehicle motion model (re-modeling the equation of the vehicle motion model) to:
[updated matrix equation not reproduced in the source text]
wherein v_{1k}, v_{2k} and ω_k are the longitudinal, lateral and rotational speeds, respectively; β_{1k}, β_{2k} and β_{ωk} are the longitudinal, lateral and rotational components of the Gaussian noise, respectively; T is the matrix transpose symbol.
In this embodiment, the method for tracking a moving target through improved joint probabilistic data association according to the vehicle motion space and the vehicle motion model comprises the following steps:
assuming the number of moving targets is n, the moving targets are expressed as {T_1, ..., T_n}; the number n of moving targets to be tracked varies with time, that is, moving targets may appear in or disappear from the field of view of the sensor at any time;
Y_k is defined to represent the set of all detections at time k (1);
Y_{1:k} is defined to represent the history of all measurements, i.e. Y_{1:k} = {Y_1, ..., Y_k} (2);
this problem is solved by predicting (estimating) the posterior density of each moving target T_i (i = 1, 2, ..., n) based on joint probabilistic data association, i.e.
[formula (3) not reproduced in the source text]
wherein its two factors are the density (probability) of the target state and the density (likelihood) of target existence; P is the probability;
in formula (3), the density of the target state and the density of target existence are both conditioned on all of Y_k and are related to k;
the probability of the presence of a moving target is described according to a Markov chain model, i.e.
[formula not reproduced in the source text]
wherein: ρ is the probability that a moving target existing at time k-1 still exists at time k;
from the measured values Y_k, the posterior density of moving target T_i scanned at time k is predicted through the total probability formula:
[formula not reproduced in the source text]
wherein the first term is the posterior data association probability of target existence and the second term is a probability hypothesis; n_k is the number of measurements;
the probability that the detected target exists is:
[formula not reproduced in the source text]
to calculate the association probability, the association events assigning measurements to targets need to be considered within the set of targets; it is assumed at this point that it is composed of all joint events F, where each trajectory has zero or one measurement and each measurement is assigned to zero or one trajectory, then
[two formulas not reproduced in the source text]
the probability that T_i exists but is not detected by the measurements within the cluster is:
[formula not reproduced in the source text]
wherein the two leading factors are the probability hypothesis for zero measurements and the probability of zero measurements; to calculate the P(F|Y_{1:k}) corresponding to each joint event F, a set C of targets assigned measurements and a set D of track sets assigned measurements need to be defined, and then:
[formula (10) not reproduced in the source text]
wherein i indexes the i-th moving target; P_d is the probability that T_i is detected; P_q is the probability that the modified measurement lies within the threshold range of T_i; a further factor is the rotation probability; ρ_k is the clutter prior measurement density; τ is the index of the measurement assigned to T_i in joint event F;
the rotation probability can be calculated from equation (10) as:
[formula not reproduced in the source text]
with all of these elements, the probability that the target exists is given by the following formula:
[formula not reproduced in the source text]
then the joint data probability is:
[formula not reproduced in the source text]
and accurate tracking of the multiple moving targets is realized according to the joint data probability.
Example 2
Fig. 3 is a schematic block diagram of the novel binocular vision multi-target tracking system according to the present invention.
As shown in fig. 3, on the basis of embodiment 1, this embodiment 2 further provides a novel binocular vision multi-target tracking system, including: the acquisition module acquires images through binocular vision; the moving target acquisition module acquires a moving target according to the image; the building module is used for building a vehicle motion space and a vehicle motion model; and the tracking module tracks the moving target according to the vehicle motion space and the vehicle motion model through the improved joint probability data association.
In the embodiment, the acquisition module acquires an image through binocular vision, the moving target acquisition module acquires a moving target according to the image, and the construction module constructs a vehicle motion space and a vehicle motion model; and the method for tracking the moving target by the tracking module according to the vehicle motion space and the vehicle motion model through the improved joint probability data association has already been explained in detail in embodiment 1, and is not described in detail in this embodiment.
In summary, the invention acquires images through binocular vision; acquires a moving target from the images; constructs a vehicle motion space and a vehicle motion model; and tracks the moving target through improved joint probabilistic data association according to the vehicle motion space and the vehicle motion model, thereby realizing multi-target tracking for intelligent vehicles, greatly improving the automation and intelligence of the driving system, improving tracking precision and speed, and producing no obvious drift when tracking vehicles and no missed tracking of pedestrians.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (6)

1. A novel binocular vision multi-target tracking method is characterized by comprising the following steps:
acquiring an image through binocular vision;
acquiring a moving target according to the image;
constructing a vehicle motion space and a vehicle motion model; and
tracking the moving target through the improved joint probabilistic data association according to the vehicle motion space and the vehicle motion model.
2. The novel binocular vision multi-target tracking method of claim 1,
the method for acquiring the moving target according to the image comprises the following steps:
after the image is corrected, estimating the displacement of the intermediate vehicle according to a visual stereo ranging algorithm to obtain the moving target.
3. The novel binocular vision multi-target tracking method of claim 2,
the method for constructing the vehicle motion space comprises the following steps:
based on the equivalence principle of the rigid-body constant-velocity motion model, constructing the vehicle motion space GL as the Cartesian product S1 × S2 of two matrix Lie groups;
wherein: S1 is the position component; S2 is the velocity component.
4. The novel binocular vision multi-target tracking method of claim 3,
the method for constructing the vehicle motion model comprises the following steps:
building and updating the vehicle motion model, i.e.
the vehicle motion model is:
X_{k+1} = X_k · exp(α_k + β_k);
wherein X_k ∈ GL is the motion state at time k, α_k is a non-linear function, and β_k is Gaussian white noise;
when the posterior distribution of step k-1 satisfies the Gaussian distribution on the Lie group, the motion state indicated by the vehicle motion model is predicted according to X_{k+1} = X_k · exp(log(α_k)), so as to update the vehicle motion model to:
[updated matrix equation not reproduced in the source text]
wherein v_{1k}, v_{2k} and ω_k are the longitudinal, lateral and rotational speeds, respectively; β_{1k}, β_{2k} and β_{ωk} are the longitudinal, lateral and rotational components of the Gaussian noise, respectively; T is the matrix transpose symbol.
5. The novel binocular vision multi-target tracking method of claim 4,
the method for tracking the moving target according to the vehicle motion space and the vehicle motion model through the improved joint probability data association comprises the following steps:
when the number of moving objects is n, then the plurality of moving objects are represented as { T }1,...,Tn};
YkSet representing all detections at time k
Figure FDA0002480743360000022
Y1:kHistory Y representing all metrics1:k={Y1,…,Yk};
Predicting each moving target T according to joint probability data associationiPosterior density of (i ═ 1, 2.., n), i.e.
Figure FDA0002480743360000023
Wherein,
Figure FDA0002480743360000024
density of the target state;
Figure FDA0002480743360000025
density of presence of a target; p is the probability;
density of target state
Figure FDA0002480743360000026
And their existence
Figure FDA0002480743360000027
Is all YkMeasuring (2);
describing the probability of the presence of a moving object according to a Markov chain model, i.e.
Figure FDA0002480743360000028
Wherein: rho is the probability that the moving target still exists at the moment k when the moving target exists at the moment k-1;
from the measured value YkPredicting k time scanning to moving target T through total probability formulaiThe posterior density of (a):
Figure FDA0002480743360000029
wherein,
Figure FDA00024807433600000210
is the posterior data association probability of the existence of the target;
Figure FDA00024807433600000211
is a probability hypothesis;
the probability of the existence of the detected target is as follows:
Figure FDA00024807433600000212
when
Figure FDA0002480743360000031
is composed of all joint events F in which each track has zero or one measurement and each measurement is assigned to zero or one track, then
Figure FDA0002480743360000032
Figure FDA0002480743360000033
the probability that Ti exists but is not detected by the measurements within the cluster is:
Figure FDA0002480743360000034
wherein,
Figure FDA0002480743360000035
is the probability hypothesis for 0 measurements;
Figure FDA0002480743360000036
is the probability of 0 measurements;
after p(F | Y1:k) corresponding to each joint event F is calculated, the set C of targets assigned measurements and the set D of tracks with assigned measurements are given, to obtain:
Figure FDA0002480743360000037
wherein i denotes the i-th moving target; Pd is the probability that Ti is detected; Pq is the probability that the corrected measurement falls within the threshold range of Ti;
Figure FDA0002480743360000038
is the rotation probability; ρk is the clutter prior measurement density; τ is the index of the measurement assigned to Ti in joint event F;
the rotation probability is:
Figure FDA0002480743360000039
the probability of the target being present is:
Figure FDA00024807433600000310
then the joint data probability is:
Figure FDA00024807433600000311
and realizing accurate tracking of a plurality of moving targets according to the joint data probability.
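The joint-event enumeration described in the claim can be illustrated with a small sketch. Assuming a detection probability Pd, a clutter prior density, and a matrix of measurement likelihoods, the code below enumerates every feasible joint event F (each track gets zero or one measurement, each measurement is used by at most one track), weights it, and marginalizes to per-track association probabilities. This is a generic JPDA sketch under those assumptions, not the patent's exact weighting; all names are illustrative.

```python
import itertools
import numpy as np

def jpda_marginals(likelihood, p_d, clutter_density):
    """Enumerate feasible joint events F and return beta[t, j]: the marginal
    probability that track t is associated with measurement j-1
    (column 0 is the missed-detection hypothesis)."""
    n_tracks, n_meas = likelihood.shape
    beta = np.zeros((n_tracks, n_meas + 1))
    # -1 encodes "track t is not detected in this joint event"
    for event in itertools.product(range(-1, n_meas), repeat=n_tracks):
        used = [j for j in event if j >= 0]
        if len(used) != len(set(used)):
            continue  # each measurement belongs to at most one track
        weight = 1.0
        for t, j in enumerate(event):
            if j < 0:
                weight *= 1.0 - p_d  # track present but not detected
            else:
                weight *= p_d * likelihood[t, j] / clutter_density
        for t, j in enumerate(event):
            beta[t, 0 if j < 0 else j + 1] += weight
    return beta / beta.sum(axis=1, keepdims=True)

# One track, one gated measurement with unit likelihood:
beta = jpda_marginals(np.array([[1.0]]), p_d=0.9, clutter_density=1.0)
```

Brute-force enumeration is exponential in the number of tracks; it is shown here only to make the joint-event weighting concrete, and practical trackers prune events by gating, as the claim's threshold probability Pq suggests.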
6. A novel binocular vision multi-target tracking system, characterized by comprising:
an acquisition module that acquires images through binocular vision;
a moving target acquisition module that acquires a moving target from the images;
a building module that builds a vehicle motion space and a vehicle motion model; and
a tracking module that tracks the moving target according to the vehicle motion space and the vehicle motion model through improved joint probability data association.
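The four claimed modules can be wired as a simple pipeline skeleton; the class, method and field names below are illustrative assumptions about the data flow, not structures defined by the patent.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class BinocularTrackingSystem:
    acquire: Callable[[], Any]                     # acquisition module: stereo image pair
    detect: Callable[[Any], List[Any]]             # moving-target acquisition module
    build_model: Callable[[], Any]                 # building module: motion space/model
    track: Callable[[Any, List[Any]], List[Any]]   # tracking module (improved JPDA)

    def step(self) -> List[Any]:
        """One tracking cycle: acquire -> detect -> model -> associate."""
        image = self.acquire()
        detections = self.detect(image)
        model = self.build_model()
        return self.track(model, detections)

# Wiring with stand-in callables just to show the data flow between modules:
system = BinocularTrackingSystem(
    acquire=lambda: "stereo-frame",
    detect=lambda img: ["target-0", "target-1"],
    build_model=lambda: "motion-model",
    track=lambda model, dets: [(model, d) for d in dets],
)
tracks = system.step()
```

Keeping the modules as injected callables mirrors the claim's module decomposition and lets each stage (e.g. the JPDA tracker sketched above) be swapped independently.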
CN202010384346.1A 2020-05-07 2020-05-07 Novel binocular vision multi-target tracking method and system Withdrawn CN111612818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010384346.1A CN111612818A (en) 2020-05-07 2020-05-07 Novel binocular vision multi-target tracking method and system

Publications (1)

Publication Number Publication Date
CN111612818A true CN111612818A (en) 2020-09-01

Family

ID=72196782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010384346.1A Withdrawn CN111612818A (en) 2020-05-07 2020-05-07 Novel binocular vision multi-target tracking method and system

Country Status (1)

Country Link
CN (1) CN111612818A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799900A (en) * 2012-07-04 2012-11-28 西南交通大学 Target tracking method based on supporting online clustering in detection
CN109447121A (en) * 2018-09-27 2019-03-08 清华大学 A kind of Visual Sensor Networks multi-object tracking method, apparatus and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang, Qi et al.: "Multi-target tracking method for intelligent vehicle stereo vision using an improved JPDA filter under Lie groups" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112304250A (en) * 2020-10-15 2021-02-02 天目爱视(北京)科技有限公司 Three-dimensional matching equipment and method between moving objects
US20220135074A1 (en) * 2020-11-02 2022-05-05 Waymo Llc Classification of objects based on motion patterns for autonomous vehicle applications
US12050267B2 (en) 2020-11-09 2024-07-30 Waymo Llc Doppler-assisted object mapping for autonomous vehicle applications
CN118115755A (en) * 2024-04-28 2024-05-31 四川盎智未来科技有限公司 Multi-target tracking method, system and storage medium
CN118115755B (en) * 2024-04-28 2024-06-28 四川盎智未来科技有限公司 Multi-target tracking method, system and storage medium

Similar Documents

Publication Publication Date Title
EP3745158B1 (en) Methods and systems for computer-based determining of presence of dynamic objects
CN105892471B (en) Automatic driving method and apparatus
CN112700470B (en) Target detection and track extraction method based on traffic video stream
Wang et al. Robust road modeling and tracking using condensation
CN111612818A (en) Novel binocular vision multi-target tracking method and system
CN112132896B (en) Method and system for detecting states of trackside equipment
Erbs et al. Moving vehicle detection by optimal segmentation of the dynamic stixel world
CN110738121A (en) front vehicle detection method and detection system
CN110794406A (en) Multi-source sensor data fusion system and method
CN114758504B (en) Online vehicle overspeed early warning method and system based on filtering correction
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN116310679A (en) Multi-sensor fusion target detection method, system, medium, equipment and terminal
CN114296095A (en) Method, device, vehicle and medium for extracting effective target of automatic driving vehicle
Zhang et al. A novel vehicle reversing speed control based on obstacle detection and sparse representation
Muresan et al. Multimodal sparse LIDAR object tracking in clutter
Hartmann et al. Towards autonomous self-assessment of digital maps
An et al. Multi-object tracking based on a novel feature image with multi-modal information
US20220309776A1 (en) Method and system for determining ground level using an artificial neural network
US20220284623A1 (en) Framework For 3D Object Detection And Depth Prediction From 2D Images
CN113511194A (en) Longitudinal collision avoidance early warning method and related device
Christiansen et al. Monocular vehicle distance sensor using HOG and Kalman tracking
Schilling et al. Mind the gap-a benchmark for dense depth prediction beyond lidar
CN115471526A (en) Automatic driving target detection and tracking method based on multi-source heterogeneous information fusion
Zeisler et al. Analysis of the performance of a laser scanner for predictive automotive applications
CN118259312B (en) Laser radar-based vehicle collision early warning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200901
