
CN113625262A - Target track determination method and related equipment - Google Patents

Target track determination method and related equipment

Info

Publication number
CN113625262A
Authority
CN
China
Prior art keywords
matrix
target
obtaining
mean
covariance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110893979.XA
Other languages
Chinese (zh)
Inventor
闫海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Xiangyun Ruifeng Information Technology Co ltd
Original Assignee
Changsha Xiangyun Ruifeng Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Xiangyun Ruifeng Information Technology Co ltd filed Critical Changsha Xiangyun Ruifeng Information Technology Co ltd
Priority to CN202110893979.XA priority Critical patent/CN113625262A/en
Publication of CN113625262A publication Critical patent/CN113625262A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/66 - Radar-tracking systems; Analogous systems
    • G01S13/72 - Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a target track determination method and related equipment. The method comprises: acquiring a first orientation matrix, which comprises the distance, pitch angle and azimuth angle of a target measured by radar and the pitch angle and azimuth angle of the target measured by infrared; acquiring a second orientation matrix, which comprises the target position acquired by the radar; and determining the target track based on the first matrix, the second matrix, a third matrix and a preset motion equation, wherein the third matrix is a unit matrix with the same dimension as the second matrix. By fusing the infrared and radar signals, the method avoids the poor tracking performance of a single sensor and, in a changeable environment, combines the advantages of the two measurement modes to acquire the target track accurately.

Description

Target track determination method and related equipment
Technical Field
The present disclosure relates to the field of target tracking, and more particularly, to a target trajectory determination method and related apparatus.
Background
As radar detection environments become increasingly complex, radar alone, owing to its own limitations, can no longer detect and track targets accurately in complex and changeable environments, so multi-sensor target tracking has long been a research hotspot in the fields of multi-sensor target tracking and information fusion. Moreover, a single sensor has poor tracking performance: for various reasons the data it reports contain missed detections and false alarms of the target, so the overall tracking environment cannot be described accurately.
Therefore, there is a need to provide a target trajectory determination method to at least partially solve the problems in the prior art.
Disclosure of Invention
In this summary, concepts in a simplified form are introduced that are further described in the detailed description. This summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
To at least partially solve the above problem, in a first aspect, the present invention provides a target trajectory determination method, including:
acquiring a first orientation matrix, wherein the first orientation matrix comprises a distance, a pitch angle and an azimuth angle of a target measured based on a radar and the pitch angle and the azimuth angle of the target measured based on infrared;
acquiring a second orientation matrix, wherein the second matrix comprises a target position of the target acquired by the radar;
and determining a target track based on the first matrix, the second matrix, a third matrix and a preset motion equation, wherein the third matrix is a unit matrix with the same dimension as the second matrix.
Optionally, the determining a target track based on the first matrix, the second matrix, a third matrix and a preset motion equation, where the third matrix is a unit matrix with the same dimension as the second matrix and the motion equation is set by a user, includes:
obtaining Sigma points and weights corresponding to the Sigma points by approximating the probability density distribution of the nonlinear function based on the first matrix, the second matrix and the third matrix;
carrying out nonlinear mapping based on the Sigma point and the motion equation to obtain Gaussian approximate distribution of a mean value;
carrying out nonlinear mapping based on the Sigma points and an observation equation to obtain measured Gaussian approximate distribution;
and obtaining the target track based on the Gaussian approximate distribution of the mean value, the measured Gaussian approximate distribution and the weight.
Optionally, the obtaining the target trajectory based on the gaussian approximate distribution of the mean, the measured gaussian approximate distribution, and the weight includes:
obtaining a mean value prediction based on the weight value and the Gaussian approximate distribution of the mean value;
obtaining a measurement prediction based on the weight and the measured Gaussian approximate distribution;
and obtaining the target track based on the average value prediction and the measurement prediction.
Optionally, the obtaining the target trajectory based on the mean prediction and the measurement prediction includes:
acquiring a system mean covariance based on the weight, the mean prediction, the Gaussian approximate distribution of the mean and system process noise, wherein the system process noise is a random variable related to a measurement environment;
obtaining a system measurement covariance based on the weight, the measurement prediction, the measured Gaussian approximation distribution, and a system measurement noise, wherein the system measurement noise is a random variable associated with a measurement environment;
and acquiring a target track based on the system mean covariance and the system measurement covariance.
Optionally, the obtaining the target trajectory based on the system mean covariance and the system measurement covariance includes:
obtaining a system cross covariance based on the system mean covariance and the system measurement covariance;
obtaining an optimal fusion estimation based on the system cross covariance;
and obtaining the target track based on the optimal fusion estimation.
Optionally, the obtaining of the optimal fusion estimate based on the system cross covariance includes:
acquiring Kalman gain based on the system cross covariance;
and obtaining an optimal fusion estimation based on the Kalman gain, the mean value prediction and the measurement prediction.
Optionally, the method further includes:
and matching and associating the data measured by the radar and the data measured by the infrared ray based on a nearest neighbor algorithm.
In a second aspect, the present invention further provides an apparatus for determining a target trajectory, including:
a first acquisition unit, configured to acquire a first orientation matrix, wherein the first orientation matrix comprises a distance, a pitch angle and an azimuth angle of a target measured based on radar and a pitch angle and an azimuth angle of the target measured based on infrared;
a second acquisition unit, configured to acquire a second orientation matrix, wherein the second matrix comprises the target position of the target acquired by the radar;
and a calculation unit, configured to determine the target track based on the first matrix, the second matrix, a third matrix and a preset motion equation, wherein the third matrix is a unit matrix with the same dimension as the second matrix.
In a third aspect, an electronic device includes: a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor is configured to implement the steps of the target trajectory determination method according to any one of the first aspect as described above when executing the computer program stored in the memory.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the target trajectory determination method of any one of the above aspects of the first aspect.
To sum up, the target trajectory is determined from the first matrix, the second matrix, the third matrix and the preset equation of motion. Because the first matrix and the second matrix contain signals measured by both the radar and the infrared sensor, fusing the two signals avoids the poor tracking performance of a single sensor and, in a changeable environment, combines the advantages of the two measurement modes to acquire the target trajectory accurately.
The object track determination method of the present invention, and other advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the specification. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flowchart of a possible target trajectory determination method according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of another possible target track determination method provided in the embodiment of the present application;
FIG. 3 is a diagram of the effect of the pedestrian trajectory determined based on the method according to the embodiment of the present application;
FIG. 4 is a diagram of pedestrian trajectory effects based on radar determination according to an embodiment of the present disclosure;
FIG. 5 is a diagram of a real target trajectory provided by an embodiment of the present application;
fig. 6 is a diagram for identifying a track based on radar according to an embodiment of the present disclosure;
fig. 7 is a target recognition trajectory diagram based on the present method according to an embodiment of the present application;
FIG. 8 is a target recognition trajectory graph based on covariance fusion according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a possible target track determining apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a possible target track determination electronic device according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a target track determining method and related equipment, and the target track can be more accurately determined through fusion of radar and infrared signals.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
Referring to fig. 1, a schematic flow chart of a possible target track determining method provided in the embodiment of the present application may specifically include:
s110, acquiring a first orientation matrix, wherein the first orientation matrix comprises a distance, a pitch angle and an azimuth angle of a target measured based on a radar and the pitch angle and the azimuth angle of the target measured based on infrared;
specifically, the first matrix is
Figure BDA0003197108510000051
Wherein
Figure BDA0003197108510000052
Establishing a space polar coordinate system by taking the radar as a coordinate origin for the measurement result of the radar at the time k, wherein R is the distance measured by the target, theta is the azimuth angle measured by the target,
Figure BDA0003197108510000053
measured pitch angle for the target.
Figure BDA0003197108510000054
Is the measurement result of the infrared sensor at the time k, theta is the azimuth angle of the target measurement,
Figure BDA0003197108510000061
measured pitch angle for the target.
S120, acquiring a second orientation matrix, wherein the second matrix comprises the target position of the target acquired by the radar;
specifically, the second matrix is x0,x0Initially [ x 00 y 00 z 00]Wherein x, y and z are target coordinates in a space rectangular coordinate system established by taking the radar as the origin of coordinates
Figure BDA0003197108510000062
And performing coordinate conversion to obtain the product.
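As an illustration only, the following is a minimal Python/numpy sketch of this polar-to-Cartesian conversion. The angle convention (azimuth measured in the x-y plane from the x axis, pitch measured as elevation above that plane), the state layout [x, vx, ax, y, vy, ay, z, vz, az] and the function name are assumptions for illustration, not taken from the patent text.

```python
import numpy as np

def polar_to_state(R, azimuth, pitch):
    """Convert a radar polar measurement (range, azimuth, pitch) into an
    initial 9-dimensional state [x, vx, ax, y, vy, ay, z, vz, az]."""
    x = R * np.cos(pitch) * np.cos(azimuth)
    y = R * np.cos(pitch) * np.sin(azimuth)
    z = R * np.sin(pitch)
    # position comes from the radar; velocity and acceleration start at zero
    return np.array([x, 0.0, 0.0, y, 0.0, 0.0, z, 0.0, 0.0])

# Example: a target at 1200 m range, 30 deg azimuth, 10 deg pitch
x0 = polar_to_state(1200.0, np.deg2rad(30.0), np.deg2rad(10.0))
```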
And S130, determining a target track based on the first matrix, the second matrix, a third matrix and a preset motion equation, wherein the third matrix is a unit matrix with the same dimension as the second matrix.
Specifically, the calculation is carried out with the first matrix, the second matrix and the third matrix, and an optimal fusion is performed at each time point to obtain the target track.
In summary, the first matrix and the second matrix include signals measured by radar and infrared, and the two signals are fused, so that the defect of poor tracking performance of a single sensor can be avoided, and meanwhile, the target track can be accurately obtained by combining the advantages of two measurement modes in a variable environment.
In some examples, optionally, the determining a target trajectory based on the first matrix, the second matrix, a third matrix and a preset motion equation, where the third matrix is a unit matrix with the same dimension as the second matrix and the motion equation is set by a user, includes:
obtaining a weight value corresponding to a Sigma point and the Sigma point based on the probability density distribution of the approximate nonlinear function of the first matrix, the second matrix and the third matrix;
carrying out nonlinear mapping based on the Sigma point and the motion equation to obtain Gaussian approximate distribution of a mean value;
carrying out nonlinear mapping based on the Sigma points and an observation equation to obtain measured Gaussian approximate distribution;
and obtaining the target track based on the Gaussian approximate distribution of the mean value, the measured Gaussian approximate distribution and the weight.
Specifically, based on the probability density distribution of the nonlinear function approximated by the first matrix, the second matrix and the third matrix, the Sigma points are determined by equations 1.1, 1.2 and 1.3:

$$\chi_{k-1}^{(0)} = \hat{x}_{k-1} \tag{1.1}$$

$$\chi_{k-1}^{(i)} = \hat{x}_{k-1} + \left(\sqrt{(n+\lambda)P_{k-1}}\right)_i, \quad i = 1,\dots,n \tag{1.2}$$

$$\chi_{k-1}^{(i)} = \hat{x}_{k-1} - \left(\sqrt{(n+\lambda)P_{k-1}}\right)_{i-n}, \quad i = n+1,\dots,2n \tag{1.3}$$

Equation 1.1 gives the mean Sigma point, and equations 1.2 and 1.3 give the Sigma points symmetrically distributed about it. Here $\hat{x}_{k-1}$ denotes the system mean at time k-1, $\lambda$ is the scale factor, n is the state dimension, $P_{k-1}$ is the system variance at time k-1, and $\chi_{k-1}^{(0)}$, $\chi_{k-1}^{(i)}$ are the three groups of Sigma points.

The weights corresponding to the Sigma points are determined by equation 1.4:

$$W_0^{(m)} = W_0^{(c)} = \frac{\lambda}{n+\lambda}, \qquad W_i^{(m)} = W_i^{(c)} = \frac{1}{2(n+\lambda)}, \quad i = 1,\dots,2n \tag{1.4}$$

where $W_i^{(m)}$ is the weight of the i-th Sigma point used to predict the mean and $W_i^{(c)}$ is the weight of the i-th Sigma point used to predict the covariance.
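A minimal numpy sketch of the Sigma-point construction in equations 1.1-1.4 follows. The function name, the use of a Cholesky factor as the matrix square root and the default scale factor λ = 3 − n are common choices assumed here for illustration; they are not prescribed by the patent.

```python
import numpy as np

def sigma_points_and_weights(x_mean, P, lam=None):
    """Equations 1.1-1.3: 2n+1 Sigma points around the mean x_mean with
    covariance P; equation 1.4: their weights Wm (mean) and Wc (covariance)."""
    n = x_mean.size
    if lam is None:
        lam = 3.0 - n                          # a common default for the scale factor
    S = np.linalg.cholesky((n + lam) * P)      # matrix square root (assumed Cholesky)

    chi = np.zeros((2 * n + 1, n))
    chi[0] = x_mean                            # eq. 1.1: the mean Sigma point
    for i in range(n):
        chi[i + 1] = x_mean + S[:, i]          # eq. 1.2
        chi[n + i + 1] = x_mean - S[:, i]      # eq. 1.3

    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wm[0] = lam / (n + lam)                    # eq. 1.4
    Wc = Wm.copy()                             # same weights for mean and covariance here
    return chi, Wm, Wc
```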
The Sigma points are then propagated through the approximate nonlinear state model: nonlinear mapping through the motion equation yields the Gaussian approximate distribution of the mean, of dimension (9 x 1), as shown in equation 1.5:

$$\chi_{k|k-1}^{(i)} = f\left(\chi_{k-1}^{(i)}\right), \quad i = 0,\dots,2n \tag{1.5}$$

where f denotes the nonlinear mapping of the motion equation.

Mapping this propagated Sigma point set through the observation function onto a new Sigma point set yields the Gaussian approximate distribution of the measurement, of dimension (5 x 1), as shown in equation 1.6:

$$\gamma_{k|k-1}^{(i)} = h\left(\chi_{k|k-1}^{(i)}\right), \quad i = 0,\dots,2n \tag{1.6}$$

where h denotes the mapping of the observation function.
The first three dimensions of the measurement's Gaussian approximate distribution contain the position information; the last two dimensions are used for residual calculation against the last two dimensions of the measurement input, so that the different residuals of the two sensors for the pitch angle and azimuth angle are obtained and a suitable Kalman gain can be calculated.
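The sketch below illustrates the two nonlinear mappings in equations 1.5 and 1.6. The constant-acceleration motion model f_motion and the observation function h_observe (Cartesian position followed by pitch and azimuth) are assumptions chosen only to match the 9 x 1 and 5 x 1 dimensions stated above; the patent does not spell out its own f and h here, so this is not a reproduction of them.

```python
import numpy as np

def f_motion(x, dt):
    """Assumed constant-acceleration model for the 9-dim state
    [x, vx, ax, y, vy, ay, z, vz, az]; equation 1.5 maps each Sigma point through f."""
    F1 = np.array([[1.0, dt, 0.5 * dt**2],
                   [0.0, 1.0, dt],
                   [0.0, 0.0, 1.0]])
    F = np.kron(np.eye(3), F1)                  # block-diagonal over the three axes
    return F @ x

def h_observe(x):
    """Assumed observation function for equation 1.6: the first three outputs are the
    Cartesian position, the last two are the pitch and azimuth angles used for residuals."""
    px, py, pz = x[0], x[3], x[6]
    pitch = np.arctan2(pz, np.hypot(px, py))
    azimuth = np.arctan2(py, px)
    return np.array([px, py, pz, pitch, azimuth])

def propagate_sigma_points(chi, dt):
    chi_pred = np.array([f_motion(c, dt) for c in chi])       # eq. 1.5
    gamma_pred = np.array([h_observe(c) for c in chi_pred])   # eq. 1.6
    return chi_pred, gamma_pred
```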
In some examples, the obtaining the target trajectory based on the gaussian approximation distribution of the mean, the measured gaussian approximation distribution, and the weight value includes:
obtaining a mean value prediction based on the weight value and the Gaussian approximate distribution of the mean value;
obtaining a measurement prediction based on the weight and the measured Gaussian approximate distribution;
and obtaining the target track based on the average value prediction and the measurement prediction.
Specifically, the mean prediction is calculated by equation 1.7:

$$\hat{x}_{k|k-1} = \sum_{i=0}^{2n} W_i^{(m)} \chi_{k|k-1}^{(i)} \tag{1.7}$$

and the measurement prediction is calculated by equation 1.8:

$$\hat{z}_{k|k-1} = \sum_{i=0}^{2n} W_i^{(m)} \gamma_{k|k-1}^{(i)} \tag{1.8}$$

where $W_i^{(m)}$ in equations 1.7 and 1.8 is the weight defined in equation 1.4.
In some examples, the obtaining the target trajectory based on the mean prediction and the metric prediction includes:
acquiring a system mean covariance based on the weight, the mean prediction, the Gaussian approximate distribution of the mean and system process noise, wherein the system process noise is a random variable related to a measurement environment;
obtaining a system measurement covariance based on the weight, the measurement prediction, the measured Gaussian approximation distribution, and a system measurement noise, wherein the system measurement noise is a random variable associated with a measurement environment;
and acquiring a target track based on the system mean covariance and the system measurement covariance.
Specifically, the system mean covariance is found by equation 1.9:

$$P_{k|k-1} = \sum_{i=0}^{2n} W_i^{(c)} \left(\chi_{k|k-1}^{(i)} - \hat{x}_{k|k-1}\right)\left(\chi_{k|k-1}^{(i)} - \hat{x}_{k|k-1}\right)^T + Q_k \tag{1.9}$$

where $Q_k$ is the system process noise.

The system measurement covariance is found by equation 1.10:

$$P_{zz,k} = \sum_{i=0}^{2n} W_i^{(c)} \left(\gamma_{k|k-1}^{(i)} - \hat{z}_{k|k-1}\right)\left(\gamma_{k|k-1}^{(i)} - \hat{z}_{k|k-1}\right)^T + R_k \tag{1.10}$$

where $R_k$ is the system measurement noise.
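A sketch of equations 1.7-1.10 in one helper is given below. The name predict_moments is illustrative, and the noise covariances Qk and Rk are supplied by the caller since, as the text notes, they depend on the measurement environment.

```python
import numpy as np

def predict_moments(chi_pred, gamma_pred, Wm, Wc, Qk, Rk):
    """Equations 1.7-1.10: mean prediction, measurement prediction,
    system mean covariance and system measurement covariance."""
    x_pred = Wm @ chi_pred                     # eq. 1.7: weighted mean of the Sigma points
    z_pred = Wm @ gamma_pred                   # eq. 1.8: weighted measurement prediction

    dx = chi_pred - x_pred                     # deviations of each propagated Sigma point
    dz = gamma_pred - z_pred
    P_pred = dx.T @ np.diag(Wc) @ dx + Qk      # eq. 1.9: system mean covariance
    P_zz = dz.T @ np.diag(Wc) @ dz + Rk        # eq. 1.10: system measurement covariance
    return x_pred, z_pred, P_pred, P_zz
```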
In some examples, the obtaining the target trajectory based on the system mean covariance and the system metrology covariance includes:
obtaining a system cross covariance based on the system mean covariance and the system measurement covariance;
obtaining an optimal fusion estimation based on the system cross covariance;
and obtaining the target track based on the optimal fusion estimation.
Specifically, the system cross covariance is calculated by equation 1.11:

$$P_{xz,k} = \sum_{i=0}^{2n} W_i^{(c)} \left(\chi_{k|k-1}^{(i)} - \hat{x}_{k|k-1}\right)\left(\gamma_{k|k-1}^{(i)} - \hat{z}_{k|k-1}\right)^T \tag{1.11}$$
in some examples, the obtaining the optimal fusion estimate based on the system cross-covariance includes:
acquiring Kalman gain based on the system cross covariance;
and obtaining an optimal fusion estimation based on the Kalman gain, the mean value prediction and the measurement prediction.
Specifically, the Kalman gain is calculated by equation 1.12:

$$K_k = P_{xz,k} \, P_{zz,k}^{-1} \tag{1.12}$$

and the optimal fusion estimate is calculated by equation 1.13:

$$\hat{x}_k = \hat{x}_{k|k-1} + K_k \left(z_k - \hat{z}_{k|k-1}\right) \tag{1.13}$$
integration between different sensorsIs mainly embodied in
Figure BDA0003197108510000095
Firstly, calculating the deviation between a measured value and the prediction of the measured value, then calculating to obtain the gain relation between different dimensions and different sensors according to the cross covariance between the system state and the measurement, multiplying the deviation and the covariance to obtain the gain of the measurement at the kth moment to the system state, then correcting the state prediction obtained by Sigma calculation to obtain the optimal estimation of the system at the kth moment, wherein the optimal estimation is the target track obtained by fusing radar and infrared measurement information.
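A sketch of this final update (equations 1.11-1.13) follows. Here z_k is assumed to be expressed in the same five-dimensional space as the output of the observation function (Cartesian position converted from the radar range and angles, followed by the two measured angles); the patent only partially specifies how the residual is arranged, so this is an illustrative choice, and the posterior-covariance line is a standard companion step rather than something quoted from the text.

```python
import numpy as np

def fuse_update(chi_pred, gamma_pred, x_pred, z_pred, P_pred, P_zz, Wc, z_k):
    """Equations 1.11-1.13: cross covariance, Kalman gain and the optimal
    fusion estimate that corrects the state prediction with the measurement residual."""
    dx = chi_pred - x_pred
    dz = gamma_pred - z_pred
    P_xz = dx.T @ np.diag(Wc) @ dz             # eq. 1.11: system cross covariance
    K = P_xz @ np.linalg.inv(P_zz)             # eq. 1.12: Kalman gain
    x_fused = x_pred + K @ (z_k - z_pred)      # eq. 1.13: optimal fusion estimate
    P_fused = P_pred - K @ P_zz @ K.T          # posterior covariance for the next step
    return x_fused, P_fused
```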
In some examples, the method further comprises:
and matching and associating the data measured by the radar and the data measured by the infrared ray based on a nearest neighbor algorithm.
Specifically, the radar and infrared measurements input at the same time are matched and associated through a nearest neighbour algorithm: the differences in distance, pitch angle and azimuth angle from the historical track are calculated, the measurements are judged against a distance gate, a pitch angle gate and an azimuth angle gate (the infrared camera provides no distance information), the measurement with the smallest difference among the data falling inside the gates is associated, and measurements associated with the same track are judged to be the same target.
In conclusion, this matching and association guarantees that the data measured by the radar and the data measured by the infrared sensor at the same time are fused together, and prevents large errors in the determined target track.
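For illustration, a minimal sketch of such a nearest-neighbour association step is shown below. The gate sizes, the combined absolute-difference cost and the handling of the missing infrared range (skipping the range gate for the infrared camera is one reading of the text) are all assumptions, not the patent's exact procedure.

```python
import numpy as np

def associate_nearest_neighbour(meas, tracks, gates, has_range=True):
    """Return the index of the track whose last point is closest to the measurement
    `meas` = (range, pitch, azimuth), or None if no track passes the gates.

    `tracks` is a list of (range, pitch, azimuth) last-known track points;
    `gates` is a (range_gate, pitch_gate, azimuth_gate) tuple. For infrared
    measurements (has_range=False) the range difference is ignored."""
    best_idx, best_cost = None, np.inf
    for idx, track in enumerate(tracks):
        diff = np.abs(np.asarray(meas, dtype=float) - np.asarray(track, dtype=float))
        if not has_range:
            diff[0] = 0.0                      # infrared has no range information
        if np.any(diff > np.asarray(gates, dtype=float)):
            continue                           # outside a gate: cannot be associated
        cost = np.sum(diff)                    # simple combined difference (assumed metric)
        if cost < best_cost:
            best_idx, best_cost = idx, cost
    return best_idx
```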
In one example, a target trajectory determination method includes: S210-S290
And S210, associating data.
Specifically, the input measurements of the different sensors at the same time are matched through a nearest neighbour algorithm: the differences in distance, pitch angle and azimuth angle from the historical tracks are calculated, the measurements are judged against a distance gate, a pitch angle gate and an azimuth angle gate (the infrared camera provides no distance information), the measurements of the same target from the different sensors are correlated, and they are then associated with the historically existing tracks;
s220, creating a first matrix, a second matrix and a third matrix.
Specifically, the first matrix is created as

$$z_k = \begin{bmatrix} z_k^{r} & z_k^{ir} \end{bmatrix}^T = \begin{bmatrix} R & \theta^{r} & \varphi^{r} & \theta^{ir} & \varphi^{ir} \end{bmatrix}^T$$

where $z_k^{r} = [R \;\; \theta^{r} \;\; \varphi^{r}]^T$ is the measurement result of the radar at time k in a spatial polar coordinate system established with the radar as the coordinate origin, R being the measured distance of the target, $\theta^{r}$ the measured azimuth angle and $\varphi^{r}$ the measured pitch angle; and $z_k^{ir} = [\theta^{ir} \;\; \varphi^{ir}]^T$ is the measurement result of the infrared sensor at time k, $\theta^{ir}$ being the measured azimuth angle and $\varphi^{ir}$ the measured pitch angle of the target. The second matrix is created as $x_0$, initialized as $x_0 = [x \;\; 0 \;\; 0 \;\; y \;\; 0 \;\; 0 \;\; z \;\; 0 \;\; 0]^T$, where x, y and z are the target coordinates in a spatial rectangular coordinate system established with the radar as the coordinate origin, obtained by coordinate conversion from the radar measurement $z_k^{r}$. A unit matrix with the same dimension as the second matrix is created as the third matrix.
And S230, calculating the Sigma point and the weight value.
Based on the probability density distribution of the nonlinear function approximated by the first matrix, the second matrix and the third matrix, the Sigma points are determined by equations 1.1, 1.2 and 1.3:

$$\chi_{k-1}^{(0)} = \hat{x}_{k-1} \tag{1.1}$$

$$\chi_{k-1}^{(i)} = \hat{x}_{k-1} + \left(\sqrt{(n+\lambda)P_{k-1}}\right)_i, \quad i = 1,\dots,n \tag{1.2}$$

$$\chi_{k-1}^{(i)} = \hat{x}_{k-1} - \left(\sqrt{(n+\lambda)P_{k-1}}\right)_{i-n}, \quad i = n+1,\dots,2n \tag{1.3}$$

Equation 1.1 gives the mean Sigma point, and equations 1.2 and 1.3 give the Sigma points symmetrically distributed about it. Here $\hat{x}_{k-1}$ denotes the system mean at time k-1, $\lambda$ is the scale factor, n is the state dimension, $P_{k-1}$ is the system variance at time k-1, and $\chi_{k-1}^{(0)}$, $\chi_{k-1}^{(i)}$ are the three groups of Sigma points.

The weights corresponding to the Sigma points are determined by equation 1.4:

$$W_0^{(m)} = W_0^{(c)} = \frac{\lambda}{n+\lambda}, \qquad W_i^{(m)} = W_i^{(c)} = \frac{1}{2(n+\lambda)}, \quad i = 1,\dots,2n \tag{1.4}$$

where $W_i^{(m)}$ is the weight of the i-th Sigma point used to predict the mean and $W_i^{(c)}$ is the weight of the i-th Sigma point used to predict the covariance.
S240, calculating the mean value and the measured approximate Gaussian distribution.
Specifically, the Sigma points are propagated through the approximate nonlinear state model: nonlinear mapping through the motion equation yields the Gaussian approximate distribution of the mean, of dimension (9 x 1), as shown in equation 1.5:

$$\chi_{k|k-1}^{(i)} = f\left(\chi_{k-1}^{(i)}\right), \quad i = 0,\dots,2n \tag{1.5}$$

Mapping this propagated Sigma point set through the observation function onto a new Sigma point set, i.e. obtaining the approximate distribution of the measurement, yields the Gaussian approximate distribution of the measurement, of dimension (5 x 1), as shown in equation 1.6:

$$\gamma_{k|k-1}^{(i)} = h\left(\chi_{k|k-1}^{(i)}\right), \quad i = 0,\dots,2n \tag{1.6}$$
And S250, calculating mean value prediction and measurement prediction.
Specifically, the mean prediction is calculated by equation 1.7:

$$\hat{x}_{k|k-1} = \sum_{i=0}^{2n} W_i^{(m)} \chi_{k|k-1}^{(i)} \tag{1.7}$$

and the measurement prediction is calculated by equation 1.8:

$$\hat{z}_{k|k-1} = \sum_{i=0}^{2n} W_i^{(m)} \gamma_{k|k-1}^{(i)} \tag{1.8}$$
s260, calculating the mean covariance and the measurement covariance.
Specifically, the system mean covariance is calculated by equation 1.9:

$$P_{k|k-1} = \sum_{i=0}^{2n} W_i^{(c)} \left(\chi_{k|k-1}^{(i)} - \hat{x}_{k|k-1}\right)\left(\chi_{k|k-1}^{(i)} - \hat{x}_{k|k-1}\right)^T + Q_k \tag{1.9}$$

where $Q_k$ is the system process noise.

The system measurement covariance is found by equation 1.10:

$$P_{zz,k} = \sum_{i=0}^{2n} W_i^{(c)} \left(\gamma_{k|k-1}^{(i)} - \hat{z}_{k|k-1}\right)\left(\gamma_{k|k-1}^{(i)} - \hat{z}_{k|k-1}\right)^T + R_k \tag{1.10}$$

where $R_k$ is the system measurement noise.
And S270, calculating the cross covariance of the system.
Specifically, the system cross covariance is calculated by equation 1.11:

$$P_{xz,k} = \sum_{i=0}^{2n} W_i^{(c)} \left(\chi_{k|k-1}^{(i)} - \hat{x}_{k|k-1}\right)\left(\gamma_{k|k-1}^{(i)} - \hat{z}_{k|k-1}\right)^T \tag{1.11}$$
and S280, calculating Kalman gain.
Specifically, the Kalman gain is calculated by equation 1.12:

$$K_k = P_{xz,k} \, P_{zz,k}^{-1} \tag{1.12}$$
and S290, calculating the optimal fusion of the system.
The optimal fusion estimate is calculated by equation 1.13:

$$\hat{x}_k = \hat{x}_{k|k-1} + K_k \left(z_k - \hat{z}_{k|k-1}\right) \tag{1.13}$$
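Tying the steps S230-S290 together, the sketch below shows one fusion step per time point using the illustrative helpers introduced earlier (sigma_points_and_weights, propagate_sigma_points, predict_moments, fuse_update are assumed names from the earlier sketches); the noise covariances Q and R and the time step dt are placeholders to be chosen for the measurement environment.

```python
import numpy as np

def track_step(x_est, P_est, z_k, dt, Q, R):
    """One fusion step: Sigma points and weights (S230), propagation through the
    motion and observation equations (S240), mean and measurement predictions and
    covariances (S250-S260), cross covariance, Kalman gain and optimal fusion (S270-S290)."""
    chi, Wm, Wc = sigma_points_and_weights(x_est, P_est)
    chi_pred, gamma_pred = propagate_sigma_points(chi, dt)
    x_pred, z_pred, P_pred, P_zz = predict_moments(chi_pred, gamma_pred, Wm, Wc, Q, R)
    return fuse_update(chi_pred, gamma_pred, x_pred, z_pred, P_pred, P_zz, Wc, z_k)

# Usage sketch: iterate over the associated radar/infrared measurements to build the track
# trajectory = []
# for z_k in fused_measurements:
#     x_est, P_est = track_step(x_est, P_est, z_k, dt=0.1, Q=Q, R=R)
#     trajectory.append(x_est[[0, 3, 6]])    # the x, y, z position components
```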
In summary, the tracking trajectory curve of a single-source sensor shows large fluctuations, and the filter may even diverge. By fusing the data measured by the radar and by the infrared sensor, the method achieves a smaller tracking error, faster convergence and a smoother track curve, so the fused data give a better tracking effect than filtering with a single-source sensor.
Referring to fig. 3-4, fig. 3 is a diagram illustrating a pedestrian trajectory effect determined based on the method according to an embodiment of the present disclosure; fig. 4 is a pedestrian trajectory effect graph determined based on radar according to an embodiment of the present application.
In some examples, a case is provided in which the pedestrian track is determined by fusing radar and infrared data with the present method (recognition effect shown in fig. 3), and a case is provided in which the pedestrian track is determined by radar alone (recognition effect shown in fig. 4). The test results show that the target track determination method of this scheme can effectively fuse and track multi-source signals, avoiding both the defects of a single-source sensor in target tracking and the inaccurate estimation obtained after filtering its data; the estimation accuracy of the target state is effectively improved, and the fused target track curve is closer to the real track.
In some examples, three-dimensional target trajectory recognition simulations are performed for radar (unfiltered), infrared (unfiltered), radar (filtered), covariance fusion, and the present scheme.
Table 1 (rendered as an image in the original publication): mean square errors of distance, azimuth angle and pitch angle for radar (unfiltered), infrared (unfiltered), radar (filtered), covariance fusion and the present scheme.
Table 1 shows the mean square errors calculated from the three-dimensional target recognition simulation data; the smaller the value, the better the recognition effect. With the target trajectory determination method of the present application, the mean square errors of the distance, the azimuth angle and the pitch angle are all the smallest, which shows that the target trajectory determination method provided in the present application is the most preferable.
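For reference, a minimal sketch of the metric behind Table 1 is shown below: the per-component mean square error between an estimated trajectory and the ground truth, each given as a sequence of (distance, azimuth, pitch) triples. This reproduces only the metric, not the simulation data or the table's values.

```python
import numpy as np

def mean_square_error(estimated, truth):
    """Per-component mean square error between an estimated trajectory and the
    ground truth, both arrays of shape (T, 3) holding (distance, azimuth, pitch)."""
    err = np.asarray(estimated, dtype=float) - np.asarray(truth, dtype=float)
    return np.mean(err**2, axis=0)             # one MSE value per component
```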
Figs. 5-8 show the simulated images obtained by the simulation method, in which the three axes represent the distances from the origin along the x, y and z directions, in mm. Fig. 5 is the real target trajectory graph, and figs. 6-8 are the target trajectories determined based on radar, on the present method and on the covariance fusion method, respectively. It can be seen from the graphs that the trajectory determined by the present method is smoother and closer to the real target trajectory, so the present method determines the target trajectory more effectively and is more practical.
Referring to fig. 9, an embodiment of a target track determining apparatus in an embodiment of the present application may include:
a first acquisition unit, which may be configured to acquire a first orientation matrix, wherein the first orientation matrix comprises a distance, a pitch angle and an azimuth angle of a target measured based on radar and a pitch angle and an azimuth angle of the target measured based on infrared;
a second acquisition unit, which may be configured to acquire a second orientation matrix, wherein the second matrix comprises the target position of the target acquired by the radar;
and a calculation unit, which may be configured to determine the target track based on the first matrix, the second matrix, a third matrix and a preset motion equation, wherein the third matrix is a unit matrix with the same dimension as the second matrix.
Referring to fig. 10, fig. 10 is a schematic view of an embodiment of an electronic device according to an embodiment of the present application.
As shown in fig. 10, the embodiment of the present application further provides an electronic device 300, which includes a memory 310, a processor 320 and a computer program 311 stored on the memory 310 and executable on the processor 320; when the computer program 311 is executed by the processor 320, the steps of any one of the methods for determining the target trajectory are implemented.
Since the electronic device described in this embodiment is the device used for implementing the target trajectory determination apparatus in this embodiment, a person skilled in the art can understand, based on the method described herein, the specific implementation of the electronic device of this embodiment and its variations. Therefore, how the electronic device implements the method is not described in detail here; any device a person skilled in the art uses to implement the method of this embodiment falls within the scope of protection intended by this application.
In a specific implementation, the computer program 311 may implement any of the embodiments corresponding to fig. 1 when executed by a processor.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Embodiments of the present application further provide a computer program product, which includes computer software instructions, when the computer software instructions are executed on a processing device, cause the processing device to execute the procedure in the target trajectory determination in the corresponding embodiment of fig. 1.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fibre, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method for determining a target trajectory, comprising:
acquiring a first orientation matrix, wherein the first orientation matrix comprises a distance, a pitch angle and an azimuth angle of a target measured based on a radar and the pitch angle and the azimuth angle of the target measured based on infrared;
acquiring a second orientation matrix, wherein the second matrix comprises target positions of the targets acquired by the radar;
and determining a target track based on the first matrix, the second matrix, a third matrix and a preset motion equation, wherein the third matrix is a unit matrix with the same dimension as the second matrix.
2. The method of claim 1, wherein the determining a target trajectory based on the first matrix, the second matrix, a third matrix and a preset motion equation, wherein the third matrix is a unit matrix with the same dimension as the second matrix and the motion equation is set by a user, comprises:
carrying out probability density distribution of approximate nonlinear functions on the basis of the first matrix, the second matrix and the third matrix to obtain Sigma points and weights corresponding to the Sigma points;
carrying out nonlinear mapping based on the Sigma point and the motion equation to obtain Gaussian approximate distribution of a mean value;
carrying out nonlinear mapping based on the Sigma point and an observation equation to obtain measured Gaussian approximate distribution;
and acquiring the target track based on the Gaussian approximate distribution of the mean value, the measured Gaussian approximate distribution and the weight.
3. The method of claim 2, wherein the obtaining the target trajectory based on the gaussian approximation of the mean, the gaussian approximation of the metric, and the weights comprises:
obtaining a mean prediction based on the weight and a Gaussian approximation distribution of the mean;
obtaining a measurement prediction based on the weight and the measured Gaussian approximate distribution;
obtaining the target trajectory based on the mean prediction and the metrology prediction.
4. The method of claim 3, wherein said obtaining the target trajectory based on the mean prediction and the metrology prediction comprises:
obtaining a system mean covariance based on the weight, the mean prediction, a Gaussian approximation distribution of the mean, and a system process noise, wherein the system process noise is a random variable related to a measurement environment;
obtaining a system metrology covariance based on the weight, the metrology prediction, the measured Gaussian approximation distribution, and a system metrology noise, wherein the system metrology noise is a random variable related to a measurement environment;
and acquiring a target track based on the system mean covariance and the system measurement covariance.
5. The method of claim 4, wherein the obtaining a target trajectory based on the system mean covariance and the system metrology covariance comprises:
obtaining a system cross covariance based on the system mean covariance and the system metrology covariance;
obtaining an optimal fusion estimate based on the system cross-covariance;
and obtaining the target track based on the optimal fusion estimation.
6. The method of claim 5, wherein obtaining an optimal fusion estimate based on system cross-covariance comprises:
acquiring a Kalman gain based on the system cross covariance;
obtaining an optimal fusion estimate based on the Kalman gain, the mean prediction, and the metrology prediction.
7. The method of claim 1, further comprising:
and matching and correlating the data measured by the radar and the data measured by the infrared ray based on a nearest neighbor algorithm.
8. An object trajectory determination device, comprising:
a first acquisition unit: the method comprises the steps of obtaining a first orientation matrix, wherein the first orientation matrix comprises a distance, a pitch angle and an azimuth angle of a target measured based on a radar and a pitch angle and an azimuth angle of the target measured based on infrared;
a second acquisition unit; for obtaining a second orientation matrix, wherein the second matrix comprises target locations of the targets obtained by the radar;
a calculation unit: the target trajectory is determined based on the first matrix, the second matrix, a third matrix and a preset motion equation, wherein the third matrix is a unit matrix with the same dimension as the second matrix.
9. An electronic device, comprising: memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor is adapted to carry out the steps of the target trajectory determination method according to any one of claims 1 to 7 when executing the computer program stored in the memory.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, implements the target trajectory determination method as claimed in any one of claims 1-7.
CN202110893979.XA 2021-08-05 2021-08-05 Target track determination method and related equipment Pending CN113625262A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110893979.XA CN113625262A (en) 2021-08-05 2021-08-05 Target track determination method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110893979.XA CN113625262A (en) 2021-08-05 2021-08-05 Target track determination method and related equipment

Publications (1)

Publication Number Publication Date
CN113625262A true CN113625262A (en) 2021-11-09

Family

ID=78382797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110893979.XA Pending CN113625262A (en) 2021-08-05 2021-08-05 Target track determination method and related equipment

Country Status (1)

Country Link
CN (1) CN113625262A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114924264A (en) * 2022-04-28 2022-08-19 西安交通大学 Air target intention inference method based on target motion trajectory

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103727961A (en) * 2014-01-14 2014-04-16 中国科学院长春光学精密机械与物理研究所 Method for correcting dynamic error of electro-optic theodolite
CN110044356A (en) * 2019-04-22 2019-07-23 北京壹氢科技有限公司 A kind of lower distributed collaboration method for tracking target of communication topology switching
CN110208792A (en) * 2019-06-26 2019-09-06 哈尔滨工业大学 The arbitrary line constraint tracking of dbjective state and trajectory parameters is estimated simultaneously

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103727961A (en) * 2014-01-14 2014-04-16 中国科学院长春光学精密机械与物理研究所 Method for correcting dynamic error of electro-optic theodolite
CN110044356A (en) * 2019-04-22 2019-07-23 北京壹氢科技有限公司 A kind of lower distributed collaboration method for tracking target of communication topology switching
CN110208792A (en) * 2019-06-26 2019-09-06 哈尔滨工业大学 The arbitrary line constraint tracking of dbjective state and trajectory parameters is estimated simultaneously

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANFU ZHU, ZHANRONG JING, WEIJUN CHEN, LIGUANG WANG, YUNFEI LI AND ZHENLIN CAO: "Data Fusion of Infrared and Radar for Target Tracking", 《2008 2ND INTERNATIONAL SYMPOSIUM ON SYSTEMS AND CONTROL IN AEROSPACE AND ASTRONAUTICS》, 6 February 2009 (2009-02-06), pages 1 - 4 *
ZHANG XUEJING, MA LONG, CHEN HE, YANG JING: "Target tracking with infrared imaging and millimetre-wave radar sensor", 《IET INTERNATIONAL RADAR CONFERENCE 2013》, 10 October 2013 (2013-10-10), pages 1 - 8 *
LI KE; LI XINGFEI; YANG FAN: "Improvement and Application of the IMM-UKF Algorithm in a Two-Coordinate Radar and Electro-Optical Fusion Tracking System", Laser & Optoelectronics Progress (《激光与光电子学进展》), vol. 53, no. 12, 21 September 2016 (2016-09-21), pages 250 - 259 *
SU RILIE: "Research on Track State Estimation and Association Algorithms for Multiple Passive Sensors", China Masters' Theses Full-text Database, Information Science (《中国优秀硕士学位论文全文数据库 信息科学辑》), no. 07, 15 July 2012 (2012-07-15), pages 140 - 229 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114924264A (en) * 2022-04-28 2022-08-19 西安交通大学 Air target intention inference method based on target motion trajectory

Similar Documents

Publication Publication Date Title
CN111896914A (en) Cooperative positioning method, device, equipment and storage medium
US11776217B2 (en) Method for planning three-dimensional scanning viewpoint, device for planning three-dimensional scanning viewpoint, and computer readable storage medium
CN105761276B (en) Based on the iteration RANSAC GM-PHD multi-object tracking methods that adaptively newborn target strength is estimated
CN105182311B (en) Omnidirectional's radar data processing method and system
US11914062B2 (en) Technique for calibrating a positioning system
CN110889862B (en) Combined measurement method for multi-target tracking in network transmission attack environment
CN104777469B (en) A kind of radar node selecting method based on error in measurement covariance matrix norm
CN110058222A (en) A kind of preceding tracking of two-layered spherical particle filtering detection based on sensor selection
CN115204212A (en) Multi-target tracking method based on STM-PMBM filtering algorithm
CN111830501B (en) HRRP history feature assisted signal fuzzy data association method and system
CN115546705A (en) Target identification method, terminal device and storage medium
CN113673565A (en) Multi-sensor GM-PHD self-adaptive sequential fusion multi-target tracking method
CN113625262A (en) Target track determination method and related equipment
Shareef et al. Localization using extended Kalman filters in wireless sensor networks
CN111474516B (en) Multi-level indoor positioning method and system based on crowdsourcing sample surface fitting
CN108304869A (en) A kind of fusion method and system of the comprehensive magnitude information based on multiple sensors
CN117197245A (en) Pose restoration method and device
KR101280348B1 (en) Multiple target tracking method
CN116437290A (en) Model fusion method based on CSI fingerprint positioning
CN113628254A (en) Target track determination method based on mobile platform and related equipment
CN114705223A (en) Inertial navigation error compensation method and system for multiple mobile intelligent bodies in target tracking
CN115993791A (en) Method and apparatus for providing tracking data identifying the movements of a person and a hand to control a technical system and a sensor system
CN111132029B (en) Positioning method and device based on terrain constraint
CN113204891A (en) DP-TBD algorithm tracking method and device based on exponential smoothing prediction
CN111177290A (en) Method and device for evaluating accuracy of three-dimensional map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination