
CN112313536A - Object state acquisition method, movable platform and storage medium - Google Patents

Object state acquisition method, movable platform and storage medium

Info

Publication number
CN112313536A
CN112313536A (application CN201980041121.1A)
Authority
CN
China
Prior art keywords
value
probability
point cloud
point
taking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980041121.1A
Other languages
Chinese (zh)
Other versions
CN112313536B (en)
Inventor
吴显亮
陈进
李星河
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuoyu Technology Co ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN112313536A
Application granted
Publication of CN112313536B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/862 Combination of radar systems with sonar systems
    • G01S 13/865 Combination of radar systems with lidar systems
    • G01S 15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S 15/86 Combinations of sonar systems with lidar systems; Combinations of sonar systems with systems not using wave reflection
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/66 Tracking systems using electromagnetic waves other than radio waves
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/15 Vehicle, aircraft or watercraft design
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computational Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

An object state acquisition method, a movable platform (11), and a storage medium. The movable platform (11) carries a plurality of sensors used to collect data on the environment in which the movable platform (11) is located. The method comprises the following steps: acquiring an initial probability distribution of the motion state of an object (12) in the environment (S201), where the initial probability distribution is a fusion result of the data collected by the plurality of sensors and comprises probability values of the value-taking points corresponding to the motion state; updating the probability value of each value-taking point according to data collected by a target sensor (S202); and determining a target probability distribution of the motion state according to the updated probability value of each value-taking point (S203). This method avoids the loss of raw sensor data that can occur while acquiring the initial probability distribution of the motion state, thereby ensuring that the resulting target probability distribution is more accurate.

Description

Object state acquisition method, movable platform and storage medium
Technical Field
Embodiments of the present application relate to target tracking technology, and in particular to an object state acquisition method, a movable platform, and a storage medium.
Background
Target tracking is currently an important research direction for movable platforms such as unmanned aerial vehicles and unmanned vehicles operating in dynamic environments.
A key problem is how a movable platform such as an unmanned aerial vehicle or unmanned vehicle can fuse effective observation information in a dynamic environment and use that information to update the state information of multiple objects online in real time, where the state information includes states such as the position, orientation, and velocity of each dynamic object, together with the association information of each target across time sequences (i.e., whether observations at different moments belong to the same target), so as to achieve multi-target tracking.
In the prior art, the movable platform predicts the state of an object, preprocesses the data acquired by a sensor, and updates the state of the object with the preprocessed data to obtain an optimal state estimate. This process depends excessively on the preprocessed data, and raw sensor data is often lost during preprocessing, so the finally determined object state is inaccurate.
Disclosure of Invention
The embodiments of the present application provide an object state obtaining method, a movable platform, and a storage medium that ensure the obtained target probability distribution is more accurate.
In a first aspect, the present application provides an object state obtaining method, where a movable platform carries multiple sensors, and the sensors are used to collect data of an environment where the movable platform is located, and the method includes: acquiring initial probability distribution of the motion state of an object in the environment, wherein the initial probability distribution is a fusion result of data acquired by a plurality of sensors and comprises probability values of value-taking points corresponding to the motion state; updating the probability value of each value-taking point according to the data acquired by the target sensor; and determining the target probability distribution of the motion state according to the updated probability value of each value-taking point.
In a second aspect, the present application provides a movable platform. The movable platform carries a plurality of sensors used to collect data on the environment in which the movable platform is located, and comprises an acquisition module, an updating module, and a first determination module. The acquisition module is configured to acquire an initial probability distribution of the motion state of an object in the environment, where the initial probability distribution is a fusion result of the data collected by the plurality of sensors and comprises probability values of the value-taking points corresponding to the motion state; the updating module is configured to update the probability value of each value-taking point according to data collected by a target sensor; and the first determination module is configured to determine the target probability distribution of the motion state according to the updated probability value of each value-taking point.
In a third aspect, the present application provides a movable platform, the movable platform carries a plurality of sensors, the sensors are used for collecting data of an environment where the movable platform is located, and the movable platform includes: a processor to: acquiring initial probability distribution of the motion state of an object in the environment, wherein the initial probability distribution is a fusion result of data acquired by a plurality of sensors and comprises probability values of value-taking points corresponding to the motion state; updating the probability value of each value-taking point according to the data acquired by the target sensor; and determining the target probability distribution of the motion state according to the updated probability value of each value-taking point.
In a fourth aspect, the present application provides a computer-readable storage medium comprising computer instructions for implementing the method of the first aspect.
The application provides an object state acquisition method, a movable platform and a storage medium, wherein the movable platform carries a plurality of sensors, the sensors are used for carrying out data acquisition on the environment where the movable platform is located, and the method comprises the following steps: acquiring initial probability distribution of the motion state of an object in the environment, wherein the initial probability distribution is a fusion result of data acquired by a plurality of sensors and comprises probability values of value-taking points corresponding to the motion state; updating the probability value of each value-taking point according to the data acquired by the target sensor; and determining the target probability distribution of the motion state according to the updated probability value of each value-taking point. The method can be called a post-processing method, and the problem that the original data acquired by the sensor is lost in the process of acquiring the initial probability distribution of the motion state can be solved through the post-processing method, so that the acquired target probability distribution can be ensured to be more accurate.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the present application, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is an application scenario diagram provided in an embodiment of the present application;
fig. 2 is a flowchart of an object state obtaining method according to an embodiment of the present application;
fig. 3 is a flowchart of a method for updating probability values of various value-taking points according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a first likelihood function g(x) according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a likelihood function of a speed deviation provided in an embodiment of the present application;
FIG. 6 is a flowchart of a method for determining value points according to an embodiment of the present disclosure;
FIG. 7 is a flowchart of a method for generating a point cloud cluster according to an embodiment of the present disclosure;
fig. 8 is a flowchart of an object state obtaining method according to another embodiment of the present application;
fig. 9 is a flowchart of an object state obtaining method according to yet another embodiment of the present application;
FIG. 10 is a schematic illustration of a moveable platform according to an embodiment of the present application;
fig. 11 is a schematic diagram of a movable platform according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on these embodiments without creative effort shall fall within the protection scope of the present application.
The embodiments below address how a movable platform such as an unmanned aerial vehicle or unmanned vehicle can fuse effective observation information in a dynamic environment and use that information to update the state information of multiple objects online in real time, where the state information includes states such as the position, orientation, and velocity of each dynamic object, together with the association information of each target across time sequences (i.e., whether observations at different moments belong to the same target), so as to achieve multi-target tracking.
Kalman filtering and particle filtering are state estimation techniques that recursively estimate a set of time-sequence states from online observations. A system model constrains the relation between states at different times and is used for prediction at run time; an observation model constrains the relation between observations and states, so that after a state prediction for the current time is obtained, the state estimate can be updated using the observation model and the corresponding observation. When both the system model and the observation model are linear and the uncertainty in each conforms to a zero-mean Gaussian distribution with known variance, Kalman filtering is the optimal estimation method for state estimation. When the system model or the observation model does not satisfy a linear relation, it needs to be handled with extended Kalman filtering or unscented Kalman filtering; and when the uncertainty does not have Gaussian characteristics, sampling-based estimation such as particle filtering can be used.
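The predict-update recursion described above can be illustrated with a minimal one-dimensional Kalman filter. This is only a sketch; the constant-state motion model, the noise variances, and the measurements below are hypothetical and are not taken from this application.

```python
# Minimal 1D Kalman filter: constant-state model with Gaussian noise.
# All numbers (process noise q, measurement noise r, observations) are illustrative.

def kf_predict(x, p, q):
    """Predict step: the state is assumed constant; process noise q inflates variance."""
    return x, p + q

def kf_update(x, p, z, r):
    """Update step: fuse the prediction (x, p) with a measurement z of variance r."""
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # corrected state estimate
    p_new = (1.0 - k) * p    # reduced uncertainty after the update
    return x_new, p_new

x, p = 0.0, 1.0              # initial state estimate and variance
for z in [0.9, 1.1, 1.0]:    # hypothetical observations over three time steps
    x, p = kf_predict(x, p, q=0.01)
    x, p = kf_update(x, p, z, r=0.1)
```

After three updates the estimate has moved close to the observations and the variance has shrunk well below its initial value, illustrating how observations progressively refine the predicted state.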
For the multi-target tracking problem, the state to be estimated is the joint state of all targets, where the state of each object may include information such as position, velocity, orientation, angular velocity, and acceleration. Generally, the states of the objects are assumed to be mutually independent, so that multiple filters can be used to estimate each object's state individually. However, unlike single-target tracking, estimating each object with a filter presupposes a definite correspondence between observations and objects, i.e., that an observation is indeed an observation of that object and not of another. If this relationship is unknown, data association techniques must first associate each observation with one of the objects being estimated before state estimation can be performed with filtering techniques. Common association algorithms currently include Hungarian assignment, multi-hypothesis tracking, and joint probabilistic data association.
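The association step above can be illustrated with a toy example that matches predicted track positions to observations by minimizing total squared distance. The Hungarian algorithm solves this assignment problem efficiently; exhaustive search is used below only because the example is tiny, and all coordinates are hypothetical.

```python
from itertools import permutations

# Toy data association: assign each observation to one predicted track position
# so that the total squared distance is minimized. The Hungarian algorithm
# solves the same problem in polynomial time; brute force suffices here.
tracks = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]        # predicted positions (hypothetical)
observations = [(5.2, 4.9), (0.1, -0.2), (8.8, 1.1)]  # sensor detections (hypothetical)

def cost(track, obs):
    """Squared Euclidean distance between a track prediction and an observation."""
    return (track[0] - obs[0]) ** 2 + (track[1] - obs[1]) ** 2

best_perm = min(
    permutations(range(len(observations))),
    key=lambda perm: sum(cost(tracks[i], observations[j]) for i, j in enumerate(perm)),
)
# best_perm[i] is the index of the observation assigned to track i
```

Here the minimizing assignment pairs each track with its nearest detection, which is the correspondence a hard-assignment scheme such as Hungarian allocation would also produce on this data.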
After the association information is obtained, a system model and an observation model are defined. For the system model, if the estimated object belongs to a particular class, such as a vehicle, it can be modeled with a vehicle-body dynamics model. For the observation model, the observation data source may be sensor information such as images, laser, millimeter wave radar, or ultrasound. For images and laser data, a common method is to preprocess them with vision or point cloud processing techniques to obtain two-dimensional or three-dimensional detection frames, and then use those frames as observations to update the state information; assuming the observations are Gaussian-distributed, state estimation can then be performed with standard extended Kalman filtering. Of course, the raw observations can also be used directly without any preprocessing, but such observations often fail to satisfy, or only approximately satisfy, the Gaussian distribution assumption and are difficult to process.
Sampling-based update techniques such as particle filtering can also be applied to multi-target tracking, but choosing the number of particles and the particle degeneracy phenomenon greatly restrict their wide industrial application, even though resampling alleviates degeneracy. Moreover, random sampling in a multidimensional state space brings an excessive computational burden and a cost impact on practical applications.
At present, most approaches preprocess the data with vision or point cloud processing techniques to obtain high-quality detection results, which are then used as observations, mostly in the form of three-dimensional or two-dimensional frames. These frames can be used directly as observations to update the object state, but results obtained through algorithmic preprocessing often cannot be given an accurate uncertainty description, and in most cases do not follow a single Gaussian distribution; filtering based on the strong Gaussian assumption therefore often makes the filtering system inaccurate or even unstable.
Another drawback of the above methods is that the filtering algorithm relies too heavily on the preprocessed detection results, so raw data information is often lost in the preprocessing step, making the final filtering result unreliable. For example, a multi-target tracking algorithm usually needs a detection algorithm, but missed detections and false detections introduce unreliability beyond that of the raw sensor data, leading to abnormal final results that are more inconsistent with the raw data.
On the other hand, the multi-target tracking problem must also consider data association, and most common data association techniques currently use the Hungarian assignment algorithm. The algorithm is simple and effective, but it performs only hard assignment based on the observations of the current frame: if a frame is assigned incorrectly, the misassigned observation cannot update the state of the corresponding object, and the misassigned object cannot be recovered.
As described above, in the prior art, the movable platform may predict the state of the object, preprocess the data collected by the sensor, and update the state of the object according to the preprocessed data, so as to obtain the optimal state of the object. The above process excessively depends on the preprocessed data, which often results in the loss of the raw data acquired by the sensor in the preprocessing process, thereby causing the state of the object to be finally determined to be inaccurate.
In order to solve the above technical problem, the present application provides an object state obtaining method, a movable platform, and a storage medium.
By way of example, the present application may be applied to the following application scenario. Fig. 1 is an application scenario diagram provided in an embodiment of the present application. As shown in fig. 1, a movable platform 11 carries a plurality of sensors, each used to collect data on the environment in which the movable platform 11 is located, and that environment further includes at least one object 12. The movable platform 11 in this application may be an unmanned aerial vehicle, an unmanned vehicle, or the like, and the sensors may be lidar sensors, binocular vision sensors, millimeter wave radars, ultrasonic sensors, and the like. The plurality of sensors may be of the same type, for example all lidar sensors, or of different types, for example a lidar sensor and a millimeter wave radar. Further, different sensors may collect different kinds of data about the environment in which the movable platform is located: for example, the data collected by a lidar sensor or a millimeter wave radar is point cloud data, the data collected by a binocular vision sensor is image data, and the data collected by an ultrasonic sensor is an ultrasonic signal.
The main idea of the present application is as follows: the movable platform can predict the motion state of an object (i.e., any object in the environment in which the movable platform is located) and update that motion state with the data collected by the sensors. The movable platform can fuse the data collected by the plurality of sensors through an algorithm to predict the motion state of the object. The algorithm may be Kalman filtering, single-step particle filtering, brute-force search, a neural network, or the like, which is not limited in this application.
It should be noted that the data measured by a sensor may not be completely accurate, because the manufacturing process of the sensor may be imperfect, or because of other factors and noise that cannot be predicted or controlled. Therefore, the motion state of the object obtained by fusing the data collected by the plurality of sensors may be understood as a random variable that conforms to some probability distribution, such as a Gaussian (normal) distribution, a linear distribution, or a non-Gaussian distribution, which is not limited in this application.
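One common way to combine several noisy sensor readings of the same quantity into a single Gaussian estimate is inverse-variance weighting. This is only one possible fusion rule among the algorithms mentioned above (Kalman filtering and the like), and the sensor values and noise variances below are hypothetical.

```python
# Inverse-variance fusion of independent Gaussian measurements of the same
# quantity (e.g. an object's position along one axis). Each tuple is a
# hypothetical (measured value, noise variance) pair from one sensor.
measurements = [(10.2, 0.04), (9.9, 0.25), (10.5, 1.0)]

weights = [1.0 / var for _, var in measurements]            # more precise sensors weigh more
fused_value = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
fused_variance = 1.0 / sum(weights)                         # fused estimate is more certain
```

The fused variance is smaller than that of the best individual sensor, which is the basic reason fusing multiple sensors yields a tighter probability distribution over the motion state.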
The technical scheme of the application is explained in detail as follows:
fig. 2 is a flowchart of an object state obtaining method according to an embodiment of the present application. The execution subject of the method may be part or all of a movable platform; the part may be a processor of the movable platform. As described above, the movable platform carries a plurality of sensors used to collect data on the environment in which the movable platform is located. The method is described below with the movable platform as the execution subject. As shown in fig. 2, the method includes the following steps:
step S201: an initial probability distribution of a motion state of an object in an environment is obtained.
Step S202: and updating the probability value of each value-taking point according to the data acquired by the target sensor.
Step S203: and determining the target probability distribution of the motion state according to the updated probability value of each value-taking point.
The initial probability distribution is a fusion result of data acquired by a plurality of sensors, and the initial probability distribution comprises probability values of all value-taking points corresponding to the motion state.
In the present application, the motion state of the object comprises at least one of: position parameters, orientation parameters, velocity parameters, acceleration parameters of the object. That is, the motion state of the object may be any one of a position parameter, an orientation parameter, a velocity parameter, and an acceleration parameter of the object, and the probability distribution to which the motion state corresponds is a probability distribution corresponding to the any one parameter. Alternatively, the motion state of the object may be a combination of at least two of a position parameter, an orientation parameter, a velocity parameter, and an acceleration parameter of the object. The probability distribution that the motion state meets at this time is also the probability distribution corresponding to the combination parameter. Such as: when the motion state includes: when the position parameter and the orientation parameter of the object are determined, the probability distribution that the motion state conforms to is the probability distribution that the position parameter and the orientation parameter correspond to at the same time. When the motion state of the object is one parameter, the spatial dimension of the probability distribution can be reduced, and the calculation amount of the movable platform can be reduced.
After the movable platform obtains the initial probability distribution, it may select one of the plurality of sensors as a target sensor and update the probability value of each value-taking point with the data collected by that target sensor. The movable platform may randomly select one sensor among the plurality of sensors as the target sensor, or may select the sensor with the highest accuracy. For example, suppose the movable platform obtains the initial probability distribution of the object's position parameter by fusing point cloud data acquired by a lidar sensor, a binocular vision sensor, and a millimeter wave radar. Further, assuming that the accuracy of the lidar sensor is higher than that of the binocular vision sensor and the millimeter wave radar, the movable platform uses the lidar sensor as the target sensor and updates the probability values of the value-taking points in the initial probability distribution with the point cloud data it acquires.
Optionally, the movable platform may select a plurality of target sensors, and in this case, the movable platform updates the probability value of each value-taking point in the initial probability distribution according to data acquired by one target sensor to obtain an updated probability value, and further updates the updated probability value according to data acquired by the next target sensor until the probability value of each value-taking point is updated by data acquired by all target sensors.
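The sequential update over multiple target sensors described above can be sketched on a discretized set of value-taking points. The value-taking points and per-sensor likelihoods below are hypothetical placeholders for real sensor models.

```python
# Sequentially update the probability value at each discrete value-taking point,
# one target sensor at a time, renormalizing after each sensor's data is applied.
value_points = [0.0, 1.0, 2.0]     # hypothetical value-taking points of the motion state
prior = [0.2, 0.5, 0.3]            # initial probability distribution (fusion result)

sensor_likelihoods = [
    [0.1, 0.7, 0.2],               # likelihood of each value point given sensor A's data
    [0.2, 0.6, 0.2],               # likelihood of each value point given sensor B's data
]

probs = prior[:]
for lik in sensor_likelihoods:
    probs = [p * l for p, l in zip(probs, lik)]
    total = sum(probs)
    probs = [p / total for p in probs]   # renormalize so the values remain a distribution
```

After both sensors have been applied, the probability mass concentrates on the value-taking point that all likelihoods favor, mirroring the "update with one sensor, then the next" procedure in the paragraph above.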
It should be noted that, when the motion state of the object includes a velocity parameter and/or an acceleration parameter, the velocity parameter may be measured directly by the lidar sensor, or obtained by differencing the position and orientation parameters of the current frame and the previous frame; similarly, the acceleration parameter may be obtained by differencing the velocity parameters of the current frame and the previous frame. The position and orientation parameters of the current frame can be determined from the point cloud data acquired in the current frame by the lidar sensor or the millimeter wave radar, and those of the previous frame from the point cloud data acquired in the previous frame. Therefore, if the initial probability distribution is obtained from data collected by the plurality of sensors in the current and previous frames, the data collected by the target sensor is likewise the data it collected in the current and previous frames. For example, if the initial probability distribution of the velocity parameter is obtained from point cloud data acquired by the plurality of sensors in the current and previous frames, the data collected by the target sensor is the point cloud data it acquired in the current and previous frames.
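The frame-differencing estimate of velocity and acceleration can be sketched as follows; the object positions and the frame interval dt are hypothetical numbers, not values from this application.

```python
# Velocity from differencing positions between consecutive frames, and
# acceleration from differencing the resulting velocities, as described above.
dt = 0.5                       # hypothetical time between frames, in seconds
positions = [0.0, 1.0, 2.5]    # hypothetical object positions in three consecutive frames

velocities = [(positions[i + 1] - positions[i]) / dt
              for i in range(len(positions) - 1)]
acceleration = (velocities[1] - velocities[0]) / dt
```

With these numbers the object moves 1.0 and then 1.5 units per frame, so the two velocity estimates differ and the finite difference yields a nonzero acceleration.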
When the motion state of the object is a position parameter or an orientation parameter, the position parameter and the orientation parameter of the current frame can be determined through point cloud data acquired by a laser radar sensor or a millimeter wave radar in the current frame. Therefore, if the initial probability distribution is obtained based on data of the current frame by the plurality of sensors, the data collected by the target sensor is also the data collected by the current frame. For example: the initial probability distribution corresponding to the position parameters is obtained based on the point cloud data of the plurality of sensors in the current frame, so that the data acquired by the target sensor is also the point cloud data acquired by the target sensor in the current frame.
Optionally, after the movable platform updates the probability value of each value-taking point, it may select at least one target value-taking point whose probability value is greater than a preset threshold, and obtain the target probability distribution of the motion state from the selected points. For example, the movable platform selects a plurality of target value-taking points whose probability values are greater than the preset threshold, takes the mean over these target value-taking points as the mean of the target probability distribution, and takes their variance as the variance of the target probability distribution. Alternatively, the movable platform may select one target value-taking point whose probability value is greater than the preset threshold, sample within a preset radius of that point to obtain a plurality of additional target value-taking points, and take the mean over all the target value-taking points as the mean of the target probability distribution and their variance as its variance.
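The threshold-and-summarize step can be sketched as follows. The description above is read here as taking statistics over the selected value-taking points themselves; all value-taking points, probability values, and the threshold are hypothetical.

```python
# Select target value-taking points whose probability exceeds a preset threshold,
# then summarize them with a mean and variance for the target probability distribution.
value_points = [0.0, 1.0, 2.0, 3.0, 4.0]          # hypothetical value-taking points
probabilities = [0.05, 0.30, 0.35, 0.25, 0.05]    # updated probability values
threshold = 0.2                                    # preset threshold

selected = [x for x, p in zip(value_points, probabilities) if p > threshold]
mean = sum(selected) / len(selected)
variance = sum((x - mean) ** 2 for x in selected) / len(selected)
```

If `selected` came back empty, that would correspond to the abnormal case described below, where no value-taking point clears the threshold and the platform issues alarm information.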
When the movable platform cannot select at least one target value-taking point whose probability value is greater than the preset threshold, the movable platform can send alarm information to prompt the user that the movable platform is abnormal. The alarm information may be a voice alarm, a text alarm, or an alarm formed by a flashing warning light; the present application does not limit this.
Optionally, the target probability distribution determined by the movable platform may also be used as a priori for the next frame.
In the application, the movable platform can update the probability value of each value-taking point according to the data collected by the target sensor, and determine the target probability distribution of the motion state according to the updated probability values. This can be called a post-processing method; it can solve the problem that the raw data acquired by the sensors is lost in the process of acquiring the initial probability distribution of the motion state, thereby ensuring that the acquired target probability distribution is more accurate. Meanwhile, the technical scheme of the application is also suitable for the case where the motion state does not conform to a Gaussian distribution or exhibits strong non-linearity.
The following describes the above step S202 in detail:
fig. 3 is a flowchart of a method for updating probability values of respective value-taking points according to an embodiment of the present application, as shown in fig. 3, the method includes the following steps:
step S301: and aiming at any value taking point, determining the posterior probability of the value taking point according to the data collected by the target sensor.
The movable platform may determine the likelihood probability of the value-taking point based on the data collected by the target sensor, and calculate the product of the probability value and the likelihood probability of the value-taking point to obtain the posterior probability of the value-taking point. The likelihood probability of a value-taking point is the acquisition probability of the data collected by the target sensor given that value-taking point; the posterior probability of a value-taking point is the probability of that value-taking point given the data collected by the target sensor. For example: suppose a certain value-taking point x_i has a probability value f_i(x_i) and a likelihood probability f_i(z_i|x_i), where z_i represents the data collected by the target sensor; then, according to Bayes' theorem, the posterior probability of the value-taking point is f_i(x_i|z_i) = f_i(z_i|x_i)f_i(x_i).
Alternatively, the movable platform can determine the likelihood probability of the value-taking point according to the data collected by the target sensor, calculate the product of the probability value of the value-taking point and the likelihood probability to obtain a product result, and calculate the quotient of the product result and a normalization factor to obtain the posterior probability of the value-taking point. For example: suppose a certain value-taking point x_i has a probability value f_i(x_i) and a likelihood probability f_i(z_i|x_i), where z_i represents the data collected by the target sensor; then the posterior probability of the value-taking point is f_i(x_i|z_i) = f_i(z_i|x_i)f_i(x_i)/μ, where μ is the normalization factor.
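The discrete Bayes update with a normalization factor can be sketched as follows; the three value-taking points and their prior and likelihood values are illustrative assumptions:

```python
def posterior_update(prior, likelihood):
    """Discrete Bayes update over the value-taking points: multiply each
    prior probability value f_i(x_i) by the likelihood f_i(z_i|x_i) and
    divide by the normalization factor mu (the sum of the products)."""
    products = [p * l for p, l in zip(prior, likelihood)]
    mu = sum(products)  # normalization factor
    return [v / mu for v in products]

# Three value-taking points with assumed prior and likelihood values.
posterior = posterior_update([0.2, 0.5, 0.3], [0.1, 0.6, 0.3])
```

The returned values sum to 1, so they can directly serve as the updated probability values of the value-taking points.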
It should be noted that, for data collected by different target sensors, the movable platform may determine the likelihood probability of the value-taking point in different ways.
For example: if the target sensor is a laser sensor, the object is represented by a point cloud cluster, and the movable platform acquires a plurality of first likelihood probabilities of the value-taking point over the point cloud cluster, where a first likelihood probability is the acquisition probability of the position of one point cloud particle given the value-taking point. Further, the movable platform combines the plurality of first likelihood probabilities, i.e.

f(z_i|x_i) = ∏_{k=0}^{m} f(z_{i,k}|x_i)

to obtain the likelihood probability of the value-taking point, where z_{i,k} represents the position of the k-th point cloud particle in the point cloud cluster, f(z_{i,k}|x_i) represents a first likelihood probability, i.e., the probability of obtaining z_{i,k} given the value-taking point x_i, and m+1 is the number of point cloud particles in the point cloud cluster. Assuming that the movable platform determines the distance r_{i,k} between the object and the movable platform from z_{i,k}, the first likelihood probability above can be written as g(r_{i,k}) = f(z_{i,k}|x_i).
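A minimal sketch of combining the per-particle first likelihood probabilities into the likelihood of one value-taking point; treating the m+1 particles as conditionally independent (and therefore multiplying the terms) is an assumption here:

```python
import math

def cluster_likelihood(ranges, g):
    """Combine the first likelihood probabilities g(r_{i,k}) of all m+1
    point cloud particles into the likelihood of the value-taking point.
    The particles are assumed conditionally independent, so the terms
    are multiplied, done in log space for numerical stability."""
    return math.exp(sum(math.log(g(r)) for r in ranges))
```

For example, three particles each with a first likelihood of 0.5 give a combined likelihood of 0.125.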
Fig. 4 is a schematic diagram of a first likelihood function g(x) according to an embodiment of the present disclosure. As shown in Fig. 4, when x = 20 m, the corresponding first likelihood probability is maximal, at 0.4. When 0 < x < 20, the ideal distance (i.e., 20 m) between the movable platform and the object is farther than the actual laser range x, which may be caused by the object being occluded; the first likelihood probability assigned to such x may therefore be a small constant. If x > 20, the ideal distance between the movable platform and the object is less than the actual laser range, which is physically implausible, so the first likelihood probability assigned to such x is close to 0, or equal to 0.
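The shape described for Fig. 4 can be approximated with a piecewise function; the ideal distance of 20 m and the peak of 0.4 come from the text, while the small constant for the occluded case is an assumed value:

```python
def first_likelihood(x, ideal=20.0, peak=0.4, occluded=0.05):
    """Piecewise sketch of the first likelihood g(x) of Fig. 4; `ideal`,
    `peak` and `occluded` are assumed values for illustration."""
    if x > ideal:
        return 0.0        # range beyond the ideal distance: implausible
    if x == ideal:
        return peak       # maximum likelihood at the ideal distance
    return occluded       # 0 < x < ideal: the object may be occluded
```

A smooth peaked function could replace the exact-match branch in practice; the piecewise version only mirrors the three regimes described above.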
If the target sensor is a binocular vision sensor, the binocular vision sensor can acquire a plurality of images of the object; the movable platform acquires these images and processes them to obtain a point cloud cluster representing the object. Based on this, the movable platform can sample sparse points from the point cloud cluster and, for these sparse points, obtain the likelihood probability of each sparse point in the same manner used to determine the likelihood probability of the value-taking point when the target sensor is the laser sensor.
If the target sensor is a millimeter wave radar, the millimeter wave radar can acquire point cloud data representing the object, and the movable platform can estimate the speed of the object using the radial speed, i.e., compare the speed component of the object pointing toward the millimeter wave radar with the corresponding speed measured by the millimeter wave radar to calculate the likelihood probability of a value-taking point. Fig. 5 is a schematic diagram of a likelihood function of a speed deviation provided in an embodiment of the present application. As shown in Fig. 5, when the speed deviation is 0 m/s, the corresponding likelihood probability is maximal, at 0.8.
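The Fig. 5 likelihood can be sketched as a bell-shaped function of the speed deviation; the peak value of 0.8 comes from the text, while the Gaussian shape and its width are assumptions:

```python
import math

def velocity_deviation_likelihood(dv, peak=0.8, sigma=1.0):
    """Bell-shaped sketch of the Fig. 5 likelihood: maximal (0.8) when
    the radial speed deviation dv is 0 m/s; `sigma` is an assumed width."""
    return peak * math.exp(-0.5 * (dv / sigma) ** 2)
```

The likelihood decays symmetrically as the measured radial speed deviates from the speed implied by the value-taking point.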
Step S302: and updating the probability value of each value-taking point according to the posterior probability of each value-taking point.
Optionally, the movable platform uses the posterior probability of each value-taking point as the new probability value of that value-taking point. Alternatively, the movable platform calculates the average of the posterior probability and the current probability value of each value-taking point to obtain the new probability value of that value-taking point.
In the application, for any value-taking point, the movable platform determines the posterior probability of the value-taking point according to the data collected by the target sensor, and can then update the probability value of each value-taking point according to its posterior probability. In this way, the probability value of each value-taking point is updated using the raw data collected by the sensor. This method can solve the problem that the raw data acquired by the sensor is lost in the process of acquiring the initial probability distribution of the motion state, thereby ensuring that the acquired target probability distribution is more accurate. Meanwhile, the technical scheme of the application is also suitable for the case where the motion state does not conform to a Gaussian distribution or exhibits strong non-linearity. It should be noted that, when the motion state of the object includes a speed parameter and/or an acceleration parameter, the movable platform can introduce a vehicle body dynamic model when determining the likelihood probability, so that the obtained posterior probability conforms to the vehicle body motion model and the updated probability values of the value-taking points are more accurate.
How to determine the above-mentioned value points is explained below:
the first alternative is as follows: fig. 6 is a flowchart of a method for determining each value-taking point according to an embodiment of the present application, as shown in fig. 6, the method includes the following steps:
step S601: and setting a value range by taking the value taking point with the maximum probability value in the initial probability distribution as a center and the fusion precision value of the target sensor as a radius.
Step S602: and determining each value taking point at equal intervals in the value taking range.
Taking the motion state of the object as a speed parameter as an example, assume that in the initial probability distribution the value-taking point with the maximum probability value is 5 m/s and the fusion precision value corresponding to the speed parameter is 0.5; the resulting value range is [4.5, 5.5]. Further, the movable platform divides [4.5, 5.5] at equal intervals; for example, with the interval set to 0.1, the value-taking points determined by the movable platform in [4.5, 5.5] are: 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5.
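The equal-interval construction of the value range can be sketched as follows, using the speed example from the text:

```python
def value_points(center, radius, step):
    """Equally spaced value-taking points in [center - radius, center + radius],
    centered on the maximum-probability point, with the fusion precision
    value as the radius."""
    n = int(round(2 * radius / step))
    return [round(center - radius + k * step, 10) for k in range(n + 1)]

# Center 5 m/s (max-probability point), fusion precision 0.5, interval 0.1.
points = value_points(5.0, 0.5, 0.1)
```

This reproduces the eleven value-taking points 4.5, 4.6, ..., 5.5 listed above.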
The second option is: and the movable platform determines each value taking point in the value range corresponding to the motion state according to the probability density of the initial probability distribution, wherein the larger the probability density is, the smaller the interval between the value taking point and the adjacent value taking point is.
For example: assuming that the motion state of the object conforms to a Gaussian distribution, and the motion state is a combination of at least two of the position parameter, orientation parameter, velocity parameter and acceleration parameter of the object, the Gaussian distribution is an ellipsoid. The sampling density of the value-taking points is kept proportional to the spatial probability density, i.e., the higher the probability density, the smaller the interval between a value-taking point and its adjacent value-taking points, so that the obtained value-taking points basically conform to the initial probability distribution.
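A deterministic, density-proportional placement of value-taking points can be sketched with inverse-CDF sampling at equally spaced quantiles; the normal-shaped density below is only an illustrative input, and the one-dimensional construction is an assumed simplification of the ellipsoid case:

```python
import bisect
import math

def density_proportional_points(grid, pdf_vals, n):
    """Deterministically place n value-taking points so that more points
    fall where the initial probability density is higher: build a
    discrete CDF over the grid, then invert it at equally spaced
    quantiles."""
    cdf, total = [], 0.0
    for v in pdf_vals:
        total += v
        cdf.append(total)
    cdf = [c / total for c in cdf]
    points = []
    for k in range(n):
        q = (k + 0.5) / n                 # equally spaced quantiles
        i = bisect.bisect_left(cdf, q)    # first grid cell with CDF >= q
        points.append(grid[i])
    return points

# Illustrative standard-normal-shaped density on a uniform grid.
grid = [i * 0.01 - 5.0 for i in range(1001)]
pdf = [math.exp(-0.5 * g * g) for g in grid]
pts = density_proportional_points(grid, pdf, 9)
```

Near the distribution mean the returned points are closely spaced; toward the tails the spacing grows, matching the proportionality described above.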
In the application, the movable platform can perform deterministic sampling, at equal intervals or based on the Gaussian distribution, instead of stochastic sampling. This avoids random sampling as much as possible and yields more accurate value-taking points.
Optionally, the data acquired by the target sensor for the environment is point cloud data, and the object is represented by a point cloud cluster; based on this, each value-taking point is the position of one point cloud particle in the point cloud cluster, and the probability value of each value-taking point is the probability value of that position. It should be considered that the data measured by a sensor may not be completely accurate, because the manufacturing process of the sensor may be imperfect or because of other unpredictable and uncontrollable factors and noise. Therefore, there may be a conflict between the first point cloud cluster corresponding to the object and a second point cloud cluster corresponding to another object, i.e., some point cloud particles belong to both the first point cloud cluster and the second point cloud cluster. Based on this, to resolve such conflicts, a method of generating a point cloud cluster is described below:
fig. 7 is a flowchart of a method for generating a point cloud cluster according to an embodiment of the present application, and as shown in fig. 7, the method includes the following steps:
step S701: and determining point cloud particles to be detected in a first point cloud cluster of the object, wherein the point cloud particles to be detected are point cloud particles with probability values larger than a first preset threshold value.
Step S702: and detecting whether point cloud particles with the distance to the point cloud particles to be detected smaller than a preset distance exist in second point cloud clusters corresponding to other objects.
Step S703: and if the point cloud particles with the distance to the point cloud particles to be detected smaller than the preset distance exist in the second point cloud cluster, the movable platform calculates the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distribution of the point cloud particles in the first point cloud cluster and the probability distribution of the point cloud particles in the second point cloud cluster.
Step S704: and generating a new point cloud cluster according to the point cloud particles with the joint probability larger than a second preset threshold value.
It should be noted that the new point cloud cluster may still correspond to the object, and a new target object may be determined based on the new point cloud cluster.
The first preset threshold may be set according to the actual situation, for example: the first preset threshold may be 0.6, 0.8, etc. The preset distance may also be set according to the actual situation, for example: 10 cm, 20 cm, etc. The second preset threshold may likewise be set according to the actual situation, for example: 0.6, 0.8, etc. The present application does not limit how the first preset threshold, the second preset threshold and the preset distance are set.
Optionally, calling a point cloud particle whose distance from the point cloud particle to be detected is smaller than the preset distance a first point cloud particle corresponding to the point cloud particle to be detected, the movable platform may calculate the product of the probability value of the point cloud particle to be detected and the probability value of the first point cloud particle to obtain their joint probability. Because a plurality of point cloud particles to be detected may exist in the first point cloud cluster, the movable platform can calculate the joint probability of each point cloud particle to be detected and its corresponding first point cloud particle to obtain the joint probability of the first point cloud cluster and the second point cloud cluster.
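The pairwise joint-probability computation can be sketched as follows; the 2-D coordinates, probability values and distance threshold are illustrative assumptions:

```python
import math

def joint_probabilities(candidates, first_probs, second_cloud, second_probs,
                        max_dist):
    """For each to-be-detected particle of the first cluster, find the
    particles of the second cluster closer than the preset distance and
    multiply the two probability values to obtain the joint probability."""
    joints = []
    for p, fp in zip(candidates, first_probs):
        for q, sp in zip(second_cloud, second_probs):
            if math.dist(p, q) < max_dist:
                joints.append(fp * sp)
    return joints

# One to-be-detected particle; only the nearby second-cluster particle pairs.
joints = joint_probabilities(
    candidates=[(0.0, 0.0)], first_probs=[0.9],
    second_cloud=[(0.05, 0.0), (1.0, 1.0)], second_probs=[0.8, 0.7],
    max_dist=0.1)
```

Particles whose joint probability exceeds the second preset threshold would then seed the new point cloud cluster of step S704.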
Optionally, the movable platform may form a new point cloud cluster corresponding to the object from the point cloud particles whose joint probability is greater than the second preset threshold, perform local optimal estimation on the new point cloud cluster through the joint probability, and reversely deduce the probability value of each point cloud particle in the new point cloud cluster at the first point cloud cluster, thereby updating the probability distribution of the object.
It should be noted that, the above steps S701 to S704 may be executed after step S203, and therefore, the point cloud particles to be inspected in step S701 refer to point cloud particles with probability values greater than a first preset threshold in the target probability distribution. The probability distribution of the point cloud particles in the first point cloud cluster in step S703 refers to a target probability distribution of the point cloud particles in the first point cloud cluster. Correspondingly, the movable platform reversely deduces the probability value of each point cloud particle in the new point cloud cluster at the first point cloud cluster, so as to update the target probability distribution of the object. Alternatively, the above steps S701 to S704 may be performed before step S203, and therefore, the point cloud particles to be inspected in step S701 refer to point cloud particles having a probability value greater than a first preset threshold in the initial probability distribution. The probability distribution of the point cloud particles in the first point cloud cluster in step S703 refers to an initial probability distribution of the point cloud particles in the first point cloud cluster. Correspondingly, the movable platform reversely deduces the probability value of each point cloud particle in the new point cloud cluster at the first point cloud cluster, so as to update the initial probability distribution of the object.
The above steps S701 to S704 are explained below with reference to examples:
when a laser radar sensor or a millimeter wave radar acquires point cloud data of a truck, the truck head and the truck body may be recognized as two objects due to a large gap between the truck head and the truck body, that is, the truck head is represented by a first point cloud cluster, and the truck body is represented by a second point cloud cluster. Based on the method, the movable platform can determine the point cloud particles to be detected in the first point cloud cluster of the object and detect the first point cloud particles, the distance between the first point cloud particles and the point cloud particles to be detected is smaller than the preset distance, and therefore the first point cloud particles are point cloud particles at the joint of the vehicle head and the vehicle body. Further, the movable platform calculates the product of the probability values of each point cloud particle to be detected and the corresponding first point cloud particle to obtain the joint probability of the first point cloud cluster and the second point cloud cluster, and if the joint probability of a certain point cloud particle is greater than a second preset threshold value, the point cloud particle belongs to both the first point cloud cluster and the second point cloud cluster. Based on the method, the movable platform determines the joint probability distribution of the new point cloud cluster according to the joint probability of the discrete point cloud particles, carries out local optimal estimation based on the joint probability distribution, and reversely deduces the probability value of each point cloud particle in the new point cloud cluster at the first point cloud cluster according to the local optimal estimation, so that the probability distribution of the locomotive is updated.
In the application, when the object conflicts with another object, i.e., there are point cloud particles whose joint probability is greater than the second preset threshold, the movable platform can generate a new point cloud cluster from those point cloud particles. Based on this, the movable platform performs local optimal estimation on the new point cloud cluster through the joint probability and back-derives the probability value of each point cloud particle of the new point cloud cluster in the first point cloud cluster to update the probability distribution of the object, so that the second point cloud cluster corresponding to the other object no longer contains point cloud particles whose distance from the point cloud particle to be detected is smaller than the preset distance. Conflicts between the object and other objects are thereby resolved.
Exemplarily, fig. 8 is a flowchart of an object state acquiring method according to another embodiment of the present application, and as shown in fig. 8, after step S203, the object state acquiring method further includes the following steps:
step S801: and judging whether the initial probability distribution and the target probability distribution meet the consistency condition.
Step S802: and if the initial probability distribution and the target probability distribution do not meet the consistency condition, the movable platform pushes alarm information to prompt a user that the movable platform is abnormal.
Optionally, whether the initial probability distribution and the target probability distribution meet the consistency condition is judged by means of a chi-square test. Alternatively, the movable platform selects at least one first discrete point in the initial probability distribution, selects second discrete points corresponding one by one to the at least one first discrete point in the target probability distribution, calculates the difference of the probability values of each first discrete point and its corresponding second discrete point to obtain probability difference values, and sums all the probability difference values to obtain a summation result. If the summation result is greater than a preset result, the movable platform determines that the initial probability distribution and the target probability distribution do not meet the consistency condition; otherwise, it determines that they do.
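The summation-based consistency check can be sketched as follows; taking absolute differences is an assumption, since the text does not state how the differences are signed:

```python
def distributions_consistent(initial_probs, target_probs, preset_result):
    """Sum the differences of the probability values at corresponding
    discrete points; the distributions meet the consistency condition
    when the summation result does not exceed the preset result."""
    total = sum(abs(a - b) for a, b in zip(initial_probs, target_probs))
    return total <= preset_result
```

When this returns False, the movable platform pushes alarm information as described below.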
Further, if the initial probability distribution and the target probability distribution do not meet the consistency condition, the movable platform pushes alarm information. The alarm information may be a voice alarm, a text alarm, or an alarm formed by a flashing warning light; the present application does not limit this.
In the application, if the initial probability distribution and the target probability distribution do not meet the consistency condition, the initial probability distribution and the target probability distribution are far apart, and under the condition, the movable platform pushes alarm information to prompt a user that the movable platform is abnormal, so that the reliability of the movable platform is improved.
Exemplarily, fig. 9 is a flowchart of an object state acquiring method according to still another embodiment of the present application, and as shown in fig. 9, after step S203, the object state acquiring method further includes the following steps:
step S901: and determining the absolute value of the motion state of the object according to the value taking point of the motion state of the movable platform and the value taking point in the target probability distribution corresponding to the motion state of the object.
The movable platform may obtain its own motion estimation information (ego-motion) through an Inertial Measurement Unit (IMU), a Global Positioning System (GPS), a wheel encoder odometer (wheel odometer), a visual odometer, and the like, i.e., the value-taking point of the motion state of the movable platform. The value-taking point in the target probability distribution of the motion state of the object is actually a relative value, so the movable platform may sum the value-taking point of its own motion state and the value-taking point in the target probability distribution of the motion state of the object to obtain the absolute value of the motion state of the object.
For example: because the GPS of the movable platform has errors, the position parameter of the movable platform can be understood as a random variable conforming to a certain probability distribution. The movable platform can therefore select, in this probability distribution, the position parameter corresponding to the position parameter in the target probability distribution, and sum the corresponding position parameters to obtain the absolute position parameter of the object.
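Treating both the platform's own state and the measured relative state as independent discrete random variables, their combination can be sketched as a discrete convolution; the value points and probabilities below are illustrative assumptions:

```python
def absolute_state_distribution(ego, relative):
    """Sum two independent discrete random variables (ego state plus
    relative state): every pair of value-taking points is added and
    their probability values multiplied. Inputs and output are
    {value point: probability} mappings."""
    out = {}
    for ev, ep in ego.items():
        for rv, rp in relative.items():
            out[ev + rv] = out.get(ev + rv, 0.0) + ep * rp
    return out

# Uncertain ego speed (GPS error) plus a known relative speed of 5 m/s.
dist = absolute_state_distribution({9.5: 0.5, 10.5: 0.5}, {5.0: 1.0})
```

The result is a distribution over the absolute motion state, consistent with summing corresponding value-taking points as described above.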
In the application of an unmanned vehicle or an unmanned aerial vehicle, if the state of the vehicle itself is unknown, the absolute position and speed of another object are not observable; in this case, only the relative position and speed can be estimated. Of course, a filter or positioning algorithm for estimating the state of the vehicle itself can be maintained, fusing sensors such as an IMU, a GPS, a wheel odometer, a high-precision map, vision, laser and even millimeter wave radar for positioning. After this information is obtained, the absolute state estimate of the object can be obtained through coordinate conversion, so that a dynamic model and an observation model can be introduced more naturally.
In the method and the device, the absolute value of the motion state of the object can be determined according to the value-taking point of the motion state of the movable platform and the value-taking point in the target probability distribution of the motion state of the object. It should be noted that, in general, the data acquired by a sensor is relative data. For the speed parameter, because the relative speed changes greatly, jumps may occur between adjacent frames, which is unfavorable for updating probability values with the speed parameter acquired by the sensor; therefore, the absolute value can be calculated using the motion parameters of the movable platform itself, reducing jumps in the measured motion state value of the object.
Fig. 10 is a schematic diagram of a movable platform according to an embodiment of the present application, where the movable platform carries a plurality of sensors, and the sensors are used for performing data acquisition on an environment where the movable platform is located, as shown in fig. 10, the movable platform includes:
the obtaining module 1001 is configured to obtain an initial probability distribution of a motion state of an object in an environment, where the initial probability distribution is a fusion result of data acquired by multiple sensors, and the initial probability distribution includes probability values of respective value-taking points corresponding to the motion state.
The updating module 1002 is configured to update the probability value of each value-taking point according to data acquired by the target sensor.
The first determining module 1003 is configured to determine a target probability distribution of the motion state according to the updated probability value of each value-taking point.
Optionally, the update module 1002 includes a determining sub-module and an updating sub-module. The determining sub-module is configured to determine, for any value-taking point, the posterior probability of the value-taking point according to the data collected by the target sensor, where the posterior probability of the value-taking point is the probability of the value-taking point given the data collected by the target sensor. The updating sub-module is configured to update the probability value of each value-taking point according to the posterior probability of each value-taking point.
Optionally, the determining sub-module is specifically configured to: determine the likelihood probability of the value-taking point according to the data collected by the target sensor, where the likelihood probability of the value-taking point is the acquisition probability of the data collected by the target sensor given the value-taking point; and calculate the product of the probability value and the likelihood probability of the value-taking point to obtain the posterior probability of the value-taking point.
Optionally, the movable platform further comprises: a setting module 1004 and a second determining module 1005. The setting module 1004 is configured to set a value range by taking a value-taking point with a maximum probability value in the initial probability distribution as a center and taking a fusion precision value of the target sensor as a radius. The second determining module 1005 is configured to determine each value-taking point at equal intervals in the value-taking range.
Optionally, the movable platform further comprises: a third determining module 1006, configured to determine each value-taking point in the value range corresponding to the motion state according to the probability density of the initial probability distribution, where the higher the probability density at a value-taking point, the smaller the interval between that value-taking point and its adjacent value-taking points.
Optionally, the data acquired by the target sensor for the environment is point cloud data, the object is represented by a point cloud cluster, each value-taking point is a position of each point cloud particle in the point cloud cluster, and the probability value of each value-taking point is a probability value of the position of each point cloud particle.
Optionally, the movable platform further comprises:
a fourth determining module 1007, configured to determine point cloud particles to be detected in the first point cloud cluster of the object, where the point cloud particles to be detected are point cloud particles with a probability value greater than a first preset threshold.
The detection module 1008 is configured to detect whether point cloud particles whose distance from the point cloud particles to be detected is smaller than a preset distance exist in the second point cloud clusters corresponding to the other objects.
A calculating module 1009, configured to calculate, if there is a point cloud particle whose distance from the point cloud particle to be detected is smaller than a preset distance in the second point cloud cluster, a joint probability of the first point cloud cluster and the second point cloud cluster according to a probability distribution of the point cloud particle in the first point cloud cluster and a probability distribution of the point cloud particle in the second point cloud cluster.
The generating module 1010 is configured to generate a new point cloud cluster according to the point cloud particles with the joint probability greater than a second preset threshold.
Optionally, the initial probability distribution is obtained based on data collected by a plurality of sensors in the current frame and the previous frame. The update module 1002 is specifically configured to: and updating the probability value of the value-taking point according to the data acquired by the target sensor in the current frame and the previous frame.
Optionally, the movable platform further comprises:
the determining module 1011 is configured to determine whether the initial probability distribution and the target probability distribution meet a consistency condition after the updating module updates the probability values of the value-taking points according to the data acquired by the target sensor to obtain the target probability distribution of the motion state.
And a pushing module 1012, configured to push alarm information to prompt a user that the movable platform is abnormal if the initial probability distribution and the target probability distribution do not meet the consistency condition.
Optionally, the determining module 1011 is specifically configured to: and judging whether the initial probability distribution and the target probability distribution meet the consistency condition or not in a chi-square test mode.
Optionally, the movable platform further comprises: a fifth determining module 1013, configured to determine, after the first determining module determines the target probability distribution of the motion state according to the updated probability value of each value-taking point, an absolute value of the motion state of the object according to the value-taking point of the motion state of the movable platform and the value-taking point in the target probability distribution of the motion state of the corresponding object.
Optionally, the plurality of sensors comprises at least one of: laser radar sensor, binocular vision sensor, millimeter wave radar, ultrasonic sensor.
Optionally, the motion state includes at least one of: position parameters, orientation parameters, velocity parameters, acceleration parameters of the object.
In summary, the present application provides a movable platform that can execute the object state acquisition method described above; for its content and effects, reference may be made to the method embodiments, which are not repeated here.
Fig. 11 is a schematic view of a movable platform according to an embodiment of the present application. As shown in Fig. 11, the movable platform includes a plurality of sensors 1101 for collecting data about the environment in which the movable platform is located, at least one processor 1102, and a memory 1103 communicatively coupled to the at least one processor. In Fig. 11, two sensors 1101 and one processor 1102 are shown as an example.
The processor 1102 is configured to: acquire an initial probability distribution of a motion state of an object in the environment, where the initial probability distribution is a fusion result of the data acquired by the plurality of sensors and includes probability values of value-taking points corresponding to the motion state; update the probability value of each value-taking point according to data acquired by a target sensor; and determine a target probability distribution of the motion state according to the updated probability value of each value-taking point.
Optionally, the processor 1102 is specifically configured to: for any value-taking point, determine the posterior probability of the value-taking point according to the data collected by the target sensor, where the posterior probability of the value-taking point is the probability of the value-taking point given that the data collected by the target sensor has been collected; and update the probability value of each value-taking point according to the posterior probability of each value-taking point.
Optionally, the processor 1102 is specifically configured to: determine the likelihood probability of the value-taking point according to the data collected by the target sensor, where the likelihood probability of the value-taking point is the probability of collecting the data collected by the target sensor given that the value-taking point is taken; and calculate the product of the probability value of the value-taking point and the likelihood probability to obtain the posterior probability of the value-taking point.
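The update step described above is a standard Bayesian grid (histogram) filter: each value-taking point holds a prior probability, the new sensor reading supplies a likelihood, and the normalized product gives the posterior. The following is a minimal sketch; the one-dimensional grid, the Gaussian likelihood model, and all names are illustrative assumptions, not the implementation prescribed by the patent.

```python
import math

def update_distribution(value_points, prior_probs, measurement, sigma):
    """Bayesian update over a discrete grid of value-taking points:
    posterior(v) is proportional to prior(v) * likelihood(measurement | v).
    A Gaussian likelihood with standard deviation `sigma` is assumed."""
    posterior = []
    for v, p in zip(value_points, prior_probs):
        likelihood = math.exp(-0.5 * ((measurement - v) / sigma) ** 2)
        posterior.append(p * likelihood)
    total = sum(posterior)
    return [q / total for q in posterior]  # normalize so the grid sums to 1

# Example: three candidate velocity values with a uniform prior; the
# measurement 10.2 pulls the posterior mass toward the 10.0 value point.
points = [9.0, 10.0, 11.0]
prior = [1 / 3, 1 / 3, 1 / 3]
post = update_distribution(points, prior, measurement=10.2, sigma=0.5)
```

Repeating this update once per sensor reading yields the fused target probability distribution over the same grid of value-taking points.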
Optionally, the processor 1102 is further configured to: set a value range centered on the value-taking point with the maximum probability value in the initial probability distribution, with the fusion precision value of the target sensor as the radius; and determine the value-taking points at equal intervals within the value range.
Optionally, the processor 1102 is further configured to: determine the value-taking points within the value range corresponding to the motion state according to the probability density of the initial probability distribution, where the larger the probability density at a value-taking point, the smaller the interval between that value-taking point and its adjacent value-taking points.
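The two sampling strategies above can be sketched as follows; the function names, the one-dimensional state, and the bisection inversion of the CDF for density-adaptive placement are assumptions made for illustration.

```python
def uniform_value_points(center, radius, n):
    """Equally spaced value-taking points in [center - radius, center + radius],
    with the maximum-probability point as the center and the target sensor's
    fusion precision value as the radius."""
    if n == 1:
        return [center]
    step = 2 * radius / (n - 1)
    return [center - radius + i * step for i in range(n)]

def density_adaptive_points(cdf, low, high, n, iters=40):
    """Place points at equal quantiles of the initial distribution, so regions
    of higher probability density receive more closely spaced value-taking
    points. `cdf` maps a state value to its cumulative probability."""
    points = []
    for i in range(n):
        target = (i + 0.5) / n
        lo, hi = low, high
        for _ in range(iters):  # bisection inversion of the CDF
            mid = 0.5 * (lo + hi)
            if cdf(mid) < target:
                lo = mid
            else:
                hi = mid
        points.append(0.5 * (lo + hi))
    return points

# Five equally spaced points around a 10.0 m/s estimate with 0.5 precision:
grid = uniform_value_points(10.0, 0.5, 5)  # 9.5, 9.75, 10.0, 10.25, 10.5
```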
Optionally, the data acquired by the target sensor for the environment is point cloud data, the object is represented by a point cloud cluster, each value-taking point is a position of each point cloud particle in the point cloud cluster, and the probability value of each value-taking point is a probability value of the position of each point cloud particle.
Optionally, the processor 1102 is further configured to: determine the point cloud particles to be detected in a first point cloud cluster of the object, where the point cloud particles to be detected are point cloud particles whose probability values are greater than a first preset threshold; detect whether a second point cloud cluster corresponding to another object contains a point cloud particle whose distance to a point cloud particle to be detected is smaller than a preset distance; if so, calculate the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distribution of the point cloud particles in the first point cloud cluster and the probability distribution of the point cloud particles in the second point cloud cluster; and generate a new point cloud cluster from the point cloud particles whose joint probability is greater than a second preset threshold.
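A rough reading of this cluster-merging step, in code. Particles are (x, y, probability) tuples; the product form used for the joint probability is an assumption, since the text only states that it is computed from the two clusters' probability distributions.

```python
import math

def merge_clusters(first, second, prob_th, dist_th, joint_th):
    """Illustrative sketch: `first`/`second` are point cloud clusters given
    as lists of (x, y, prob) particles."""
    # Particles to be detected: probability above the first preset threshold.
    detect = [p for p in first if p[2] > prob_th]
    # Does the second cluster contain a particle closer than the preset distance?
    close = any(
        math.hypot(a[0] - b[0], a[1] - b[1]) < dist_th
        for a in detect for b in second
    )
    if not close:
        return None  # clusters stay separate
    merged = []
    for x, y, p in first + second:
        other = second if (x, y, p) in first else first
        nearest = min(other, key=lambda q: math.hypot(q[0] - x, q[1] - y))
        joint = p * nearest[2]  # assumed product-form joint probability
        if joint > joint_th:
            merged.append((x, y, joint))
    return merged

# Two clusters whose high-probability particles nearly touch get merged;
# the low-probability outlier at (5, 5) is dropped from the new cluster.
first = [(0.0, 0.0, 0.9), (0.1, 0.0, 0.8)]
second = [(0.05, 0.0, 0.85), (5.0, 5.0, 0.2)]
merged = merge_clusters(first, second, prob_th=0.5, dist_th=0.2, joint_th=0.5)
```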
Optionally, the initial probability distribution is obtained based on data collected by the plurality of sensors in a current frame and a previous frame; the processor 1102 is specifically configured to update the probability values of the value-taking points according to the data acquired by the target sensor in the current frame and the previous frame.
Optionally, the processor 1102 is further configured to: after the probability values of the value-taking points are updated according to the data collected by the target sensor to obtain the target probability distribution of the motion state, judge whether the initial probability distribution and the target probability distribution meet a consistency condition; and if they do not, push alarm information to prompt the user that the movable platform is abnormal.
Optionally, the processor 1102 is specifically configured to determine, by means of a chi-square test, whether the initial probability distribution and the target probability distribution meet the consistency condition.
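As a sketch, the chi-square consistency check can compare the updated probability values against the initial ones over the same value-taking points. Treating the probabilities directly as expected/observed frequencies and using a fixed threshold are simplifying assumptions; a full test would scale by a sample count and take the critical value from a chi-square table at the desired significance level.

```python
def chi_square_statistic(initial, target):
    """Pearson chi-square statistic between the initial (expected) and the
    target (observed) probability values over the same value-taking points."""
    return sum(
        (o - e) ** 2 / e
        for o, e in zip(target, initial)
        if e > 0  # skip empty expected bins
    )

def consistent(initial, target, threshold):
    """The distributions are taken as consistent while the statistic stays
    below the chosen critical value."""
    return chi_square_statistic(initial, target) < threshold

# Identical distributions give a statistic of 0; a strongly shifted update
# exceeds a small threshold and would trigger the alarm push.
uniform = [0.25, 0.25, 0.25, 0.25]
shifted = [0.70, 0.10, 0.10, 0.10]
```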
Optionally, the processor 1102 is further configured to: after the target probability distribution of the motion state is determined according to the updated probability value of each value-taking point, determine the absolute value of the motion state of the object according to the value-taking point of the motion state of the movable platform and the value-taking point in the target probability distribution of the motion state of the object.
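Since the fused distribution describes the object's motion relative to the platform, adding the platform's own value-taking point recovers the absolute value. A short sketch, using the maximum-probability value-taking point as the representative of the object's distribution — a choice assumed here for illustration:

```python
def absolute_state(platform_value, relative_values, relative_probs):
    """Absolute motion state = platform state + the object's relative state,
    taken here at the maximum-probability value-taking point."""
    map_relative = relative_values[relative_probs.index(max(relative_probs))]
    return platform_value + map_relative

# A platform moving at 15 m/s that observes a relative velocity whose
# distribution peaks at -3 m/s yields an absolute velocity of 12 m/s.
result = absolute_state(15.0, [-4.0, -3.0, -2.0], [0.2, 0.5, 0.3])
```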
Optionally, the plurality of sensors comprises at least one of: laser radar sensor, binocular vision sensor, millimeter wave radar, ultrasonic sensor.
Optionally, the motion state includes at least one of: position parameters, orientation parameters, velocity parameters, acceleration parameters of the object.
The processor in the present application may be a microcontroller unit (MCU), a central processing unit (CPU) or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be implemented directly by a hardware processor, or by a combination of hardware and software modules within a processor.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be performed by hardware driven by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
The present application also provides a computer program product comprising computer instructions for implementing the steps of the above method embodiments. For its content and effects, reference may be made to the method embodiments, which are not repeated here.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (27)

1. An object state acquisition method is characterized in that a movable platform carries a plurality of sensors, and the sensors are used for carrying out data acquisition on the environment where the movable platform is located, and the method comprises the following steps:
acquiring initial probability distribution of motion states of objects in the environment, wherein the initial probability distribution is a fusion result of data acquired by a plurality of sensors and comprises probability values of value-taking points corresponding to the motion states;
updating the probability value of each value-taking point according to the data collected by the target sensor;
and determining the target probability distribution of the motion state according to the updated probability value of each value-taking point.
2. The method of claim 1, wherein the updating the probability values of the respective value points according to the data collected by the target sensor comprises:
for any one value taking point, determining the posterior probability of the value taking point according to data collected by a target sensor, wherein the posterior probability of the value taking point is the probability of the value taking point under the condition of collecting the data collected by the target sensor;
and updating the probability value of each value-taking point according to the posterior probability of each value-taking point.
3. The method of claim 2, wherein determining the posterior probability of the valued point based on data collected by the target sensor comprises:
determining the likelihood probability of the value taking point according to the data collected by the target sensor, wherein the likelihood probability of the value taking point is the collection probability of the data collected by the target sensor under the condition of obtaining the value taking point;
and calculating the product of the probability value of the value taking point and the likelihood probability to obtain the posterior probability of the value taking point.
4. The method of claim 1, further comprising:
setting a value range by taking a value taking point with the maximum probability value in the initial probability distribution as a center and taking the fusion precision value of the target sensor as a radius;
and determining each value taking point at equal intervals in the value taking range.
5. The method of claim 1, further comprising:
and determining each value taking point in a value range corresponding to the motion state according to the probability density of the initial probability distribution, wherein the larger the probability density at a value taking point, the smaller the interval between that value taking point and its adjacent value taking points.
6. The method according to any one of claims 1 to 5,
the data acquired by the target sensor to the environment are point cloud data, the object is represented by a point cloud cluster, each value-taking point is the position of each point cloud particle in the point cloud cluster, and the probability value of each value-taking point is the probability value of the position of each point cloud particle.
7. The method of claim 6, further comprising:
determining point cloud particles to be detected in a first point cloud cluster of the object, wherein the point cloud particles to be detected are point cloud particles with probability values larger than a first preset threshold value;
detecting whether point cloud particles with the distance to the point cloud particles to be detected smaller than a preset distance exist in second point cloud clusters corresponding to other objects;
if the point cloud particles with the distance to the point cloud particles to be detected smaller than the preset distance exist in the second point cloud cluster, calculating the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distribution of the point cloud particles in the first point cloud cluster and the probability distribution of the point cloud particles in the second point cloud cluster;
and generating a new point cloud cluster according to the point cloud particles with the joint probability larger than a second preset threshold value.
8. The method according to any one of claims 1 to 5,
the initial probability distribution is obtained based on data collected by a plurality of sensors in a current frame and a previous frame;
the updating the probability value of each value-taking point according to the data collected by the target sensor comprises the following steps:
and updating the probability value of the value-taking point according to the data acquired by the target sensor in the current frame and the previous frame.
9. The method according to any one of claims 1-5, wherein after determining the target probability distribution of the motion state according to the updated probability values of the respective value-taking points, further comprising:
judging whether the initial probability distribution and the target probability distribution meet a consistency condition;
and if the initial probability distribution and the target probability distribution do not meet the consistency condition, pushing alarm information to prompt a user that the movable platform is abnormal.
10. The method of claim 9, wherein determining whether the initial probability distribution and the target probability distribution satisfy a consistency condition comprises:
judging, by means of a chi-square test, whether the initial probability distribution and the target probability distribution meet the consistency condition.
11. The method according to any one of claims 1-5, wherein after determining the target probability distribution of the motion state according to the updated probability values of the respective value-taking points, further comprising:
and determining the absolute value of the motion state of the object according to the value taking point of the motion state of the movable platform and the value taking point in the target probability distribution corresponding to the motion state of the object.
12. The method of any one of claims 1-5, wherein the plurality of sensors comprises at least one of: laser radar sensor, binocular vision sensor, millimeter wave radar, ultrasonic sensor.
13. The method according to any one of claims 1-5, wherein the motion state comprises at least one of: a position parameter, an orientation parameter, a velocity parameter, an acceleration parameter of the object.
14. A movable platform carrying a plurality of sensors for data acquisition of an environment in which the movable platform is located, the movable platform comprising: a processor to:
acquiring initial probability distribution of motion states of objects in the environment, wherein the initial probability distribution is a fusion result of data acquired by a plurality of sensors and comprises probability values of value-taking points corresponding to the motion states;
updating the probability value of each value-taking point according to the data collected by the target sensor;
and determining the target probability distribution of the motion state according to the updated probability value of each value-taking point.
15. The movable platform of claim 14, wherein the processor is specifically configured to:
for any one value taking point, determining the posterior probability of the value taking point according to data collected by a target sensor, wherein the posterior probability of the value taking point is the probability of the value taking point under the condition of collecting the data collected by the target sensor;
and updating the probability value of each value-taking point according to the posterior probability of each value-taking point.
16. The movable platform of claim 15, wherein the processor is specifically configured to:
determining the likelihood probability of the value taking point according to the data collected by the target sensor, wherein the likelihood probability of the value taking point is the collection probability of the data collected by the target sensor under the condition of obtaining the value taking point;
and calculating the product of the probability value of the value taking point and the likelihood probability to obtain the posterior probability of the value taking point.
17. The movable platform of claim 14, wherein the processor is further configured to:
setting a value range by taking a value taking point with the maximum probability value in the initial probability distribution as a center and taking the fusion precision value of the target sensor as a radius;
and determining each value taking point at equal intervals in the value taking range.
18. The movable platform of claim 14, wherein the processor is further configured to:
and determining each value taking point in a value range corresponding to the motion state according to the probability density of the initial probability distribution, wherein the larger the probability density at a value taking point, the smaller the interval between that value taking point and its adjacent value taking points.
19. The movable platform of any one of claims 14-18,
the data acquired by the target sensor to the environment are point cloud data, the object is represented by a point cloud cluster, each value-taking point is the position of each point cloud particle in the point cloud cluster, and the probability value of each value-taking point is the probability value of the position of each point cloud particle.
20. The movable platform of claim 19, wherein the processor is further configured to:
determining point cloud particles to be detected in a first point cloud cluster of the object, wherein the point cloud particles to be detected are point cloud particles with probability values larger than a first preset threshold value;
detecting whether point cloud particles with the distance to the point cloud particles to be detected smaller than a preset distance exist in second point cloud clusters corresponding to other objects;
if the point cloud particles with the distance to the point cloud particles to be detected smaller than the preset distance exist in the second point cloud cluster, calculating the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distribution of the point cloud particles in the first point cloud cluster and the probability distribution of the point cloud particles in the second point cloud cluster;
and generating a new point cloud cluster according to the point cloud particles with the joint probability larger than a second preset threshold value.
21. The movable platform of any one of claims 14-18,
the initial probability distribution is obtained based on data collected by a plurality of sensors in a current frame and a previous frame;
the processor is specifically configured to:
and updating the probability value of the value-taking point according to the data acquired by the target sensor in the current frame and the previous frame.
22. The movable platform of any one of claims 14-18, wherein the processor is further configured to:
after the probability values of the value-taking points are updated according to data collected by a target sensor to obtain target probability distribution of the motion state, judging whether the initial probability distribution and the target probability distribution meet consistency conditions or not;
and if the initial probability distribution and the target probability distribution do not meet the consistency condition, pushing alarm information to prompt a user that the movable platform is abnormal.
23. The movable platform of claim 22, wherein the processor is specifically configured to:
judging, by means of a chi-square test, whether the initial probability distribution and the target probability distribution meet the consistency condition.
24. The movable platform of any one of claims 14-18, wherein the processor is further configured to:
after the target probability distribution of the motion state is determined according to the updated probability value of each value taking point, the absolute value of the motion state of the object is determined according to the value taking point of the motion state of the movable platform and the value taking point in the target probability distribution corresponding to the motion state of the object.
25. The movable platform of any one of claims 14-18, wherein the plurality of sensors comprises at least one of: laser radar sensor, binocular vision sensor, millimeter wave radar, ultrasonic sensor.
26. The movable platform of any one of claims 14-18, wherein the motion state comprises at least one of: a position parameter, an orientation parameter, a velocity parameter, an acceleration parameter of the object.
27. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises computer instructions for implementing the method according to any one of claims 1-13.
CN201980041121.1A 2019-11-26 2019-11-26 Object state acquisition method, movable platform and storage medium Active CN112313536B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/120911 WO2021102676A1 (en) 2019-11-26 2019-11-26 Object state acquisition method, mobile platform and storage medium

Publications (2)

Publication Number Publication Date
CN112313536A true CN112313536A (en) 2021-02-02
CN112313536B CN112313536B (en) 2024-04-05

Family

ID=74336330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980041121.1A Active CN112313536B (en) 2019-11-26 2019-11-26 Object state acquisition method, movable platform and storage medium

Country Status (2)

Country Link
CN (1) CN112313536B (en)
WO (1) WO2021102676A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052907A (en) * 2021-04-12 2021-06-29 Shenzhen University Positioning method of mobile robot in dynamic environment
CN113115253A (en) * 2021-03-19 2021-07-13 Northwest University Method and system for estimating height and density deployment of millimeter wave unmanned aerial vehicle under dynamic blocking
CN113997989A (en) * 2021-11-29 2022-02-01 National University of Defense Technology Safety detection method, device, equipment and medium for single-point suspension system of maglev train
CN114239643A (en) * 2021-11-23 2022-03-25 Tsinghua University Detection method of abnormal state of spatial target based on micro-motion and multivariate Gaussian distribution
CN115235482A (en) * 2021-09-28 2022-10-25 Shanghai Xiantu Intelligent Technology Co., Ltd. Map update method, device, computer equipment and medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118779004B (en) * 2024-06-26 2025-02-07 Hanxu Technology (Beijing) Co., Ltd. Accelerator card, node status determination method and chip

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102147468A (en) * 2011-01-07 2011-08-10 Xidian University Bayesian theory-based multi-sensor detecting and tracking combined processing method
CN103472850A (en) * 2013-09-29 2013-12-25 Hefei University of Technology Multi-unmanned aerial vehicle collaborative search method based on Gaussian distribution prediction
CN105717505A (en) * 2016-02-17 2016-06-29 State Grid Corporation of China Data association method for multi-target tracking using a sensor network
WO2018119912A1 (en) * 2016-12-29 2018-07-05 Shenzhen University Target tracking method and device based on parallel fuzzy gaussian and particle filter
CN109118500A (en) * 2018-07-16 2019-01-01 Chongqing University Industrial Technology Research Institute Segmentation method for three-dimensional laser scanning point cloud data based on images
CN109996205A (en) * 2019-04-12 2019-07-09 Chengdu Technological University Sensor data fusion method, apparatus, electronic device and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6728608B2 (en) * 2002-08-23 2004-04-27 Applied Perception, Inc. System and method for the creation of a terrain density model
CN105425820B (en) * 2016-01-05 2016-12-28 Hefei University of Technology Multi-UAV collaborative search method for a moving target with perception
CN105678076B (en) * 2016-01-07 2018-06-22 Fuzhou Huaying Heavy Industry Machinery Co., Ltd. Method and device for quality assessment and optimization of point cloud measurement data
CN105700555B (en) * 2016-03-14 2018-04-27 Beihang University Multi-UAV collaborative search method based on gesture game
US9996944B2 (en) * 2016-07-06 2018-06-12 Qualcomm Incorporated Systems and methods for mapping an environment
CN108509918B (en) * 2018-04-03 2021-01-08 National University of Defense Technology Target detection and tracking method fusing laser point cloud and image
CN108764168B (en) * 2018-05-31 2020-02-07 Hefei University of Technology Method and system for searching moving target on multi-obstacle sea surface by imaging satellite
CN108717540B (en) * 2018-08-03 2024-02-06 Zhejiang Wusiyuan Communication Technology Co., Ltd. Method and device for distinguishing pedestrians and vehicles based on 2D laser radar
CN109523129B (en) * 2018-10-22 2021-08-13 Jilin University Method for real-time fusion of multi-sensor information of unmanned vehicles
CN110389595B (en) * 2019-06-17 2022-04-19 Institute of Electronic Engineering, China Academy of Engineering Physics Dual-attribute probability map optimized unmanned aerial vehicle cluster cooperative target searching method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102147468A (en) * 2011-01-07 2011-08-10 Xidian University Bayesian theory-based multi-sensor detecting and tracking combined processing method
CN103472850A (en) * 2013-09-29 2013-12-25 Hefei University of Technology Multi-unmanned aerial vehicle collaborative search method based on Gaussian distribution prediction
CN105717505A (en) * 2016-02-17 2016-06-29 State Grid Corporation of China Data association method for multi-target tracking using a sensor network
WO2018119912A1 (en) * 2016-12-29 2018-07-05 Shenzhen University Target tracking method and device based on parallel fuzzy gaussian and particle filter
CN109118500A (en) * 2018-07-16 2019-01-01 Chongqing University Industrial Technology Research Institute Segmentation method for three-dimensional laser scanning point cloud data based on images
CN109996205A (en) * 2019-04-12 2019-07-09 Chengdu Technological University Sensor data fusion method, apparatus, electronic device and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115253A (en) * 2021-03-19 2021-07-13 Northwest University Method and system for estimating height and density deployment of millimeter wave unmanned aerial vehicle under dynamic blocking
CN113052907A (en) * 2021-04-12 2021-06-29 Shenzhen University Positioning method of mobile robot in dynamic environment
CN113052907B (en) * 2021-04-12 2023-08-15 Shenzhen University Positioning method of mobile robot in dynamic environment
CN115235482A (en) * 2021-09-28 2022-10-25 Shanghai Xiantu Intelligent Technology Co., Ltd. Map update method, device, computer equipment and medium
CN114239643A (en) * 2021-11-23 2022-03-25 Tsinghua University Detection method of abnormal state of spatial target based on micro-motion and multivariate Gaussian distribution
CN114239643B (en) * 2021-11-23 2024-11-08 Tsinghua University Abnormal state detection method of space target based on micro motion and multivariate Gaussian distribution
CN113997989A (en) * 2021-11-29 2022-02-01 National University of Defense Technology Safety detection method, device, equipment and medium for single-point suspension system of maglev train
CN113997989B (en) * 2021-11-29 2024-03-29 National University of Defense Technology Safety detection method, device, equipment and medium for single-point suspension system of maglev train

Also Published As

Publication number Publication date
CN112313536B (en) 2024-04-05
WO2021102676A1 (en) 2021-06-03

Similar Documents

Publication Publication Date Title
CN112313536B (en) Object state acquisition method, movable platform and storage medium
CN108243623B (en) Automobile anti-collision early warning method and system based on binocular stereo vision
CN111222568B (en) Vehicle networking data fusion method and device
EP2657644B1 (en) Positioning apparatus and positioning method
US10867191B2 (en) Method for detecting and/or tracking objects
JP5944781B2 (en) Mobile object recognition system, mobile object recognition program, and mobile object recognition method
KR101572851B1 (en) How to map your mobile platform in a dynamic environment
US11506502B2 (en) Robust localization
US10650271B2 (en) Image processing apparatus, imaging device, moving object device control system, and image processing method
KR101628155B1 (en) Method for detecting and tracking unidentified multiple dynamic object in real time using Connected Component Labeling
CN110286389B (en) Grid management method for obstacle identification
US9361696B2 (en) Method of determining a ground plane on the basis of a depth image
KR101711964B1 (en) Free space map construction method, free space map construction system, foreground/background extraction method using the free space map, and foreground/background extraction system using the free space map
Rodríguez Flórez et al. Multi-modal object detection and localization for high integrity driving assistance
US9002513B2 (en) Estimating apparatus, estimating method, and computer product
JP2006234492A (en) Object recognition device
CN111915675B (en) Particle drift-based particle filtering point cloud positioning method, device and system thereof
CN111612818A (en) Novel binocular vision multi-target tracking method and system
CN114035187A (en) Perception fusion method of automatic driving system
Baig et al. A robust motion detection technique for dynamic environment monitoring: A framework for grid-based monitoring of the dynamic environment
CN113511194A (en) Longitudinal collision avoidance early warning method and related device
CN110426714B (en) Obstacle identification method
CN115151836A (en) Method for detecting a moving object in the surroundings of a vehicle and motor vehicle
EP2913999A1 (en) Disparity value deriving device, equipment control system, movable apparatus, robot, disparity value deriving method, and computer-readable storage medium
JP5655038B2 (en) Mobile object recognition system, mobile object recognition program, and mobile object recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240515

Address after: Building 3, Xunmei Science and Technology Plaza, No. 8 Keyuan Road, Science and Technology Park Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057, 1634

Patentee after: Shenzhen Zhuoyu Technology Co.,Ltd.

Country or region after: China

Address before: 518057 Shenzhen Nanshan High-tech Zone, Shenzhen, Guangdong Province, 6/F, Shenzhen Industry, Education and Research Building, Hong Kong University of Science and Technology, No. 9 Yuexingdao, South District, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SZ DJI TECHNOLOGY Co.,Ltd.

Country or region before: China