
CN115272493A - Abnormal target detection method and device based on continuous time sequence point cloud superposition - Google Patents


Info

Publication number: CN115272493A
Application number: CN202211145212.XA
Authority: CN (China)
Prior art keywords: point, abnormal, point cloud, points, mapping
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN115272493B (granted publication)
Inventors: 黄倩, 刘云涛, 朱永东, 赵志峰
Original and current assignee: Zhejiang Lab
Application filed by Zhejiang Lab; priority to CN202211145212.XA; application granted and published as CN115272493B

Classifications

    • G — PHYSICS
      • G06 — COMPUTING; CALCULATING OR COUNTING
        • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T3/02 — Geometric image transformations in the plane of the image; Affine transformations
          • G06T7/10 — Image analysis; Segmentation; Edge detection
          • G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
          • G06T2207/10028 — Image acquisition modality; Range image; Depth image; 3D point clouds
          • G06T2207/30244 — Subject of image; Camera pose
        • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/762 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning; using clustering, e.g. of similar faces in social networks


Abstract

The invention discloses an abnormal target detection method and device based on continuous time-series point cloud superposition. The method adopts time-series superposition: continuous time-series point cloud data frames are mapped and superposed to generate a background depth map, converting non-fixed, disordered point clouds into a fixed, ordered range map. Abnormal point clouds are identified from the difference between the depth values of the mapping points of the abnormal target point cloud and the depth values of the background depth map at the corresponding coordinates. Semantic category information is added to the abnormal point clouds using the instance region categories of a spatio-temporally aligned image, and anti-interference capability is improved by combining the spatial and semantic distances of the points, forming accurate, independent point cloud target clusters from which the detection information of the target clusters is calculated. By constructing a background depth map through continuous time-series superposition to detect abnormal point cloud targets, the method overcomes the low accuracy of detection performed directly on point clouds or images when the size and distance of the abnormal target are not fixed.

Description

Abnormal target detection method and device based on continuous time sequence point cloud superposition
Technical Field
The invention relates to the technical field of intelligent perception, and in particular to an abnormal target detection method and device based on continuous time-series point cloud superposition.
Background
As sensor costs fall, more and more security-monitoring scenarios implement anomaly detection and alarming by installing sensors. Monitoring and identifying abnormal intruding targets, including pedestrians, non-motor vehicles and animals, through fused multi-sensor perception within a park's prevention-and-control range is an important application of sensor equipment for the safety management of unattended monitoring areas. Low-cost solid-state lidar is usually used to sense target positions, but because the size and distance of an abnormal target are not fixed, directly applying a deep learning method to detect and identify targets from radar point cloud data yields low accuracy. Meanwhile, solid-state lidar has a wide detection range, so the resulting point cloud covers a large spatial extent and is disordered in arrangement, making it difficult to detect abnormal point cloud targets against the background point cloud. On the other hand, a single camera cannot obtain depth information and therefore cannot accurately locate targets. Because the size and distance of abnormal targets are not fixed, detection and identification directly from point clouds or images alone has low accuracy.
Therefore, aiming at the problem that neither the solid-state lidar nor the camera alone can accurately detect abnormal targets, the invention provides a continuous time-series point cloud superposition method that constructs a background depth map to detect abnormal point cloud targets and achieves high-precision detection of abnormal targets.
Disclosure of Invention
The invention aims to provide an abnormal target detection method and device based on continuous time-series point cloud superposition that overcomes the defects of the prior art. A background depth map is generated by mapping and superposing continuous time-series point cloud data frames, converting large-range, disordered point clouds into a fixed-range, ordered depth map. Abnormal point clouds are identified from the difference between abnormal target depth values and the depth values of the background depth map; semantic category information is added to the abnormal point clouds by fusing image semantic information; and finally the abnormal targets are clustered and identified by combining the spatial and semantic distances of the points. This solves the low accuracy of detecting and identifying abnormal targets directly from point clouds or images when their size and position distance are not fixed.
The purpose of the invention is realized by the following technical scheme. In a first aspect, the invention provides an abnormal target detection method based on continuous time-series point cloud superposition, comprising the following steps:
Step one: collect several frames of point cloud data with continuous time sequence using a solid-state lidar; map the point clouds in all point cloud data frames to a depth map using the affine transformation matrix from point cloud data to image data; superpose the depth values of mapping points with the same mapping coordinates and calculate the average depth value; and update the depth value at the corresponding coordinate of the depth map with the obtained average. Repeat this step until the depth value at every coordinate of the depth map no longer changes between updates; the finally updated depth map is the background depth map;
Step two: acquire a point cloud data frame from the solid-state lidar in real time and map all points in the frame to the background depth map using the affine transformation matrix from point cloud data to image data; any point whose mapping-point depth value differs from the depth value of the background depth map at the corresponding coordinate by more than a threshold is judged to be a newly added abnormal point;
Step three: acquire the image data frame spatio-temporally aligned with the corresponding point cloud data frame, segment all target instances in the image data frame with a semantic segmentation method, map each newly added abnormal point to the image data frame, and add semantic category information to it according to the semantic category of the image target instance region in which its mapping point falls;
Step four: cluster all newly added abnormal points based on the spatial-semantic joint distance between points to form clusters;
Step five: calculate the volume and center-point coordinates of each cluster, identify clusters whose volume exceeds a threshold as abnormal targets, and generate the detection information of the abnormal targets. A minimal sketch of this five-step pipeline is given below.
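The following Python sketch illustrates how the five stages compose; it is an illustration of the flow under stated assumptions, not a fixed implementation. Each helper function is sketched after the corresponding step below, and the 0.008 volume threshold (cubic meters) is the empirical value from the embodiment further down.

```python
# Illustrative composition of the five steps; each helper function is
# sketched in the corresponding section below. T is the affine
# transformation matrix from point cloud data to image data.
def detect_abnormal_targets(frame_batches, live_points, instance_masks, T, w, h):
    background = build_background_depth_map(frame_batches, T, w, h)  # step one
    anomalies = find_abnormal_points(live_points, background, T)     # step two
    labeled = add_semantic_labels(anomalies, instance_masks, T)      # step three
    clusters = cluster_abnormal_points(labeled)                      # step four
    reports = [cluster_info(c) for c in clusters]                    # step five
    return [r for r in reports if r["volume"] > 0.008]               # empirical V_th
```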
Further, step one includes the following sub-steps:
(1.1) Define a blank depth map with the depth value at every coordinate initialized to 0; the size of the blank depth map matches the size of the images taken by the camera that is spatio-temporally aligned with the solid-state lidar;
(1.2) Acquire the affine transformation matrix from the point cloud data of the solid-state lidar to the image data of the spatio-temporally aligned camera. Specifically: synchronize the data frames of the lidar and the camera through hardware line control, jointly calibrate the camera intrinsics and the extrinsics from the lidar coordinate system to the camera coordinate system to obtain the intrinsic and extrinsic matrices, and generate the affine transformation matrix from point cloud data to image data from them;
Assume the calibrated intrinsic matrix is K and the extrinsic matrix is E. The affine transformation matrix T from point cloud data to image data is:
T = K · E
where the intrinsic matrix K has dimension 3 × 3, the extrinsic matrix E has dimension 3 × 4, and the affine transformation matrix T has dimension 3 × 4;
(1.3) Set the solid-state lidar to non-repetitive scanning mode and continuously acquire N frames of point cloud data with continuous time sequence at a fixed frequency; map the point clouds in all frames to the blank depth map using the affine transformation matrix from point cloud data to image data, superpose the depth values of mapping points with the same mapping coordinates, and record the number of superpositions. Specifically: for any point in the N consecutive point cloud frames, let its coordinates in the point cloud be (x, y, z). Its floating-point mapping coordinates (u0, v0) and depth value d in the blank depth map, and its integer mapping coordinates (u, v), are respectively:
[u0, v0, d]^T = T · [x, y, z, 1]^T
u = ceil(u0 / d), v = ceil(v0 / d)
where ceil denotes rounding up, (u0, v0) are the floating-point coordinate values of the point's mapping point on the depth map, and (u, v) are the integer coordinate values of the mapping point obtained by dividing (u0, v0) by the depth value d and rounding up;
Perform this mapping operation on the points in all point cloud data frames, superpose the depth values of mapping points sharing the same integer coordinates by addition, and record the number of superpositions at every coordinate;
(1.4) For each mapping-point coordinate, compute the average depth value from the superposed depth value and the superposition count at that coordinate, and update the depth value of the blank depth map at the corresponding coordinate with the result. Specifically: let the superposed depth value at a mapping-point coordinate be SumDepth and the superposition count be NumD; the average depth value Depth at that coordinate is:
Depth = SumDepth / NumD
Compute the average depth value for all mapping-point coordinates and update the blank depth map at the corresponding coordinates with the obtained averages;
(1.5) Repeat steps (1.3) and (1.4) until the depth value at every coordinate of the blank depth map no longer changes between updates; the last updated blank depth map is the background depth map.
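A minimal sketch of sub-steps (1.1)-(1.5), assuming `frame_batches` yields successive groups of N consecutive frames, each frame an (M, 3) NumPy array of lidar points, and T is the 3 × 4 affine transformation matrix described above; the bounds check and the convergence test via np.allclose are illustrative choices, not details fixed by the invention.

```python
import numpy as np

def project_points(points, T):
    """Map (x, y, z) lidar points to integer pixel coordinates and depth
    values with the 3 x 4 affine transformation matrix T, per step (1.3)."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])  # (M, 4)
    uvd = homogeneous @ T.T                                       # rows (u0, v0, d)
    d = uvd[:, 2]
    u = np.ceil(uvd[:, 0] / d).astype(int)   # ceil = round up after dividing by d
    v = np.ceil(uvd[:, 1] / d).astype(int)
    return u, v, d

def build_background_depth_map(frame_batches, T, width, height):
    """Steps (1.1)-(1.5): superpose batches of frames until the map converges."""
    background = np.zeros((height, width))                # (1.1) blank depth map
    for batch in frame_batches:                           # N consecutive frames each
        sum_depth = np.zeros_like(background)             # superposed depth values
        num_depth = np.zeros_like(background)             # superposition counts
        for points in batch:
            u, v, d = project_points(points, T)
            ok = (u >= 0) & (u < width) & (v >= 0) & (v < height) & (d > 0)
            np.add.at(sum_depth, (v[ok], u[ok]), d[ok])   # (1.3) accumulate depths
            np.add.at(num_depth, (v[ok], u[ok]), 1)
        updated = np.divide(sum_depth, num_depth,         # (1.4) average depth value
                            out=background.copy(), where=num_depth > 0)
        if np.allclose(updated, background):              # (1.5) no change: done
            return updated
        background = updated
    return background
```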
Further, in step two, for any point in the point cloud, let its coordinates in the point cloud be (x, y, z), the coordinates of its mapping point in the background depth map be (u, v) with depth value d, and the depth value of the background depth map at the corresponding coordinate (u, v) be D(u, v). If
D(u, v) − d > δ
the point is judged to be a newly added abnormal point, i.e., a point belonging to an abnormal target rather than to the background; the threshold δ is an empirical value obtained by observing the difference in depth values between abnormal target points and background points.
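A sketch of step two reusing `project_points` from the step-one sketch; the 0.5 m default for δ is taken from the embodiment below, and the inequality direction (background deeper than the mapped point) follows the occlusion reading of the embodiment.

```python
import numpy as np

def find_abnormal_points(points, background, T, delta=0.5):
    """Step two: flag points whose mapped depth is more than delta meters
    in front of the background depth at the same coordinate."""
    u, v, d = project_points(points, T)        # from the step-one sketch
    h, w = background.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (d > 0)
    diff = background[v[ok], u[ok]] - d[ok]    # D(u, v) - d
    return points[ok][diff > delta]            # newly added abnormal points
```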
Further, the third step includes the following steps:
(3.1) Segment all target instances in the image data frame with a Mask-RCNN-based semantic segmentation method;
(3.2) For any newly added abnormal point, let its coordinates in the point cloud be (x, y, z). Using the affine transformation matrix from point cloud data to image data, the coordinates (u, v) of its mapping point in the image data frame are represented as follows:
[u0, v0, d]^T = T · [x, y, z, 1]^T
(u, v) = (ceil(u0 / d), ceil(v0 / d))
where ceil denotes rounding up, (u, v) are the integer coordinate values of the mapping point of the point cloud data in the image data, and d is the depth value of the mapping point;
(3.3) If the coordinates (u, v) fall within the set PixelCols of image coordinate points contained in some target instance, i.e., (u, v) ∈ PixelCols, semantic category information is added to the newly added abnormal point, which is then represented as (x, y, z, cls), where cls is the semantic class of the target instance.
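A sketch of step three, assuming the segmentation result is available as a list of (mask, cls) pairs, where each mask is a boolean H × W array for one target instance (an assumed representation of the PixelCols set); `project_points` is reused from the step-one sketch.

```python
import numpy as np

def add_semantic_labels(points, instance_masks, T):
    """Step three: represent each abnormal point as (x, y, z, cls) using the
    instance region its mapping point falls into; unmatched points get no label."""
    u, v, d = project_points(points, T)
    labeled = []
    for i, (x, y, z) in enumerate(points):
        for mask, cls in instance_masks:      # (mask, cls): boolean H x W, class id
            h, w = mask.shape
            if 0 <= v[i] < h and 0 <= u[i] < w and mask[v[i], u[i]]:
                labeled.append((x, y, z, cls))
                break
    return labeled
```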
Further, the fourth step includes the following steps:
(4.1) Search for and determine core abnormal points. Specifically: set the spatial radius to r and the minimum number of neighbors to minS. For any newly added abnormal point P_i, let its coordinates be (x_i, y_i, z_i) and its semantic category be cls_i, so that it is represented as P_i = (x_i, y_i, z_i, cls_i). Traverse all newly added abnormal points within the spatial radius r of P_i. For any newly added abnormal point P_j within that radius, let its coordinates be (x_j, y_j, z_j) and its semantic category be cls_j, so that it is represented as P_j = (x_j, y_j, z_j, cls_j). If the spatial-semantic joint distance between the two points satisfies:
w1 · Dis(P_i, P_j) + w2 · Sclass(P_i, P_j) ≤ d_th
then P_j is considered a neighbor of P_i. If the number of newly added abnormal points within the spatial radius r of P_i satisfying the above formula is not less than minS, P_i is determined to be a core abnormal point; otherwise it is a non-core abnormal point;
where Dis is the Euclidean spatial distance between the two points and Sclass is the semantic distance between them, expressed respectively as:
Dis(P_i, P_j) = sqrt((x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)²)
Sclass(P_i, P_j) = 0 if cls_i = cls_j, otherwise 1
and w1 is the spatial distance weight, w2 is the semantic distance weight, and d_th is the spatial-semantic joint distance threshold; all are empirical values;
Execute step (4.1) for all newly added abnormal points until each has been confirmed as a core abnormal point or not; core abnormal points receive subsequent processing, while non-core abnormal points are directly discarded;
(4.2) Cluster the core abnormal points to form clusters. Specifically: starting from any core abnormal point, gather the neighboring core abnormal points within its spatial radius r into one class, then continue searching from any of those neighboring core abnormal points and gather their neighboring core abnormal points into the same class, until no further neighboring core abnormal points can be found; the core abnormal points gathered in this process form one cluster, and the cluster's category is the semantic category of the initial core abnormal point. Repeat step (4.2) from any remaining core abnormal point until no new cluster can be formed;
(4.3) The remaining unclustered core abnormal points are outliers and are directly discarded.
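A sketch of step four as a DBSCAN-style procedure over the joint distance. The weights w1 = 0.6 and w2 = 0.8 are illustrative only (the embodiment's actual weight values did not survive in the source text), and Sclass is taken as 0 for matching categories and 1 otherwise, consistent with the explanation in the embodiment below.

```python
import numpy as np

def joint_distance(p, q, w1=0.6, w2=0.8):
    """Spatial-semantic joint distance between points (x, y, z, cls);
    w1 and w2 are illustrative weights, not the patent's elided values."""
    dis = np.linalg.norm(np.asarray(p[:3]) - np.asarray(q[:3]))  # Euclidean Dis
    sclass = 0.0 if p[3] == q[3] else 1.0                        # semantic Sclass
    return w1 * dis + w2 * sclass

def cluster_abnormal_points(points, r=1.0, min_s=2, d_th=1.0):
    """Step four: determine core abnormal points, then grow clusters."""
    def neighbors(i):
        # (4.1) neighbors lie within spatial radius r AND within the
        # joint-distance threshold d_th
        return [j for j in range(len(points)) if j != i
                and np.linalg.norm(np.asarray(points[i][:3]) -
                                   np.asarray(points[j][:3])) <= r
                and joint_distance(points[i], points[j]) <= d_th]

    core = {i for i in range(len(points)) if len(neighbors(i)) >= min_s}
    unvisited, clusters = set(core), []
    while unvisited:
        seed = unvisited.pop()                 # (4.2) start a new cluster
        cluster, frontier = [seed], [seed]
        while frontier:
            for j in neighbors(frontier.pop()):
                if j in unvisited:             # only core points are absorbed
                    unvisited.remove(j)
                    cluster.append(j)
                    frontier.append(j)
        if len(cluster) >= min_s:              # (4.3) lone leftovers are outliers
            clusters.append([points[i] for i in cluster])
    return clusters
```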
Further, in step five, for each cluster, the volume of the cluster is calculated from the two core abnormal points with the largest spatial distance. Let the coordinates of these two core abnormal points be (x1, y1, z1) and (x2, y2, z2) and their shared semantic category be cls, so that they are represented as P1 = (x1, y1, z1, cls) and P2 = (x2, y2, z2, cls). The volume V of the cluster is expressed as:
V = |x1 − x2| · |y1 − y2| · |z1 − z2|
and the center-point coordinates (xc, yc, zc) as:
(xc, yc, zc) = ((x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2)
If V is greater than a threshold V_th, the cluster is considered an abnormal target, where V_th, an empirical value, is the upper limit of the volume of small objects that cause interference. The detection information of the abnormal target is then: target class cls, at a distance of ρ meters from the mounting origin of the solid-state lidar and at an angle of θ to the abscissa of the solid-state lidar coordinate system, where the distance ρ and the angle θ are computed from the center-point coordinates (xc, yc, zc).
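A sketch of step five. The range and bearing formulas derive the detection information from the center point; they are an assumption consistent with the description rather than formulas preserved in the source.

```python
import math

def cluster_info(cluster):
    """Step five: volume from the two farthest core points, the center point,
    and the detection information (class, range, bearing)."""
    p1, p2 = max(((a, b) for a in cluster for b in cluster),
                 key=lambda pair: math.dist(pair[0][:3], pair[1][:3]))
    volume = abs(p1[0] - p2[0]) * abs(p1[1] - p2[1]) * abs(p1[2] - p2[2])
    xc, yc, zc = ((p1[i] + p2[i]) / 2 for i in range(3))   # center point
    return {
        "class": p1[3],                               # cluster semantic category
        "volume": volume,
        "center": (xc, yc, zc),
        "range_m": math.sqrt(xc**2 + yc**2 + zc**2),  # distance from lidar origin
        "bearing_rad": math.atan2(yc, xc),            # angle to the x-axis
    }
```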
In a second aspect, the invention provides an abnormal target detection device based on continuous time-series point cloud superposition, comprising a memory and one or more processors, the memory storing executable code; when executing the executable code, the processors implement the steps of the above abnormal target detection method based on continuous time-series point cloud superposition.
In a third aspect, the invention provides a computer-readable storage medium on which a program is stored; when executed by a processor, the program implements the steps of the above abnormal target detection method based on continuous time-series point cloud superposition.
The invention has the following beneficial effects: it solves the low accuracy of detecting and identifying abnormal targets of variable size and distance by a deep learning method when sensing them with a solid-state lidar. A point-cloud-based background depth map is constructed by continuous time-series superposition; abnormal point clouds are detected from the difference between the mapped depth values of the abnormal target point cloud and the depth values of the background depth map; and adding semantic category information to the abnormal point clouds improves clustering accuracy, so that the clusters and detection information of abnormal targets are generated accurately, providing reliable technical support for unattended safety monitoring of a park.
Drawings
Fig. 1 is a flowchart of the abnormal target detection method based on continuous time-series point cloud superposition according to the invention.
FIG. 2 is the point-cloud-based background depth map constructed by the continuous time-series superposition method.
FIG. 3 shows the effect of mapping a point cloud containing an abnormal target onto the background depth map.
FIG. 4 shows the effect of mapping a point cloud containing an abnormal target onto the spatio-temporally aligned image.
Fig. 5 is a structural diagram of the abnormal target detection device based on continuous time-series point cloud superposition according to the invention.
Detailed Description
The objects and effects of the invention will become more apparent from the following detailed description with reference to the accompanying drawings. It should be understood that the specific embodiments described here merely illustrate the invention and are not intended to limit it.
The invention provides an abnormal target detection method based on continuous time-series point cloud superposition, addressing the low detection reliability that arises when abnormal targets of varying size and distance are sensed with a solid-state lidar or a camera and detected and identified directly by a deep learning method.
The method constructs a point-cloud-based background depth map through continuous time-series point cloud superposition, detects abnormal point clouds from the difference between their mapped depth values and the depth values of the background depth map, and adds image semantic category information to the abnormal point clouds by introducing time-synchronized image features, increasing the information content of the point cloud. Combining the spatial and semantic distances of points within a density clustering method effectively improves the anti-interference capability of clustering abnormal point clouds and forms accurate, independent target clusters, thereby generating accurate target detection information.
As shown in fig. 1, the present invention specifically includes the following steps:
Step one: map the point clouds in the continuous time-series point cloud data frames to a depth map, calculate the average depth values, and update the depth map to obtain the background depth map. Specifically: set the solid-state lidar to non-repetitive scanning mode, collect several frames of point cloud data with continuous time sequence at a fixed frequency, map the point clouds in all point cloud data frames to the depth map using the affine transformation matrix from point cloud data to image data, superpose the depth values of mapping points with the same mapping coordinates, calculate the average depth value, and update the depth value at the corresponding coordinate of the depth map with the obtained average. This process is repeated until the depth value at every coordinate of the depth map no longer changes between updates; the finally updated depth map is the background depth map.
The number of consecutive point cloud data frames is determined by the scanning characteristics of the selected solid-state lidar device. In this embodiment, the selected solid-state lidar is a Livox Avia with the scanning mode set to non-repetitive; at a scanning frequency of 10 Hz, the number of continuously acquired point cloud frames is 4, and after 10 updates of the background map a stable background depth map is constructed, i.e., the depth value at every coordinate of the background depth map no longer changes between updates.
Point cloud data and image data aligned in space and time are collected with a solid-state lidar and a camera mounted on a roadside lamp post in an unattended monitoring area of the park. For the collected point cloud data, the point-cloud-based background depth map constructed by the continuous time-series point cloud superposition method is shown in FIG. 2; its size matches that of the image data, the generated depth-map mapping points are very dense, and adding further point cloud time-series data frames no longer changes the depth values of the background depth map.
Further, step one includes the following sub-steps:
(1.1) Define a blank depth map of size W × H and initialize the depth value at every coordinate to 0. The size of the blank depth map matches the size of the images taken by the camera spatio-temporally aligned with the solid-state lidar.
(1.2) Acquire the affine transformation matrix from the solid-state lidar point cloud data to the spatio-temporally aligned camera image data.
Specifically: synchronize the data frames of the lidar and the camera through hardware line control, jointly calibrate the camera intrinsics and the extrinsics from the lidar coordinate system to the camera coordinate system to obtain the intrinsic and extrinsic matrices, and generate the affine transformation matrix from point cloud data to image data from them.
Assume the calibrated intrinsic matrix is K and the extrinsic matrix is E. The affine transformation matrix T from point cloud data to image data is:
T = K · E
where the intrinsic matrix K has dimension 3 × 3, the extrinsic matrix E has dimension 3 × 4, and the affine transformation matrix T has dimension 3 × 4.
(1.3) Set the solid-state lidar to non-repetitive scanning mode, continuously acquire N frames of point cloud data with continuous time sequence at a fixed frequency, map the point clouds in all frames to the blank depth map using the affine transformation matrix from point cloud data to image data, superpose the depth values of mapping points with the same mapping coordinates, and record the number of superpositions.
Specifically: for any point in the N consecutive point cloud frames, let its coordinates in the point cloud be (x, y, z). Its floating-point mapping coordinates (u0, v0) and depth value d in the blank depth map, and its integer mapping coordinates (u, v), can be represented as follows:
[u0, v0, d]^T = T · [x, y, z, 1]^T
u = ceil(u0 / d), v = ceil(v0 / d)
where ceil denotes rounding up, (u0, v0) are the floating-point coordinate values of the point's mapping point on the depth map, and (u, v) are the integer coordinate values of the mapping point obtained by dividing (u0, v0) by the depth value d and rounding up;
Perform this mapping operation on the points in all point cloud data frames, superpose the depth values of mapping points sharing the same integer coordinates by addition, and record the number of superpositions at every coordinate.
(1.4) For each mapping-point coordinate, compute the average depth value from the superposed depth value and the superposition count at that coordinate, and update the depth value of the blank depth map at the corresponding coordinate with the result.
Specifically: let the coordinates of a mapping point be (u, v), its superposed depth value be SumDepth, and its superposition count be NumD; the average depth value Depth at (u, v) can be expressed as:
Depth = SumDepth / NumD
Compute the average depth value for all mapping-point coordinates and update the blank depth map at the corresponding coordinates with the obtained averages.
(1.5) Repeat steps (1.3) and (1.4) until the depth value at every coordinate of the blank depth map no longer changes between updates; the last updated blank depth map is the background depth map.
To acquire the affine transformation matrix from the solid-state lidar point cloud data to the spatio-temporally aligned camera image data, the solid-state lidar and camera mounted on a roadside lamp post in the park are first synchronized at the data-frame level through hardware line control, and the camera intrinsics and the extrinsics from the lidar coordinate system to the camera coordinate system are jointly calibrated. Camera intrinsic calibration generally uses a checkerboard: exploiting the clear black-and-white pattern whose corner points are easy to find, checkerboard data are collected at several angles and distances to generate multiple groups of two-dimensional image corner coordinates and three-dimensional spatial corner coordinates, and the intrinsic parameters are solved by least squares. For the joint calibration of the extrinsics from the solid-state lidar coordinate system to the camera coordinate system, several white boards of different sizes and distances are placed, multiple segments of time-aligned image data and point cloud data are recorded with an ROS tool, and the four corner points of each white board are annotated in each time-aligned frame of image and point cloud data. For each group of corner points, the extrinsic parameters are corrected over multiple iterations with a BP neural network method until the mapping deviation produced by the extrinsics stabilizes within a threshold range. With the calibrated camera intrinsics and the extrinsics from the lidar coordinate system to the camera coordinate system, the three-dimensional point cloud can be mapped to a two-dimensional image, realizing the conversion from point cloud space to image space.
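The checkerboard intrinsic calibration described above can be sketched with OpenCV; the 9 × 6 inner-corner pattern and 25 mm square size are assumptions, not values from the patent, and the BP-network refinement of the lidar-to-camera extrinsics is not sketched here.

```python
import cv2
import numpy as np

def calibrate_intrinsics(images, pattern=(9, 6), square=0.025):
    """Least-squares intrinsic calibration from checkerboard corner pairs,
    as described above; images are grayscale checkerboard photos taken at
    several angles and distances."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_points, img_points = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, pattern)
        if found:
            obj_points.append(objp)      # 3-D corner coordinates (board frame)
            img_points.append(corners)   # matching 2-D image corner coordinates
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_points, img_points, images[0].shape[::-1], None, None)
    return K, dist                       # intrinsic matrix and distortion terms
```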
Step two: compare the depth values of the point cloud acquired in real time with the depth values of the background depth map and judge whether each point is a newly added abnormal point. Specifically: acquire a point cloud data frame from the solid-state lidar in real time, map all points in the frame to the background depth map using the affine transformation matrix from point cloud data to image data, and judge any point whose mapping-point depth value differs from the background depth map's depth value at the corresponding coordinate by more than a threshold to be a newly added abnormal point.
The point cloud data frames of the solid-state lidar are acquired in real time at 10 Hz, i.e., one frame of point cloud data is collected every 100 ms.
Specifically: for any point in the point cloud, let its coordinates in the point cloud be (x, y, z), the coordinates of its mapping point in the background depth map be (u, v) with depth value d, and the depth value of the background depth map at the corresponding coordinate (u, v) be D(u, v). If
D(u, v) − d > δ
the point is judged to be a newly added abnormal point, i.e., the point is likely a point of an abnormal target rather than of the background, where the threshold δ is an empirical value that can be obtained by observing the difference in depth values between abnormal target points and background points.
FIG. 3 shows the effect of mapping a point cloud containing an abnormal target onto the background depth map. It can be observed that after the abnormal target's point cloud is mapped, occlusion occurs at the corresponding position of the background depth map, so the abnormal target's depth values differ considerably from the background depth map's depth values at the corresponding coordinates. Based on this observation, when the depth value difference threshold δ is set to 0.5 (unit: meters), abnormal point clouds can be distinguished.
Step three: acquire the image data frame spatio-temporally aligned with the point cloud, segment the target instances, and add semantic category information to the newly added abnormal points. Specifically: obtain, through the camera, the image data frame spatio-temporally aligned with the corresponding point cloud data frame, segment all target instances in that image data frame with a semantic segmentation method, map each newly added abnormal point to the image data frame, and add semantic category information to it according to the semantic category of the image target instance region in which its mapping point falls. The specific sub-steps are as follows:
(3.1) Segment all target instances in the corresponding image data frame with a Mask-RCNN-based semantic segmentation method.
(3.2) For any newly added abnormal point, let its coordinates in the point cloud be (x, y, z). Using the affine transformation matrix from point cloud data to image data, the coordinates (u, v) of its mapping point in the image data frame can be expressed as follows:
[u0, v0, d]^T = T · [x, y, z, 1]^T
(u, v) = (ceil(u0 / d), ceil(v0 / d))
where ceil denotes rounding up, (u, v) are the integer coordinate values of the mapping point of the point cloud data in the image data, and d is the mapping point's depth value.
(3.3) If the coordinates (u, v) fall within the set PixelCols of image coordinate points contained in some target instance, i.e., (u, v) ∈ PixelCols, semantic category information is added to the newly added abnormal point, which can then be expressed as (x, y, z, cls), where cls is the semantic class of the target instance.
FIG. 4 shows the effect of mapping a point cloud containing an abnormal target onto the spatio-temporally aligned image. In this embodiment, abnormal targets are considered to include pedestrians, non-motor vehicles, animals, and other movable obstacles. The image is segmented by the Mask-RCNN semantic segmentation method into instance objects such as background, pedestrians, non-motor vehicles, animals, and other movable obstacles, and every instance region in the image data frame (including the background) is assigned semantic category information; the background of the image comprises static categories such as sky, green plants, road surface, and buildings. Abnormal targets such as pedestrians and other movable obstacles appear in FIG. 4, and semantic category information can be added to the points falling in the corresponding regions according to the semantic categories of the instance target regions onto which the abnormal target point cloud is mapped.
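The patent names Mask-RCNN but no particular implementation; the following sketch uses torchvision's Mask R-CNN as one possible stand-in, assuming `image` is a (3, H, W) float tensor scaled to [0, 1], and converts each confident instance into the (mask, cls) pairs consumed by the step-three sketch. The 0.7 score and 0.5 mask thresholds are illustrative.

```python
import torch
import torchvision

# One possible stand-in for the Mask-RCNN segmentation step; `image` is an
# assumed input tensor of shape (3, H, W) with values in [0, 1].
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

with torch.no_grad():
    pred = model([image])[0]             # dict with boxes, labels, scores, masks

# Keep confident instances; each becomes a (mask, cls) pair where the boolean
# H x W mask is the instance region and the label supplies the class cls.
instance_masks = [
    ((pred["masks"][i, 0] > 0.5).cpu().numpy(), int(pred["labels"][i]))
    for i in range(len(pred["scores"]))
    if pred["scores"][i] > 0.7
]
```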
Step four: cluster the newly added abnormal points based on the spatial-semantic joint distance between points to form clusters.
Further, step four includes the following sub-steps:
(4.1) For all newly added abnormal points, search for and determine the core abnormal points. Specifically: set the spatial radius to r and the minimum number of neighbors to minS. For any newly added abnormal point P_i, let its coordinates be (x_i, y_i, z_i) and its semantic category be cls_i, so that it is represented as P_i = (x_i, y_i, z_i, cls_i). Traverse all newly added abnormal points within the spatial radius r of P_i. For any newly added abnormal point P_j within that radius, let its coordinates be (x_j, y_j, z_j) and its semantic category be cls_j, so that it is represented as P_j = (x_j, y_j, z_j, cls_j). If the spatial-semantic joint distance between the two points satisfies:
w1 · Dis(P_i, P_j) + w2 · Sclass(P_i, P_j) ≤ d_th
then P_j is considered a neighbor of P_i. If the number of newly added abnormal points within the spatial radius r of P_i satisfying the above formula is not less than minS, P_i is determined to be a core abnormal point; otherwise it is a non-core abnormal point.
Here Dis is the Euclidean spatial distance between the two points and Sclass is the semantic distance between them, which can be expressed respectively as:
Dis(P_i, P_j) = sqrt((x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)²)
Sclass(P_i, P_j) = 0 if cls_i = cls_j, otherwise 1
where w1 is the spatial distance weight, w2 is the semantic distance weight, and d_th is the spatial-semantic joint distance threshold; all are empirical values.
Execute step (4.1) for all newly added abnormal points until each has been confirmed as a core abnormal point or not; core abnormal points receive subsequent processing, while non-core abnormal points are directly discarded.
(4.2) Cluster the core abnormal points to form clusters. Specifically: starting from any core abnormal point, gather the neighboring core abnormal points within its spatial radius r into one class, then continue searching from any of those neighboring core abnormal points and gather their neighboring core abnormal points into the same class, until no further neighboring core abnormal points can be found; the core abnormal points gathered in this process form one cluster, and the cluster's category is the semantic category of the initial core abnormal point. Repeat step (4.2) from any remaining core abnormal point until no new cluster is formed.
(4.3) The remaining unclustered core abnormal points are outliers and are directly discarded.
The spatial radius r is set to 1 (unit: meters), the minimum neighbor number minS is set to 2, and the spatial-semantic joint distance threshold d_th between two points is set to 1. With the weights w1 and w2, judging neighbors considers not only the spatial distance between two points but also whether their semantic categories are the same: if two points are spatially close but their semantic categories are inconsistent, the final spatial-semantic joint distance exceeds the threshold and the two points do not belong to the same cluster. Meanwhile, because the number of points is large, background points are easily falsely detected as abnormal target points during abnormal point detection; introducing semantic information further eliminates such false detections of background points. The setting of w1 and w2 emphasizes spatial distance: points that are spatially close are considered more likely to belong to the same cluster.
Step five: calculate the volume and center-point coordinates of each cluster, judge whether the cluster is an abnormal target, and generate the detection information of the abnormal target.
Specifically, for each cluster, the volume of the cluster is calculated from the two core abnormal points with the largest spatial distance. Let the coordinates of these two core abnormal points be (x1, y1, z1) and (x2, y2, z2) and their shared semantic category be cls, so that they are represented as P1 = (x1, y1, z1, cls) and P2 = (x2, y2, z2, cls). The volume V of the cluster can be expressed as:
V = |x1 − x2| · |y1 − y2| · |z1 − z2|
and the center-point coordinates (xc, yc, zc) as:
(xc, yc, zc) = ((x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2)
If V is greater than a threshold V_th, the cluster is considered an abnormal target, where V_th, an empirical value, is the upper limit of the volume of small objects that cause interference. The detection information of the abnormal target is then: target class cls, at a distance of ρ meters from the mounting origin of the solid-state lidar and at an angle of θ to the abscissa of the solid-state lidar coordinate system, where the distance ρ and the angle θ are computed from the center-point coordinates (xc, yc, zc).
The volume threshold V_th of the clusters is set, based on empirical observation, to 0.008 (unit: cubic meters). In everyday scenes, fallen leaves, garbage, plastic bags, cartons and other small objects that interfere with the detection results often appear, so a volume threshold must be set for the clusters to avoid false and missed detections.
Corresponding to the embodiments of the abnormal target detection method based on continuous time-series point cloud superposition, the invention also provides embodiments of an abnormal target detection device based on continuous time-series point cloud superposition.
Referring to FIG. 5, an abnormal target detection device based on continuous time-series point cloud superposition according to an embodiment of the invention includes a memory and one or more processors; the memory stores executable code, and when the processors execute the executable code, they implement the abnormal target detection method based on continuous time-series point cloud superposition of the above embodiments.
The embodiments of the abnormal target detection device based on continuous time-series point cloud superposition can be applied to any device with data processing capability, such as a computer. The device embodiments may be implemented by software, by hardware, or by a combination of the two. Taking software implementation as an example, as a logical device it is formed by the processor of the device reading the corresponding computer program instructions from non-volatile storage into memory and running them. In terms of hardware, FIG. 5 shows a hardware structure diagram of a device with data processing capability on which the abnormal target detection device based on continuous time-series point cloud superposition is located; besides the processor, memory, network interface, and non-volatile storage shown in FIG. 5, the device may generally include other hardware according to its actual functions, which is not described again here.
The implementation of the functions and actions of each unit in the above device follows the implementation of the corresponding steps in the above method and is not repeated here.
Since the device embodiments basically correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The device embodiments described above are merely illustrative; units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of the invention, which those of ordinary skill in the art can understand and implement without inventive effort.
The embodiment of the invention also provides a computer-readable storage medium, on which a program is stored, and when the program is executed by a processor, the abnormal target detection method based on continuous time sequence point cloud superposition in the above embodiment is realized.
The computer-readable storage medium may be an internal storage unit, such as a hard disk or memory, of any of the aforementioned devices with data processing capability. It may also be an external storage device of such a device, for example a plug-in hard disk, Smart Media Card (SMC), SD card, or Flash memory card (Flash Card) provided on the device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the device. It is used to store the computer program and the other programs and data required by the device, and may also be used to temporarily store data that has been or will be output.
The above-described embodiments are intended to illustrate rather than limit the invention; any modifications and variations of the invention within its spirit and the scope of the appended claims are covered.

Claims (8)

1. An abnormal target detection method based on continuous time-series point cloud superposition, characterized by comprising the following steps:
Step one: collect several frames of point cloud data with continuous time sequence using a solid-state lidar; map the point clouds in all point cloud data frames to a depth map using the affine transformation matrix from point cloud data to image data; superpose the depth values of mapping points with the same mapping coordinates and calculate the average depth value; and update the depth value at the corresponding coordinate of the depth map with the obtained average; repeat this step until the depth value at every coordinate of the depth map no longer changes between updates, the updated depth map being the background depth map;
Step two: acquire a point cloud data frame from the solid-state lidar in real time and map all points in the frame to the background depth map using the affine transformation matrix from point cloud data to image data; any point whose mapping-point depth value differs from the depth value of the background depth map at the corresponding coordinate by more than a threshold is judged to be a newly added abnormal point;
Step three: acquire the image data frame spatio-temporally aligned with the corresponding point cloud data frame, segment all target instances in the image data frame with a semantic segmentation method, map each newly added abnormal point to the image data frame, and add semantic category information to it according to the semantic category of the image target instance region in which its mapping point falls;
Step four: cluster all newly added abnormal points based on the spatial-semantic joint distance between points to form clusters;
Step five: calculate the volume and center-point coordinates of each cluster, identify clusters whose volume exceeds a threshold as abnormal targets, and generate the detection information of the abnormal targets.
2. The method according to claim 1, wherein step one comprises the following sub-steps:
(1.1) Define a blank depth map with the depth value at every coordinate initialized to 0; the size of the blank depth map matches the size of the images taken by the camera spatio-temporally aligned with the solid-state lidar;
(1.2) Acquire the affine transformation matrix from the point cloud data of the solid-state lidar to the image data of the spatio-temporally aligned camera. Specifically: synchronize the data frames of the lidar and the camera through hardware line control, jointly calibrate the camera intrinsics and the extrinsics from the lidar coordinate system to the camera coordinate system to obtain the intrinsic and extrinsic matrices, and generate the affine transformation matrix from point cloud data to image data from them;
Assume the calibrated intrinsic matrix is K and the extrinsic matrix is E. The affine transformation matrix T from point cloud data to image data is:
T = K · E
where the intrinsic matrix K has dimension 3 × 3, the extrinsic matrix E has dimension 3 × 4, and the affine transformation matrix T has dimension 3 × 4;
(1.3) Set the solid-state lidar to non-repetitive scanning mode, continuously acquire N frames of point cloud data with continuous time sequence at a fixed frequency, map the point clouds in all frames to the blank depth map using the affine transformation matrix from point cloud data to image data, superpose the depth values of mapping points with the same mapping coordinates, and record the number of superpositions. Specifically: for any point among the collected N consecutive point cloud frames, let its coordinates in the point cloud be (x, y, z). Its floating-point mapping coordinates (u0, v0) and depth value d in the blank depth map, and its integer mapping coordinates (u, v), are respectively:
[u0, v0, d]^T = T · [x, y, z, 1]^T
u = ceil(u0 / d), v = ceil(v0 / d)
where ceil denotes rounding up, (u0, v0) are the floating-point coordinate values of the point's mapping point on the depth map, and (u, v) are the integer coordinate values of the mapping point obtained by dividing (u0, v0) by the depth value d and rounding up;
the above mapping operation is executed for the points in all point cloud data frames; the depth values of mapping points sharing the same integer coordinates are superposed by addition, and the number of superpositions at every coordinate is recorded;
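A sketch of this mapping-and-superposition step under the equations above, assuming point cloud frames arrive as N×3 NumPy arrays (the image size and function name are illustrative):

```python
import numpy as np

def accumulate_depth(frames, M, height=1080, width=1920):
    """Map every point of every frame into the depth map grid and
    accumulate per-pixel depth sums and superposition counts."""
    sum_depth = np.zeros((height, width))
    num_hits = np.zeros((height, width), dtype=np.int64)
    for points in frames:                                   # each frame: (N, 3)
        homog = np.hstack([points, np.ones((len(points), 1))])
        proj = homog @ M.T                                  # columns: u~, v~, d
        d = proj[:, 2]
        valid = d > 0                                       # points in front of the camera
        u = np.ceil(proj[valid, 0] / d[valid]).astype(int)  # round up, as in the claim
        v = np.ceil(proj[valid, 1] / d[valid]).astype(int)
        inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        np.add.at(sum_depth, (v[inside], u[inside]), d[valid][inside])
        np.add.at(num_hits, (v[inside], u[inside]), 1)
    return sum_depth, num_hits
```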
(1.4) for each mapping point coordinate, calculating the average depth value at that coordinate from the superposed depth value and the number of superpositions, and updating the depth value at the corresponding coordinate of the blank depth map with the obtained average depth value; specifically: assuming the superposed depth value at a given mapping point coordinate is SumDepth and the number of superpositions is NumD, the average depth value depth at that coordinate is:

$$\mathrm{depth} = \frac{\mathrm{SumDepth}}{\mathrm{NumD}}$$

the average depth value is calculated for all mapping point coordinates, and the depth value of the blank depth map at each corresponding coordinate is updated with the obtained average depth value;
(1.5) repeating step (1.3) and step (1.4) until the depth value at every coordinate of the depth map no longer changes from one update to the next; the depth map after the last update is the background depth map.
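Continuing the same sketch, sub-steps (1.4) and (1.5) reduce to a per-pixel average followed by a convergence check (the tolerance `eps` is an assumed numerical detail, not from the patent):

```python
import numpy as np

def update_background(sum_depth, num_hits, depth_map, eps=1e-6):
    """Compute per-pixel average depths and report whether any pixel of the
    depth map still changes, i.e. whether another iteration is needed."""
    avg = np.divide(sum_depth, num_hits,
                    out=np.zeros_like(sum_depth), where=num_hits > 0)
    changed = bool(np.any(np.abs(avg - depth_map) > eps))
    return avg, changed
```

Iterating `accumulate_depth` and `update_background` until `changed` is False yields the background depth map of step (1.5).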
3. The method according to claim 1, wherein in step two, for any point in the point cloud, assume its coordinates in the point cloud are (x, y, z), the coordinates of its mapping point in the background depth map are (u, v), and its depth value is d; let the depth value of the background depth map at coordinate (u, v) be D; if

$$|d - D| > \varepsilon$$

the point is judged to be a point on an abnormal target rather than a point of the background, and is recorded as a newly added abnormal point; where ε is an empirical value obtained by observing the difference in depth values between abnormal target points and background points.
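A sketch of this step-two test, reusing the projection of step (1.3); the value 0.3 m stands in for the empirical threshold ε:

```python
import numpy as np

def find_new_anomalies(points, M, background, eps=0.3):
    """Return the points whose mapped depth deviates from the background
    depth map by more than eps (assumed empirical threshold)."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    proj = homog @ M.T
    d = proj[:, 2]
    valid = d > 0
    safe_d = np.where(valid, d, 1.0)                 # avoid division by zero
    u = np.ceil(proj[:, 0] / safe_d).astype(int)
    v = np.ceil(proj[:, 1] / safe_d).astype(int)
    h, w = background.shape
    inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    anomalous = np.zeros(len(points), dtype=bool)
    anomalous[inside] = np.abs(d[inside] - background[v[inside], u[inside]]) > eps
    return points[anomalous]
```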
4. The abnormal target detection method based on continuous time-series point cloud superposition according to claim 1, wherein step three comprises the following steps:
(3.1) segmenting all target instances in the image data frame using a Mask R-CNN-based semantic segmentation method;
(3.2) for any newly added abnormal point, assume its coordinates in the point cloud are (x, y, z); using the affine transformation matrix from point cloud data to image data, the coordinates (u, v) of its mapping point in the image data frame are given by:

$$\begin{bmatrix} \tilde{u} \\ \tilde{v} \\ d \end{bmatrix} = M \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \qquad u = \operatorname{ceil}\!\left(\tilde{u}/d\right), \quad v = \operatorname{ceil}\!\left(\tilde{v}/d\right)$$

where ceil denotes rounding up, (u, v) are the integer coordinate values of the mapping point of the point cloud data into the image data, and d is the depth value of the mapping point;
(3.3) if the coordinate (u, v) and the set PixelCols of image coordinate points contained in a certain target instance satisfy

$$(u, v) \in \mathrm{PixelCols}$$

semantic category information is added to the newly added abnormal point, which is then expressed as (x, y, z, cls), where cls is the semantic category of the target instance.
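A sketch of step three, assuming the Mask R-CNN output has been rendered into an integer mask of per-pixel class ids with -1 outside every instance (the mask format is an assumption of this sketch, not the patent):

```python
import numpy as np

def label_anomalies(anomaly_points, M, class_mask):
    """Attach a semantic class to each newly added abnormal point by looking
    up the segmentation mask at its image-plane mapping point."""
    homog = np.hstack([anomaly_points, np.ones((len(anomaly_points), 1))])
    proj = homog @ M.T
    d = proj[:, 2]
    u = np.ceil(proj[:, 0] / d).astype(int)
    v = np.ceil(proj[:, 1] / d).astype(int)
    h, w = class_mask.shape
    labeled = []
    for i, (x, y, z) in enumerate(anomaly_points):
        if 0 <= u[i] < w and 0 <= v[i] < h and class_mask[v[i], u[i]] >= 0:
            labeled.append((x, y, z, int(class_mask[v[i], u[i]])))  # (x, y, z, cls)
    return labeled
```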
5. The abnormal target detection method based on continuous time-series point cloud superposition according to claim 1, wherein step four comprises the following steps:
(4.1) searching for and determining core abnormal points; the specific steps are as follows: set the spatial radius to $r$ and the minimum number of neighbors to minS; for any newly added abnormal point $p_i$, assume its coordinates are $(x_i, y_i, z_i)$ and its semantic category is $c_i$, so that it is expressed as $(x_i, y_i, z_i, c_i)$; traverse all newly added abnormal points within the spatial radius $r$ of $p_i$: for any newly added abnormal point $p_j$ within that radius, assume its coordinates are $(x_j, y_j, z_j)$ and its semantic category is $c_j$, so that it is expressed as $(x_j, y_j, z_j, c_j)$; if the spatial-semantic joint distance $SDis(p_i, p_j)$ between the two points satisfies:

$$SDis(p_i, p_j) = w_d \cdot Dis(p_i, p_j) + w_s \cdot Sclass(p_i, p_j) \le \varepsilon_{sd}$$

the newly added abnormal point $p_j$ is considered a neighbor of the newly added abnormal point $p_i$; if the number of newly added abnormal points within the spatial radius $r$ of $p_i$ satisfying the above formula is greater than or equal to minS, the point $p_i$ is determined to be a core abnormal point, and otherwise a non-core abnormal point;

where $Dis$ is the Euclidean spatial distance between the two points and $Sclass$ is the semantic distance between the two points, respectively expressed as:

$$Dis(p_i, p_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}, \qquad Sclass(p_i, p_j) = \begin{cases} 0, & c_i = c_j \\ 1, & c_i \neq c_j \end{cases}$$

where $w_d$ is the spatial distance weight, $w_s$ is the semantic distance weight, and $\varepsilon_{sd}$ is the spatial-semantic joint distance threshold, all of which are empirical values;
step (4.1) is executed for all newly added abnormal points until every newly added abnormal point has been confirmed as either a core abnormal point or a non-core abnormal point; core abnormal points undergo subsequent processing, while non-core abnormal points are directly discarded;
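A brute-force sketch of sub-step (4.1) with the joint distance as reconstructed above; the radius, weights, and thresholds are illustrative stand-ins for the empirical values:

```python
import numpy as np

def joint_distance(p, q, w_d=1.0, w_s=0.5):
    """Spatial-semantic joint distance: weighted Euclidean distance plus a
    0/1 semantic term (0 when the classes match, 1 otherwise)."""
    dis = np.linalg.norm(np.asarray(p[:3]) - np.asarray(q[:3]))
    sclass = 0.0 if p[3] == q[3] else 1.0
    return w_d * dis + w_s * sclass

def find_core_points(points, r=0.5, min_s=5, eps_sd=0.6):
    """Indices of points having at least min_s joint-distance neighbors
    within spatial radius r (O(n^2) scan, kept simple for clarity)."""
    core = []
    for i, p in enumerate(points):
        neighbors = 0
        for j, q in enumerate(points):
            if (i != j
                    and np.linalg.norm(np.asarray(p[:3]) - np.asarray(q[:3])) <= r
                    and joint_distance(p, q) <= eps_sd):
                neighbors += 1
        if neighbors >= min_s:
            core.append(i)
    return core
```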
(4.2) clustering the core abnormal points to form clusters; specifically: starting from any core abnormal point, the neighbor core abnormal points within its spatial radius $r$ are merged into a cluster with it; from each of these neighbor core abnormal points, further neighbor core abnormal points are searched for and merged in the same way, until no more neighbor core abnormal points can be found; the core abnormal points merged in this way form one cluster, and the cluster category is the semantic category of the initial core abnormal point; step (4.2) is repeated from any remaining core abnormal point until no new cluster is formed;
(4.3) the remaining core abnormal points that belong to no cluster are outliers and are directly discarded.
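The expansion in (4.2) is DBSCAN-style region growing; a sketch reusing `joint_distance` and the core indices from the previous sketch (parameter values remain illustrative):

```python
from collections import deque

import numpy as np

def grow_clusters(points, core, r=0.5, eps_sd=0.6):
    """Breadth-first expansion over core abnormal points; each cluster keeps
    the semantic category of its seed point, as required by (4.2)."""
    unvisited = set(core)
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        members, queue = [seed], deque([seed])
        while queue:
            cur = queue.popleft()
            for j in list(unvisited):
                p, q = points[cur], points[j]
                if (np.linalg.norm(np.asarray(p[:3]) - np.asarray(q[:3])) <= r
                        and joint_distance(p, q) <= eps_sd):
                    unvisited.discard(j)
                    members.append(j)
                    queue.append(j)
        clusters.append((points[seed][3], members))   # (seed semantic category, indices)
    return clusters
```

Singleton clusters produced here correspond to the non-clustered core abnormal points that sub-step (4.3) discards.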
6. The abnormal target detection method based on continuous time-series point cloud superposition according to claim 1, wherein in step five, for each cluster, the volume of the cluster is calculated from the two core abnormal points with the largest spatial distance; assume the coordinates of the two core abnormal points with the largest spatial distance are $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$, and their semantic category is cls, so that they are expressed as $(x_1, y_1, z_1, cls)$ and $(x_2, y_2, z_2, cls)$; the volume $V$ of the cluster is expressed as:

$$V = |x_1 - x_2| \cdot |y_1 - y_2| \cdot |z_1 - z_2|$$

and the center point coordinates $(x_c, y_c, z_c)$ are expressed as:

$$(x_c, y_c, z_c) = \left( \frac{x_1 + x_2}{2},\; \frac{y_1 + y_2}{2},\; \frac{z_1 + z_2}{2} \right)$$

if $V$ is greater than a threshold $V_0$, the cluster is considered an abnormal target, where $V_0$ is the upper limit of the volume of small target objects that would otherwise cause interference and is an empirical value; the abnormal target detection information is then: the target category is cls, the distance from the solid-state laser radar mounting origin is $\sqrt{x_c^2 + y_c^2 + z_c^2}$ meters, and the included angle with the abscissa axis of the solid-state laser radar coordinate system is $\arctan(y_c / x_c)$.
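A sketch of step five under the reconstruction above; the volume threshold `v0` stands in for the empirical value $V_0$:

```python
import math

def detect_targets(points, clusters, v0=0.01):
    """For each cluster, compute a box volume from its two most distant
    members and emit detection info for clusters exceeding the threshold."""
    detections = []
    for cls, members in clusters:
        # Pair of members with the largest Euclidean distance (brute force).
        p1_idx, p2_idx = max(
            ((a, b) for a in members for b in members),
            key=lambda ab: sum((points[ab[0]][k] - points[ab[1]][k]) ** 2
                               for k in range(3)))
        p1, p2 = points[p1_idx], points[p2_idx]
        volume = abs(p1[0] - p2[0]) * abs(p1[1] - p2[1]) * abs(p1[2] - p2[2])
        if volume > v0:
            xc, yc, zc = [(p1[k] + p2[k]) / 2 for k in range(3)]
            detections.append({
                "class": cls,
                "range_m": math.sqrt(xc**2 + yc**2 + zc**2),  # distance to lidar origin
                "azimuth_rad": math.atan2(yc, xc),            # angle to the abscissa axis
            })
    return detections
```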
7. An abnormal target detection device based on continuous time-series point cloud superposition, comprising a memory and one or more processors, wherein the memory stores executable code, and the processors, when executing the executable code, implement the steps of the abnormal target detection method based on continuous time-series point cloud superposition according to any one of claims 1 to 6.
8. A computer-readable storage medium on which a program is stored, wherein the program, when executed by a processor, implements the steps of the abnormal target detection method based on continuous time-series point cloud superposition according to any one of claims 1 to 6.
CN202211145212.XA 2022-09-20 2022-09-20 Abnormal target detection method and device based on continuous time sequence point cloud superposition Active CN115272493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211145212.XA CN115272493B (en) 2022-09-20 2022-09-20 Abnormal target detection method and device based on continuous time sequence point cloud superposition

Publications (2)

Publication Number Publication Date
CN115272493A (en) 2022-11-01
CN115272493B CN115272493B (en) 2022-12-27


Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108389221A (en) * 2018-01-30 2018-08-10 深圳市菲森科技有限公司 The scan method and system of 3-D view
WO2018176440A1 (en) * 2017-04-01 2018-10-04 深圳市速腾聚创科技有限公司 Method for fusing point cloud and planar image, intelligent device and non-volatile computer-readable storage medium
US20190051056A1 (en) * 2017-08-11 2019-02-14 Sri International Augmenting reality using semantic segmentation
CN109961440A (en) * 2019-03-11 2019-07-02 重庆邮电大学 A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map
CN111652179A (en) * 2020-06-15 2020-09-11 东风汽车股份有限公司 Semantic high-precision map construction and positioning method based on dotted line feature fusion laser
CN111798475A (en) * 2020-05-29 2020-10-20 浙江工业大学 Indoor environment 3D semantic map construction method based on point cloud deep learning
CN112348867A (en) * 2020-11-18 2021-02-09 南通市测绘院有限公司 Method and system for constructing city high-precision three-dimensional terrain based on LiDAR point cloud data
CN113111887A (en) * 2021-04-26 2021-07-13 河海大学常州校区 Semantic segmentation method and system based on information fusion of camera and laser radar
CN113128348A (en) * 2021-03-25 2021-07-16 西安电子科技大学 Laser radar target detection method and system fusing semantic information
CN113393514A (en) * 2021-06-11 2021-09-14 中国科学院自动化研究所 Three-dimensional disordered point cloud data processing method, system and equipment
CN114266960A (en) * 2021-12-01 2022-04-01 国网智能科技股份有限公司 Point cloud information and deep learning combined obstacle detection method
WO2022141912A1 (en) * 2021-01-01 2022-07-07 杜豫川 Vehicle-road collaboration-oriented sensing information fusion representation and target detection method
CN114724120A (en) * 2022-06-10 2022-07-08 东揽(南京)智能科技有限公司 Vehicle target detection method and system based on radar vision semantic segmentation adaptive fusion
CN114758504A (en) * 2022-06-13 2022-07-15 之江实验室 Online vehicle overspeed early warning method and system based on filtering correction
CN114782519A (en) * 2022-03-11 2022-07-22 陕西天视致远航空技术有限公司 Method, device and medium for positioning spherical or quasi-spherical object based on point cloud information
CN114862901A (en) * 2022-04-26 2022-08-05 青岛慧拓智能机器有限公司 Road-end multi-source sensor fusion target sensing method and system for surface mine
CN114937081A (en) * 2022-07-20 2022-08-23 之江实验室 Internet vehicle position estimation method and device based on independent non-uniform incremental sampling

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
QIAN HUANG et al.: "Recognition of Key Targets of Locomotive Bottom Based on 3D Point Cloud Data", 2017 Far East NDT New Technology & Application Forum (FENDT) *
WU Qingzhu: "Research on a Multi-Target Recognition Algorithm Based on the Fusion of Vision and Lidar Data", Wanfang Data Knowledge Service Platform *
YUE Wentao: "Research on 3D Object Detection Algorithms Based on Image and Point Cloud Fusion", China Master's Theses Full-text Database, Information Science and Technology *
ZHANG Tingting et al.: "A Survey of Deep-Learning-Based Image Object Detection Algorithms", Telecommunications Science *
WANG Dongmin et al.: "A Depth Image Acquisition Method Fusing Vision and Laser Point Clouds", Journal of Military Transportation University *
ZHAO Yongjie: "Research on Visual SLAM Algorithms Fusing Semantic Information in Dynamic Environments", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117037120A (en) * 2023-10-09 2023-11-10 之江实验室 Target perception method and device based on time sequence selection
CN117037120B (en) * 2023-10-09 2024-02-09 之江实验室 Target perception method and device based on time sequence selection
CN118154588A (en) * 2024-05-09 2024-06-07 中铁七局集团第三工程有限公司 Large-diameter pressure steel pipe quality detection method and system based on contour extraction

Also Published As

Publication number Publication date
CN115272493B (en) 2022-12-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant