CN115272493A - Abnormal target detection method and device based on continuous time sequence point cloud superposition - Google Patents
Abnormal target detection method and device based on continuous time sequence point cloud superposition
- Publication number
- CN115272493A (application CN202211145212.XA)
- Authority
- CN
- China
- Prior art keywords
- point
- abnormal
- point cloud
- points
- mapping
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000002159 abnormal effect Effects 0.000 title claims abstract description 185
- 238000001514 detection method Methods 0.000 title claims abstract description 45
- 238000013507 mapping Methods 0.000 claims abstract description 81
- 238000000034 method Methods 0.000 claims abstract description 63
- 239000011159 matrix material Substances 0.000 claims description 40
- 230000009466 transformation Effects 0.000 claims description 28
- 238000012545 processing Methods 0.000 claims description 12
- 230000011218 segmentation Effects 0.000 claims description 7
- 239000007787 solid Substances 0.000 claims description 6
- 230000003252 repetitive effect Effects 0.000 claims description 5
- 230000008859 change Effects 0.000 claims description 4
- 230000002547 anomalous effect Effects 0.000 claims description 2
- 230000010354 integration Effects 0.000 claims description 2
- 230000000694 effects Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 238000012544 monitoring process Methods 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 241001465754 Metazoa Species 0.000 description 3
- 238000013135 deep learning Methods 0.000 description 3
- 238000004590 computer program Methods 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 230000008447 perception Effects 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000002265 prevention Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The invention discloses an abnormal target detection method and device based on continuous time sequence point cloud superposition. A time-series superposition method maps and superposes consecutive point cloud data frames to generate a background depth map, converting unbounded, unordered point clouds into a fixed-size, ordered range map. Abnormal point clouds are identified from the difference between the depth values of their mapping points and the depth values of the background depth map at the corresponding coordinates, and semantic category information is added to the abnormal point clouds using the instance-region categories of a spatio-temporally aligned image. Combining the spatial and semantic distances between points improves the anti-interference capability of the clustering, yielding accurate, independent point cloud target clusters from which the detection information of the targets is computed. By constructing the background depth map through continuous time-series superposition, the method overcomes the low accuracy of detecting abnormal targets of varying size and distance directly from point clouds or images.
Description
Technical Field
The invention relates to the technical field of intelligent perception, in particular to an abnormal target detection method and device based on continuous time sequence point cloud superposition.
Background
As sensor costs fall, more and more security monitoring scenarios implement anomaly detection and alarming by installing sensors. Monitoring and identifying abnormal intrusion targets, including pedestrians, non-motor vehicles and animals, through fused multi-sensor perception within the prevention and control range of a park is an important application of sensor equipment for safety management of unattended monitoring areas. A low-cost solid-state laser radar is usually used to perceive the target position, but because the size and distance of an abnormal target are not fixed, directly applying a deep learning method to detect and identify targets from radar point cloud data yields low accuracy. Meanwhile, the detection range of a solid-state laser radar is wide, so the resulting point cloud covers a large spatial range and is unordered, which makes detecting abnormal point cloud targets against the background point cloud difficult. On the other hand, a single camera cannot obtain depth information and therefore cannot accurately locate the target. As a result, the accuracy of detecting and identifying abnormal targets directly from point clouds or images alone is low when the target size and distance are not fixed.
Therefore, aiming at the problem that the solid-state laser radar and the camera cannot independently and accurately detect the abnormal target, the invention provides a continuous time sequence point cloud superposition method, which is used for constructing a background depth map to detect the abnormal point cloud target and realizing high-precision detection of the abnormal target.
Disclosure of Invention
The invention aims to provide an abnormal target detection method and device based on continuous time sequence point cloud superposition that overcome the defects of the prior art. A background depth map is generated by mapping and superposing consecutive point cloud data frames, converting large-range, unordered point clouds into a fixed-range, ordered depth map. Abnormal point clouds are identified from the difference between the abnormal target depth values and the background depth map depth values, semantic category information is added to the abnormal point clouds by fusing image semantic information, and finally abnormal targets are clustered and identified by combining the spatial and semantic distances between points. This solves the problem of low accuracy when detecting and identifying abnormal targets of unfixed size and distance directly from point clouds or images.
The purpose of the invention is realized by the following technical scheme: in a first aspect, the invention provides an abnormal target detection method based on continuous time sequence point cloud superposition, which comprises the following steps:
the method comprises the following steps: collecting a plurality of frames of point cloud data frames with continuous time sequence by a solid-state laser radar, mapping the point clouds in all the point cloud data frames to a depth map by using an affine transformation matrix from the point cloud data to image data, superposing the depth values of mapping points with the same mapping coordinates and calculating an average depth value, and updating the depth value of the corresponding coordinate of the depth map by using the obtained average depth value; repeating the step until the depth value before and after any coordinate of the depth map is updated does not change any more, wherein the finally updated depth map is the background depth map;
step two: acquiring a point cloud data frame of the solid-state laser radar in real time, mapping all points in the point cloud data frame to the background depth map by using the affine transformation matrix from point cloud data to image data, and judging any point in the point cloud to be a newly added abnormal point if the difference between its mapping point depth value and the depth value of the background depth map at the corresponding coordinate is greater than a threshold;
step three: acquiring an image data frame that is spatio-temporally aligned with the corresponding point cloud data frame, segmenting all target instances in the image data frame by a semantic segmentation method, mapping the newly added abnormal points to the image data frame, and adding semantic category information to each newly added abnormal point according to the semantic category of the image target instance region in which its mapping point lies;
step four: clustering all the newly added abnormal points based on the space semantic joint distance between the points to form a cluster;
step five: and calculating the volume and the center point coordinates of each cluster, identifying the cluster with the volume larger than the threshold as an abnormal target, and generating the detection information of the abnormal target.
Further, the step one includes the following steps:
(1.1) defining a blank depth map, wherein the depth value of each coordinate in the blank depth map is initialized to be 0; the size of the blank depth map is consistent with that of an image shot by a camera in space-time alignment with the solid-state laser radar;
(1.2) acquiring an affine transformation matrix from the point cloud data of the solid-state laser radar to the image data of the spatio-temporally aligned camera; specifically: controlling the time synchronization of the laser radar and camera data frames in a hardware line-control mode, jointly calibrating the camera intrinsics and the extrinsics from the laser radar coordinate system to the camera coordinate system to obtain the intrinsic and extrinsic parameter matrices, and generating the affine transformation matrix from point cloud data to image data from the obtained intrinsic and extrinsic matrices;
assuming the calibrated intrinsic matrix is K and the extrinsic matrix is T, the affine transformation matrix M from point cloud data to image data is:

M = K · T

where the intrinsic matrix K has dimension 3 × 3, the extrinsic matrix T has dimension 3 × 4, and the affine transformation matrix M has dimension 3 × 4;
(1.3) setting the solid-state laser radar to a non-repetitive scanning mode, continuously acquiring N point cloud data frames with continuous time sequence at a certain frequency, mapping the points in all the point cloud data frames to the blank depth map with the affine transformation matrix from point cloud data to image data, superposing the depth values of mapping points with the same mapping coordinates, and recording the number of superpositions; specifically: for any point in the N consecutive point cloud data frames, assume its coordinates in the point cloud are (x, y, z); mapping it with the affine transformation matrix M gives the floating-point mapping coordinates (u', v'), the depth value d, and the integer mapping coordinates (u, v) in the blank depth map, respectively as follows:

[u', v', d]^T = M · [x, y, z, 1]^T,  u = ceil(u'/d),  v = ceil(v'/d)

where ceil denotes rounding up, u' and v' are the floating-point coordinate values of the point's mapping onto the depth map, and u and v are the integer coordinate values of the mapping point obtained by dividing u' and v' by the depth value d and rounding up;
executing the mapping operation on the points in all the point cloud data frames, superposing the depth values of the mapping points with the same integral coordinate values in an adding mode, and recording the superposition times of the depth values under all the coordinates;
(1.4) for each mapping point coordinate, calculating the average depth value of the coordinate from the superposed depth value and the superposition count at that coordinate, and updating the depth value of the corresponding coordinate of the blank depth map with the obtained average depth value; specifically: assuming the superposed depth value at a certain mapping point coordinate is SumDepth and the superposition count is NumD, the average depth value Depth of that coordinate is expressed as:

Depth = SumDepth / NumD
calculating an average depth value for all mapping point coordinates, and updating the depth value of the blank depth map at the corresponding coordinate by using the obtained average depth value;
and (1.5) repeating the step (1.3) and the step (1.4) until the depth value of any coordinate of the blank depth map is not changed before and after updating, wherein the blank depth map updated for the last time is the background depth map.
Further, in the second step, for any point in the point cloud, assume its coordinates in the point cloud are (x, y, z), the coordinates of its mapping point in the background depth map are (u, v), and its depth value is d; the depth value of the background depth map at the corresponding coordinate (u, v) is D(u, v). If

|D(u, v) − d| > δ

is satisfied, the point is judged to be a newly added abnormal point, i.e. a point belonging to an abnormal target rather than to the background, where δ is an empirical value obtained by observing the difference in depth values between abnormal target points and background points.
Further, the third step includes the following steps:
(3.1) segmenting all target examples in the image data frame by adopting a Mask-RCNN-based semantic segmentation method;
(3.2) for any newly added abnormal point, assuming its coordinates in the point cloud are (x, y, z), mapping it with the affine transformation matrix M from point cloud data to image data gives the mapping point coordinates (u, v) in the image data frame, expressed as follows:

[u', v', d]^T = M · [x, y, z, 1]^T,  u = ceil(u'/d),  v = ceil(v'/d)

where ceil denotes rounding up, u and v are the integer coordinate values of the mapping point of the point in the image data, and d is the depth value of the mapping point;
(3.3) let PixelCols be the set of image coordinate points contained in a certain target instance; if (u, v) ∈ PixelCols, semantic category information is added to the newly added abnormal point, which is then expressed as (x, y, z, cls), where cls is the semantic category of the target instance.
Further, the fourth step includes the following steps:
(4.1) searching for and determining the core abnormal points; specifically: set the spatial radius to r and the minimum number of neighbours to minS; for any newly added abnormal point P_i, assume its coordinates are (x_i, y_i, z_i) and its semantic category is c_i, so it is expressed as P_i = (x_i, y_i, z_i, c_i); traverse all newly added abnormal points within the spatial radius r of P_i; for any newly added abnormal point P_j within that radius, assume its coordinates are (x_j, y_j, z_j) and its semantic category is c_j, so it is expressed as P_j = (x_j, y_j, z_j, c_j); if the space semantic joint distance SDis(P_i, P_j) between the two points satisfies:

SDis(P_i, P_j) = w1 · Dis(P_i, P_j) + w2 · Sclass(P_i, P_j) ≤ ds

then the newly added abnormal point P_j is considered a neighbour of the newly added abnormal point P_i; if the number of newly added abnormal points within the spatial radius r of P_i that satisfy the above formula is not less than minS, P_i is determined to be a core abnormal point, otherwise it is a non-core abnormal point;

where Dis is the Euclidean spatial distance between the two points and Sclass is the semantic distance between the two points, respectively expressed as:

Dis(P_i, P_j) = sqrt((x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)²),  Sclass(P_i, P_j) = 0 if c_i = c_j, and 1 otherwise

where w1 is the spatial distance weight, w2 is the semantic distance weight, and ds is the space semantic joint distance threshold; all are empirical values;
executing the step (4.1) on all the newly added abnormal points until all the newly added abnormal points are confirmed to be core abnormal points or not; performing subsequent processing on the core abnormal point, and directly discarding the non-core abnormal point;
(4.2) clustering the core abnormal points to form clusters; specifically: starting from any core abnormal point, the neighbouring core abnormal points within its spatial radius r are gathered into one class, and from any of those neighbouring core abnormal points the search for neighbouring core abnormal points continues and they are gathered into the same class, until no further neighbouring core abnormal point can be found; the core abnormal points gathered in this way form one cluster, and the cluster category is the semantic category of the initial core abnormal point; repeat step (4.2) from any remaining core abnormal point until no new cluster is formed;
(4.3) the remaining core abnormal points that are not clustered are treated as outliers and discarded directly.
Further, in the fifth step, for each cluster, the volume of the cluster is calculated from the two core abnormal points with the largest spatial distance; assume the coordinates of the two core abnormal points with the largest spatial distance are (x1, y1, z1) and (x2, y2, z2) and their semantic category is cls, so they are expressed as (x1, y1, z1, cls) and (x2, y2, z2, cls); the volume V of the cluster is expressed as:

V = |x1 − x2| · |y1 − y2| · |z1 − z2|

If V is greater than a threshold Vmin, the cluster is considered an abnormal target, where Vmin is the upper limit of the volume of small interfering objects and is an empirical value. The abnormal target detection information is then: the target category is cls, the distance from the solid-state laser radar mounting origin is D metres, and the included angle with the abscissa (x-axis) of the solid-state laser radar coordinate system is θ, where D and θ are computed from the coordinates of the cluster centre point.
In a second aspect, the invention provides an abnormal target detection device based on continuous time-series point cloud superposition, which comprises a memory and one or more processors, wherein the memory stores executable codes, and the processors are used for implementing the steps of the abnormal target detection method based on continuous time-series point cloud superposition when executing the executable codes.
In a third aspect, the present invention provides a computer-readable storage medium, on which a program is stored, which, when executed by a processor, implements the steps of the above-described abnormal object detection method based on continuous time-series point cloud overlay.
The invention has the following beneficial effects: it solves the low accuracy of detecting and identifying abnormal targets of variable size and distance with a deep learning method when sensing them with a solid-state laser radar. A point-cloud-based background depth map is constructed by continuous time-series superposition, abnormal point clouds are detected from the difference between the mapped depth values of abnormal target points and the depth values of the background depth map, and adding semantic category information to the abnormal point clouds improves the clustering accuracy, so that the clusters and detection information of abnormal targets are generated accurately, providing reliable technical support for unattended safety monitoring of a park.
Drawings
Fig. 1 is a flowchart of an abnormal target detection method based on continuous time-series point cloud overlay according to the present invention.
FIG. 2 is a background depth map based on point cloud constructed by the continuous time sequence superposition method.
FIG. 3 is a graph of the effect of the point cloud containing abnormal objects of the present invention mapping to a background depth map.
FIG. 4 is a graph of the effect of the present invention of mapping a point cloud containing an anomalous target onto a spatio-temporally aligned image.
Fig. 5 is a structural diagram of an abnormal object detection device based on continuous time-series point cloud superposition according to the present invention.
Detailed Description
The objects and effects of the present invention will become more apparent from the following detailed description of the present invention with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides an abnormal target detection method based on continuous time sequence point cloud superposition, which addresses the low detection reliability that results when abnormal targets of varying size and distance are sensed by a solid-state laser radar or a camera and detected directly with a deep learning method.
In the method, a point-cloud-based background depth map is constructed by continuous time-series point cloud superposition, abnormal point clouds are detected from the difference between their mapped depth values and the depth values of the background depth map, and image semantic category information is added to the abnormal point clouds by introducing time-synchronized image features, increasing the information content of the point cloud. Combining the spatial and semantic distances between points in a density clustering method effectively improves the anti-interference capability of clustering the abnormal point clouds and forms accurate, independent target clusters, from which accurate detection information of the targets is generated.
As shown in fig. 1, the present invention specifically includes the following steps:
the method comprises the following steps: mapping point clouds in the continuous time sequence point cloud data frames to a depth map, calculating an average depth value, and updating the depth map to obtain a background depth map, wherein the method specifically comprises the following steps: the method comprises the steps of setting a solid-state laser radar to be in a non-repetitive scanning mode, collecting a plurality of frames of point cloud data frames with continuous time sequence according to a certain frequency, mapping point clouds in all the point cloud data frames to a depth map by using an affine transformation matrix from the point cloud data to image data, superposing depth values of mapping points with the same mapping coordinates, calculating an average depth value, and updating the depth value of a coordinate corresponding to the depth map by using the obtained average depth value. The process is repeated until the depth value before and after any coordinate of the depth map is updated does not change any more, and the finally updated depth map is the background depth map.
The number of consecutive point cloud data frames is determined by the scanning characteristics of the selected solid-state laser radar device. In this embodiment, the selected solid-state laser radar is a Livox Avia set to non-repetitive scanning mode; with a scanning period of 10 Hz, 4 consecutively acquired point cloud frames per round and about 10 update rounds of the background map are sufficient to construct a stable background depth map, i.e. the depth value at every coordinate of the background depth map no longer changes between updates.
A solid-state laser radar and a camera mounted on a roadside lamp post in an unattended monitoring area of a park collect spatio-temporally aligned point cloud data and image data. For the collected point cloud data, the point-cloud-based background depth map constructed by the continuous time-series point cloud superposition method is shown in fig. 2. Its size is consistent with that of the image data, the generated depth map mapping points are very dense, and adding further point cloud time-series frames no longer changes the depth values of the background depth map.
Further, the step one includes the following steps:
(1.1) defining a blank depth map with the size of W x H, and initializing the depth value at each coordinate in the blank depth map to 0. And the size of the blank depth map is consistent with the size of an image shot by a camera aligned with the solid-state laser radar in space-time.
And (1.2) acquiring an affine transformation matrix from the solid-state laser radar point cloud data to the camera image data aligned in time and space.
The method comprises the following specific steps: controlling the time synchronization of data frames of a laser radar and a camera in a hardware line control mode, carrying out combined calibration on internal parameters of the camera and external parameters from a laser radar coordinate system to a camera coordinate system to obtain internal parameters and external parameters matrixes, and generating an affine transformation matrix from point cloud data to image data according to the obtained internal parameters and external parameters;
assume the calibrated intrinsic matrix is K and the extrinsic matrix is T; the affine transformation matrix M from point cloud data to image data is:

M = K · T

where the intrinsic matrix K has dimension 3 × 3, the extrinsic matrix T has dimension 3 × 4, and the affine transformation matrix M has dimension 3 × 4;
and (1.3) setting the solid-state laser radar to be in a non-repetitive scanning mode, continuously acquiring N frames of point cloud data frames with continuous time sequence according to a certain frequency, respectively mapping point clouds in all the point cloud data frames to a blank depth map by using an affine transformation matrix from the point cloud data to image data, superposing the depth values of mapping points with the same mapping coordinates, and recording the superposition times of the depth values.
The method specifically comprises the following steps: for any one point in N frames of continuous time sequence point cloud data frames, the coordinate of the point cloud in the point cloud is assumed to beMapping the point cloud data to the affine transformation matrix of the image data to the coordinates of the mapping points in the blank depth map asDepth value ofWhich may be represented as follows:
wherein ceil means rounding up,respectively representing floating point coordinate values of mapping points of the point cloud to the depth map,is thatDivided by depth valueThe integral coordinate value of the mapping point after the upward integration is carried out;
and performing the mapping operation on the points in all the point cloud data frames, superposing the depth values of the mapping points with the same integral coordinate values in an adding mode, and recording the superposition times of the depth values under all the coordinates.
And (1.4) for each mapping point coordinate, calculating the average depth value of the coordinate by using the superposition depth value and the superposition times under the coordinate, and updating the depth value of the corresponding coordinate of the blank depth map by using the obtained average depth value.
The method specifically comprises the following steps: let the coordinates of a mapping point beThe superposition depth value is SumDepth, the superposition times is NumD, and the coordinate of the position isCan be expressed as:
and calculating the average depth value of all mapping point coordinates, and updating the depth value of the blank depth map at the corresponding coordinate by using the obtained average depth value.
And (1.5) repeating the step (1.3) and the step (1.4) until the depth value of any coordinate of the blank depth map is not changed before and after updating, wherein the blank depth map updated at the last time is the background depth map.
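A sketch of steps (1.4)–(1.5) under the same assumptions, reusing accumulate_frame from the previous sketch: accumulated depths are averaged per pixel, and the loop stops once no pixel changes between updates. The callable get_frames, which is assumed to return the next N consecutive point cloud frames, is hypothetical.

```python
import numpy as np

def build_background_depth_map(get_frames, M, width, height, n_frames=4, eps=1e-3):
    sum_depth = np.zeros((height, width), dtype=np.float64)
    num_hits = np.zeros((height, width), dtype=np.float64)
    depth_map = np.zeros_like(sum_depth)                       # blank depth map, all zeros
    while True:
        for points in get_frames(n_frames):                    # N consecutive time-series frames
            accumulate_frame(points, M, sum_depth, num_hits)   # step (1.3): superpose depths
        updated = np.where(num_hits > 0, sum_depth / np.maximum(num_hits, 1), 0.0)  # step (1.4)
        if np.allclose(updated, depth_map, atol=eps):          # step (1.5): values stopped changing
            return updated                                     # this is the background depth map
        depth_map = updated
```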
To obtain the affine transformation matrix from the solid-state laser radar point cloud data to the spatio-temporally aligned camera image data, the solid-state laser radar and camera mounted on a roadside lamp post in the park are first time-synchronized at the data-frame level through hardware line control, and the camera intrinsics and the extrinsics from the laser radar coordinate system to the camera coordinate system are calibrated jointly. Camera intrinsic calibration generally uses a checkerboard: because its black-and-white pattern is sharp and its corner points are easy to find, checkerboard data are collected at multiple angles and distances, generating several groups of two-dimensional image corner coordinates and three-dimensional spatial corner coordinates, and the intrinsic parameters are solved by least squares. For the joint calibration of the extrinsics from the solid-state laser radar coordinate system to the camera coordinate system, several whiteboards of different sizes and distances are placed, multiple segments of time-aligned image data and point cloud data are recorded with an ROS tool, and the four corner points of each whiteboard are annotated in every time-aligned frame of image and point cloud data. For each group of corner points, the extrinsic parameters are refined iteratively with a BP neural network until the mapping deviation produced by the extrinsics stabilizes within a threshold range. With the calibrated camera intrinsics and the extrinsics from the laser radar coordinate system to the camera coordinate system, the three-dimensional point cloud can be mapped onto the two-dimensional image, realizing the conversion from point cloud space to image space.
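As a hedged illustration of the checkerboard intrinsic calibration described above, the sketch below uses OpenCV's standard routines; the board size, square size, and image list are assumptions, and the whiteboard-based extrinsic refinement with a BP neural network is not reproduced here.

```python
import cv2
import numpy as np

def calibrate_intrinsics(image_paths, board_size=(9, 6), square_size=0.025):
    """Estimate the camera matrix K and distortion coefficients from checkerboard images."""
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size
    obj_points, img_points, shape = [], [], None
    for path in image_paths:
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        ok, corners = cv2.findChessboardCorners(gray, board_size)   # corner detection
        if ok:
            obj_points.append(objp)                                 # 3D corner coordinates
            img_points.append(corners)                              # 2D image corner coordinates
            shape = gray.shape[::-1]
    # least-squares solution for the camera matrix and distortion coefficients
    _, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, shape, None, None)
    return K, dist
```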
Step two: comparing the depth value of the point cloud obtained in real time with the depth value of the background depth map, and judging whether the point is a newly added abnormal point, specifically comprising the following steps: and acquiring a point cloud data frame of the solid-state laser radar in real time, mapping all point clouds in the point cloud data frame to a background depth map by using an affine transformation matrix from the point cloud data to image data, and judging that any point in the point clouds is a newly added abnormal point if the difference between the mapping point depth value and the depth value of the background depth map under the corresponding coordinate is greater than a threshold value.
The point cloud data frames of the solid-state laser radar are acquired in real time at 10 Hz, i.e. one point cloud frame is collected every 100 ms.
The method specifically comprises the following steps: for any point in the point cloud, the coordinate of the point in the point cloud is assumed to beThe coordinates of the mapping points mapped into the background depth map areDepth value ofCorresponding to the background depth map coordinates ofAt a depth value ofIf it satisfiesThen the point is determined to be a new outlier, i.e., the point is likely to be a point in the outlier target, not a point in the background, whereThe empirical value can be obtained by observing the difference between the depth values of the abnormal target point and the background point.
Fig. 3 shows the effect of mapping a point cloud containing an abnormal target onto the background depth map. It can be observed that after the abnormal target's point cloud is mapped onto the background depth map, occlusion occurs at the corresponding positions, so the depth value of the abnormal target differs considerably from the depth value of the background depth map at the corresponding coordinate. Based on this observation, setting the depth-difference threshold δ to 0.5 m is sufficient to distinguish abnormal point clouds.
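A sketch of step two under the assumptions above: each real-time point is projected with the same affine matrix M and flagged as a newly added abnormal point when its mapped depth differs from the background depth map by more than the 0.5 m threshold. The returned tuple layout (point, pixel coordinates, depth) is an illustrative choice.

```python
import numpy as np

def detect_abnormal_points(points, M, background, depth_thresh=0.5):
    """Return points whose mapped depth differs from the background depth map
    by more than depth_thresh (metres)."""
    points = np.asarray(points, dtype=float)
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    proj = homog @ M.T                                       # rows are [u', v', d]
    h, w = background.shape
    abnormal = []
    for (u_f, v_f, dd), pt in zip(proj, points):
        if dd <= 0:
            continue
        u, v = int(np.ceil(u_f / dd)), int(np.ceil(v_f / dd))
        if 0 <= u < w and 0 <= v < h and abs(background[v, u] - dd) > depth_thresh:
            abnormal.append((pt, (u, v), dd))                # point, pixel coords, mapped depth
    return abnormal
```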
Step three: acquiring an image data frame aligned with a point cloud space-time, segmenting a target instance, and adding semantic category information for a newly added abnormal point, wherein the method specifically comprises the following steps: the method comprises the steps of obtaining an image data frame which is aligned with a corresponding point cloud data frame in a time-space mode through a camera, segmenting all target examples in the corresponding image data frame by adopting a semantic segmentation method, respectively mapping newly increased abnormal points to the image data frame, and adding semantic type information to the newly increased abnormal points according to the semantic type of an image target example area where the mapping points are located. The method comprises the following specific steps:
and (3.1) segmenting all target examples in the corresponding image data frame by adopting a Mask-RCNN-based semantic segmentation method.
(3.2) For any newly added abnormal point, assume its coordinates in the point cloud are (x, y, z); mapping it with the affine transformation matrix M from point cloud data to image data gives the mapping point coordinates (u, v) in the image data frame, which can be expressed as follows:

[u', v', d]^T = M · [x, y, z, 1]^T,  u = ceil(u'/d),  v = ceil(v'/d)

where ceil means rounding up, u and v are the integer coordinate values of the mapping point of the point in the image data, and d is the depth value of the mapping point.

(3.3) Let PixelCols be the set of image coordinate points contained in a certain target instance; if (u, v) ∈ PixelCols, semantic category information is added to the newly added abnormal point, which can then be expressed as (x, y, z, cls), where cls is the semantic category of the target instance.
As shown in fig. 4, is an effect diagram of a point cloud containing an abnormal object mapped onto a spatio-temporally aligned image. In the present embodiment, it is considered that the abnormal object includes a pedestrian, a non-motor vehicle, an animal, other movable obstacle, and the like. The image is segmented through a Mask-RCNN semantic segmentation method, instance objects such as backgrounds, pedestrians, non-motor vehicles, animals and other movable obstacles in the image are segmented, all instance areas (including the backgrounds) in an image data frame are endowed with semantic category information, and the backgrounds in the image comprise static category objects such as sky, green plants, road surfaces and buildings. It can be observed that abnormal targets such as pedestrians and other movable obstacles appear in fig. 4, and semantic category information can be added to the point clouds falling in the corresponding areas according to semantic categories of the example target areas corresponding to the images mapped by the abnormal target point clouds.
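The per-point labelling in step three could look like the sketch below. Here masks (an H × W integer map of instance ids, −1 for background) and instance_classes (id to class name) are assumed outputs of the Mask-RCNN segmentation, not an interface defined by the patent; the pixel coordinates come from the abnormal-point detection sketch above.

```python
def label_abnormal_points(abnormal, masks, instance_classes):
    """abnormal: output of detect_abnormal_points; masks: (H, W) instance-id map,
    -1 where no instance; instance_classes: mapping id -> semantic class."""
    labelled = []
    for pt, (u, v), _depth in abnormal:
        inst_id = int(masks[v, u])
        if inst_id >= 0:                                 # mapping point falls inside an instance region
            cls = instance_classes[inst_id]              # semantic category of that instance
            labelled.append((pt[0], pt[1], pt[2], cls))  # point with added semantic category
    return labelled
```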
Step four: and clustering the newly added abnormal points based on the space semantic united distance between the points to form a cluster.
Further, the fourth step includes the following steps:
(4.1) For all newly added abnormal points, search for and determine the core abnormal points. Specifically: set the spatial radius to r and the minimum number of neighbours to minS. For any newly added abnormal point P_i, assume its coordinates are (x_i, y_i, z_i) and its semantic category is c_i, so it is expressed as P_i = (x_i, y_i, z_i, c_i). Traverse all newly added abnormal points within the spatial radius r of P_i. For any newly added abnormal point P_j within that radius, assume its coordinates are (x_j, y_j, z_j) and its semantic category is c_j, so it is expressed as P_j = (x_j, y_j, z_j, c_j). If the space semantic joint distance SDis(P_i, P_j) between the two points satisfies:

SDis(P_i, P_j) = w1 · Dis(P_i, P_j) + w2 · Sclass(P_i, P_j) ≤ ds

then the newly added abnormal point P_j is considered a neighbour of the newly added abnormal point P_i. If the number of newly added abnormal points within the spatial radius r of P_i that satisfy the above formula is not less than minS, P_i is determined to be a core abnormal point, otherwise it is a non-core abnormal point.

Here Dis is the Euclidean spatial distance between the two points and Sclass is the semantic distance between the two points, which can be respectively expressed as:

Dis(P_i, P_j) = sqrt((x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)²),  Sclass(P_i, P_j) = 0 if c_i = c_j, and 1 otherwise

where w1 is the spatial distance weight, w2 is the semantic distance weight, and ds is the space semantic joint distance threshold; all are empirical values.
Executing the step (4.1) on all the newly added abnormal points until all the newly added abnormal points are confirmed to be core abnormal points or not; and (4) performing subsequent processing on the core abnormal points, and directly discarding the non-core abnormal points.
(4.2) Cluster the core abnormal points to form clusters. Specifically: starting from any core abnormal point, the neighbouring core abnormal points within its spatial radius r are gathered into one class, and from any of those neighbouring core abnormal points the search for neighbouring core abnormal points continues and they are gathered into the same class, until no further neighbouring core abnormal point can be found; the core abnormal points gathered together in this process form one cluster, and the cluster category is the semantic category of the initial core abnormal point. Repeat step (4.2) from any remaining core abnormal point until no new cluster is formed.
(4.3) The remaining core abnormal points that are not clustered are treated as outliers and discarded directly.
The spatial radius r is set to 1 m, the minimum neighbour number minS is set to 2, and the space semantic joint distance threshold ds between two points is set to 1. The weights w1 and w2 take into account not only the spatial distance between the two points but also whether their semantic categories are the same: if two points are spatially close but their semantic categories differ, the joint-distance condition is not met and the two points do not belong to the same cluster. Meanwhile, because the number of points is large, background points are easily mis-detected as abnormal target points when detecting abnormal points; introducing semantic information further eliminates such false detections of background point cloud. The chosen values of w1 and w2 emphasize the spatial distance, so spatially close points are considered more likely to belong to the same cluster.
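A sketch of the core-point test in step (4.1) with the parameter values quoted above (r = 1 m, minS = 2, ds = 1). The binary semantic distance and the specific weights w1 and w2 are assumptions made for illustration; the patent gives only their roles, not their values.

```python
def is_core_point(i, labelled, r=1.0, min_s=2, w1=0.6, w2=1.2, d_s=1.0):
    """labelled: list of (x, y, z, cls) abnormal points; True if point i is a core point."""
    xi, yi, zi, ci = labelled[i]
    neighbours = 0
    for j, (xj, yj, zj, cj) in enumerate(labelled):
        if j == i:
            continue
        dis = ((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2) ** 0.5
        if dis > r:                                  # only points inside the spatial radius r
            continue
        sclass = 0.0 if ci == cj else 1.0            # binary semantic distance (assumption)
        if w1 * dis + w2 * sclass <= d_s:            # space semantic joint distance check;
            neighbours += 1                          # with w2 > d_s, differing classes never pass
    return neighbours >= min_s
```

Clusters (step 4.2) would then be grown from the core points in the usual DBSCAN manner, expanding through neighbouring core points until none remain.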
Step five: and calculating the volume and the central point coordinates of the clustering cluster, judging whether the clustering cluster is an abnormal target, and generating detection information of the abnormal target.
Specifically, for each cluster, the volume of the cluster is calculated from the two core abnormal points with the largest spatial distance. Assume the coordinates of the two core abnormal points with the largest spatial distance are (x1, y1, z1) and (x2, y2, z2) and their semantic category is cls, so they are expressed as (x1, y1, z1, cls) and (x2, y2, z2, cls). The volume V of the cluster can then be expressed as:

V = |x1 − x2| · |y1 − y2| · |z1 − z2|

If V is greater than a threshold Vmin, the cluster is considered an abnormal target, where Vmin is the upper limit of the volume of small interfering objects and is an empirical value. The abnormal target detection information is then: the target category is cls, the distance from the solid-state laser radar mounting origin is D metres, and the included angle with the abscissa (x-axis) of the solid-state laser radar coordinate system is θ, where D and θ are computed from the coordinates of the cluster centre point.
The cluster volume threshold Vmin was set empirically to 0.008 m³. In everyday scenes, fallen leaves, litter, plastic bags, cartons and other small objects frequently interfere with the detection result, so a volume threshold is set for the clusters to avoid such false detections.
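Finally, a sketch of step five under the volume reconstruction used in this document (axis-aligned extent between the two most distant core points); the centre-point-based distance and angle, and the dictionary layout of the result, are illustrative assumptions.

```python
import math
import numpy as np

def cluster_detection(cluster_points, cls, vol_thresh=0.008):
    """cluster_points: (N, 3) core abnormal points of one cluster; cls: its semantic category."""
    pts = np.asarray(cluster_points, dtype=float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)     # pairwise squared distances
    i, j = np.unravel_index(np.argmax(d2), d2.shape)            # two most distant core points
    p1, p2 = pts[i], pts[j]
    volume = abs(p1[0] - p2[0]) * abs(p1[1] - p2[1]) * abs(p1[2] - p2[2])
    if volume <= vol_thresh:
        return None                                             # too small: treated as interference
    centre = pts.mean(axis=0)                                   # cluster centre point
    distance = float(np.linalg.norm(centre))                    # metres from the lidar mounting origin
    angle = math.degrees(math.atan2(centre[1], centre[0]))      # angle to the lidar x-axis (abscissa)
    return {"class": cls, "volume": volume, "distance_m": distance, "angle_deg": angle}
```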
Corresponding to the embodiment of the abnormal target detection method based on continuous time sequence point cloud superposition, the invention also provides an embodiment of an abnormal target detection device based on continuous time sequence point cloud superposition.
Referring to fig. 5, an abnormal target detection apparatus based on continuous time-series point cloud overlay according to an embodiment of the present invention includes a memory and one or more processors, where the memory stores executable codes, and when the processors execute the executable codes, the abnormal target detection apparatus based on continuous time-series point cloud overlay is used to implement the abnormal target detection method based on continuous time-series point cloud overlay in the foregoing embodiment.
The embodiment of the abnormal target detection device based on continuous time-series point cloud superposition can be applied to any equipment with data processing capability, such as computers and other equipment or devices. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. The software implementation is taken as an example, and as a logical device, the device is formed by reading corresponding computer program instructions in the nonvolatile memory into the memory for running through the processor of any device with data processing capability. From a hardware aspect, as shown in fig. 5, a hardware structure diagram of an arbitrary device with data processing capability where an abnormal object detection apparatus based on continuous time-sequence point cloud superposition is located according to the present invention is shown, except for the processor, the memory, the network interface, and the nonvolatile memory shown in fig. 5, in an embodiment, the arbitrary device with data processing capability where the apparatus is located may generally include other hardware according to an actual function of the arbitrary device with data processing capability, which is not described again.
The specific details of the implementation process of the functions and actions of each unit in the above device are the implementation processes of the corresponding steps in the above method, and are not described herein again.
For the device embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the invention also provides a computer-readable storage medium, on which a program is stored, and when the program is executed by a processor, the abnormal target detection method based on continuous time sequence point cloud superposition in the above embodiment is realized.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any data processing capability device described in any of the foregoing embodiments. The computer readable storage medium may also be any external storage device of a device with data processing capabilities, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), etc. provided on the device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of any data processing capable device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the arbitrary data processing capable device, and may also be used for temporarily storing data that has been output or is to be output.
The above-described embodiments are intended to illustrate rather than to limit the invention, and any modifications and variations of the present invention are within the spirit of the invention and the scope of the appended claims.
Claims (8)
1. An abnormal target detection method based on continuous time sequence point cloud superposition is characterized by comprising the following steps:
the method comprises the following steps: collecting a plurality of frames of point cloud data frames with continuous time sequence by a solid-state laser radar, mapping the point clouds in all the point cloud data frames to a depth map by using an affine transformation matrix from the point cloud data to image data, superposing the depth values of mapping points with the same mapping coordinates and calculating an average depth value, and updating the depth value of the corresponding coordinate of the depth map by using the obtained average depth value; repeating the step until the depth value before and after any coordinate of the depth map is updated does not change any more, wherein the updated depth map is the background depth map;
step two: acquiring a point cloud data frame of the solid-state laser radar in real time, mapping all point clouds in the point cloud data frame to a background depth map by using an affine transformation matrix from the point cloud data to image data, and judging that any point in the point clouds is a newly-increased abnormal point if the difference between the mapping point depth value and the depth value of the background depth map under the corresponding coordinate is greater than a threshold value;
step three: acquiring an image data frame which is in time-space alignment with a corresponding point cloud data frame, segmenting all target instances in the image data frame by adopting a semantic segmentation method, respectively mapping newly increased abnormal points to the image data frame, and adding semantic type information to the newly increased abnormal points according to the semantic type of an image target instance region where the mapping points are located;
step four: clustering all the newly added abnormal points based on the space semantic joint distance between the points to form a cluster;
step five: and calculating the volume and the center point coordinates of each cluster, identifying the cluster with the volume larger than the threshold as an abnormal target, and generating the detection information of the abnormal target.
2. The method according to claim 1, wherein the first step comprises the following steps:
(1.1) defining a blank depth map, wherein the depth value of each coordinate in the blank depth map is initialized to be 0; the size of the blank depth map is consistent with that of an image shot by a camera aligned with the solid-state laser radar in time and space;
(1.2) acquiring an affine transformation matrix from the point cloud data of the solid-state laser radar to the image data of the spatio-temporally aligned camera; specifically: controlling the time synchronization of the laser radar and camera data frames in a hardware line-control mode, jointly calibrating the camera intrinsics and the extrinsics from the laser radar coordinate system to the camera coordinate system to obtain the intrinsic and extrinsic parameter matrices, and generating the affine transformation matrix from point cloud data to image data from the obtained intrinsic and extrinsic matrices;
assuming the calibrated intrinsic matrix is K and the extrinsic matrix is T, the affine transformation matrix M from point cloud data to image data is:

M = K · T

where the intrinsic matrix K has dimension 3 × 3, the extrinsic matrix T has dimension 3 × 4, and the affine transformation matrix M has dimension 3 × 4;
(1.3) setting the solid-state laser radar to a non-repetitive scanning mode, continuously acquiring N point cloud data frames with continuous time sequence at a certain frequency, mapping the points in all the point cloud data frames to the blank depth map with the affine transformation matrix from point cloud data to image data, superposing the depth values of mapping points with the same mapping coordinates, and recording the number of superpositions; specifically: for any point in the collected N consecutive point cloud data frames, assume its coordinates in the point cloud are (x, y, z); mapping it with the affine transformation matrix M gives the floating-point mapping coordinates (u', v'), the depth value d, and the integer mapping coordinates (u, v) in the blank depth map, respectively as follows:

[u', v', d]^T = M · [x, y, z, 1]^T,  u = ceil(u'/d),  v = ceil(v'/d)

where ceil denotes rounding up, u' and v' are the floating-point coordinate values of the point's mapping onto the depth map, and u and v are the integer coordinate values of the mapping point obtained by dividing u' and v' by the depth value d and rounding up;
executing the mapping operation on the points in all the point cloud data frames, superposing the depth values of the mapping points with the same integral coordinate values in an adding mode, and recording the superposition times of the depth values under all the coordinates;
(1.4) for each mapping point coordinate, calculating the average depth value of the coordinate from the superposed depth value and the superposition count at that coordinate, and updating the depth value of the corresponding coordinate of the blank depth map with the obtained average depth value; specifically: assuming the superposed depth value at a certain mapping point coordinate is SumDepth and the superposition count is NumD, the average depth value Depth of that coordinate is expressed as:

Depth = SumDepth / NumD
calculating an average depth value for all mapping point coordinates, and updating the depth value of the blank depth map at the corresponding coordinate by using the obtained average depth value;
and (1.5) repeating the step (1.3) and the step (1.4) until the depth value of any coordinate of the blank depth map is not changed before and after updating, wherein the blank depth map updated for the last time is the background depth map.
3. The method according to claim 1, wherein in the second step, for any point in the point cloud, its coordinates in the point cloud are assumed to be (x, y, z), the coordinates of its mapping point in the background depth map are (u, v), and its depth value is d; the depth value of the background depth map at the corresponding coordinate (u, v) is D(u, v); if

|D(u, v) − d| > δ

is satisfied, the point is judged to be a newly added abnormal point, i.e. a point belonging to an abnormal target rather than to the background; where δ is an empirical value obtained by observing the difference in depth values between abnormal target points and background points.
4. The abnormal target detection method based on continuous time series point cloud superposition as claimed in claim 1, wherein the step three comprises the following steps:
(3.1) segmenting all target instances in the image data frame by adopting a Mask-RCNN-based semantic segmentation method;
(3.2) for any newly added abnormal point, assuming its coordinates in the point cloud are (x, y, z), mapping it with the affine transformation matrix M from point cloud data to image data gives the mapping point coordinates (u, v) in the image data frame, expressed as follows:

[u', v', d]^T = M · [x, y, z, 1]^T,  u = ceil(u'/d),  v = ceil(v'/d)

where ceil means rounding up, u and v are the integer coordinate values of the mapping point of the point in the image data, and d is the depth value of the mapping point;

(3.3) let PixelCols be the set of image coordinate points contained in a certain target instance; if (u, v) ∈ PixelCols, semantic category information is added to the newly added abnormal point, which is then expressed as (x, y, z, cls), where cls is the semantic category of the target instance.
5. The method for detecting the abnormal target based on the superposition of the continuous time-series point clouds of claim 1, wherein the fourth step comprises the following steps:
(4.1) searching for and determining the core abnormal points; specifically: set the spatial radius to r and the minimum number of neighbours to minS; for any newly added abnormal point P_i, assume its coordinates are (x_i, y_i, z_i) and its semantic category is c_i, so it is expressed as P_i = (x_i, y_i, z_i, c_i); traverse all newly added abnormal points within the spatial radius r of P_i; for any newly added abnormal point P_j within that radius, assume its coordinates are (x_j, y_j, z_j) and its semantic category is c_j, so it is expressed as P_j = (x_j, y_j, z_j, c_j); if the space semantic joint distance SDis(P_i, P_j) between the two points satisfies:

SDis(P_i, P_j) = w1 · Dis(P_i, P_j) + w2 · Sclass(P_i, P_j) ≤ ds

then the newly added abnormal point P_j is considered a neighbour of the newly added abnormal point P_i; if the number of newly added abnormal points within the spatial radius r of P_i that satisfy the above formula is not less than minS, P_i is determined to be a core abnormal point, otherwise it is a non-core abnormal point;

where Dis is the Euclidean spatial distance between the two points and Sclass is the semantic distance between the two points, respectively expressed as:

Dis(P_i, P_j) = sqrt((x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)²),  Sclass(P_i, P_j) = 0 if c_i = c_j, and 1 otherwise

where w1 is the spatial distance weight, w2 is the semantic distance weight, and ds is the space semantic joint distance threshold; all are empirical values;
step (4.1) is executed for all newly added abnormal points until every newly added abnormal point has been confirmed as either a core abnormal point or a non-core abnormal point; subsequent processing is performed on the core abnormal points, and the non-core abnormal points are directly discarded;
(4.2) clustering the core abnormal points to form clusters, specifically: starting from any core abnormal point, the neighboring core abnormal points within its space radius r are clustered with it, and from any of these neighboring core abnormal points the search for and clustering of neighboring core abnormal points continues, until no further neighboring core abnormal point can be found; the clustered core abnormal points form one cluster, and the cluster category is the semantic category of the initial core abnormal point; step (4.2) is repeated from any core abnormal point that has not yet been clustered until no new cluster is formed;
(4.3) the remaining core abnormal points that have not been clustered are outliers and are directly discarded.
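For illustration only: a DBSCAN-style sketch of steps (4.1)-(4.2), assuming a weighted-sum spatial-semantic joint distance and a 0/1 semantic distance between categories; the weights, the threshold value and the 0/1 form are assumptions standing in for the patent's empirical values and exact definitions, and the function names are hypothetical.

```python
import numpy as np

def joint_distance(p, q, w_d=1.0, w_s=1.0):
    """Spatial-semantic joint distance: weighted Euclidean distance plus a 0/1
    semantic term (the 0/1 form and the weights are assumptions).
    p and q are (x, y, z, category) tuples."""
    dis = np.linalg.norm(np.asarray(p[:3], float) - np.asarray(q[:3], float))
    sclass = 0.0 if p[3] == q[3] else 1.0
    return w_d * dis + w_s * sclass

def cluster_core_points(points, radius, min_s, t_joint):
    """Steps (4.1)-(4.2) sketch: mark core abnormal points, then grow clusters
    from core points only; non-core points are discarded. In this simplified
    sketch every core point ends up in a cluster, possibly a singleton."""
    n = len(points)
    neighbors = []
    for i in range(n):                                   # step (4.1): neighbors
        nbrs = [j for j in range(n) if j != i
                and joint_distance(points[i], points[j]) <= t_joint
                and np.linalg.norm(np.asarray(points[i][:3], float)
                                   - np.asarray(points[j][:3], float)) <= radius]
        neighbors.append(nbrs)
    core = [i for i in range(n) if len(neighbors[i]) >= min_s]
    core_set, labels, categories = set(core), {}, {}
    cluster_id = 0
    for seed in core:                                    # step (4.2): clustering
        if seed in labels:
            continue
        categories[cluster_id] = points[seed][3]         # cluster category
        stack = [seed]
        while stack:
            i = stack.pop()
            if i in labels or i not in core_set:
                continue
            labels[i] = cluster_id
            stack.extend(neighbors[i])
        cluster_id += 1
    return labels, categories
```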
6. The abnormal target detection method based on continuous time series point cloud superposition as claimed in claim 1, wherein in the fifth step, for each cluster, the volume of the cluster is calculated from the two core abnormal points with the largest spatial distance; assuming that the coordinates of the two core abnormal points with the largest spatial distance are (x_a, y_a, z_a) and (x_b, y_b, z_b) and their semantic categories are c_a and c_b, they are represented as P_a = (x_a, y_a, z_a, c_a) and P_b = (x_b, y_b, z_b, c_b), respectively, and the volume V of the cluster is calculated from the spatial distance between P_a and P_b;

if V is greater than a threshold V_T, the cluster is considered to be an abnormal target, wherein V_T is the upper limit of the volume of small target objects that would otherwise cause interference and is an empirical value; the abnormal target detection information is then: the target category is the cluster category, the distance from the solid-state lidar mounting origin is R meters, and the included angle with the abscissa axis of the solid-state lidar coordinate system is θ.
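For illustration only: a sketch of the fifth step's volume check and detection output. The claim's exact volume formula is not reproduced here; this sketch assumes the cluster volume is modelled as a sphere whose diameter is the largest pairwise point distance, and that the bearing is measured against the lidar x (abscissa) axis.

```python
import math
import numpy as np

def cluster_detection_info(cluster_points, category, volume_threshold):
    """Accept a cluster as an abnormal target if its (assumed) volume exceeds
    the threshold, then report category, range and bearing relative to the
    solid-state lidar mounting origin. The sphere volume model is an
    assumption standing in for the claim's exact formula."""
    pts = np.asarray(cluster_points, dtype=float)
    # two core abnormal points with the largest spatial distance
    d_max = max(float(np.linalg.norm(a - b)) for a in pts for b in pts)
    volume = (4.0 / 3.0) * math.pi * (d_max / 2.0) ** 3
    if volume <= volume_threshold:
        return None                 # small object treated as interference
    centre = pts.mean(axis=0)       # cluster position in lidar coordinates
    distance = float(np.linalg.norm(centre))                 # metres from origin
    angle = math.degrees(math.atan2(centre[1], centre[0]))   # vs. abscissa axis
    return {"category": category, "distance_m": distance, "angle_deg": angle}
```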
7. An abnormal target detection device based on continuous time-series point cloud superposition, comprising a memory and one or more processors, wherein executable code is stored in the memory, and the one or more processors, when executing the executable code, implement the steps of the abnormal target detection method based on continuous time-series point cloud superposition according to any one of claims 1 to 6.
8. A computer-readable storage medium, on which a program is stored, wherein the program, when executed by a processor, implements the steps of the abnormal target detection method based on continuous time-series point cloud superposition according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211145212.XA CN115272493B (en) | 2022-09-20 | 2022-09-20 | Abnormal target detection method and device based on continuous time sequence point cloud superposition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115272493A (en) | 2022-11-01 |
CN115272493B (en) | 2022-12-27 |
Family
ID=83757249
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211145212.XA Active CN115272493B (en) | 2022-09-20 | 2022-09-20 | Abnormal target detection method and device based on continuous time sequence point cloud superposition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115272493B (en) |
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018176440A1 (en) * | 2017-04-01 | 2018-10-04 | 深圳市速腾聚创科技有限公司 | Method for fusing point cloud and planar image, intelligent device and non-volatile computer-readable storage medium |
US20190051056A1 (en) * | 2017-08-11 | 2019-02-14 | Sri International | Augmenting reality using semantic segmentation |
CN108389221A (en) * | 2018-01-30 | 2018-08-10 | 深圳市菲森科技有限公司 | The scan method and system of 3-D view |
CN109961440A (en) * | 2019-03-11 | 2019-07-02 | 重庆邮电大学 | A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map |
CN111798475A (en) * | 2020-05-29 | 2020-10-20 | 浙江工业大学 | Indoor environment 3D semantic map construction method based on point cloud deep learning |
CN111652179A (en) * | 2020-06-15 | 2020-09-11 | 东风汽车股份有限公司 | Semantic high-precision map construction and positioning method based on dotted line feature fusion laser |
CN112348867A (en) * | 2020-11-18 | 2021-02-09 | 南通市测绘院有限公司 | Method and system for constructing city high-precision three-dimensional terrain based on LiDAR point cloud data |
WO2022141912A1 (en) * | 2021-01-01 | 2022-07-07 | 杜豫川 | Vehicle-road collaboration-oriented sensing information fusion representation and target detection method |
CN113128348A (en) * | 2021-03-25 | 2021-07-16 | 西安电子科技大学 | Laser radar target detection method and system fusing semantic information |
CN113111887A (en) * | 2021-04-26 | 2021-07-13 | 河海大学常州校区 | Semantic segmentation method and system based on information fusion of camera and laser radar |
CN113393514A (en) * | 2021-06-11 | 2021-09-14 | 中国科学院自动化研究所 | Three-dimensional disordered point cloud data processing method, system and equipment |
CN114266960A (en) * | 2021-12-01 | 2022-04-01 | 国网智能科技股份有限公司 | Point cloud information and deep learning combined obstacle detection method |
CN114782519A (en) * | 2022-03-11 | 2022-07-22 | 陕西天视致远航空技术有限公司 | Method, device and medium for positioning spherical or quasi-spherical object based on point cloud information |
CN114862901A (en) * | 2022-04-26 | 2022-08-05 | 青岛慧拓智能机器有限公司 | Road-end multi-source sensor fusion target sensing method and system for surface mine |
CN114724120A (en) * | 2022-06-10 | 2022-07-08 | 东揽(南京)智能科技有限公司 | Vehicle target detection method and system based on radar vision semantic segmentation adaptive fusion |
CN114758504A (en) * | 2022-06-13 | 2022-07-15 | 之江实验室 | Online vehicle overspeed early warning method and system based on filtering correction |
CN114937081A (en) * | 2022-07-20 | 2022-08-23 | 之江实验室 | Internet vehicle position estimation method and device based on independent non-uniform incremental sampling |
Non-Patent Citations (6)
Title |
---|
QIAN HUANG et al.: "Recognition of Key Targets of Locomotive Bottom Based on 3D Point Cloud Data", 2017 Far East NDT New Technology & Application Forum (FENDT) *
WU QINGZHU: "Research on Multi-Target Recognition Algorithm Based on Fusion of Vision and Lidar Data", Wanfang Data Knowledge Service Platform *
YUE WENTAO: "Research on 3D Object Detection Algorithm Based on Fusion of Image and Point Cloud", China Master's Theses Full-text Database, Information Science and Technology *
ZHANG TINGTING et al.: "A Survey of Image Object Detection Algorithms Based on Deep Learning", Telecommunications Science *
WANG DONGMIN et al.: "Depth Image Acquisition Method Based on Fusion of Vision and Laser Point Cloud", Journal of Military Transportation University *
ZHAO YONGJIE: "Research on Visual SLAM Algorithm Fusing Semantic Information in Dynamic Environments", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117037120A (en) * | 2023-10-09 | 2023-11-10 | 之江实验室 | Target perception method and device based on time sequence selection |
CN117037120B (en) * | 2023-10-09 | 2024-02-09 | 之江实验室 | Target perception method and device based on time sequence selection |
CN118154588A (en) * | 2024-05-09 | 2024-06-07 | 中铁七局集团第三工程有限公司 | Large-diameter pressure steel pipe quality detection method and system based on contour extraction |
Also Published As
Publication number | Publication date |
---|---|
CN115272493B (en) | 2022-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110675431B (en) | Three-dimensional multi-target tracking method fusing image and laser point cloud | |
CN110163904B (en) | Object labeling method, movement control method, device, equipment and storage medium | |
Behrendt et al. | A deep learning approach to traffic lights: Detection, tracking, and classification | |
CN109034018B (en) | Low-altitude small unmanned aerial vehicle obstacle sensing method based on binocular vision | |
WO2022188663A1 (en) | Target detection method and apparatus | |
CN115272493B (en) | Abnormal target detection method and device based on continuous time sequence point cloud superposition | |
CN111753609A (en) | Target identification method and device and camera | |
CN111507327B (en) | Target detection method and device | |
CN110501036A (en) | The calibration inspection method and device of sensor parameters | |
US20210350705A1 (en) | Deep-learning-based driving assistance system and method thereof | |
CN112991389A (en) | Target tracking method and device and mobile robot | |
CN115376109B (en) | Obstacle detection method, obstacle detection device, and storage medium | |
CN113012215A (en) | Method, system and equipment for space positioning | |
CN109636828A (en) | Object tracking methods and device based on video image | |
CN116681730A (en) | Target tracking method, device, computer equipment and storage medium | |
Muresan et al. | Real-time object detection using a sparse 4-layer LIDAR | |
CN117475355A (en) | Security early warning method and device based on monitoring video, equipment and storage medium | |
CN116977362A (en) | Target tracking method, device, computer equipment and storage medium | |
CN114241448A (en) | Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle | |
WO2023283929A1 (en) | Method and apparatus for calibrating external parameters of binocular camera | |
CN113628251B (en) | Smart hotel terminal monitoring method | |
CN116912877A (en) | Method and system for monitoring space-time contact behavior sequence of urban public space crowd | |
CN115542271A (en) | Radar coordinate and video coordinate calibration method, equipment and related device | |
CN111372051A (en) | Multi-camera linkage blind area detection method and device and electronic equipment | |
Dekkiche et al. | Vehicles detection in stereo vision based on disparity map segmentation and objects classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||