
CN113536883B - Obstacle detection method, vehicle, apparatus, and computer storage medium - Google Patents


Info

Publication number: CN113536883B
Authority: CN (China)
Prior art keywords: point cloud, point, initial, track, target
Legal status: Active
Application number: CN202110306554.4A
Other languages: Chinese (zh)
Other versions: CN113536883A
Inventors: 胡荣东, 万波, 谢伟
Current Assignee: Changsha Intelligent Driving Research Institute Co Ltd
Original Assignee: Changsha Intelligent Driving Research Institute Co Ltd

Events: application filed by Changsha Intelligent Driving Research Institute Co Ltd; priority to CN202110306554.4A; publication of CN113536883A; priority to PCT/CN2022/081631 (WO2022199472A1); application granted; publication of CN113536883B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an obstacle detection method, a vehicle, a device, and a computer storage medium. The obstacle detection method is applied to a vehicle and comprises the following steps: acquiring K initial point cloud data and L initial images collected from a target track; determining first point clouds from the K initial point cloud data based on each initial image respectively, wherein the first point clouds are point clouds belonging to a rail in the target track; and detecting a target obstacle in the target track according to the K initial point cloud data by taking the first point cloud as a reference point cloud. According to the method and the device, by combining the initial images and the initial point cloud data, the first point cloud associated with the rail of the target track can be accurately obtained; since the rail can serve as a stable reference object, detecting the obstacle in the target track based on the first point cloud belonging to the rail can effectively improve obstacle detection accuracy.

Description

Obstacle detection method, vehicle, apparatus, and computer storage medium
Technical Field
The application belongs to the technical field of information processing, and particularly relates to an obstacle detection method, a vehicle, equipment and a computer storage medium.
Background
As is well known, rail transit generally features large carrying capacity and high running speed, and is an important component of transportation. In order to ensure the safe running of trains in rail transit, it is often necessary to detect obstacles that may be present in the track.
In the prior art, when a radar sensor is used for obstacle detection, the ground on which the track is located is generally fitted based on point cloud data, and obstacles are then judged according to the height of the point clouds located within the track range relative to the fitted ground. However, because the environment inside the track is complex, for example, there may be sleepers, crushed stones, etc., the error in fitting the ground tends to be large, and the obstacle detection accuracy is therefore low.
Disclosure of Invention
The embodiments of the application provide an obstacle detection method, a vehicle, a device, and a computer storage medium, to solve the prior-art problem that detection accuracy is low when obstacle detection is performed based on fitting the ground where the track is located.
In a first aspect, an embodiment of the present application provides an obstacle detection method, applied to a vehicle, including:
acquiring K initial point cloud data and L initial images collected from a target track, wherein K and L are positive integers;
Determining first point clouds from K initial point cloud data based on each initial image respectively, wherein the first point clouds are point clouds belonging to a track in a target track;
and detecting a target obstacle in the target track according to the K initial point cloud data by taking the first point cloud as a reference point cloud, wherein the target obstacle is associated with a second point cloud in the K initial point cloud data, and the second point cloud and the first point cloud meet a preset position relation.
In a second aspect, embodiments of the present application provide a vehicle, including:
the acquisition module is used for acquiring K initial point cloud data and L initial images collected from a target track, wherein K and L are positive integers;
the first determining module is used for determining first point clouds from K initial point cloud data based on each initial image respectively, wherein the first point clouds are point clouds belonging to a track in a target track;
the detection module is used for detecting a target obstacle in the target track based on K initial point cloud data by taking the first point cloud as a reference point cloud, wherein the target obstacle is associated with a second point cloud in the K initial point cloud data, and a preset position relation is met between the second point cloud and the first point cloud.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions;
The processor, when executing the computer program instructions, implements the obstacle detection method described above.
In a fourth aspect, embodiments of the present application provide a computer storage medium having stored thereon computer program instructions that, when executed by a processor, implement the above-described obstacle detection method.
According to the obstacle detection method provided by the embodiments of the application, K initial point cloud data and L initial images collected for the target track are acquired, and based on each initial image, the first point cloud belonging to the rail of the target track is determined from the K initial point cloud data, so that a target obstacle whose associated second point cloud satisfies the preset positional relationship with the first point cloud can be detected. In the embodiments of the application, by combining the initial images with the initial point cloud data, the first point cloud belonging to the rail of the target track can be accurately acquired; meanwhile, since the rail can generally serve as a relatively stable reference object, detecting the target obstacle in the target track based on the first point cloud belonging to the rail can effectively improve obstacle detection accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described, and it is possible for a person skilled in the art to obtain other drawings according to these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an obstacle detection method according to an embodiment of the present disclosure;
FIG. 2 is an exemplary diagram of pixel interval division for an initial rail pixel point in an embodiment of the present application;
FIG. 3 is an exemplary diagram of an object occluding the track at a curve of the target track in an embodiment of the present application;
FIG. 4 is an exemplary diagram of a first set of projection points resulting from projecting a third point cloud into a target plane in an embodiment of the present application;
fig. 5 is a schematic flow chart of an application example of the obstacle detection method provided in the embodiment of the present application;
FIG. 6 is a schematic structural view of a vehicle provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below to make the objects, technical solutions and advantages of the present application more apparent, and to further describe the present application in conjunction with the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative of the application and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by showing examples of the present application.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In order to solve the problems in the prior art, embodiments of the present application provide a method, an apparatus, a device, and a computer storage medium for detecting an obstacle. The following first describes an obstacle detection method provided in the embodiment of the present application.
Fig. 1 shows a flow chart of an obstacle detection method according to an embodiment of the present application.
As shown in fig. 1, the method includes:
step 101, acquiring K initial point cloud data and L initial images collected from a target track, wherein K and L are positive integers;
step 102, determining a first point cloud from the K initial point cloud data based on each initial image, wherein the first point cloud is a point cloud belonging to a rail in the target track;
step 103, detecting a target obstacle in the target track based on the K initial point cloud data by taking the first point cloud as the reference point cloud, wherein the target obstacle is associated with a second point cloud in the K initial point cloud data, and the second point cloud and the first point cloud meet a preset position relationship.
The obstacle detection method provided by this embodiment can be applied to rail transit vehicles in the conventional sense, such as trains, subways, light rail, or trams; of course, it can also be applied to other types of rail vehicles, such as mine carts or rail cars used in factories, etc. For simplicity of explanation, the vehicles to which the obstacle detection method is applicable may all be referred to as rail transit vehicles.
It is readily understood that in the operating environment of the vehicle, a track is typically present; by means of the arrangement of the sensors, information about the track can be collected. For example, by setting a camera, an initial image including a track image can be acquired; for another example, initial point cloud data including a track point cloud may be acquired by providing a lidar.
Typically, these sensors may be mounted on the vehicle body and may collect at least environmental information in the direction of travel of the vehicle. That is, the sensor can collect the related information of the track where the vehicle is located; the target track may refer to a track in which the vehicle is currently located.
Of course, in practical applications, the target track may also be a track adjacent to or intersecting with the track where the vehicle is currently located, so as to consider the situation of lane change that may exist in the vehicle. In order to simplify the description, the following mainly takes a target track as an example of a track where a vehicle is currently located, and describes an obstacle detection method provided in an embodiment of the present application.
In general, a target track includes rails, sleepers, and crushed stone; the rail is usually a steel rail, and its form, such as width and surface flatness, is relatively fixed.
The number of the various sensors mounted on the vehicle may be set according to actual needs. For example, one camera may be mounted on the vehicle, and a plurality of cameras may be mounted on the vehicle for redundancy arrangement, or for the sake of clearly capturing track images of different distance segments. Similarly, for lidar, the specific number of installations may also be one or more.
As for the process of collecting K initial point cloud data and L Zhang Chushi images in step 101, the following description may be given in connection with an example of an application scenario: in the running process of the vehicle, each camera arranged on the vehicle can shoot a target track in front of the vehicle to obtain at least one initial image; each lidar mounted on the vehicle may scan a target track in front of the vehicle to obtain at least one initial point cloud data. The process of shooting the target track by the camera and the process of scanning by the laser radar can be regarded as the process of collecting the target track.
In general, in the initial image, it is possible to have an image of other objects, such as a utility pole or surrounding vegetation, in addition to the image including the target track; similarly, in the initial point cloud data, there may be point clouds of other objects in addition to the point clouds having the target track.
In this embodiment, the initial image and the initial point cloud described above may be combined to distinguish the point cloud of the track belonging to the target track from the point cloud of the other object.
Specifically, in step 102, a first point cloud of a rail belonging to a target track may be determined based on a processing manner of fusing the image with the point cloud. The following mainly describes a process of determining a first point cloud from K initial point cloud data based on a certain Zhang Chushi image, for example, a fusion process of the image and the point cloud.
For example, based on the deep learning model, a track of a target track in the initial image may be identified, pixel points of the track in the initial image may be obtained based on pixel segmentation, and coordinate positions of the respective pixel points in an image coordinate system may also be obtained.
For each sensor on the vehicle, information such as its installation position and angle may be known; therefore, the conversion relationships between the coordinate systems associated with the sensors may be predetermined. For example, for initial point cloud data collected by a laser radar, the laser radar may be fixed relative to the camera (i.e., both may be associated with the vehicle body coordinate system), and the camera coordinate system and image coordinate system of the camera are generally known. Therefore, the point cloud in the initial point cloud data can be mapped into the image coordinate system through the correspondence of radar coordinate system - vehicle body coordinate system - camera coordinate system - image coordinate system.
Therefore, the K initial point cloud data may be mapped to an image coordinate system, and a point cloud falling within a position range of a pixel point of a track may be considered to be a first point cloud belonging to the track to some extent.
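As an illustration of this mapping step, the following minimal Python sketch (not from the patent; all matrix names are assumptions) chains pre-calibrated radar-to-body and body-to-camera transforms with the camera intrinsics to project a lidar cloud into pixel coordinates:

```python
# Hypothetical sketch: project an (N, 3) lidar cloud into the image coordinate
# system via the radar -> vehicle body -> camera -> image chain described above.
# T_radar_to_body and T_body_to_cam are assumed pre-calibrated 4x4 extrinsics;
# K_cam is the 3x3 camera intrinsic matrix.
import numpy as np

def project_to_image(points_xyz, T_radar_to_body, T_body_to_cam, K_cam):
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])            # (N, 4) homogeneous
    cam = (T_body_to_cam @ T_radar_to_body @ homo.T).T[:, :3]  # camera frame
    in_front = cam[:, 2] > 0                                   # drop points behind the camera
    uvw = (K_cam @ cam[in_front].T).T
    return uvw[:, :2] / uvw[:, 2:3], in_front                  # pixel coords + index mask
```

The returned mask preserves the correspondence between mapped 2D points and the original 3D point cloud, which the screening described below relies on.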
In some alternative embodiments, the point cloud falling into the position range of the pixel point of the track may be further filtered, so as to further obtain the first point cloud attributed to the track.
Of course, in practical application, the track may be located in the image coordinate system by means of a fitting equation, and after the K initial point cloud data are mapped into the image coordinate system, the first point cloud belonging to the track may be determined according to a distance relationship with the fitting equation.
In addition, in another example, the position of the track in the initial image may be obtained by other means, such as radon transform, etc., which are not illustrated herein.
In general, since the position, the trend, the shape, or the like of the track is relatively fixed, when the first point cloud belonging to the track is determined, the target obstacle in the target track can be detected relatively accurately with the first point cloud as the reference point cloud.
It should be emphasized that the interior of the target track mentioned herein may refer to the area between the two rails in a common rail transit scene, or to an area within a certain width range of a rail, and the like, which is not specifically limited herein. In general, a target obstacle in the target track may be considered an obstacle located in the traveling area of the rail transit vehicle that may affect the traveling of the vehicle.
It is easy to understand that the target obstacle may be detected by the lidar and thus reflected in the K initial point cloud data; from another perspective, the target obstacle may have associated point clouds among the K initial point cloud data, namely the second point cloud, and this second point cloud generally satisfies a preset positional relationship with the first point cloud.
For example, in combination with an actual scenario, when a certain obstacle is located between two rails and is significantly higher than the rails, it may be considered an obstacle that may affect the running of the vehicle. The condition that the obstacle is located between the two rails and higher than the rails can generally be reflected in the preset positional relationship between the first point cloud and the second point cloud.
Of course, the above is merely an example of the preset positional relationship set for detecting the target obstacle; in practical application, it may be determined as needed. For example, in a monorail transit application scenario, the target obstacle may be located within a preset distance range on either side of the single rail, and so on.
In combination with the above description, in step 103, the second point cloud associated with the target obstacle may be determined from the K initial point cloud data according to the determined first point cloud and the preset position relationship.
According to the obstacle detection method provided by the embodiments of the application, K initial point cloud data and L initial images collected for the target track are acquired, and based on each initial image, the first point cloud belonging to the rail of the target track is determined from the K initial point cloud data, so that a target obstacle whose associated second point cloud satisfies the preset positional relationship with the first point cloud can be detected. In the embodiments of the application, by combining the initial images with the initial point cloud data, the first point cloud belonging to the rail of the target track can be accurately acquired; meanwhile, since the rail can generally serve as a relatively stable reference object, detecting the target obstacle in the target track based on the first point cloud belonging to the rail can effectively improve obstacle detection accuracy.
In one example, the K initial point cloud data may be a plurality of initial point cloud data acquired by a plurality of lidars; that is, K herein may be specifically an integer greater than 1.
In combination with a specific application scenario, in some rail transit vehicles (such as trains, subways, etc.) running at a relatively high speed, the point cloud of the target track obtained by a single lidar may be relatively sparse. Under the condition, a plurality of initial point cloud data are acquired, so that the point cloud density of the target track can be effectively improved, the number of points of the first point cloud belonging to the track is increased, and further the detection effect on the target obstacle is improved.
In addition, by arranging a plurality of laser radars, the blind area of a single laser radar can be supplemented. For example, one lidar may be used as the primary radar, while the remaining lidars may be used as blind-complement radars, and so on.
Of course, it is readily understood that this example is merely one illustration of the manner in which K initial point cloud data is obtained. In practical application, the K initial point cloud data may also be single initial point cloud data collected by a single laser radar; or, one or more point cloud data obtained by screening according to a preset quality evaluation index (such as point cloud density) from a plurality of candidate point cloud data acquired by a plurality of laser radars may be also used.
In one example, the L initial images may be a plurality of initial images acquired by a plurality of cameras; that is, L herein may be specifically an integer greater than 1.
For example, in an initial image, the actual extension length of the target track may be long, and in the case where the focal length of the camera is fixed, there may be a large difference in the quality of the images of the target track at different distances in the initial image acquired by a single camera.
In this example, among the plurality of cameras, there may be a difference in focal length between at least two cameras. For example, there may be a camera a and a camera B on the vehicle; the focal length of the camera A is shorter, and an initial image a of a target track at a distance of 1-100 m can be acquired clearly; the focal length of the camera B is longer, and the initial image B of the target track at the distance of 100-500 m can be acquired relatively clearly.
Thus, based on the initial image a and K initial point cloud data, a target obstacle at a distance of 1-100 m can be detected relatively accurately; based on the initial image b and K initial point cloud data, a target obstacle at a distance of 100-500 m can be detected relatively accurately; thereby improving the detection effect on the target obstacle.
In addition, similarly to the arrangement of multiple lidars, multiple cameras may be provided and used as a main camera and blind-area-supplementing cameras respectively, so as to cover the blind area of a single camera.
Of course, in practical application, the L initial images may also be a single initial image acquired by a single camera; alternatively, they may be the initial image with the highest quality determined from a plurality of candidate images acquired by a plurality of cameras having the same focal length, and so on, which will not be described in detail.
Optionally, step 102 above determines, based on each initial image, a first point cloud from K initial point cloud data, including:
fitting a track fitting equation of a track in a first initial image in an image coordinate system, wherein the first initial image is any initial image in L initial images;
and mapping the K initial point cloud data into an image coordinate system, and screening the K initial point cloud data according to a rail fitting equation to obtain a first point cloud.
In this embodiment, for each of the L initial images, detection of the target obstacle may be performed in combination with the K initial point cloud data; therefore, the detection process of the target obstacle is described below mainly based on a certain initial image, which may be defined as the first initial image.
As shown above, the track in the first initial image may be identified by a deep learning algorithm, etc., to obtain a pixel point of the track in the initial image, or a fitting equation of the track in the image coordinate system.
In this embodiment, by identifying the track, a track fitting equation of the track in the image coordinate system can be obtained. The track fitting equation may be a straight line equation, or may be a quadratic, cubic or higher order curve equation, which is not particularly limited herein.
As described above, for each sensor on the vehicle, information such as the relative positional relationship and the angle of installation may be known, and therefore, the conversion relationship between the coordinate systems to which each sensor is associated may be predetermined.
In other words, the internal and external parameters of each sensor may be pre-calibrated. Based on the calibration of the internal and external parameters, each data (such as point cloud data, images, etc.) can be converted between different coordinate systems, and a specific conversion process, such as a mapping process in each coordinate system, can be realized. Therefore, in some embodiments below, a specific implementation procedure of coordinate system conversion may be omitted for simplicity of description.
Generally, the initial point cloud data is a three-dimensional point cloud, and a corresponding two-dimensional point set is obtained after mapping it into the image coordinate system. In the two-dimensional point set, the points falling on the track fitting equation, or whose distance from the track fitting equation is smaller than a certain distance threshold, can often be considered to be points obtained by mapping the first point cloud belonging to the track into the image coordinate system. Because a mapping relation exists between the three-dimensional point cloud and the two-dimensional point set, for the points determined in the image coordinate system to belong to the track, the corresponding points in the three-dimensional point cloud can be found according to the mapping relation.
In other words, based on the mapping process, K initial point cloud data can be screened in combination with the track fitting equation, so as to obtain a point cloud belonging to the track, that is, the first point cloud.
In this embodiment, the track in the first initial image is characterized by the track fitting equation, so that the screening of the K initial point cloud data is realized by the distance between the mapping points of the K initial point cloud data in the image coordinate system and the track fitting equation, and the screening accuracy of the first point cloud belonging to the track is improved.
In one example, the mapping the K initial point cloud data to the image coordinate system, and filtering the K initial point cloud data according to the track fitting equation to obtain a first point cloud includes:
mapping the K initial point cloud data to an image coordinate system to obtain a mapping point set;
determining, from the mapping point set, target mapping points whose distance to the track fitting equation is smaller than a second distance threshold;
and determining a first point cloud from the K initial point cloud data according to the target mapping points.
In this embodiment, considering that the track generally has a certain width, the point cloud projected in the initial point cloud data and located near the track fitting equation can be determined to be attributed to the first point cloud of the track by determining the second distance threshold, which is helpful for improving the rationality of the obtained first point cloud.
Specifically, the initial point cloud data may be mapped to the image coordinate system according to a conversion relationship between the coordinate system associated with the initial point cloud data and the image coordinate system, to obtain the mapped point set.
And when the distance from one mapping point to the track fitting equation is smaller than a second distance threshold, the mapping point can be considered to be obtained by mapping the first point cloud into the image coordinate system.
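A hedged sketch of this screening step is given below; the form of the rail equation and the pixel value of the second distance threshold are illustrative assumptions, not values from the patent:

```python
# Keep only the mapped points whose horizontal pixel distance to the rail
# fitting equation u = f(v) is below the second distance threshold; the
# returned mask indexes back into the original 3D cloud via the known
# point-to-pixel mapping relation.
import numpy as np

def screen_by_rail_fit(uv, rail_coeffs, second_distance_threshold=5.0):
    u_fit = np.polyval(rail_coeffs, uv[:, 1])   # expected rail column at each row
    return np.abs(uv[:, 0] - u_fit) < second_distance_threshold
```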
Optionally, fitting a track fit equation for the track in the first initial image in the image coordinate system includes:
performing pixel segmentation on the first initial image based on a deep learning model obtained through pre-training to obtain initial rail pixel points belonging to rails in the first initial image;
and in the image coordinate system, fitting the initial rail pixel points to obtain a rail fitting equation.
In this embodiment, the first initial image may be identified using a deep learning model. The deep learning model may be trained in advance based on training samples.
For example, the training samples may be sample images marked with rails, which may be captured by cameras provided on rail transit vehicles; accordingly, the noted rails may be rails in the track where the rail transit vehicles are currently located. Based on the deep learning model obtained by training the sample image, the track of the track (namely the target track) where the vehicle is currently located in the initial image can be identified, and the tracks of other parallel tracks are eliminated; in the subsequent obstacle detection process, the target obstacle in the target track can be detected in a focusing mode, and the target obstacle detection efficiency is improved.
In this embodiment, based on the deep learning model, the pixel points associated with the track may be obtained from the first initial image. For example, in combination with a specific application scene, the deep learning model can segment pixels corresponding to the track to obtain a binary image; the binary image may include foreground data corresponding to the track; clustering the foreground data or searching the connected areas to obtain a plurality of areas; and each region may correspond to a rail, respectively.
The pixels belonging to the track may be pixels of a portion of the first initial image or pixels of a portion of the binary image, and are not limited herein. In general, however, the rails are associated with corresponding pixels, which have corresponding coordinates in the image coordinate system.
Based on the above, in the image coordinate system, the pixel points belonging to the track can be fitted to obtain a track fitting equation.
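As a sketch of the fitting itself (illustrative only; the patent does not fix the polynomial degree), the segmented rail pixels can be fitted with a least-squares polynomial:

```python
# Fit a rail equation u = f(v) in the image coordinate system from the (u, v)
# pixel coordinates produced by the segmentation; degree 2 is an assumption,
# since the text allows straight-line, quadratic, cubic, or higher-order fits.
import numpy as np

def fit_rail_equation(rail_pixels_uv, degree=2):
    u, v = rail_pixels_uv[:, 0], rail_pixels_uv[:, 1]
    return np.polyfit(v, u, degree)   # coefficients usable with np.polyval
```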
Optionally, in the case that the number of rails in the target track is N, where N is an integer greater than 1, fitting the initial rail pixel points in the image coordinate system to obtain rail fitting equations includes:
in the image coordinate system, selecting candidate rail pixel points located in a preset image height interval from the initial rail pixel points;
dividing the candidate rail pixel points belonging to each rail into M pixel intervals along a preset direction, wherein M is an integer greater than 1;
acquiring the pixel center point of each pixel interval, and respectively fitting the M pixel center points corresponding to each rail to obtain N first fitting straight lines corresponding to the N rails;
determining a perspective transformation matrix according to the N first fitting straight lines, and mapping the N first fitting straight lines and the candidate rail pixel points into a bird's eye view according to the perspective transformation matrix, to respectively obtain N second fitting straight lines and candidate mapping pixel points;
filtering out first mapping pixel points from the candidate mapping pixel points to obtain second mapping pixel points, wherein the first mapping pixel points are those candidate mapping pixel points whose distances to every second fitting straight line are greater than a first distance threshold;
respectively fitting the target rail pixel points belonging to each rail to obtain N rail fitting equations corresponding to the N rails, where the target rail pixel points are the candidate rail pixel points corresponding to the second mapping pixel points.
It is readily understood that rail transit vehicles can be divided into monorail vehicles and double-rail vehicles; in an actual track running environment, there may also be track merging or track bifurcation. That is, in a track running environment, the route along which the vehicle currently runs may include a plurality of rails.
In this embodiment, for a double-rail vehicle, the rail fitting equations may be obtained more accurately by making use of a bird's eye view. For simplicity of description, the following mainly takes as an example a target track that includes a first rail and a second rail.
The first rail and the second rail can be the rails of the target track on the vehicle's current running path. In a general double-rail running environment, the first rail and the second rail form a double rail; in the above-mentioned environments of track merging, track bifurcation, or a special three-rail setup, the first rail and the second rail may be the two outermost rails.
Generally, the longer the visible stretch of track, the more likely the track is to curve or the image quality to degrade. Therefore, in this embodiment, candidate rail pixel points located in a preset image height interval may first be selected from the initial rail pixel points; for example, the candidate rail pixel points can be the pixel points within a certain height of the bottom of the initial image, so that the straight-line fitting in the next step can be completed with higher quality.
Referring to fig. 2, fig. 2 shows an exemplary diagram of pixel interval division for the initial rail pixel points, in which the candidate rail pixel points belonging to each rail may be respectively divided into M pixel intervals along the image height direction (corresponding to the above-described preset direction). In general, there may be a plurality of pixel points in each pixel interval, each with coordinates in the image coordinate system; therefore, within one pixel interval, the pixel center point of that interval can be determined based on the coordinate values of the pixel points located in it.
Thus, corresponding M pixel center points can be determined for the first rail and the second rail respectively. Performing straight-line fitting on the M pixel center points corresponding to the first rail yields a corresponding first fitting straight line, denoted l_1; similarly, performing straight-line fitting on the M pixel center points corresponding to the second rail yields a corresponding first fitting straight line, denoted l_2.
In the image coordinate system, the target track is usually presented in perspective; that is to say, although the first rail is parallel to the second rail in the actual scene, l_1 and l_2 may be non-parallel. Referring to fig. 2, the two first fitting straight lines gradually draw closer along the image height direction.
In this embodiment, after the two first fitting straight lines l_1 and l_2 are mapped into the bird's eye view, two second fitting straight lines can be obtained, and these two second fitting straight lines can be parallel to each other.
In connection with fig. 2, the manner of mapping l_1 and l_2 into the bird's eye view is illustrated as follows. Two points, denoted a and d, can be selected from l_1; they may be two points determined directly based on the equation of l_1, or may be two of the pixel center points used for the fitting, which is not particularly limited herein. Similarly, two points, denoted b and c, can be selected from l_2.
Generally, the two points a and b can be translated according to a preset translation rule until the line ad is parallel to the line bc. Based on the translation results of a and b, a transformation matrix, namely the above-mentioned perspective transformation matrix, can be obtained. According to the perspective transformation matrix, l_1 and l_2 can then be mapped into the bird's eye view respectively, and the two second fitting straight lines obtained are denoted l_3 and l_4 respectively.
It is easily understood that noise may occur in the obtained rail pixel points due to the influence of the camera focal length or of driving environment conditions (e.g., rain or fog). Generally, the farther a section of track is from the camera, the more unstable the segmentation result of the corresponding pixel points, and the greater the possibility of noise.
In order to filter this noise, in this embodiment the candidate rail pixel points may be mapped into the bird's eye view, and these mapped pixel points are defined as candidate mapping pixel points. For any candidate mapping pixel point, when its distances to both l_3 and l_4 are greater than the first distance threshold, it can be determined that this candidate mapping pixel point, as well as its corresponding pixel point in the first initial image (or the binary image), is noise, and it can be filtered out.
Therefore, among the candidate mapping pixel points obtained in the bird's eye view, those whose distances to every second fitting straight line are greater than the first distance threshold, namely the first mapping pixel points, can be filtered out, while the remaining second mapping pixel points are retained.
The second mapping pixel points are obtained by mapping part of the candidate rail pixel points into the bird's eye view, so each second mapping pixel point has a corresponding pixel point among the candidate rail pixel points, namely a target rail pixel point.
At this time, the target rail pixel points belonging to each rail can be fitted, and the obtained rail fitting equations better match the actual state (such as position and trend) of the rails, further improving the accuracy of subsequent target obstacle detection.
It is easy to understand that the number of the track fitting equations may be matched with the number of the tracks, and the correspondence between each track and the pixel points for fitting the track fitting equations may be determined in the process of track identification and pixel segmentation, or may be determined according to the distance between each pixel point and the first fitting straight line or the second fitting straight line. In general, according to the target rail pixel points, corresponding rail fitting equations can be respectively fitted to the first rail and the second rail.
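A sketch, under stated assumptions, of the whole bird's-eye-view filtering flow for two rails follows: bin each rail's candidate pixels into M height intervals, fit a first fitting straight line per rail through the interval centers, build a perspective transform that makes the two lines parallel (using the a, b, c, d construction above), and discard pixels far from both transformed lines. M and the first distance threshold are illustrative values, and the OpenCV calls are standard library functions:

```python
import cv2
import numpy as np

def interval_centers(px, m):
    v_lo, v_hi = px[:, 1].min(), px[:, 1].max()
    edges = np.linspace(v_lo, v_hi, m + 1)
    out = [px[(px[:, 1] >= lo) & (px[:, 1] < hi)].mean(axis=0)
           for lo, hi in zip(edges[:-1], edges[1:])
           if ((px[:, 1] >= lo) & (px[:, 1] < hi)).any()]
    return np.array(out)

def point_line_dist(pts, p0, p1):
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    # 2D cross-product magnitude divided by |direction| = point-to-line distance
    num = np.abs(dx * (pts[:, 1] - p0[1]) - dy * (pts[:, 0] - p0[0]))
    return num / np.hypot(dx, dy)

def filter_rail_pixels(left_px, right_px, m=10, first_distance_threshold=8.0):
    c1, c2 = interval_centers(left_px, m), interval_centers(right_px, m)
    l1 = np.polyfit(c1[:, 1], c1[:, 0], 1)      # first fitting line of rail 1
    l2 = np.polyfit(c2[:, 1], c2[:, 0], 1)      # first fitting line of rail 2
    all_px = np.vstack([left_px, right_px]).astype(np.float32)
    v_top, v_bot = all_px[:, 1].min(), all_px[:, 1].max()
    a = [np.polyval(l1, v_top), v_top]; d = [np.polyval(l1, v_bot), v_bot]
    b = [np.polyval(l2, v_top), v_top]; c = [np.polyval(l2, v_bot), v_bot]
    # translate a and b horizontally so that line ad and line bc become parallel
    src = np.float32([a, b, c, d])
    dst = np.float32([[d[0], v_top], [c[0], v_top], c, d])
    H = cv2.getPerspectiveTransform(src, dst)
    warp = lambda p: cv2.perspectiveTransform(
        p.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
    l3, l4 = warp(np.float32([a, d])), warp(np.float32([b, c]))  # second lines
    w = warp(all_px)
    dist = np.minimum(point_line_dist(w, l3[0], l3[1]),
                      point_line_dist(w, l4[0], l4[1]))
    return all_px[dist < first_distance_threshold]   # target rail pixel points
```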
Optionally, in the case where the K initial point cloud data are a plurality of initial point cloud data and come from K lidars (i.e., in the case where K is an integer greater than 1), mapping the K initial point cloud data into the image coordinate system includes:
Acquiring a first coordinate system conversion relation between a radar coordinate system of each laser radar and a preset reference coordinate system and a second coordinate system conversion relation between the preset reference coordinate system and an image coordinate system of any initial image;
mapping initial point cloud data acquired by each laser radar into a preset reference coordinate system according to a corresponding first coordinate system conversion relation to obtain mixed point cloud data;
and mapping the mixed point cloud data into the image coordinate system according to the second coordinate system conversion relation.
In other words, in this embodiment, the K value may be an integer greater than 1. At this time, each laser radar collects corresponding initial point cloud data under its own radar coordinate system. The initial point cloud data are mixed, so that the quantity of point clouds which can be used for determining the track and the target obstacle can be effectively improved, the characteristics of the objects are more obvious, and the detection effect is improved.
In order to realize the mixing of the K initial point cloud data, in this embodiment, a preset reference coordinate system may be determined, where the reference coordinate system may be a radar coordinate system of a certain laser radar or a vehicle body coordinate system, and is not specifically limited herein.
As described above, various coordinate systems may be calibrated in advance, and the conversion relationship between different coordinate systems may be predetermined, so that the first coordinate system conversion relationship between the radar coordinate system of each lidar and the preset reference coordinate system, and the second coordinate system conversion relationship between the preset reference coordinate system and the image coordinate system of any initial image may be directly obtained.
In this embodiment, the initial point cloud data collected by each laser radar may be converted into the preset reference coordinate system according to the first coordinate system conversion relations, so as to obtain the mixed point cloud data. The mixed point cloud data generally comprise three-dimensional point clouds, retaining the three-dimensional information in the point cloud data, and can be used later for detecting the target obstacle.
After the mixed point cloud data are obtained, they can further be mapped into the image coordinate system according to the second coordinate system conversion relation.
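The mixing itself reduces to applying each first coordinate system conversion relation and concatenating the results; a minimal sketch, assuming each lidar supplies an (N, 3) cloud and a pre-calibrated 4x4 homogeneous transform into the preset reference coordinate system (names are illustrative):

```python
import numpy as np

def mix_point_clouds(clouds, transforms):
    mixed = []
    for pts, T in zip(clouds, transforms):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        mixed.append((T @ homo.T).T[:, :3])    # into the reference frame
    return np.vstack(mixed)                     # the mixed point cloud data
```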
As mentioned in the above embodiments, the first point cloud attributed to the rail may be screened according to the distance relationship between the mapping point set of the initial point cloud data in the image coordinate system and the rail fitting equation. However, in the scenario shown in fig. 3, a rectangular object (denoted T) located on the right side of the target track may be a normal trackside object, such as a utility pole beside a curved track; in the image coordinate system, part of the point cloud corresponding to this object is nevertheless mapped onto the line corresponding to the rail fitting equation (the rail of the target track).
In other words, in some application scenarios, after the initial point cloud data is mapped to the image coordinate system, the first point cloud obtained by screening according to the rail fitting equation may include the point cloud of the object that does not actually belong to the rail.
Based on this, in an alternative embodiment, the determining, according to the target mapping point, the first point cloud from the K initial point cloud data includes:
determining a third point cloud corresponding to the target mapping point from the K initial point cloud data;
projecting the third point cloud into a target plane of a vehicle body coordinate system to obtain a first projection point set, wherein the target plane is a plane determined according to the vehicle running direction and the vehicle height direction;
filtering outliers in the first projection point set to obtain a second projection point set;
and determining the first point cloud from the K initial point cloud data according to the second projection point set.
To a certain extent, the third point cloud can be considered as the point cloud in the initial point cloud data that is attributed to the rail when screening is based on the rail fitting equation alone.
In this embodiment, the third point cloud may further be projected into the target plane of the vehicle body coordinate system. As indicated above, the initial point cloud data may be in the corresponding radar coordinate systems; in some application scenarios, these initial point cloud data may also have been converted in advance into a preset reference coordinate system, such as the vehicle body coordinate system.
In general, however, whichever coordinate system the third point cloud is in, it can be converted into the vehicle body coordinate system based on the pre-calibrated coordinate system conversion relations, and can then be projected into the target plane of the vehicle body coordinate system.
As shown in fig. 4, fig. 4 shows an exemplary diagram of a first set of projection points resulting from projecting a third point cloud into a target plane. In this example diagram, the target plane may be denoted as the XOZ plane, where the X axis coincides with the vehicle travel direction and the Z axis coincides with the vehicle height direction.
In fig. 4, the lower denser point cloud portion (denoted as a first point cloud portion R1) may correspond to the actual point cloud of the track; in the first point cloud portion R1, there is a gap in the X-axis direction, corresponding to a portion of the track that is blocked by the rectangular object T in fig. 3. Whereas the upper sparse point cloud portion (denoted as the second point cloud portion R2) may be the point cloud associated with the rectangular object T in fig. 3.
As can be seen from fig. 4, for each point cloud point in the second point cloud portion R2, the point cloud point can be considered as an outlier with respect to the first point cloud portion R1, and therefore, the outlier point can be filtered out to obtain the remaining point cloud, i.e. the second projection point set described above. The second projection point set can be generally regarded as being obtained by projecting the point cloud actually associated with the track onto the target plane, and according to the second projection point set, the first point cloud actually belonging to the track can be accurately determined in the initial point cloud data.
The outliers may be filtered by a least squares method, a RANSAC algorithm, a statistical filtering method, or the like, which is not specifically limited herein.
Therefore, the problem of false detection of the obstacle at the track curve can be effectively solved, and the accuracy of the first point cloud obtained by determination is improved.
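As one concrete possibility (a statistical filter is only one of the options the text names; k and the sigma multiplier below are assumptions), outliers in the XOZ projection can be removed by a neighbour-distance test:

```python
# Statistical outlier removal on the (X, Z) projection: discard points whose
# mean distance to their k nearest neighbours is far above the population mean.
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_mask(points_xz, k=8, sigma=2.0):
    dists, _ = cKDTree(points_xz).query(points_xz, k=k + 1)  # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    return mean_d < mean_d.mean() + sigma * mean_d.std()     # second projection point set
```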
Optionally, in step 103, detecting the target obstacle in the target track based on the K initial point cloud data with the first point cloud as the reference point cloud includes:
determining, in the vehicle body coordinate system, from the first point cloud, the second point cloud point associated with each first point cloud point; wherein a first point cloud point is a point cloud point in the K initial point cloud data other than the first point cloud, a second point cloud point is the point cloud point in the first point cloud closest to the first point cloud point on the X axis, and the X axis is parallel to the vehicle running direction;
determining, as candidate point cloud points, the first point cloud points whose distances to their associated second point cloud points on the Y axis (parallel to the vehicle width direction) meet a preset distance condition;
clustering the candidate point cloud points to obtain a fourth point cloud associated with at least one candidate obstacle;
and determining a target obstacle from the at least one candidate obstacle according to the height difference between each point cloud point in the fourth point cloud and its associated second point cloud point.
In this embodiment, the target obstacle may be detected based on the first point cloud on the basis of the determination of the first point cloud.
In combination with an application scene, the initial point cloud data can contain many point clouds, or more precisely, individual points in the point clouds; that is, the initial point cloud data may include a plurality of point cloud points. Among the initial point cloud data, the set of point cloud points other than the first point cloud can be denoted P_e, and any point cloud point in it can be denoted p_i (corresponding to a first point cloud point).
The number of rails can be two, each rail corresponding to a first point cloud, and the sets of all point cloud points in the first point clouds corresponding to the two rails are denoted P_l and P_r respectively.
The vehicle body coordinate system established in the above embodiment may be used here, that is, the X axis coincides with the vehicle traveling direction, the Z axis with the vehicle height direction, and the Y axis with the vehicle width direction. Each point cloud point in the initial point cloud data may have corresponding coordinates in the vehicle body coordinate system.
For p_i, the point cloud points whose X coordinates are closest to that of p_i can be found from P_l and P_r, denoted p_l and p_r respectively (both may correspond to second point cloud points). The Y coordinates of p_i, p_l, and p_r are then compared: if, on the Y axis, p_i lies between p_l and p_r, or the distance between p_i and p_l is less than a threshold, or the distance between p_i and p_r is less than a threshold, then p_i can be determined as a candidate point cloud point. The specific criterion for judging whether p_i is a candidate point cloud point can be set according to actual needs, and is embodied in the preset distance condition.
The purpose of screening candidate point cloud points based on the preset distance condition is to select the point cloud points located between the two rails (or, according to actual needs, also those within a certain range outside the rails).
And clustering the candidate point clouds to obtain a fourth point cloud associated with at least one candidate obstacle. The specific clustering algorithm may be selected according to actual needs, which is not specifically limited herein. Here, the association relationship between the candidate obstacle and the fourth point cloud may be understood as that each candidate obstacle has the fourth point cloud attributed thereto.
In the above step, for each p_i determined as a candidate point cloud point, a reference height can be determined from the Z-axis coordinates of the corresponding p_l and p_r; for example, the average of the Z coordinates of p_l and p_r (or a weighted average, or the larger or smaller of the two, etc., as desired) can be taken to obtain a reference height z_c, while p_i itself has a Z-axis coordinate z_i. Thus z_c and z_i have a correspondence.
In connection with an example where an obstacle higher than the rail typically affects the movement of the vehicle, the Z-axis coordinate z_i of each point cloud point of a candidate obstacle may be compared with its corresponding reference height z_c; when z_i > z_c, it is considered a valid point cloud point (i.e., the point cloud point may correspond to the target obstacle). Of course, it is also possible to count, among all the point cloud points corresponding to the candidate obstacle (corresponding to the fourth point cloud), those satisfying z_i > z_c, and determine the candidate obstacle as the target obstacle only when that number is greater than a threshold.
That is, the target obstacle may be determined from at least one candidate obstacle according to a height difference between each point cloud point in the fourth point cloud and its associated second point cloud point.
Of course, the above is merely an example of the target obstacle detection process in a double-rail application scenario. For a single-rail application scenario, the second point cloud point associated with p_i may likewise be determined on the X axis, and the target obstacle detected according to the coordinate relationship on the Y axis and the Z axis, which is not described herein again.
In this embodiment, the first point cloud associated with the rail may be used as a reference: candidate point cloud points are determined from the initial point cloud data, and after the candidate point cloud points are clustered into candidate obstacles, the target obstacle is determined from among them by further using the height relationship between each point cloud point belonging to a candidate obstacle and its associated second point cloud point. Using the rail as a reference to determine the target obstacle can effectively improve obstacle detection accuracy.
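The double-rail variant of this step can be sketched as follows; DBSCAN stands in for the unspecified clustering algorithm, and every threshold is an illustrative assumption rather than a value from the patent:

```python
# Detect target obstacles using the rail point clouds P_l, P_r as reference.
# P_e: (N, 3) non-rail points; all arrays are in the body frame (X forward,
# Y across, Z up). h1/h2 mirror the first/second difference thresholds
# described in the example below; count_thresh mirrors the number threshold.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_obstacles(P_e, P_l, P_r, y_margin=0.2,
                     h1=0.05, h2=0.15, count_thresh=5):
    # nearest rail point along X for every p_i (O(N*M), fine for a sketch)
    p_l = P_l[np.abs(P_e[:, 0][:, None] - P_l[:, 0][None, :]).argmin(axis=1)]
    p_r = P_r[np.abs(P_e[:, 0][:, None] - P_r[:, 0][None, :]).argmin(axis=1)]
    y_lo = np.minimum(p_l[:, 1], p_r[:, 1]) - y_margin
    y_hi = np.maximum(p_l[:, 1], p_r[:, 1]) + y_margin
    cand = (P_e[:, 1] > y_lo) & (P_e[:, 1] < y_hi)      # preset distance condition
    pts = P_e[cand]
    if len(pts) == 0:
        return []
    z_c = (p_l[cand, 2] + p_r[cand, 2]) / 2.0           # reference height per point
    labels = DBSCAN(eps=0.3, min_samples=3).fit_predict(pts)
    obstacles = []
    for lbl in set(labels) - {-1}:                      # -1 marks DBSCAN noise
        sel = labels == lbl
        diff = pts[sel, 2] - z_c[sel]                   # z_i - z_c per point
        if (diff > h1).sum() > count_thresh and diff.max() > h2:
            obstacles.append(pts[sel])                  # a target obstacle's cloud
    return obstacles
```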
In one example, in the fourth point cloud associated with the target obstacle, the number of target point cloud points is greater than a number threshold, and the maximum height difference between the target point cloud points and their associated second point cloud points is greater than a second difference threshold, where a target point cloud point is a point cloud point whose height difference from its associated second point cloud point is greater than a first difference threshold.
This limits the conditions to be met, among the initial point cloud data, by a fourth point cloud determined to belong to a target obstacle. On the one hand, from the angle of the number of point cloud points meeting the first difference threshold (the target point cloud points), it effectively avoids false detection of a target obstacle caused by noise and the like; on the other hand, from the angle of the maximum height difference, it defines the height a target obstacle should have, avoiding that a low obstacle which does not affect normal vehicle running is determined as a target obstacle. This example can therefore effectively improve the rationality of target obstacle determination.
Optionally, after determining the target obstacle from the at least one candidate obstacle according to the height difference between each point cloud point in the fourth point cloud and the second point cloud point associated with the point cloud point, the method further includes:
determining, on the X axis of the vehicle body coordinate system, the nearest distance between the vehicle and the point cloud points in the fourth point cloud associated with the target obstacle as the target distance between the vehicle and the target obstacle;
and outputting an alarm signal when the target distance is smaller than a third distance threshold value.
In this embodiment, on the X axis, the nearest distance between the vehicle and the point cloud points in the fourth point cloud associated with the target obstacle is determined as the target distance, and when the target distance is smaller than the third distance threshold an alarm signal is output, so that a timely alarm can be given for the detected target obstacle, improving driving safety.
In one example, where the origin of the vehicle body coordinate system is at the forefront of the vehicle and the positive half axis of the X axis points to the front of the vehicle, the minimum X coordinate among the point cloud points in the fourth point cloud may be taken as the distance between the vehicle and the target obstacle.
Further, in some scenarios, there may be a plurality of detected target obstacles, and the distance between the vehicle and each target obstacle may be determined according to the above manner.
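A minimal sketch of this distance-and-alarm step follows, reusing the array conventions of the earlier sketch; the threshold value and the alarm action are illustrative assumptions.

def maybe_alarm(obstacle_points, third_dist_thresh=100.0):  # hypothetical, metres
    """obstacle_points: (N, 3) array for one target obstacle in the body frame."""
    target_distance = obstacle_points[:, 0].min()  # minimum X: nearest to the vehicle front
    if target_distance < third_dist_thresh:
        print(f"ALARM: target obstacle {target_distance:.1f} m ahead")
    return target_distance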
The following describes an obstacle detection method provided in the embodiment of the present application with reference to a specific application example. Referring to fig. 5, in this application example, the obstacle detection method includes:
Step 501, calibrating internal and external parameters of each sensor;
In this application example, L cameras (L ≥ 1) and K lidars (K ≥ 1) may be used.
In some possible embodiments, the range of obstacle detection can be effectively extended by using cameras with different focal lengths and lidars with different fields of view (FOV).
For each camera, its internal parameters and distortion parameters can be calibrated. Optionally, one lidar may be selected as the reference O_b; the conversion relation T_cam→b from each camera coordinate system to the reference lidar coordinate system is obtained through a lidar-camera calibration algorithm, and the conversion relation T_lidar→b between each of the other lidars and the reference lidar coordinate system is obtained through a lidar-lidar calibration algorithm. Furthermore, the conversion relation T_b→c between the reference lidar coordinate system O_b and the vehicle body coordinate system O_c is obtained by measurement.
Wherein the vehicle body coordinate system O c The position and definition of (c) can be determined according to actual needs.
Since, after an obstacle is detected, it is mainly the position of the obstacle in the train's advancing direction that is of concern, the calibration process can be simplified: for T_b→c, it suffices to measure only the distance between O_b and O_c in the direction of train travel.
Of course, in order to detect obstacles more accurately, the conversion relation T_b→c between the reference lidar coordinate system O_b and the vehicle body coordinate system O_c may also be determined more rigorously and precisely, for example by using a laser level or the like to determine the pitch, roll and yaw angles between O_b and O_c.
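As an illustration, the following Python sketch shows how such conversion relations could be stored and chained as 4x4 homogeneous transforms; the names T_lidar_to_b and T_b_to_c mirror the relations above, and the 1.5 m offset is a purely hypothetical measurement.

import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Simplified calibration: only the O_b -> O_c offset along the direction of
# train travel (the X axis) is measured, as described above.
T_b_to_c = make_transform(np.eye(3), np.array([1.5, 0.0, 0.0]))  # hypothetical 1.5 m

def lidar_points_to_body(points_lidar, T_lidar_to_b):
    """Map (N, 3) lidar points into the vehicle body coordinate system O_c."""
    homo = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    return (T_b_to_c @ T_lidar_to_b @ homo.T).T[:, :3]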
Step 502, two-dimensional rail detection;
In this step, one or more cameras may be used to capture the scene in the running direction of the train, and for each image captured by the cameras, track and rail detection in two-dimensional space can be realized using a computer vision algorithm.
In general, in this step, pixel segmentation of the rails on which the train runs (corresponding to the rails in the target track) may be implemented by a deep learning algorithm to obtain a binary image B; filtering is then performed on the foreground data of B (i.e., the pixel points belonging to the rails), and fitting yields N curve equations describing the rails.
Specifically, the implementation process of this step may mainly include the following steps:
1) Take the foreground data in the pixel segmentation result that is close to the bottom of the image and within a certain height (for example, the foreground data within 1/3 of the image height from the bottom), and perform clustering or connected-region search on it to obtain N regions (corresponding to the N rails);
2) Uniformly divide the pixels in each region into M sections along the image height direction, compute the center point of each section, and fit a straight line to the center points; the N regions thus correspond to N straight lines;
3) Based on the leftmost and rightmost of the fitted straight lines in the image, take the points a, b, c, d shown in fig. 2 and compute the perspective transformation matrix H that maps to the bird's eye view; transform the whole binary image B according to H to obtain the bird's-eye-view rail segmentation result B';
4) Under the bird's eye view, the N straight lines fitted in step 2) are parallel to each other. Extend these parallel straight lines on the bird's-eye-view rail segmentation result B', and filter according to the distance between the segmented foreground pixels and the straight lines: if the nearest distance between a point and all the straight lines is larger than a preset threshold, the point is considered noise and is deleted from the foreground data.
In general, the farther from the camera, the more unstable the rail segmentation result; the above processing under the bird's eye view can effectively filter out such noise.
5) Fit the filtered foreground pixel points belonging to each rail separately to obtain the corresponding two-dimensional rail curve equations (corresponding to the rail fitting equations), and mark the leftmost and rightmost rails for use in the next step.
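A condensed Python sketch of sub-steps 3) to 5) is given below, assuming the four points a, b, c, d from fig. 2 and the per-rail foreground pixels are already available; the noise threshold, curve degree and per-rail line fit are simplifying assumptions rather than the exact procedure above.

import cv2
import numpy as np

def fit_rail_curves(rail_pixels_per_region, src_pts, dst_pts, noise_thresh=5.0):
    """rail_pixels_per_region: list of (M, 2) arrays of (x, y) pixels, one per rail."""
    H = cv2.getPerspectiveTransform(src_pts.astype(np.float32),
                                    dst_pts.astype(np.float32))
    curves = []
    for pixels in rail_pixels_per_region:
        pts = pixels.reshape(-1, 1, 2).astype(np.float32)
        bev = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
        # In the bird's eye view the rails are parallel; fit x = a*y + b and
        # drop foreground pixels farther than noise_thresh from the line.
        a, b = np.polyfit(bev[:, 1], bev[:, 0], 1)
        keep = np.abs(bev[:, 0] - (a * bev[:, 1] + b)) < noise_thresh
        # Back in the original image, fit the two-dimensional rail curve
        # (here a quadratic x = f(y)) to the filtered pixels.
        curves.append(np.polyfit(pixels[keep, 1], pixels[keep, 0], 2))
    return curves  # one rail fitting equation per rail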
Step 503, screening three-dimensional rail point clouds;
Limited by the perception characteristics of the lidar, the rail point clouds obtained on a fast-moving train are sparse and difficult to analyze directly and effectively.
The purpose of this step is to find the 3D lidar point cloud on the rail by a two-dimensional rail equation. First, the point cloud obtained by each laser radar is obtained according to the conversion relation determined in step 501
Figure BDA0002987956490000201
Convert it to O b In the coordinate system, the point cloud P is obtained in an accumulated manner 3d (corresponding mixing point cloud data); then according to->
Figure BDA0002987956490000202
Point cloud P 3d Mapping into camera image coordinate system to obtain P 2d (corresponding to the set of mapped points).
At this point, the point cloud lying on the rails is projected onto the two-dimensional image from step 502. The distance between each point of P_2d and the curve equations of the leftmost and rightmost two-dimensional rails is computed in turn; if the distance is smaller than a preset threshold, the corresponding three-dimensional point is considered to possibly belong to the rail corresponding to that curve, and these points are recorded as P'_3d (corresponding to the third point cloud).
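A sketch of this screening step is shown below, assuming a 3x4 projection matrix proj maps O_b coordinates into the image and the leftmost/rightmost rail curves are polynomials x = f(y) as in the earlier sketch; the distance threshold is illustrative.

import numpy as np

def screen_rail_points(P_3d, proj, rail_curves, dist_thresh=8.0):
    """Return P'_3d: the 3D points whose image projection lies near a rail curve."""
    homo = np.hstack([P_3d, np.ones((len(P_3d), 1))])
    uvw = (proj @ homo.T).T
    P_2d = uvw[:, :2] / uvw[:, 2:3]            # pixel coordinates of each 3D point
    keep = np.zeros(len(P_3d), dtype=bool)
    for coeffs in rail_curves:                 # leftmost and rightmost curves
        expected_x = np.polyval(coeffs, P_2d[:, 1])
        keep |= np.abs(P_2d[:, 0] - expected_x) < dist_thresh
    return P_3d[keep]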
The laser point cloud of the rectangular target T shown in fig. 3 will also have a portion projected onto the two-dimensional rail, i.e., its corresponding three-dimensional points are also contained in P'_3d. Examining the statistics of P'_3d along the X direction of the vehicle body coordinate system, the Z coordinate values belonging to the rails vary continuously, while the Z values of the point cloud of the occluding part of the rectangular target in fig. 3 produce a jump; filtering of such outliers can therefore be realized by the least squares method, and the filtered point cloud is recorded here as P_rail (corresponding to the first point cloud).
Fig. 4 shows the statistics of the rail point cloud P_rail, with the point cloud X values on the abscissa and the height Z values on the ordinate. In the figure, the points in R2 are the point cloud of an obstacle beside the track, and the points in R1 are the point cloud of the rails. The filtering may be performed by first segmenting according to the X coordinate of the point cloud; the rail point cloud belonging to each segment is approximately straight in the trend of fig. 4. Then, taking a linear equation as the model, a least squares fit is performed on each segment of data, and finally the distance between each point and the fitted straight line is computed to filter out the points in R2.
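The following sketch illustrates this segmented least-squares filtering; the segment length and residual threshold are illustrative assumptions.

import numpy as np

def filter_rail_outliers(points, seg_len=5.0, resid_thresh=0.05):
    """points: (N, 3) array in the body frame; returns the filtered rail points."""
    kept = []
    x = points[:, 0]
    for x0 in np.arange(x.min(), x.max(), seg_len):
        seg = points[(x >= x0) & (x < x0 + seg_len)]
        if len(seg) < 2:
            continue
        # Least squares fit of z = a*x + b within this X segment, where the
        # rail height is approximately linear.
        a, b = np.polyfit(seg[:, 0], seg[:, 2], 1)
        resid = np.abs(seg[:, 2] - (a * seg[:, 0] + b))
        kept.append(seg[resid < resid_thresh])    # keep points close to the line
    return np.vstack(kept) if kept else points[:0]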
Step 504, obstacle detection and filtering;
In this step, the point cloud P_3d obtained in step 503 may be filtered against the rail-surface point cloud P_rail to obtain the candidate point cloud P_cand (the set of candidate point cloud points). The filtering method is as follows: for each point p_i in P_3d, search by X coordinate for the nearest points p_l and p_r in the left and right rail-surface point clouds; then compare the Y coordinates of p_i with those of p_l and p_r. If p_i lies between p_l and p_r, p_i is regarded as belonging to P_cand, and the rail-surface height given by p_l and p_r is recorded as the reference rail-surface height of p_i.
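A sketch of this candidate screening is given below; taking the mean of the two rail heights as the reference height is an assumption made for the example, and the names are placeholders for the symbols above.

import numpy as np

def screen_candidates(P_3d, left_rail, right_rail):
    """Return candidate points and one reference rail-surface height per point."""
    cands, ref_heights = [], []
    for p in P_3d:
        p_l = left_rail[np.argmin(np.abs(left_rail[:, 0] - p[0]))]    # nearest in X
        p_r = right_rail[np.argmin(np.abs(right_rail[:, 0] - p[0]))]  # nearest in X
        if min(p_l[1], p_r[1]) < p[1] < max(p_l[1], p_r[1]):          # Y between rails
            cands.append(p)
            ref_heights.append(0.5 * (p_l[2] + p_r[2]))   # assumed: mean rail height
    return np.array(cands), np.array(ref_heights)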
Candidate obstacles Q_i (corresponding to the candidate obstacles) are obtained from P_cand through a clustering algorithm. For each point of Q_i (corresponding to each point in the fourth point cloud), the difference between its Z coordinate and the corresponding reference rail-surface height is computed in turn, and the number of points whose height difference exceeds a preset threshold, as well as the maximum height difference, are counted. If the point count and the maximum height difference are both greater than their respective preset thresholds, Q_i is considered a real obstacle (corresponding to the target obstacle). Analyzing the X-axis distribution of the Q_i point cloud, the minimum X coordinate value (i.e., the point nearest to the head of the train) is taken as the distance D_i of the target obstacle.
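The sketch below ties the clustering and confirmation together, reusing is_target_obstacle and screen_candidates from the earlier sketches; DBSCAN is one possible clustering choice, not one specified by this application example, and its parameters are illustrative.

import numpy as np
from sklearn.cluster import DBSCAN

def detect_obstacles(cands, ref_heights, eps=0.5, min_samples=3):
    """Cluster candidate points and return (points, D_i) per confirmed obstacle."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(cands)
    results = []
    for lbl in set(labels) - {-1}:             # label -1 marks DBSCAN noise
        mask = labels == lbl
        if is_target_obstacle(cands[mask], ref_heights[mask]):
            d_i = cands[mask][:, 0].min()      # minimum X: nearest to the train head
            results.append((cands[mask], d_i))
    return results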
Based on this application example, the obstacle detection method provided in the embodiments of the present application can obtain real-time on-track obstacle information through multi-sensor fusion, with high reliability; the two-dimensional rail curve fitting based on deep-learning rail segmentation can effectively filter noise points, and the rail fitting effect is stable; the outlier filtering can effectively solve the problem of false detection of obstacles on curves. In addition, from the perspective of hardware configuration, the sensors used can be relatively simple and are convenient to install and maintain.
As shown in fig. 6, an embodiment of the present application further provides a vehicle, including:
the acquisition module 601 is configured to acquire K initial point cloud data and L initial images collected for a target track, where K and L are positive integers;
a first determining module 602, configured to determine, based on each initial image, a first point cloud from K initial point cloud data, where the first point cloud is a point cloud belonging to a track in the target track;
the detection module 603 is configured to detect, based on the K initial point cloud data, a target obstacle in the target track, where the target obstacle is associated with a second point cloud in the K initial point cloud data, and a preset positional relationship is satisfied between the second point cloud and the first point cloud.
Optionally, the first determining module 602 may include:
the fitting sub-module is used for fitting a track fitting equation of a track in the first initial image in an image coordinate system, where the first initial image is any initial image among the L initial images;
and the screening sub-module is used for mapping the K initial point cloud data into an image coordinate system, and screening the K initial point cloud data according to a rail fitting equation to obtain first point cloud.
Optionally, the fitting sub-module may include:
the segmentation acquisition unit is used for carrying out pixel segmentation on the first initial image based on the deep learning model obtained by pre-training to obtain initial rail pixel points belonging to the rail in the first initial image;
and the fitting unit is used for fitting the initial rail pixel points in the image coordinate system to obtain a rail fitting equation.
Alternatively, in the case where the number of tracks in the target track is N, N being an integer greater than 1, the fitting unit may include:
the screening subunit is used for screening candidate rail pixel points positioned in a preset image height interval from the initial rail pixel points;
The dividing subunit is used for dividing the candidate rail pixel points belonging to each rail into M pixel regions respectively, wherein M is an integer greater than 1;
the first fitting subunit is used for obtaining pixel center points of each pixel interval, and respectively fitting M pixel center points corresponding to each track to obtain N first fitting straight lines corresponding to the N tracks;
the first determining subunit is configured to determine a perspective transformation matrix according to the N first fitting straight lines, map the N first fitting straight lines and the candidate rail pixel points to the aerial view according to the perspective transformation matrix, and obtain N second fitting straight lines and candidate mapped pixel points respectively;
the first filtering subunit is used for filtering out first mapping pixel points in the candidate mapping pixel points to obtain second mapping pixel points, wherein the first mapping pixel points are pixel points, of which the distances between any second fitting straight line and the first mapping pixel points are larger than a first distance threshold value;
the second fitting subunit is used for respectively fitting the target rail pixel points belonging to each rail to obtain N rail fitting equations corresponding to the N rails; the target track pixel point is a candidate track pixel point corresponding to the second mapping pixel point.
Optionally, under the condition that K is an integer greater than 1, K initial point cloud data are acquired by K lidars;
accordingly, the screening submodule may include:
an acquisition unit configured to acquire a first coordinate system conversion relationship between a radar coordinate system of each laser radar and a preset reference coordinate system, and a second coordinate system conversion relationship between the preset reference coordinate system and an image coordinate system of any initial image;
the first mapping unit is used for mapping the initial point cloud data acquired by each laser radar into a preset reference coordinate system according to the corresponding first coordinate system conversion relation to obtain mixed point cloud data;
and the second mapping unit is used for mapping the mixing point cloud data into an image coordinate system according to a second coordinate system conversion relation.
Optionally, the screening submodule may include:
the third mapping unit is used for mapping the K initial point cloud data into an image coordinate system to obtain a mapping point set;
a first determining unit, configured to determine, from the mapping point set, a target mapping point having a distance from the rail fitting equation smaller than a second distance threshold;
and the second determining unit is used for determining the first point cloud from the K initial point cloud data according to the target mapping points.
Alternatively, the second determining unit may include:
a second determining subunit, configured to determine a third point cloud corresponding to the target mapping point from the K initial point cloud data;
the acquisition subunit is used for projecting the third point cloud into a target plane of a vehicle body coordinate system to obtain a first projection point set, wherein the target plane is a plane determined according to the vehicle running direction and the vehicle height direction;
the second filtering subunit is used for filtering the outliers in the first projection point set to obtain a second projection point set;
and the third determining subunit determines the first point cloud from the K initial point cloud data according to the second projection point set.
Optionally, the detection module 603 may include:
a first determining sub-module, used for determining, in a vehicle body coordinate system, the second point cloud point associated with each first point cloud point from the first point cloud; the first point cloud points are the point cloud points in the K initial point cloud data other than the first point cloud, the second point cloud point is the point cloud point in the first point cloud closest to the first point cloud point on the X axis, and the X axis is parallel to the running direction of the vehicle;
a second determination sub-module for determining, as candidate point cloud points, first point cloud points whose distances on the Y-axis with the associated second point cloud points satisfy a preset distance condition, the Y-axis being parallel to the vehicle width direction;
The clustering sub-module is used for clustering the candidate point cloud points to obtain fourth point cloud associated with at least one candidate obstacle;
and a third determining sub-module, configured to determine a target obstacle from at least one candidate obstacle according to a height difference between each point cloud point in the fourth point cloud and its associated second point cloud point.
Optionally, in the fourth point cloud associated with the target obstacle, the number of target point cloud points is greater than a number threshold, and the maximum height difference between the target point cloud points and their associated second point cloud points is greater than a second difference threshold, where a target point cloud point is a point cloud point whose height difference from its associated second point cloud point is greater than a first difference threshold.
Optionally, the vehicle may further include:
the second determining module is used for determining, on the X axis of the vehicle body coordinate system, the nearest distance between the vehicle and each point cloud point in the fourth point cloud associated with the target obstacle as the target distance between the vehicle and the target obstacle;
and the output module is used for outputting an alarm signal when the target distance is smaller than a third distance threshold value.
Optionally, in the case where L is an integer greater than 1, the L initial images are acquired by L cameras;
Among the L cameras, there is a difference in focal length between at least two cameras.
The vehicle is a vehicle corresponding to the obstacle detection method, and all the implementation manners in the method embodiment are applicable to the vehicle embodiment, so that the same technical effects can be achieved.
Fig. 7 shows a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
The electronic device may include a processor 701 and a memory 702 in which computer program instructions are stored.
In particular, the processor 701 described above may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured to implement one or more integrated circuits of embodiments of the present application.
Memory 702 may include mass storage for data or instructions. By way of example, and not limitation, memory 702 may comprise a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a universal serial bus (USB) drive, or a combination of two or more of these. Memory 702 may include removable or non-removable (or fixed) media, where appropriate. Memory 702 may be internal or external to the electronic device, where appropriate. In a particular embodiment, the memory 702 is a non-volatile solid state memory.
The memory may include Read Only Memory (ROM), random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to the methods according to the present disclosure.
The processor 701 implements any one of the obstacle detection methods of the above embodiments by reading and executing computer program instructions stored in the memory 702.
In one example, the electronic device may also include a communication interface 703 and a bus 704. As shown in fig. 7, the processor 701, the memory 702, and the communication interface 703 are connected by a bus 704 and perform communication with each other.
The communication interface 703 is mainly used for implementing communication between each module, device, unit and/or apparatus in the embodiments of the present application.
Bus 704 includes hardware, software, or both that couple the components of the electronic device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, another suitable bus, or a combination of two or more of these. Bus 704 may include one or more buses, where appropriate. Although embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
In addition, in combination with the obstacle detection method in the above embodiment, the embodiment of the application may be implemented by providing a computer storage medium. The computer storage medium has stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the obstacle detection methods of the above embodiments.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be different from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, which are intended to be included in the scope of the present application.

Claims (11)

1. An obstacle detection method applied to a vehicle, the method comprising:
acquiring K initial point cloud data and L initial images collected for a target track, wherein K and L are positive integers;
determining a first point cloud from the K initial point cloud data based on each initial image, wherein the first point cloud is a point cloud belonging to a track in the target track;
detecting a target obstacle in the target track based on the K initial point cloud data by taking the first point cloud as a reference point cloud, wherein the target obstacle is associated with a second point cloud in the K initial point cloud data, and the second point cloud and the first point cloud meet a preset position relation;
The determining a first point cloud from the K initial point cloud data based on each of the initial images, includes:
fitting a track fitting equation of a track in a first initial image in an image coordinate system, wherein the first initial image is any initial image among the L initial images;
mapping the K initial point cloud data into the image coordinate system, and screening the K initial point cloud data according to the track fitting equation to obtain the first point cloud;
the fitting the track fitting equation of the track in the first initial image in the image coordinate system comprises the following steps:
performing pixel segmentation on the first initial image based on a deep learning model obtained through pre-training to obtain initial rail pixel points belonging to rails in the first initial image;
fitting the initial rail pixel points in the image coordinate system to obtain the rail fitting equation;
in the case that the number of tracks in the target track is N, where N is an integer greater than 1, fitting the initial track pixel point in the image coordinate system, to obtain the track fitting equation includes:
in the image coordinate system, candidate rail pixel points positioned in a preset image height interval are screened from the initial rail pixel points;
Dividing candidate rail pixel points belonging to each rail into M pixel regions along a preset direction, wherein M is an integer greater than 1;
acquiring pixel center points of each pixel interval, and respectively fitting M pixel center points corresponding to each rail to obtain N first fitting straight lines corresponding to N rails;
according to the N first fitting straight lines, a perspective transformation matrix is determined, the N first fitting straight lines and the candidate rail pixel points are mapped into a bird's eye view according to the perspective transformation matrix, and N second fitting straight lines and candidate mapping pixel points are respectively obtained;
filtering out first mapping pixel points in the candidate mapping pixel points to obtain second mapping pixel points, wherein the first mapping pixel points are pixel points in the candidate mapping pixel points, and the distance between each pixel point and any second fitting straight line is larger than a first distance threshold value;
respectively fitting target rail pixel points belonging to each rail to obtain N rail fitting equations corresponding to N rails; the target track pixel point is a candidate track pixel point corresponding to the second mapping pixel point.
2. The method according to claim 1, wherein, in the case where K is an integer greater than 1, the K initial point cloud data are acquired by K lidars;
The mapping the K initial point cloud data into the image coordinate system includes:
acquiring a first coordinate system conversion relation between a radar coordinate system of each laser radar and a preset reference coordinate system and a second coordinate system conversion relation between the preset reference coordinate system and an image coordinate system of any initial image;
mapping the initial point cloud data acquired by each laser radar into the preset reference coordinate system according to the corresponding first coordinate system conversion relation to obtain mixed point cloud data;
and mapping the mixing point cloud data into the image coordinate system according to the second coordinate system conversion relation.
3. The method of claim 1, wherein mapping the K initial point cloud data into the image coordinate system, filtering the K initial point cloud data according to the rail fit equation, and obtaining the first point cloud comprises:
mapping the K initial point cloud data to the image coordinate system to obtain a mapping point set;
determining, from the mapping point set, target mapping points whose distance from the rail fitting equation is smaller than a second distance threshold;
And determining the first point cloud from the K initial point cloud data according to the target mapping points.
4. The method of claim 3, wherein determining the first point cloud from the K initial point cloud data according to the target mapping points comprises:
determining a third point cloud corresponding to the target mapping point from the K initial point cloud data;
projecting the third point cloud into a target plane of a vehicle body coordinate system to obtain a first projection point set, wherein the target plane is a plane determined according to a vehicle running direction and a vehicle height direction;
filtering outliers in the first projection point set to obtain a second projection point set;
and determining the first point cloud from the K initial point cloud data according to the second projection point set.
5. The method of claim 1, wherein detecting a target obstacle in the target track based on the K initial point cloud data with the first point cloud as a reference point cloud comprises:
determining, in a vehicle body coordinate system, the second point cloud point associated with each first point cloud point from the first point cloud; the first point cloud points are the point cloud points in the K initial point cloud data other than the first point cloud, the second point cloud point is the point cloud point in the first point cloud closest to the first point cloud point on the X axis, and the X axis is parallel to the running direction of the vehicle;
Determining a first point cloud point, the distance between the first point cloud point and an associated second point cloud point of which meets a preset distance condition, on a Y axis as a candidate point cloud point, wherein the Y axis is parallel to the vehicle width direction;
clustering the candidate point cloud points to obtain fourth point cloud associated with at least one candidate obstacle;
and determining a target obstacle from the at least one candidate obstacle according to the height difference between each point cloud point in the fourth point cloud and the second point cloud point associated with the point cloud point.
6. The method of claim 5, wherein, in the fourth point cloud associated with the target obstacle, the number of target point cloud points is greater than a number threshold, and the maximum height difference between the target point cloud points and their associated second point cloud points is greater than a second difference threshold, wherein a target point cloud point is a point cloud point whose height difference from its associated second point cloud point is greater than a first difference threshold.
7. The method of claim 5, wherein after determining the target obstacle from the at least one candidate obstacle according to the height difference between each point cloud point in the fourth point cloud and its associated second point cloud point, the method further comprises:
On an X axis of a vehicle body coordinate system, determining the nearest distance between each point cloud point in a fourth point cloud associated with the vehicle and the target obstacle as a target distance between the vehicle and the target obstacle;
and outputting an alarm signal when the target distance is smaller than a third distance threshold value.
8. The method of claim 1, wherein, in the case where L is an integer greater than 1, the L initial images are acquired by L cameras;
among the L cameras, there is a difference in focal length between at least two cameras.
9. A vehicle, characterized by comprising:
the acquisition module is used for acquiring K initial point cloud data and L initial images collected for a target track, wherein K and L are positive integers;
the first determining module is used for determining a first point cloud from the K initial point cloud data based on each initial image respectively, wherein the first point cloud is a point cloud belonging to a track in the target track;
the detection module is used for detecting a target obstacle in the target track based on the K initial point cloud data by taking the first point cloud as a reference point cloud, wherein the target obstacle is associated with a second point cloud in the K initial point cloud data, and a preset position relation is met between the second point cloud and the first point cloud;
The first determining module includes:
the fitting sub-module is used for fitting a track fitting equation of a track in a first initial image in an image coordinate system, wherein the first initial image is any initial image among the L initial images;
the screening submodule is used for mapping the K initial point cloud data into the image coordinate system, screening the K initial point cloud data according to the track fitting equation, and obtaining the first point cloud;
the fitting submodule comprises:
the segmentation acquisition unit is used for carrying out pixel segmentation on the first initial image based on a deep learning model obtained through pre-training to obtain initial rail pixel points belonging to rails in the first initial image;
the fitting unit is used for fitting the initial rail pixel points in the image coordinate system to obtain the rail fitting equation;
in the case where the number of tracks in the target track is N, N being an integer greater than 1, the fitting unit includes:
a screening subunit, configured to screen, in the image coordinate system, candidate rail pixel points located in a preset image height interval from the initial rail pixel points;
The dividing subunit is used for dividing the candidate rail pixel points belonging to each rail into M pixel regions along a preset direction, wherein M is an integer greater than 1;
the first fitting subunit is used for obtaining pixel center points of each pixel interval, and respectively fitting M pixel center points corresponding to each rail to obtain N first fitting straight lines corresponding to N rails;
the first determining subunit is configured to determine a perspective transformation matrix according to the N first fitting straight lines, map the N first fitting straight lines and the candidate rail pixel points to the aerial view according to the perspective transformation matrix, and obtain N second fitting straight lines and candidate mapped pixel points respectively;
the first filtering subunit is configured to filter a first mapping pixel point in the candidate mapping pixel points to obtain a second mapping pixel point, where the first mapping pixel point is a pixel point in the candidate mapping pixel points, and the distance between the first mapping pixel point and any second fitting straight line is greater than a first distance threshold;
the second fitting subunit is used for respectively fitting the target rail pixel points belonging to each rail to obtain N rail fitting equations corresponding to the N rails; the target track pixel point is a candidate track pixel point corresponding to the second mapping pixel point.
10. An electronic device, the device comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the obstacle detection method as claimed in any one of claims 1-8.
11. A computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the obstacle detection method according to any one of claims 1-8.
CN202110306554.4A 2021-03-23 2021-03-23 Obstacle detection method, vehicle, apparatus, and computer storage medium Active CN113536883B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110306554.4A CN113536883B (en) 2021-03-23 2021-03-23 Obstacle detection method, vehicle, apparatus, and computer storage medium
PCT/CN2022/081631 WO2022199472A1 (en) 2021-03-23 2022-03-18 Obstacle detection method, and vehicle, device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110306554.4A CN113536883B (en) 2021-03-23 2021-03-23 Obstacle detection method, vehicle, apparatus, and computer storage medium

Publications (2)

Publication Number Publication Date
CN113536883A CN113536883A (en) 2021-10-22
CN113536883B true CN113536883B (en) 2023-05-02

Family

ID=78094376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110306554.4A Active CN113536883B (en) 2021-03-23 2021-03-23 Obstacle detection method, vehicle, apparatus, and computer storage medium

Country Status (2)

Country Link
CN (1) CN113536883B (en)
WO (1) WO2022199472A1 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536883B (en) * 2021-03-23 2023-05-02 长沙智能驾驶研究院有限公司 Obstacle detection method, vehicle, apparatus, and computer storage medium
CN114120269A (en) * 2021-11-02 2022-03-01 北京埃福瑞科技有限公司 Rail clearance detection method and system
CN114119729A (en) * 2021-11-17 2022-03-01 北京埃福瑞科技有限公司 Obstacle identification method and device
CN116047499B (en) * 2022-01-14 2024-03-26 北京中创恒益科技有限公司 High-precision real-time protection system and method for power transmission line of target construction vehicle
CN114723830B (en) * 2022-03-21 2023-04-18 深圳市正浩创新科技股份有限公司 Obstacle recognition method, device and storage medium
CN114721390B (en) * 2022-04-13 2024-10-29 中国矿业大学 Method for detecting ascending and descending slopes and automatically avoiding obstacles of unmanned monorail crane transport vehicle
CN115797401B (en) * 2022-11-17 2023-06-06 昆易电子科技(上海)有限公司 Verification method and device for alignment parameters, storage medium and electronic equipment
CN115508844B (en) * 2022-11-23 2023-03-21 江苏新宁供应链管理有限公司 Intelligent detection method for deviation of logistics conveyor based on laser radar
CN115824237B (en) * 2022-11-29 2023-09-26 重庆赛迪奇智人工智能科技有限公司 Rail pavement recognition method and device
CN115880252B (en) * 2022-12-13 2023-10-17 北京斯年智驾科技有限公司 Container sling detection method, device, computer equipment and storage medium
CN115965682B (en) * 2022-12-16 2023-09-01 镁佳(北京)科技有限公司 Vehicle passable area determining method and device and computer equipment
CN118270438A (en) * 2022-12-29 2024-07-02 北京极智嘉科技股份有限公司 Equipment control method and device based on environment information
CN115937826B (en) * 2023-02-03 2023-05-09 小米汽车科技有限公司 Target detection method and device
CN115880536B (en) * 2023-02-15 2023-09-01 北京百度网讯科技有限公司 Data processing method, training method, target object detection method and device
CN116246267B (en) * 2023-03-06 2024-08-30 武汉极动智能科技有限公司 Tray identification method and device, computer equipment and storage medium
CN116385528B (en) * 2023-03-28 2024-04-30 小米汽车科技有限公司 Method and device for generating annotation information, electronic equipment, vehicle and storage medium
CN116148878B (en) * 2023-04-18 2023-07-07 浙江华是科技股份有限公司 Ship starboard height identification method and system
WO2024216523A1 (en) * 2023-04-19 2024-10-24 深圳技术大学 Method and system for sensing foreign matter within urban rail train travellng clearance, and apparatus and medium
CN116757918A (en) * 2023-06-01 2023-09-15 北京鉴智科技有限公司 Binocular stereoscopic vision-based vehicle scratch identification method and device
CN116533998B (en) * 2023-07-04 2023-09-29 深圳海星智驾科技有限公司 Automatic driving method, device, equipment, storage medium and vehicle of vehicle
CN116612059B (en) * 2023-07-17 2023-10-13 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and storage medium
CN116630390B (en) * 2023-07-21 2023-10-17 山东大学 Obstacle detection method, system, equipment and medium based on depth map template
CN117048596B (en) * 2023-08-04 2024-05-10 广州汽车集团股份有限公司 Method, device, vehicle and storage medium for avoiding obstacle
CN116703922B (en) * 2023-08-08 2023-10-13 青岛华宝伟数控科技有限公司 Intelligent positioning method and system for sawn timber defect position
CN116793245B (en) * 2023-08-24 2023-12-01 济南瑞源智能城市开发有限公司 Tunnel detection method, equipment and medium based on track robot
CN116772887B (en) * 2023-08-25 2023-11-14 北京斯年智驾科技有限公司 Vehicle course initialization method, system, device and readable storage medium
CN116824518B (en) * 2023-08-31 2023-11-10 四川嘉乐地质勘察有限公司 Pile foundation static load detection method, device and processor based on image recognition
CN118279250B (en) * 2024-03-21 2024-09-06 深圳前海瑞集科技有限公司 Ship workpiece point cloud processing method and device, equipment and computer medium
CN117934324B (en) * 2024-03-25 2024-06-11 广东电网有限责任公司中山供电局 Denoising method and device for laser point cloud data and radar scanning device
CN118196122B (en) * 2024-05-15 2024-08-09 深圳市木牛机器人科技有限公司 Corner recognition method and device for flat transport vehicle and computer equipment
CN118226422B (en) * 2024-05-24 2024-08-27 智道网联科技(北京)有限公司 Online calibration method and device for road side sensor, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016118672A2 (en) * 2015-01-20 2016-07-28 Solfice Research, Inc. Real time machine vision and point-cloud analysis for remote sensing and vehicle control
CN110967024A (en) * 2019-12-23 2020-04-07 苏州智加科技有限公司 Method, device, equipment and storage medium for detecting travelable area
CN111414848A (en) * 2020-03-19 2020-07-14 深动科技(北京)有限公司 Full-class 3D obstacle detection method, system and medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104374335B (en) * 2014-11-20 2017-09-05 中车青岛四方机车车辆股份有限公司 Rail vehicle Clearance Detection
CN109360239B (en) * 2018-10-24 2021-01-15 长沙智能驾驶研究院有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN109635700B (en) * 2018-12-05 2023-08-08 深圳市易成自动驾驶技术有限公司 Obstacle recognition method, device, system and storage medium
CN110239592A (en) * 2019-07-03 2019-09-17 中铁轨道交通装备有限公司 A kind of active barrier of rail vehicle and derailing detection system
CN110481601B (en) * 2019-09-04 2022-03-08 深圳市镭神智能系统有限公司 Track detection system
CN112154445A (en) * 2019-09-19 2020-12-29 深圳市大疆创新科技有限公司 Method and device for determining lane line in high-precision map
CN111007531A (en) * 2019-12-24 2020-04-14 电子科技大学 Road edge detection method based on laser point cloud data
CN111881752B (en) * 2020-06-27 2023-04-28 武汉中海庭数据技术有限公司 Guardrail detection classification method and device, electronic equipment and storage medium
CN112036274A (en) * 2020-08-19 2020-12-04 江苏智能网联汽车创新中心有限公司 Driving region detection method and device, electronic equipment and storage medium
CN113536883B (en) * 2021-03-23 2023-05-02 长沙智能驾驶研究院有限公司 Obstacle detection method, vehicle, apparatus, and computer storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016118672A2 (en) * 2015-01-20 2016-07-28 Solfice Research, Inc. Real time machine vision and point-cloud analysis for remote sensing and vehicle control
CN110967024A (en) * 2019-12-23 2020-04-07 苏州智加科技有限公司 Method, device, equipment and storage medium for detecting travelable area
CN111414848A (en) * 2020-03-19 2020-07-14 深动科技(北京)有限公司 Full-class 3D obstacle detection method, system and medium

Also Published As

Publication number Publication date
CN113536883A (en) 2021-10-22
WO2022199472A1 (en) 2022-09-29

Similar Documents

Publication Publication Date Title
CN113536883B (en) Obstacle detection method, vehicle, apparatus, and computer storage medium
Fernández Llorca et al. Vision‐based vehicle speed estimation: A survey
Zhangyu et al. A camera and LiDAR data fusion method for railway object detection
CN113468941B (en) Obstacle detection method, device, equipment and computer storage medium
CN111295321A (en) Obstacle detection device
US10748014B2 (en) Processing device, object recognition apparatus, device control system, processing method, and computer-readable recording medium
JP5834933B2 (en) Vehicle position calculation device
JP4940177B2 (en) Traffic flow measuring device
CN110443819B (en) Method and device for detecting track of monorail train
CN109791607B (en) Detection and verification of objects from a series of images of a camera by means of homography matrices
CN114814826B (en) Radar orbit area environment sensing method based on target grid
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
Sochor et al. Brnocompspeed: Review of traffic camera calibration and comprehensive dataset for monocular speed measurement
Kapoor et al. Deep learning based object and railway track recognition using train mounted thermal imaging system
CN116626706A (en) Rail transit tunnel intrusion detection method and system
Kudinov et al. Perspective-2-point solution in the problem of indirectly measuring the distance to a wagon
Pavlović et al. AI powered obstacle distance estimation for onboard autonomous train operation
RU2729512C1 (en) Method for indirect measurement of range from a diesel locomotive shunter to a rail track straight section
Wolf et al. Asset Detection in Railroad Environments using Deep Learning-based Scanline Analysis.
JP2018092608A (en) Information processing device, imaging device, apparatus control system, movable body, information processing method, and program
CN117471463A (en) Obstacle detection method based on 4D radar and image recognition fusion
CN116740295A (en) Virtual scene generation method and device
CN110501699A (en) Obstacle detection system and detection method between a kind of shield door and car body
CN112380927B (en) Rail identification method and device
CN115755094A (en) Obstacle detection method, apparatus, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant