CN113838125B - Target position determining method, device, electronic equipment and storage medium - Google Patents
- Publication number: CN113838125B (application CN202111093181.3A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/70 — Image analysis; determining position or orientation of objects or cameras
- G01S13/867 — Combination of radar systems with cameras
- G01S13/931 — Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
- G06F18/22 — Pattern recognition; matching criteria, e.g. proximity measures
- G06F18/23213 — Clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
- G06N3/04 — Neural networks; architecture, e.g. interconnection topology
- G06T2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
- G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
- G06T2207/20104 — Interactive definition of region of interest [ROI]
Abstract
The invention discloses a target position determining method and device, electronic equipment, and a storage medium, belonging to the technical field of automatic driving. The method comprises the following steps: determining a target area of a target object according to image data of the target object in the scene where the vehicle is located; determining candidate point clouds of the target object according to first point cloud data of the target object in the scene where the vehicle is located and the target area; determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds; and determining the position of the target object according to the target point cloud. This technical scheme improves the accuracy of target object position determination and provides a new approach to determining the positions of target objects around a vehicle in automatic driving.
Description
Technical Field
The embodiment of the invention relates to the technical field of automatic driving, in particular to a target position determining method, a target position determining device, electronic equipment and a storage medium.
Background
Driving safety requires that automatic driving have extremely high-precision sensing capability, full coverage of the sensor field of view, and effective perception of dangerous traffic conditions and behaviors. The camera was one of the earliest sensors used by automatic driving systems and remains the first choice of automobile manufacturers and researchers, but it is strongly affected by changes in the external environment: night, rain, heavy fog, and similar conditions greatly degrade its perception. A laser radar irradiates an object with many dense laser beams and receives the light reflected by the object to obtain its distance; its data volume is larger and denser and its accuracy higher than a millimeter wave radar's, but its adaptability to extreme environments is much lower and it is expensive. Improvement is therefore needed.
Disclosure of Invention
The invention provides a target position determining method, a target position determining device, electronic equipment and a storage medium, so as to improve the accuracy of position determination of target objects around a vehicle.
In a first aspect, an embodiment of the present invention provides a method for determining a target location, where the method includes:
Determining a target area of a target object according to image data of the target object in a scene where the vehicle is located;
Determining candidate point clouds of the target object according to first point cloud data of the target object and the target area in a scene where the vehicle is located;
determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds;
and determining the position of the target object according to the target point cloud.
In a second aspect, an embodiment of the present invention further provides a target location determining apparatus, where the apparatus includes:
the target area determining module is used for determining a target area of a target object according to image data of the target object in a scene where the vehicle is located;
the candidate point cloud determining module is used for determining candidate point clouds of the target object according to first point cloud data of the target object and the target area in a scene where the vehicle is located;
The target point cloud determining module is used for determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds;
And the position determining module is used for determining the position of the target object according to the target point cloud.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
One or more processors;
A memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the target location determination method as provided by any of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a target position determination method as provided by any of the embodiments of the present invention.
According to the technical scheme, the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located; the candidate point clouds of the target object are then determined according to the first point cloud data of the target object in the scene where the vehicle is located and the target area; the target point cloud is further determined from the candidate point clouds according to the matching degree between the target area and the candidate point clouds; and the position of the target object is finally determined according to the target point cloud. By combining image data with point cloud data to determine the position of the target object in the scene where the vehicle is located, the technical scheme improves the accuracy of position determination and provides a new approach to determining the positions of target objects around the vehicle in automatic driving.
Drawings
Fig. 1 is a flowchart of a target position determining method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of a target position determining method according to a second embodiment of the present invention;
Fig. 3 is a flowchart of a target position determining method according to a third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a target position determining apparatus according to a fourth embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of a method for determining a target position according to a first embodiment of the present invention, where the present embodiment is applicable to a situation where a position of a target object around a vehicle is determined during automatic driving, and is particularly applicable to a situation where a position of a target object around a vehicle is determined during automatic driving in an extreme environment (severe weather conditions or dark environments). The method may be performed by a target position determining device, which may be implemented in software and/or hardware, and may be integrated in an electronic device, such as an on-board controller, carrying target position determining functions.
As shown in fig. 1, the method specifically may include:
S110, determining a target area of the target object according to the image data of the target object in the scene where the vehicle is located.
The target object refers to an object in the scene where the vehicle is located, for example, another vehicle. The target area is the region of the image data, acquired by an image acquisition apparatus, in which the target object is located. The image acquisition apparatus may be a camera mounted on the vehicle; for example, four groups of cameras may acquire image data in the front, rear, left, and right directions of the vehicle.
In this embodiment, the image acquisition device acquires image data of a target object in a scene where the vehicle is located, and further obtains a target area of the target object in the image data based on the target detection model. Wherein the object detection model may be a YOLO-v3 model.
It should be noted that, after the image data of the target object is acquired, correction processing may be performed on the image data based on the preset correction parameter. Wherein, the preset correction parameters are set by the person skilled in the art according to the actual situation. Further, target object detection is performed on the corrected image data based on the target detection model, and a target area of the target object is obtained.
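As a concrete illustration of this step, the detected target area can be represented as a plain bounding box whose width |x2 − x1|, height |y2 − y1|, and center point are the quantities used in the later matching steps. The sketch below is a minimal Python rendering of that representation; the class and field names are illustrative, not from the patent:

```python
from dataclasses import dataclass


@dataclass
class TargetArea:
    """2-D region of the target object in the image plane (S110).
    Class and field names are illustrative, not from the patent."""
    x1: float
    y1: float
    x2: float
    y2: float

    @property
    def width(self) -> float:
        return abs(self.x2 - self.x1)    # |x2 - x1| in the patent's notation

    @property
    def height(self) -> float:
        return abs(self.y2 - self.y1)    # |y2 - y1|

    @property
    def center(self) -> tuple:
        return ((self.x1 + self.x2) / 2, (self.y1 + self.y2) / 2)


# A detector such as YOLO-v3 would emit boxes of this shape, one per object.
area = TargetArea(100, 120, 180, 200)
```

Downstream steps (frustum construction in the second embodiment, size estimation in the third) only need these derived quantities.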
S120, determining candidate point clouds of the target object according to first point cloud data of the target object and the target area in the scene where the vehicle is located.
The first point cloud data refers to point cloud data of the target object, acquired by millimeter wave radar, in the scene where the vehicle is located. The vehicle is provided with four groups of millimeter wave radars, so that point cloud data of the target object in the front, rear, left, and right directions of the vehicle can be collected. Meanwhile, the acquisition times of the millimeter wave radar and the image acquisition device are synchronized in advance, so that each frame of point cloud data acquired by the millimeter wave radar corresponds to one frame of image data acquired by the image acquisition device. The candidate point cloud is the point cloud data related to the target area.
In this embodiment, the millimeter wave radar acquires first point cloud data of the target object in the scene where the vehicle is located, and the first point cloud data is then projected to the image plane coordinate system, based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, to obtain second point cloud data. Candidate point clouds of the target object are determined according to the positional relationship between the second point cloud data and the target area. Specifically, the point cloud data falling into the target area in the second point cloud data is used as the candidate point cloud of the target object.
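The patent does not spell out the conversion relationship between the radar plane coordinate system and the image plane coordinate system. A standard realization, sketched below under the assumption of a calibrated 4x4 radar-to-camera extrinsic matrix and a 3x3 pinhole intrinsic matrix, projects each radar point into the image and keeps those that fall inside the target area:

```python
import numpy as np


def project_to_image(points_radar, T_radar_to_cam, K):
    """Project radar points (N, 3) into the image plane using a
    radar-to-camera extrinsic T (4x4) and camera intrinsics K (3x3).
    The specific calibration matrices are assumptions of this sketch."""
    n = points_radar.shape[0]
    homo = np.hstack([points_radar, np.ones((n, 1))])   # homogeneous (N, 4)
    cam = (T_radar_to_cam @ homo.T).T[:, :3]            # camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                       # perspective divide


def points_in_box(uv, box):
    """Mask of projected points that fall inside the target area."""
    x1, y1, x2, y2 = box
    return (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
           (uv[:, 1] >= y1) & (uv[:, 1] <= y2)


# Identity extrinsics and a simple pinhole K, purely for illustration.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0],    # projects to the image centre (320, 240)
                [5.0, 0.0, 10.0]])   # projects to (570, 240)
uv = project_to_image(pts, T, K)
mask = points_in_box(uv, (300, 220, 340, 260))
```

In practice the extrinsic and intrinsic matrices come from radar-camera calibration; the identity extrinsic here is only a stand-in.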
S130, determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds.
Optionally, the distance between the candidate point cloud and the target area may be used as the matching degree between the target area and the candidate point cloud, and then the target point cloud is determined from the candidate point cloud according to the matching degree.
The target point cloud refers to point cloud data which is most matched with the target area.
For example, the matching degree may be ranked, and the target point cloud is determined from the candidate point clouds according to the ranking result and the set threshold value. Specifically, the matching degrees are ordered in the order from small to large, and candidate point clouds corresponding to the matching degrees larger than a set threshold value in the ordering result are used as target point clouds. Wherein the setting threshold can be set by a person skilled in the art according to the actual situation.
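The sort-and-threshold selection described above can be sketched as follows; the 0.5 default threshold is an illustrative value, since the patent leaves the set threshold to the practitioner:

```python
def select_target_clouds(candidates, scores, threshold=0.5):
    """S130 sketch: order the candidate point clouds by matching degree
    (smallest to largest, as in the text) and keep those whose matching
    degree exceeds the set threshold."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])  # ascending
    return [candidates[i] for i in order if scores[i] > threshold]


# Three candidate clouds with matching degrees 0.9, 0.2, 0.7.
kept = select_target_clouds(["a", "b", "c"], [0.9, 0.2, 0.7])
```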
S140, determining the position of the target object according to the target point cloud.
In this embodiment, basic information of the target object corresponding to the target point cloud is obtained from the millimeter wave radar according to the target point cloud, where the basic information includes the position, speed, and similar attributes of the target object. Further, the position of the target object is determined from this basic information using a Kalman filtering algorithm.
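The patent names a Kalman filtering algorithm but does not fix the motion model; the sketch below assumes a generic linear model (instantiated here as 1-D constant velocity over position and speed) purely for illustration:

```python
import numpy as np


def kalman_step(x, P, z, F, H, Q, R):
    """One predict-update cycle of a linear Kalman filter over the
    target's state. The constant-velocity model below is an assumption;
    the patent only says a Kalman filtering algorithm is used."""
    # Predict the state forward with the motion model F.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the radar position measurement z.
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P


# State [position, velocity]; we measure position only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.eye(2) * 1e-3
R = np.array([[0.1]])
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = kalman_step(x, P, np.array([0.12]), F, H, Q, R)
```

The filtered position blends the predicted position (0.1 after one step) with the measurement (0.12), weighted by their covariances.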
According to the technical scheme, the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located; the candidate point clouds of the target object are then determined according to the first point cloud data of the target object in the scene where the vehicle is located and the target area; the target point cloud is further determined from the candidate point clouds according to the matching degree between the target area and the candidate point clouds; and the position of the target object is finally determined according to the target point cloud. By combining image data with point cloud data to determine the position of the target object in the scene where the vehicle is located, the technical scheme improves the accuracy of position determination and provides a new approach to determining the positions of target objects around the vehicle in automatic driving.
Example two
Fig. 2 is a flowchart of a target position determining method according to a second embodiment of the present invention, and on the basis of the foregoing embodiment, an alternative implementation is provided for further optimization of "determining a candidate point cloud of a target object according to first point cloud data of the target object and a target area".
As shown in fig. 2, the method specifically may include:
S210, determining a target area of the target object according to image data of the target object in the scene where the vehicle is located.
S220, performing three-dimensional conversion on the target area to obtain the point cloud cone.
In this embodiment, the target area may be three-dimensionally converted based on the three-dimensional conversion model, so as to obtain a point cloud cone corresponding to the target area.
S230, based on the conversion relation between the radar plane coordinate system and the image plane coordinate system, the first point cloud data is projected to the image plane coordinate system, and second point cloud data is obtained.
In this embodiment, the first point cloud data is clustered to obtain clustered point cloud data. For example, the first point cloud data may be clustered based on the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm to obtain the clustered point cloud data.
Further, based on the conversion relation between the radar plane coordinate system and the image plane coordinate system, the clustered point cloud data are projected to the image plane coordinate system, and second point cloud data are obtained.
It can be appreciated that by clustering the first point cloud data, irrelevant point cloud data can be filtered out, thereby making the determination of the subsequent target location more accurate.
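A full DBSCAN implementation is available in common libraries; to keep this sketch self-contained, the snippet below implements only DBSCAN's core-point density test, which is the part that discards the isolated, irrelevant points this step describes. The `eps` and `min_pts` values are illustrative:

```python
import numpy as np


def density_filter(points, eps=1.0, min_pts=3):
    """Minimal stand-in for the DBSCAN step: a point survives if at
    least `min_pts` points (itself included) lie within `eps` of it.
    This is only DBSCAN's core-point test, not the full clustering."""
    diff = points[:, None, :] - points[None, :, :]   # pairwise differences
    dist = np.linalg.norm(diff, axis=-1)             # pairwise distances
    counts = (dist <= eps).sum(axis=1)               # neighbours within eps
    return points[counts >= min_pts]


# Three nearby radar returns plus one isolated outlier.
cloud = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.3], [10.0, 10.0]])
kept = density_filter(cloud, eps=1.0, min_pts=3)
```

The outlier at (10, 10) has no neighbours within `eps` and is filtered out, which is exactly the effect the paragraph above attributes to the clustering step.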
S240, determining candidate point clouds of the target object according to the position relation between the second point cloud data and the point cloud cone.
In this embodiment, point cloud data falling into a point cloud cone in the second point cloud data is used as candidate point clouds of the target object; and removing the second point cloud data falling outside the point cloud cone.
S250, determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds.
S260, determining the position of the target object according to the target point cloud.
According to the technical scheme of this embodiment, the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located, and the target area is subjected to three-dimensional conversion to obtain the point cloud cone. Based on the conversion relation between the radar plane coordinate system and the image plane coordinate system, the first point cloud data is projected to the image plane coordinate system to obtain second point cloud data; candidate point clouds of the target object are then determined according to the position relation between the second point cloud data and the point cloud cone; the target point cloud is further determined from the candidate point clouds according to the matching degree between the target area and the candidate point clouds; and the position of the target object is finally determined according to the target point cloud. By combining image data with point cloud data to determine the position of the target object in the scene where the vehicle is located, the scheme improves the accuracy of position determination and provides a new approach to determining the positions of target objects around the vehicle in automatic driving.
Example III
Fig. 3 is a flowchart of a target position determining method according to a third embodiment of the present invention, which provides an alternative implementation manner for further optimizing "determining a target point cloud from candidate point clouds according to a degree of matching between a target area and the candidate point clouds" based on the above embodiment.
As shown in fig. 3, the method specifically may include:
S310, determining a target area of the target object according to the image data of the target object in the scene where the vehicle is located.
S320, determining candidate point clouds of the target object according to the first point cloud data of the target object and the target area in the scene where the vehicle is located.
S330, for each candidate point cloud, determining the matching degree of the candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition equipment.
The basic information comprises size information of a target area and center point coordinates of the target area; the size information includes the width and height of the target area. The focal length of the acquisition device includes a horizontal-direction focal length and a vertical-direction focal length.
Optionally, the first distance between the target object and the vehicle is determined according to the point cloud coordinates of the candidate point cloud. For example, the square of the abscissa and the square of the ordinate of the point cloud coordinate may be added, and the square root of the sum may be taken as the first distance of the target object from the vehicle.
After determining the first distance, an estimated height and an estimated width of the target object may be determined based on the first distance, the width or height in the size information, and the focal length of the acquisition device.
By way of example, the estimated height of the target object may be determined from the first distance, the height in the size information, and the vertical focal length of the acquisition device, e.g. by the following formula:
h = |y2 − y1| · r / f_y
where h represents the estimated height of the target object, f_y represents the vertical focal length of the acquisition device, |y2 − y1| represents the height in the size information, and r represents the first distance.
For example, the estimated width of the target object may be determined according to the first distance, the width in the size information, and the horizontal focal length of the acquisition device, e.g. by the following formula:
w = |x2 − x1| · r / f_x
where w represents the estimated width of the target object, f_x represents the horizontal focal length of the acquisition device, |x2 − x1| represents the width in the size information, and r represents the first distance.
After determining the estimated height and width of the target object, a second distance between the target object and the vehicle may be determined based on the width or height of the target object, the focal length of the acquisition device, and the width or height in the size information.
For example, if the height of the target object is known, the second distance between the target object and the vehicle is determined based on the height of the target object, the vertical focal length of the acquisition device, and the height in the size information. For example, it can be determined by the following formula:
r' = h' · f_y / |y2 − y1|
where r' denotes the second distance between the target object and the vehicle, h' denotes the height of the target object, |y2 − y1| denotes the height in the size information, and f_y denotes the vertical focal length of the acquisition device.
For example, if the width of the target object is known, the second distance of the target object from the vehicle is determined based on the width of the target object, the horizontal focal length of the acquisition device, and the width in the size information. For example, it can be determined by the following formula:
r' = w' · f_x / |x2 − x1|
where r' denotes the second distance between the target object and the vehicle, w' denotes the width of the target object, |x2 − x1| denotes the width in the size information, and f_x denotes the horizontal focal length of the acquisition device.
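The pinhole relations of this step (estimated size from the first distance, and a second distance from a known size) can be sketched directly; all numeric values below are illustrative:

```python
def estimated_size(r, box_w, box_h, fx, fy):
    """Pinhole-model estimates used in S330: the image-plane size
    |x2 - x1|, |y2 - y1| scaled by the first distance r over the focal
    length gives the metric width and height of the target object."""
    return box_w * r / fx, box_h * r / fy      # (estimated w, estimated h)


def second_distance_from_height(h_known, box_h, fy):
    """If the target's real height h' is known, inverting the same
    relation gives a second range estimate r' = h' * f_y / |y2 - y1|."""
    return h_known * fy / box_h


# An 80x80-pixel box at range 20 m with focal lengths of 500 pixels.
w, h = estimated_size(r=20.0, box_w=80.0, box_h=80.0, fx=500.0, fy=500.0)
r2 = second_distance_from_height(h_known=3.2, box_h=80.0, fy=500.0)
```

With these numbers the round trip is consistent: the estimated height is 3.2 m, and feeding 3.2 m back in recovers the 20 m range.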
Further, the first distance, the estimated height, the estimated width, the coordinates of the center point of the candidate point cloud, the second distance and the basic information are input into a neural network model, and the matching degree of the candidate point cloud and the target area is obtained. The neural network model may be a radial basis neural network, among others.
Specifically, the first distance, the estimated height, the estimated width, the candidate point cloud center point coordinates, the second distance and the basic information are input into a neural network model, and the matching degree of the candidate point cloud and the target area is obtained through the neural network model processing.
It can be appreciated that determining the matching degree between the candidate point cloud and the target region through the neural network model enhances the accuracy of the matching degree.
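The patent suggests a radial basis neural network but gives no architecture details; the toy sketch below shows the general shape of such a model, with Gaussian activations over assumed, untrained parameters (in the patent these would be learned):

```python
import numpy as np


def rbf_match_degree(features, centers, widths, weights, bias=0.0):
    """Toy radial basis function network for S330: Gaussian activations
    over centers, linearly combined and squashed into a (0, 1) matching
    degree. All parameters here are illustrative, not learned."""
    d2 = ((features[None, :] - centers) ** 2).sum(axis=1)  # squared distances
    phi = np.exp(-d2 / (2.0 * widths ** 2))                # RBF activations
    score = phi @ weights + bias
    return 1.0 / (1.0 + np.exp(-score))                    # logistic squash


# A 3-feature input (e.g. distances and coordinates) and two RBF units.
feat = np.zeros(3)
centers = np.vstack([np.zeros(3), np.ones(3)])
deg = rbf_match_degree(feat, centers,
                       widths=np.array([1.0, 1.0]),
                       weights=np.array([2.0, -1.0]))
```

In the patent's pipeline the feature vector would hold the first distance, estimated height and width, candidate point cloud center coordinates, second distance, and the target area's basic information.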
S340, determining target point clouds from the candidate point clouds according to the matching degree between each candidate point cloud and the target area.
S350, determining the position of the target object according to the target point cloud.
According to the technical scheme of this embodiment, the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located, and candidate point clouds of the target object are determined according to the first point cloud data of the target object in the scene where the vehicle is located and the target area. For each candidate point cloud, the matching degree between the candidate point cloud and the target area is determined according to the point cloud coordinates of the candidate point cloud, the basic information of the target area, and the focal length of the acquisition equipment; the target point cloud is then determined from the candidate point clouds according to these matching degrees; and the position of the target object is finally determined according to the target point cloud. By combining image data with point cloud data to determine the position of the target object in the scene where the vehicle is located, the scheme improves the accuracy of position determination and provides a new approach to determining the positions of target objects around the vehicle in automatic driving.
Example IV
Fig. 4 is a schematic structural diagram of a target position determining apparatus according to a fourth embodiment of the present invention, where the present embodiment is applicable to a situation where the positions of target objects around a vehicle are determined during automatic driving, and is particularly applicable to a situation where the positions of target objects around a vehicle are determined during automatic driving in extreme environments (severe weather conditions or dark environments). The apparatus may be implemented in software and/or hardware and may be integrated into an electronic device, such as an on-board controller, that carries the target location determination function.
As shown in fig. 4, the apparatus may specifically include a target area determination module 410, a candidate point cloud determination module 420, a target point cloud determination module 430, and a location determination module 440, wherein,
A target area determining module 410, configured to determine a target area of a target object according to image data of the target object in a scene where the vehicle is located;
The candidate point cloud determining module 420 is configured to determine candidate point clouds of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area;
a target point cloud determining module 430, configured to determine a target point cloud from the candidate point clouds according to a matching degree between the target area and the candidate point clouds;
The location determining module 440 is configured to determine a location of the target object according to the target point cloud.
According to the technical scheme, the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located; candidate point clouds of the target object are then determined according to the first point cloud data of the target object in that scene and the target area; the target point cloud is further determined from the candidate point clouds according to the matching degree between the target area and each candidate point cloud; and finally the position of the target object is determined according to the target point cloud. By combining the image data with the point cloud data to determine the position of the target object in the scene where the vehicle is located, the scheme improves the accuracy of position determination and provides a new approach for determining the positions of objects around the vehicle in automatic driving.
Further, the candidate point cloud determining module 420 includes a point cloud cone obtaining unit, a second point cloud data obtaining unit, and a candidate point cloud determining unit, wherein,
The point cloud cone obtaining unit is used for carrying out three-dimensional conversion on the target area to obtain a point cloud cone;
The second point cloud data obtaining unit is used for projecting the first point cloud data to the image plane coordinate system based on the conversion relation between the radar plane coordinate system and the image plane coordinate system to obtain second point cloud data;
and the candidate point cloud determining unit is used for determining candidate point clouds of the target object according to the position relationship between the second point cloud data and the point cloud cone.
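The cooperation of these three units can be sketched as follows. The intrinsic matrix K and the extrinsics (R, t) stand in for the "conversion relation between the radar plane coordinate system and the image plane coordinate system"; their concrete values are calibration results and are assumed here for illustration.

```python
import numpy as np

def lidar_to_image_plane(points_lidar, K, R, t):
    """Project radar/lidar-frame points (N, 3) into pixel coordinates using
    the lidar-to-camera extrinsics (R, t) and camera intrinsics K."""
    pts_cam = points_lidar @ R.T + t     # radar frame -> camera frame
    uvw = pts_cam @ K.T                  # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide: second point cloud data

def in_point_cloud_cone(uv, box):
    """A projected point falls inside the point cloud cone exactly when its
    pixel coordinates lie within the 2D target area."""
    u_min, v_min, u_max, v_max = box
    return (uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) & \
           (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max)
```

Points whose mask entry is True constitute the candidate point cloud; the mask is exactly the "position relationship between the second point cloud data and the point cloud cone".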
Further, the target point cloud determining module 430 includes a matching degree determining unit and a target point cloud determining unit, wherein,
The matching degree determining unit is used for determining the matching degree of each candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition equipment; the basic information comprises size information of the target area and center point coordinates of the target area;
And the target point cloud determining unit is used for determining the target point cloud from the candidate point clouds according to the matching degree between each candidate point cloud and the target area.
Further, the matching degree determining unit includes a first distance determining subunit, an estimation information determining subunit, a second distance determining subunit, and a matching degree determining subunit, wherein,
A first distance determining subunit, configured to determine a first distance between the target object and the vehicle according to the point cloud coordinates of the candidate point cloud;
an estimated information determination subunit, configured to determine an estimated height and an estimated width of the target object according to the first distance, the width or the height in the size information, and the focal length of the acquisition device;
A second distance determining subunit, configured to determine a second distance between the target object and the vehicle according to the width or the height of the target object, the focal length of the acquisition device, and the width or the height in the size information;
and the matching degree determining subunit is used for inputting the first distance, the estimated height, the estimated width, the center point coordinates of the candidate point cloud, the second distance and the basic information into the neural network model to obtain the matching degree of the candidate point cloud and the target area.
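The quantities these subunits produce relate to one another under a pinhole camera model, which the following sketch makes explicit. The prior object height `prior_h` is a hypothetical assumption (the scheme uses the width or height in the size information), and the neural network model that consumes these features is left unspecified, so only the feature construction is shown.

```python
import numpy as np

def matching_features(candidate, box_w_px, box_h_px, box_center, f, prior_h=1.7):
    """Assemble the inputs described for the matching-degree network:
    first distance, estimated height/width, second distance, plus the
    candidate centroid and the target area's basic information."""
    centroid = candidate.mean(axis=0)
    d1 = float(np.linalg.norm(centroid))   # first distance, from point cloud coordinates
    est_h = d1 * box_h_px / f              # estimated height of the object at range d1
    est_w = d1 * box_w_px / f              # estimated width of the object at range d1
    d2 = f * prior_h / box_h_px            # second distance, from image size alone
    return {"d1": d1, "est_h": est_h, "est_w": est_w,
            "d2": d2, "centroid": centroid, "box_center": box_center}
```

When the image-derived size is consistent with the point-cloud range, the first and second distances coincide; a learned scorer can presumably exploit this agreement (or disagreement) when rating a candidate against the target area.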
Further, the candidate point cloud determining module 420 further includes a clustered point cloud determining unit, where the clustered point cloud determining unit is configured to:
clustering the first point cloud data to obtain clustered point cloud data;
Correspondingly, the second point cloud data obtaining unit is further configured to:
and projecting the clustered point cloud data to the image plane coordinate system based on the conversion relation between the radar plane coordinate system and the image plane coordinate system to obtain second point cloud data.
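The clustering step above is not tied to any particular algorithm; a minimal greedy single-linkage (Euclidean) clustering, written here as a stand-in, groups nearby returns so that each cluster can be projected and matched as a unit.

```python
import numpy as np

def euclidean_cluster(points, radius=0.5):
    """Greedy single-linkage clustering over (N, 3) points; returns one
    integer label per point. The radius is an assumed tuning parameter."""
    n = len(points)
    labels = -np.ones(n, dtype=int)      # -1 marks an unvisited point
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:                     # flood-fill the current cluster
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.flatnonzero((d <= radius) & (labels == -1)):
                labels[k] = current
                stack.append(k)
        current += 1
    return labels
```

A production system would more likely use an off-the-shelf method (e.g. DBSCAN or PCL's Euclidean cluster extraction), but the grouping effect is the same.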
Further, the target point cloud determining module 430 is specifically configured to:
And sequencing the matching degree, and determining the target point cloud from the candidate point clouds according to the sequencing result and the set threshold value.
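The sort-and-threshold selection can be sketched in a few lines. Note that if the matching degree is a similarity, higher is better; if it is a distance (as in claim 1), the sort direction and comparison flip. Higher-is-better and the threshold value are assumed here.

```python
def select_target(candidates, scores, threshold=0.5):
    """Sort candidate point clouds by matching degree (descending) and
    return the best one that clears the set threshold, else None."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    for i in order:
        if scores[i] >= threshold:
            return candidates[i]
    return None
```

For example, with scores (0.3, 0.9, 0.6) the second candidate is chosen; if no score reaches the threshold, no target point cloud is reported.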
Further, the device also comprises a correction processing module, wherein the correction processing module is specifically used for:
And carrying out correction processing on the image data based on the preset correction parameters.
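One common form of correction with preset parameters is lens-distortion compensation; the sketch below applies a Brown-Conrady-style radial polynomial to pixel coordinates. The model and coefficients (k1, k2) are assumptions, and a production pipeline would typically invert the distortion iteratively (e.g. via OpenCV's undistortion routines) rather than apply the polynomial directly.

```python
import numpy as np

def correct_points(uv, k1, k2, cx, cy, f):
    """Apply a radial correction polynomial with preset parameters (k1, k2)
    to (N, 2) pixel coordinates around principal point (cx, cy)."""
    xy = (uv - np.array([cx, cy])) / f               # to normalized coordinates
    r2 = (xy ** 2).sum(axis=1, keepdims=True)        # squared radius
    xy_corr = xy * (1 + k1 * r2 + k2 * r2 ** 2)      # radial polynomial
    return xy_corr * f + np.array([cx, cy])          # back to pixels
```

With k1 = k2 = 0 the mapping is the identity, so uncorrected cameras pass through unchanged.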
The target position determining device can execute the target position determining method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Example V
Fig. 5 is a schematic structural diagram of an electronic device provided in a fifth embodiment of the present invention, showing a block diagram of an exemplary device suitable for implementing an embodiment of the present invention. The device shown in Fig. 5 is only an example and should not be construed as limiting the functionality or scope of use of the embodiments of the invention.
As shown in fig. 5, the electronic device 12 is in the form of a general purpose computing device. Components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in Fig. 5, commonly referred to as a "hard disk drive"). Although not shown in Fig. 5, a magnetic disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. The system memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the electronic device 12, and/or any devices (e.g., network card, modem, etc.) that enable the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through a network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 over the bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, to implement the target position determination method provided by the embodiment of the present invention.
Example VI
A sixth embodiment of the present invention also provides a computer-readable storage medium having stored thereon a computer program (or computer-executable instructions) which, when executed by a processor, performs the target position determining method provided by the embodiments of the present invention, the method including:
Determining a target area of a target object according to image data of the target object in a scene where the vehicle is located;
Determining candidate point clouds of the target object according to first point cloud data of the target object and the target area in the scene where the vehicle is located;
Determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds;
and determining the position of the target object according to the target point cloud.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present invention may be written in one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the embodiments of the present invention have been described in connection with the above embodiments, the embodiments of the present invention are not limited to the above embodiments, but may include many other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (8)
1. A method for determining a target location, comprising:
Determining a target area of a target object according to image data of the target object in a scene where the vehicle is located;
Determining candidate point clouds of the target object according to first point cloud data of the target object and the target area in a scene where the vehicle is located;
determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds;
Determining the position of the target object according to the target point cloud;
The determining the candidate point cloud of the target object according to the first point cloud data of the target object and the target area includes:
three-dimensional conversion is carried out on the target area to obtain a point cloud cone;
Based on a conversion relation between a radar plane coordinate system and an image plane coordinate system, projecting the first point cloud data to the image plane coordinate system to obtain second point cloud data;
Determining candidate point clouds of the target object according to the position relation between the second point cloud data and the point cloud cone, wherein the candidate point clouds of the target object are the point cloud data falling into the point cloud cone in the second point cloud data;
The determining the target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds includes:
And sequencing the matching degree, and determining a target point cloud from the candidate point clouds according to a sequencing result and a set threshold, wherein the matching degree is the distance between the candidate point cloud and the target area.
2. The method of claim 1, wherein the determining a target point cloud from the candidate point clouds according to a degree of matching between the target region and the candidate point clouds comprises:
For each candidate point cloud, determining the matching degree of the candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition equipment; the basic information comprises size information of the target area and center point coordinates of the target area;
and determining a target point cloud from the candidate point clouds according to the matching degree between each candidate point cloud and the target area.
3. The method according to claim 2, wherein the determining the matching degree of the candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition device includes:
determining a first distance between the target object and the vehicle according to the point cloud coordinates of the candidate point cloud;
Determining an estimated height and an estimated width of the target object according to the first distance, the width or the height in the size information, and the focal length of the acquisition device;
Determining a second distance between the target object and the vehicle according to the width or the height of the target object, the focal length of the acquisition equipment and the width or the height in the size information;
and inputting the first distance, the estimated height, the estimated width, the candidate point cloud center point coordinates, the second distance and the basic information into a neural network model to obtain the matching degree of the candidate point cloud and the target area.
4. The method according to claim 1, wherein the projecting the first point cloud data onto the image plane coordinate system based on the conversion relation between the radar plane coordinate system and the image plane coordinate system, before obtaining the second point cloud data, further comprises:
clustering the first point cloud data to obtain clustered point cloud data;
Correspondingly, based on the conversion relation between the radar plane coordinate system and the image plane coordinate system, projecting the first point cloud data to the image plane coordinate system to obtain second point cloud data, including:
and based on the conversion relation between the radar plane coordinate system and the image plane coordinate system, projecting the clustered point cloud data to the image plane coordinate system to obtain second point cloud data.
5. The method as recited in claim 1, further comprising:
And carrying out correction processing on the image data based on preset correction parameters.
6. A target position determining apparatus, comprising:
the target area determining module is used for determining a target area of a target object according to image data of the target object in a scene where the vehicle is located;
the candidate point cloud determining module is used for determining candidate point clouds of the target object according to first point cloud data of the target object and the target area in a scene where the vehicle is located;
The target point cloud determining module is used for determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds;
The position determining module is used for determining the position of the target object according to the target point cloud;
the candidate point cloud determining module includes:
the point cloud cone obtaining unit is used for carrying out three-dimensional conversion on the target area to obtain a point cloud cone;
The second point cloud data obtaining unit is used for projecting the first point cloud data to the image plane coordinate system based on the conversion relation between the radar plane coordinate system and the image plane coordinate system to obtain second point cloud data;
A candidate point cloud determining unit, configured to determine a candidate point cloud of the target object according to a positional relationship between the second point cloud data and the point cloud cone, where the candidate point cloud of the target object is point cloud data in the second point cloud data that falls into the point cloud cone;
The target point cloud determining module is specifically configured to sort the matching degrees, and determine a target point cloud from the candidate point clouds according to the sorting result and a set threshold, where the matching degree is a distance between the candidate point clouds and the target area.
7. An electronic device, comprising:
One or more processors;
A memory for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the target location determination method of any of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the target position determination method according to any one of claims 1-5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111093181.3A CN113838125B (en) | 2021-09-17 | 2021-09-17 | Target position determining method, device, electronic equipment and storage medium |
PCT/CN2022/117770 WO2023040737A1 (en) | 2021-09-17 | 2022-09-08 | Target location determining method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111093181.3A CN113838125B (en) | 2021-09-17 | 2021-09-17 | Target position determining method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113838125A CN113838125A (en) | 2021-12-24 |
CN113838125B true CN113838125B (en) | 2024-10-22 |
Family
ID=78959810
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111093181.3A Active CN113838125B (en) | 2021-09-17 | 2021-09-17 | Target position determining method, device, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113838125B (en) |
WO (1) | WO2023040737A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113838125B (en) * | 2021-09-17 | 2024-10-22 | 中国第一汽车股份有限公司 | Target position determining method, device, electronic equipment and storage medium |
CN115641567B (en) * | 2022-12-23 | 2023-04-11 | 小米汽车科技有限公司 | Target object detection method and device for vehicle, vehicle and medium |
CN116299534A (en) * | 2023-02-21 | 2023-06-23 | 广西柳工机械股份有限公司 | Method, device, equipment and storage medium for determining vehicle pose |
CN116938960B (en) * | 2023-08-07 | 2024-07-26 | 北京斯年智驾科技有限公司 | Sensor data processing method, device, equipment and computer readable storage medium |
CN117806218B (en) * | 2024-02-28 | 2024-05-28 | 厦门市广和源工贸有限公司 | Method and device for determining position of electrical equipment, electronic equipment and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112700552A (en) * | 2020-12-31 | 2021-04-23 | 华为技术有限公司 | Three-dimensional object detection method, three-dimensional object detection device, electronic apparatus, and medium |
Family Cites Families (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10365650B2 (en) * | 2017-05-25 | 2019-07-30 | GM Global Technology Operations LLC | Methods and systems for moving object velocity determination |
CN109145680B (en) * | 2017-06-16 | 2022-05-27 | 阿波罗智能技术(北京)有限公司 | Method, device and equipment for acquiring obstacle information and computer storage medium |
CN109840448A (en) * | 2017-11-24 | 2019-06-04 | 百度在线网络技术(北京)有限公司 | Information output method and device for automatic driving vehicle |
CN108509918B (en) * | 2018-04-03 | 2021-01-08 | 中国人民解放军国防科技大学 | Target detection and tracking method fusing laser point cloud and image |
CN109345510A (en) * | 2018-09-07 | 2019-02-15 | 百度在线网络技术(北京)有限公司 | Object detecting method, device, equipment, storage medium and vehicle |
US10867430B2 (en) * | 2018-11-19 | 2020-12-15 | Intel Corporation | Method and system of 3D reconstruction with volume-based filtering for image processing |
CN111819602A (en) * | 2019-02-02 | 2020-10-23 | 深圳市大疆创新科技有限公司 | Method for increasing point cloud sampling density, point cloud scanning system and readable storage medium |
CN110456363B (en) * | 2019-06-17 | 2021-05-18 | 北京理工大学 | Target detection and positioning method for three-dimensional laser radar point cloud and infrared image fusion |
CN112154445A (en) * | 2019-09-19 | 2020-12-29 | 深圳市大疆创新科技有限公司 | Method and device for determining lane line in high-precision map |
US12120532B2 (en) * | 2019-10-18 | 2024-10-15 | Nippon Telegraph And Telephone Corporation | Station placement designing method and station placement designing apparatus |
CN111862337B (en) * | 2019-12-18 | 2024-05-10 | 北京嘀嘀无限科技发展有限公司 | Visual positioning method, visual positioning device, electronic equipment and computer readable storage medium |
KR20210100777A (en) * | 2020-02-06 | 2021-08-18 | 엘지전자 주식회사 | Apparatus for determining position of vehicle and operating method thereof |
KR102338665B1 (en) * | 2020-03-02 | 2021-12-10 | 건국대학교 산학협력단 | Apparatus and method for classficating point cloud using semantic image |
CN111487641B (en) * | 2020-03-19 | 2022-04-22 | 福瑞泰克智能系统有限公司 | Method and device for detecting object by using laser radar, electronic equipment and storage medium |
CN111709988B (en) * | 2020-04-28 | 2024-01-23 | 上海高仙自动化科技发展有限公司 | Method and device for determining characteristic information of object, electronic equipment and storage medium |
CN111612841B (en) * | 2020-06-22 | 2023-07-14 | 上海木木聚枞机器人科技有限公司 | Target positioning method and device, mobile robot and readable storage medium |
CN111899302A (en) * | 2020-06-23 | 2020-11-06 | 武汉闻道复兴智能科技有限责任公司 | Point cloud data-based visual detection method, device and system |
CN111815707B (en) * | 2020-07-03 | 2024-05-28 | 北京爱笔科技有限公司 | Point cloud determining method, point cloud screening method, point cloud determining device, point cloud screening device and computer equipment |
CN111881827B (en) * | 2020-07-28 | 2022-04-26 | 浙江商汤科技开发有限公司 | Target detection method and device, electronic equipment and storage medium |
CN112489427B (en) * | 2020-11-26 | 2022-04-15 | 招商华软信息有限公司 | Vehicle trajectory tracking method, device, equipment and storage medium |
CN112907746B (en) * | 2021-03-25 | 2024-10-29 | 上海商汤临港智能科技有限公司 | Electronic map generation method and device, electronic equipment and storage medium |
CN113156421A (en) * | 2021-04-07 | 2021-07-23 | 南京邮电大学 | Obstacle detection method based on information fusion of millimeter wave radar and camera |
CN113297958A (en) * | 2021-05-24 | 2021-08-24 | 驭势(上海)汽车科技有限公司 | Automatic labeling method and device, electronic equipment and storage medium |
CN113838125B (en) * | 2021-09-17 | 2024-10-22 | 中国第一汽车股份有限公司 | Target position determining method, device, electronic equipment and storage medium |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112700552A (en) * | 2020-12-31 | 2021-04-23 | 华为技术有限公司 | Three-dimensional object detection method, three-dimensional object detection device, electronic apparatus, and medium |
Also Published As
Publication number | Publication date |
---|---|
WO2023040737A1 (en) | 2023-03-23 |
CN113838125A (en) | 2021-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113838125B (en) | Target position determining method, device, electronic equipment and storage medium | |
US11915502B2 (en) | Systems and methods for depth map sampling | |
CN110163930B (en) | Lane line generation method, device, equipment, system and readable storage medium | |
CN113486797B (en) | Unmanned vehicle position detection method, unmanned vehicle position detection device, unmanned vehicle position detection equipment, storage medium and vehicle | |
CN109144097B (en) | Obstacle or ground recognition and flight control method, device, equipment and medium | |
CN112967283B (en) | Target identification method, system, equipment and storage medium based on binocular camera | |
CN113761999B (en) | Target detection method and device, electronic equipment and storage medium | |
CN109849930B (en) | Method and device for calculating speed of adjacent vehicle of automatic driving automobile | |
CN110956137A (en) | Point cloud data target detection method, system and medium | |
CN112650300B (en) | Unmanned aerial vehicle obstacle avoidance method and device | |
CN112525147B (en) | Distance measurement method for automatic driving equipment and related device | |
TWI726278B (en) | Driving detection method, vehicle and driving processing device | |
KR101995223B1 (en) | System, module and method for detecting pedestrian, computer program | |
CN115147328A (en) | Three-dimensional target detection method and device | |
CN115223135B (en) | Parking space tracking method and device, vehicle and storage medium | |
CN114662600A (en) | Lane line detection method and device and storage medium | |
CN114219770A (en) | Ground detection method, ground detection device, electronic equipment and storage medium | |
CN112639822A (en) | Data processing method and device | |
CN112835063B (en) | Method, device, equipment and storage medium for determining dynamic and static properties of object | |
CN116642490A (en) | Visual positioning navigation method based on hybrid map, robot and storage medium | |
CN115359332A (en) | Data fusion method and device based on vehicle-road cooperation, electronic equipment and system | |
CN116203976A (en) | Indoor inspection method and device for transformer substation, unmanned aerial vehicle and storage medium | |
CN113011212B (en) | Image recognition method and device and vehicle | |
CN118505800A (en) | Grid map construction method and device and intelligent mobile device | |
CN117392620A (en) | Traffic behavior recognition method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||