
WO2023040737A1 - Target location determining method and apparatus, electronic device, and storage medium - Google Patents

Target location determining method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023040737A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
target
target object
candidate point
target area
Prior art date
Application number
PCT/CN2022/117770
Other languages
French (fr)
Chinese (zh)
Inventor
关瀛洲
王祎男
魏源伯
刘汉旭
Original Assignee
中国第一汽车股份有限公司
Priority date
Filing date
Publication date
Application filed by 中国第一汽车股份有限公司
Publication of WO2023040737A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]

Definitions

  • the embodiments of the present application relate to the technical field of automatic driving, for example, to a method, an apparatus, an electronic device, and a storage medium for determining a target position.
  • Driving safety requires autonomous driving systems to have extremely high-precision perception and full sensor field-of-view coverage, so that dangerous traffic conditions and traffic behaviors can be perceived effectively.
  • the camera is one of the earliest sensors used in automatic driving systems and is the sensor of choice for car manufacturers and automotive researchers, but it is strongly affected by changes in the external environment: night, rainfall, heavy fog and the like significantly degrade its perception. Lidar irradiates objects with many dense laser beams and receives the laser reflected by the objects to obtain distance information; compared with millimeter-wave radar it provides a larger and denser data volume and higher accuracy, but its adaptability to extreme environments is much lower and it is much more expensive. Improvements are therefore urgently needed.
  • the present application provides a method, device, electronic device and storage medium for determining a target position, so as to improve the accuracy of determining the position of target objects around a vehicle.
  • the embodiment of the present application provides a method for determining a target position, the method comprising:
  • determining a target area of the target object according to image data of the target object in the scene where the vehicle is located; determining a candidate point cloud of the target object according to first point cloud data of the target object in the scene where the vehicle is located and the target area; determining a target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud; and determining the position of the target object according to the target point cloud.
  • the embodiment of the present application also provides a device for determining a target position, which includes:
  • the target area determination module is configured to determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located;
  • the candidate point cloud determination module is configured to determine the candidate point cloud of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area;
  • the target point cloud determination module is configured to determine the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud;
  • a position determining module configured to determine the position of the target object according to the target point cloud.
  • the embodiment of the present application also provides an electronic device, the electronic device including:
  • at least one processor;
  • a memory configured to store at least one program;
  • when the at least one program is executed by the at least one processor, the at least one processor is made to implement the method for determining a target position provided in any embodiment of the present application.
  • an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the method for determining a target location as provided in any embodiment of the present application is implemented.
  • FIG. 1 is a flow chart of a method for determining a target position provided in Embodiment 1 of the present application;
  • FIG. 2 is a flow chart of a method for determining a target location provided in Embodiment 2 of the present application;
  • Fig. 3 is a flow chart of a method for determining a target position provided in Embodiment 3 of the present application
  • FIG. 4 is a schematic structural diagram of a device for determining a target position provided in Embodiment 4 of the present application;
  • FIG. 5 is a schematic structural diagram of an electronic device provided in Embodiment 5 of the present application.
  • Fig. 1 is a flow chart of a method for determining a target position provided in Embodiment 1 of the present application.
  • This embodiment is applicable to determining the position of target objects around a vehicle in automatic driving, especially in extreme environments (bad weather or darkness).
  • the method can be executed by a device for determining a target position, which can be implemented in software and/or hardware, and can be integrated into an electronic device carrying a function of determining a target position, such as a vehicle-mounted controller.
  • the method may include:
  • S110. Determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located.
  • the target object refers to an object in the scene where the vehicle is located, such as other vehicles.
  • the so-called target area refers to the area where the target object is located in the image data of the scene around the vehicle collected by the image acquisition device.
  • the image collection device may be a camera installed on the vehicle, for example, it may be four groups of cameras, which can collect image data of the front, rear, left, and right directions of the vehicle.
  • the image acquisition device acquires the image data of the target object in the scene where the vehicle is located, and then obtains the target area of the target object in the image data based on the target detection model.
  • the target detection model may be a YOLO-v3 model.
  • the image data of the target object may also be corrected based on preset correction parameters.
  • the preset correction parameters are set by those skilled in the art according to actual conditions.
  • target object detection is performed on the corrected image data based on a target detection model to obtain a target area of the target object.
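  • As an illustration of the correction and detection step above, the sketch below (Python, using OpenCV and NumPy) undistorts a camera frame with preset correction parameters and obtains the target area as a bounding box. The intrinsic matrix, distortion coefficients, and the stub detector are illustrative assumptions, not values from this application; any YOLO-v3-style detector could supply the box.

```python
# Minimal sketch: correct a camera frame and obtain the target area (bounding box).
# The intrinsics, distortion coefficients, and the stub detector below are
# illustrative assumptions, not parameters disclosed in this application.
import cv2
import numpy as np

# Hypothetical preset correction parameters (camera intrinsics + distortion).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

def correct_image(frame: np.ndarray) -> np.ndarray:
    """Apply the preset correction parameters to the raw frame."""
    return cv2.undistort(frame, K, dist)

def detect_target_area(frame: np.ndarray) -> tuple:
    """Placeholder for a detector such as YOLO-v3: returns (x1, y1, x2, y2).
    A real implementation would run the detection network on the frame."""
    return (500, 300, 700, 420)  # dummy box for illustration

if __name__ == "__main__":
    raw = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a camera frame
    corrected = correct_image(raw)
    target_area = detect_target_area(corrected)
    print("target area (x1, y1, x2, y2):", target_area)
```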
  • S120. Determine the candidate point cloud of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area.
  • the first point cloud data refers to the point cloud data of the target object in the scene where the vehicle is located, collected by the millimeter-wave radar. It should be noted that four groups of millimeter-wave radars are installed on the vehicle, which can collect point cloud data of target objects in the front, rear, left, and right directions of the vehicle; in addition, the acquisition times of the millimeter-wave radar and the image acquisition device need to be synchronized in advance so that each frame of point cloud data collected by the millimeter-wave radar corresponds to a frame of image data collected by the image acquisition device.
  • the so-called candidate point cloud refers to the point cloud data related to the target area.
  • the millimeter-wave radar collects the first point cloud data of the target object in the scene where the vehicle is located, and then based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, the first point cloud data is projected onto the image plane coordinate system to obtain the second point cloud data.
  • a candidate point cloud of the target object is determined according to the positional relationship between the second point cloud data and the target area.
  • the point cloud data falling into the target area in the second point cloud data is used as the candidate point cloud of the target object.
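  • A minimal sketch of the projection and candidate-selection step described above is shown below (Python/NumPy). The 3x4 radar-to-image projection matrix P is an assumed calibration, not a value from this application, and the radar points are expressed in a camera-like coordinate frame for simplicity.

```python
# Minimal sketch: project millimeter-wave radar points into the image plane and
# keep those that fall inside the target area. The projection matrix P
# (radar frame -> image pixels) is an assumed calibration.
import numpy as np

P = np.array([[800.0, 0.0, 640.0, 0.0],
              [0.0, 800.0, 360.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])  # hypothetical radar-to-image projection

def project_to_image(points_xyz: np.ndarray) -> np.ndarray:
    """Project Nx3 radar points (metres) to Nx2 pixel coordinates."""
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # Nx4
    uvw = homo @ P.T                                                   # Nx3
    return uvw[:, :2] / uvw[:, 2:3]

def candidate_points(points_xyz: np.ndarray, box: tuple) -> np.ndarray:
    """Return the radar points whose image projection lies inside the target area."""
    x1, y1, x2, y2 = box
    uv = project_to_image(points_xyz)
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return points_xyz[inside]

first_point_cloud = np.array([[0.5, 0.2, 20.0], [3.0, 0.0, 15.0], [-4.0, 0.5, 30.0]])
print(candidate_points(first_point_cloud, (500, 300, 700, 420)))
```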
  • S130. Determine the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud.
  • Optionally, the distance between the candidate point cloud and the target area can be used as the matching degree between the target area and the candidate point cloud, and the target point cloud is then determined from the candidate point cloud according to the matching degree.
  • the target point cloud refers to the point cloud data that best matches the target area.
  • the matching degree may be sorted, and the target point cloud is determined from the candidate point cloud according to the sorting result and a set threshold.
  • the matching degree is sorted from small to large, and the candidate point cloud corresponding to the matching degree greater than the set threshold in the sorting result is used as the target point cloud.
  • the set threshold can be set by those skilled in the art according to actual conditions.
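  • The ranking and thresholding described above can be sketched as follows (Python/NumPy); the threshold value is an illustrative assumption.

```python
# Minimal sketch: rank candidate point clouds by matching degree and keep those
# above a set threshold. The threshold value is an illustrative assumption.
import numpy as np

def select_target_point_cloud(candidates, matching_degrees, threshold=0.6):
    """Sort matching degrees in ascending order and return the candidates whose
    matching degree exceeds the threshold, best match last."""
    order = np.argsort(matching_degrees)                 # ascending, as in the text
    return [candidates[i] for i in order if matching_degrees[i] > threshold]

degrees = np.array([0.3, 0.82, 0.75])
clouds = ["cloud_a", "cloud_b", "cloud_c"]
print(select_target_point_cloud(clouds, degrees))  # ['cloud_c', 'cloud_b']
```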
  • S140 Determine the position of the target object according to the target point cloud.
  • the basic information of the target object corresponding to the target point cloud is obtained from the millimeter-wave radar according to the target point cloud, where the basic information includes information such as the position and speed of the target object.
  • a Kalman filter algorithm is used to determine the position of the target object according to the basic information of the target object.
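  • Since the text only states that a Kalman filter algorithm is used on the radar-reported position and speed, the sketch below shows a generic constant-velocity Kalman filter; the state layout, time step, and noise levels are assumptions, not parameters from this application.

```python
# Minimal sketch of a constant-velocity Kalman filter over the radar measurement
# (position and speed) associated with the target point cloud. Time step and
# noise levels are illustrative assumptions.
import numpy as np

dt = 0.05  # assumed radar frame interval in seconds
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state: [x, y, vx, vy]
H = np.eye(4)                               # radar measures position and velocity
Q = np.eye(4) * 0.01                        # process noise (assumed)
R = np.eye(4) * 0.1                         # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; z = [x, y, vx, vy] from the millimeter-wave radar."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)               # initial state estimate and covariance
z = np.array([12.0, 1.5, -3.0, 0.0])        # example radar measurement
x, P = kalman_step(x, P, z)
print("estimated target position:", x[:2])
```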
  • the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located; the candidate point cloud of the target object is then determined according to the first point cloud data of the target object and the target area; the target point cloud is determined from the candidate point cloud according to the matching degree between the target area and the candidate point cloud; and finally the position of the target object is determined according to the target point cloud.
  • the above technical solution combines image data with point cloud data to determine the position of the target object in the scene where the vehicle is located, which improves the accuracy of position determination and provides a new approach to determining the positions of target objects around the vehicle in automatic driving.
  • Fig. 2 is a flow chart of a method for determining a target position provided in Embodiment 2 of the present application.
  • On the basis of the above embodiment, this embodiment refines "determining the candidate point cloud of the target object according to the first point cloud data of the target object and the target area" and provides an optional implementation.
  • the method may include:
  • S210 Determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located.
  • S220. Perform three-dimensional conversion on the target area to obtain a point cloud cone.
  • the target area can be three-dimensionally converted based on a three-dimensional conversion model to obtain the point cloud cone corresponding to the target area.
  • S230. Based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, project the first point cloud data onto the image plane coordinate system to obtain the second point cloud data.
  • clustering is performed on the first point cloud data to obtain clustered point cloud data; for example, the first point cloud data can be clustered based on the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm.
  • based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, the clustered point cloud data is projected onto the image plane coordinate system to obtain the second point cloud data.
  • S240. Determine the candidate point cloud of the target object according to the positional relationship between the second point cloud data and the point cloud cone.
  • the point cloud data in the second point cloud data that falls inside the point cloud cone is used as the candidate point cloud of the target object; the second point cloud data falling outside the point cloud cone is discarded.
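  • The clustering and frustum-filtering steps of this embodiment can be sketched as follows (Python, using scikit-learn's DBSCAN). Because the three-dimensional conversion model for the point cloud cone is not detailed here, membership in the cone is approximated by checking each projected point against the two-dimensional target area.

```python
# Minimal sketch: cluster the raw radar points with DBSCAN to drop noise, then keep
# the clustered points whose image projection falls inside the target-area box
# (an approximation of the point cloud cone test).
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_points(points_xyz: np.ndarray, eps=1.0, min_samples=3) -> np.ndarray:
    """Keep only points assigned to a DBSCAN cluster (label -1 means noise)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
    return points_xyz[labels != -1]

def in_frustum(points_xyz: np.ndarray, box, project) -> np.ndarray:
    """Return points whose projection falls inside the target-area box."""
    x1, y1, x2, y2 = box
    uv = project(points_xyz)
    keep = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return points_xyz[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal([0.5, 0.2, 20.0], 0.2, (10, 3)),
                     [[50.0, 50.0, 5.0]]])          # one isolated noise point
    clustered = cluster_points(pts)
    print(len(pts), "->", len(clustered), "points after DBSCAN")
    # candidates = in_frustum(clustered, target_area, project_to_image)  # see earlier sketch
```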
  • S250. Determine the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud.
  • S260. Determine the position of the target object according to the target point cloud.
  • the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located, and the target area is three-dimensionally converted to obtain a point cloud cone.
  • based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, the first point cloud data is projected onto the image plane coordinate system to obtain the second point cloud data; the candidate point cloud of the target object is then determined according to the positional relationship between the second point cloud data and the point cloud cone; the target point cloud is determined from the candidate point cloud according to the matching degree between the target area and the candidate point cloud; and finally the position of the target object is determined according to the target point cloud.
  • the above technical solution combines image data with point cloud data to determine the position of the target object in the scene where the vehicle is located, which improves the accuracy of position determination and provides a new approach to determining the positions of target objects around the vehicle in automatic driving.
  • Fig. 3 is a flow chart of a method for determining a target position provided in Embodiment 3 of the present application.
  • On the basis of the above embodiments, this embodiment refines "determining the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud" and provides an optional implementation.
  • the method may include:
  • S310. Determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located.
  • S320. Determine the candidate point cloud of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area.
  • S330. For each candidate point cloud, determine the matching degree between the candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area, and the focal length of the acquisition device.
  • the basic information of the target area includes the size information of the target area and the coordinates of the center point of the target area; the size information includes the width and height of the target area.
  • the focal length of the acquisition device (that is, the image acquisition device) includes a focal length in a horizontal direction and a focal length in a vertical direction.
  • the first distance between the target object and the vehicle is determined according to the point cloud coordinates of the candidate point cloud.
  • the square of the abscissa and the square of the ordinate of the point cloud coordinates may be added, and the added result may be used as the first distance between the target object and the vehicle.
  • an estimated height and an estimated width of the target object may be determined according to the first distance, the width or height in the size information, and the focal length of the acquisition device.
  • the estimated height of the target object can be determined according to the first distance, the height in the size information, and the vertical focal length of the acquisition device, for example by the following formula: h = (|y2 - y1| · r) / f_y, where
  • h represents the estimated height of the target object
  • f_y represents the vertical focal length of the acquisition device
  • |y2 - y1| represents the height in the size information
  • r represents the first distance
  • the estimated width of the target object can be determined according to the first distance, the width in the size information, and the horizontal focal length of the acquisition device, for example by the following formula: w = (|x2 - x1| · r) / f_x, where
  • w represents the estimated width of the target object
  • f_x represents the horizontal focal length of the acquisition device
  • |x2 - x1| represents the width in the size information
  • r represents the first distance
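  • A sketch of the first-distance and size-estimation computations is given below. The pinhole-camera relations used here are consistent with the listed variables, and the use of the Euclidean distance (square root of the summed squares) for the first distance is an interpretation of the summed-squares description rather than a formula copied from this application.

```python
# Minimal sketch of the size-estimation step. The box geometry, focal lengths,
# and the Euclidean form of the first distance are illustrative assumptions.
import math

def first_distance(px: float, py: float) -> float:
    """Distance of the candidate point from the vehicle; the text describes summing
    the squared abscissa and ordinate, and the square root of that sum is used here."""
    return math.sqrt(px * px + py * py)

def estimated_size(r: float, box, fx: float, fy: float):
    """Estimated real-world width and height of the target from the box size and range."""
    x1, y1, x2, y2 = box
    h_est = abs(y2 - y1) * r / fy      # estimated height
    w_est = abs(x2 - x1) * r / fx      # estimated width
    return w_est, h_est

r = first_distance(12.0, 1.5)
print(estimated_size(r, (500, 300, 700, 420), fx=800.0, fy=800.0))
```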
  • the second distance between the target object and the vehicle may be determined according to the estimated width or estimated height of the target object, the focal length of the acquisition device, and the width or height in the size information.
  • the second distance between the target object and the vehicle is determined according to the height of the target object, the vertical focal length of the acquisition device, and the height in the size information, for example by the following formula: r' = (f_y · h') / |y2 - y1|, where
  • r' represents the second distance between the target object and the vehicle
  • h' represents the height of the target object
  • |y2 - y1| represents the height in the size information
  • f_y represents the vertical focal length of the acquisition device.
  • the second distance between the target object and the vehicle is determined according to the width of the target object, the horizontal focal length of the acquisition device, and the width in the size information, for example by the following formula: r' = (f_x · w') / |x2 - x1|, where
  • r' represents the second distance between the target object and the vehicle
  • w' represents the width of the target object
  • |x2 - x1| represents the width in the size information
  • f_x represents the horizontal focal length of the acquisition device.
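  • The second-distance computation can be sketched in the same way; it simply inverts the size relation, recovering range from the (estimated) physical size of the target and the box size.

```python
# Minimal sketch of the second-distance step; the inverse pinhole relation is used
# as a consistent interpretation of the listed variables.
def second_distance_from_height(h_prime: float, fy: float, box_height: float) -> float:
    """r' from the target height h', the vertical focal length, and |y2 - y1|."""
    return fy * h_prime / box_height

def second_distance_from_width(w_prime: float, fx: float, box_width: float) -> float:
    """r' from the target width w', the horizontal focal length, and |x2 - x1|."""
    return fx * w_prime / box_width

print(second_distance_from_height(1.81, fy=800.0, box_height=120.0))  # ~12.07
```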
  • the first distance, the estimated height, the estimated width, the coordinates of the center point of the candidate point cloud, the second distance and basic information are input into the neural network model to obtain the matching degree between the candidate point cloud and the target area.
  • the neural network model may be a radial basis neural network.
  • the first distance, the estimated height, the estimated width, the coordinates of the center point of the candidate point cloud, the second distance and the basic information are input into the neural network model, and after processing by the neural network model, the matching degree between the candidate point cloud and the target area is obtained.
  • the accuracy of the matching degree can be enhanced by determining the matching degree of the candidate point cloud and the target area through the neural network model.
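  • The text specifies a radial basis function (RBF) neural network but no architecture or weights, so the sketch below only illustrates the data flow: the assembled feature vector is scored by a small, untrained RBF layer. In practice the centres and weights would be learned and the features normalized.

```python
# Minimal sketch of scoring a candidate with a radial basis function (RBF) network.
# The feature layout follows the text; the centres and weights are random
# placeholders standing in for a trained network.
import numpy as np

rng = np.random.default_rng(0)

class RBFNet:
    def __init__(self, n_features: int, n_centers: int = 8, gamma: float = 0.5):
        self.centers = rng.normal(size=(n_centers, n_features))   # untrained centres
        self.weights = rng.normal(size=n_centers)                 # untrained weights
        self.gamma = gamma

    def __call__(self, x: np.ndarray) -> float:
        phi = np.exp(-self.gamma * np.sum((self.centers - x) ** 2, axis=1))
        score = phi @ self.weights
        return float(1.0 / (1.0 + np.exp(-score)))                # squash to (0, 1)

features = np.array([12.09, 1.81, 3.02,    # first distance, estimated height/width
                     600.0, 360.0,         # candidate centre in the image (assumed)
                     12.07,                # second distance
                     200.0, 120.0,         # target-area width, height
                     600.0, 360.0])        # target-area centre point
matching_degree = RBFNet(n_features=features.size)(features)
print("matching degree:", matching_degree)
```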
  • S340 Determine the target point cloud from the at least one candidate point cloud according to the matching degree between the at least one candidate point cloud and the target area.
  • the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located; the candidate point cloud of the target object is then determined according to the first point cloud data of the target object and the target area; for each candidate point cloud, the matching degree between the candidate point cloud and the target area is determined according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition device; the target point cloud is then determined from the at least one candidate point cloud according to the matching degree between the at least one candidate point cloud and the target area; and finally the position of the target object is determined according to the target point cloud.
  • the above technical solution combines image data with point cloud data to determine the position of the target object in the scene where the vehicle is located, which improves the accuracy of position determination and provides a new approach to determining the positions of target objects around the vehicle in automatic driving.
  • Fig. 4 is a schematic structural diagram of a target position determination device provided in Embodiment 4 of the present application.
  • This embodiment is applicable to determining the position of target objects around the vehicle in automatic driving, especially in extreme environments (bad weather or darkness).
  • the device can be realized by means of software and/or hardware, and can be integrated into an electronic device carrying the function of determining the target position, such as a vehicle controller.
  • the device may include a target area determination module 410, a candidate point cloud determination module 420, a target point cloud determination module 430 and a position determination module 440, wherein,
  • the target area determination module 410 is configured to determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located;
  • the candidate point cloud determination module 420 is configured to determine the candidate point cloud of the target object according to the first point cloud data and the target area of the target object in the scene where the vehicle is located;
  • the target point cloud determining module 430 is configured to determine the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud;
  • the position determination module 440 is configured to determine the position of the target object according to the target point cloud.
  • the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located; the candidate point cloud of the target object is then determined according to the first point cloud data of the target object and the target area; the target point cloud is determined from the candidate point cloud according to the matching degree between the target area and the candidate point cloud; and finally the position of the target object is determined according to the target point cloud.
  • the above technical solution combines image data with point cloud data to determine the position of the target object in the scene where the vehicle is located, which improves the accuracy of position determination and provides a new approach to determining the positions of target objects around the vehicle in automatic driving.
  • the candidate point cloud determination module 420 includes a point cloud cone obtaining unit, a second point cloud data obtaining unit and a candidate point cloud determining unit, wherein,
  • the point cloud cone obtaining unit is configured to perform three-dimensional conversion on the target area to obtain the point cloud cone;
  • the second point cloud data obtaining unit is configured to project the first point cloud data to the image plane coordinate system based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system to obtain the second point cloud data;
  • the candidate point cloud determining unit is configured to determine the candidate point cloud of the target object according to the positional relationship between the second point cloud data and the point cloud cone.
  • the number of candidate point clouds is at least one
  • the target point cloud determination module 430 includes a matching degree determination unit and a target point cloud determination unit, wherein,
  • the matching degree determining unit is configured to, for each candidate point cloud, determine the matching degree of the candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition device; the basic information includes The size information of the target area and the coordinates of the center point of the target area;
  • the target point cloud determining unit is configured to determine the target point cloud from the at least one candidate point cloud according to the matching degree between the at least one candidate point cloud and the target area.
  • the matching degree determining unit includes a first distance determining subunit, an estimated information determining subunit, a second distance determining subunit, and a matching degree determining subunit, wherein,
  • the first distance determination subunit is configured to determine the first distance between the target object and the vehicle according to the point cloud coordinates of the candidate point cloud;
  • the estimated information determination subunit is configured to determine the estimated height and estimated width of the target object according to the first distance, the width or height in the size information of the target area, and the focal length of the acquisition device;
  • the second distance determination subunit is configured to determine the second distance between the target object and the vehicle according to the estimated width or estimated height of the target object, the focal length of the acquisition device, and the width or height in the size information of the target area;
  • the matching degree determination subunit is configured to input the first distance, estimated height, estimated width, coordinates of the center point of the candidate point cloud, the second distance and basic information into the neural network model to obtain the degree of matching between the candidate point cloud and the target area .
  • the candidate point cloud determination module 420 also includes a clustered point cloud determination unit, wherein the clustered point cloud determination unit is configured to: cluster the first point cloud data to obtain clustered point cloud data.
  • the second point cloud data obtaining unit is also set as:
  • the clustered point cloud data is projected to the image plane coordinate system to obtain the second point cloud data.
  • the target point cloud determination module 430 is set to:
  • the matching degree is sorted, and the target point cloud is determined from the candidate point cloud according to the sorting result and the set threshold.
  • the device also includes a correction processing module, the correction processing module is configured to:
  • Correction processing is performed on the image data based on preset correction parameters.
  • the above-mentioned device for determining a target position can execute the method for determining a target position provided in any embodiment of the present application, and has corresponding functional modules for executing the method.
  • Fig. 5 is a schematic structural diagram of an electronic device provided in Embodiment 5 of the present application, and Fig. 5 shows a block diagram of an exemplary device suitable for implementing the implementation manner of the embodiment of the present application.
  • the device shown in FIG. 5 is only an example, and should not limit the functions and scope of use of this embodiment of the present application.
  • electronic device 12 takes the form of a general-purpose computing device.
  • Components of the electronic device 12 may include, but are not limited to, at least one processor or processing unit 16 , a system memory 28 , and a bus 18 connecting various system components including the system memory 28 and the processing unit 16 .
  • Bus 18 represents at least one of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus structures.
  • these architectures include but are not limited to Industry Standard Architecture (Industry Standard Architecture, ISA) bus, Micro Channel Architecture (Micro Channel Architecture, MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (Video Electronics Standards Association, VESA) local bus and peripheral component interconnect (Peripheral Component Interconnect, PCI) bus.
  • Electronic device 12 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by electronic device 12 and include both volatile and nonvolatile media, removable and non-removable media.
  • System memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory (cache 32).
  • the electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • storage system 34 may be used to read and write to non-removable, non-volatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive").
  • a disk drive for reading from and writing to a removable non-volatile magnetic disk may be provided, as well as an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc Read-Only Memory (DVD-ROM), or other optical media).
  • each drive may be connected to bus 18 via at least one data medium interface.
  • the system memory 28 may include at least one program product, which has a set of (for example, at least one) program modules configured to execute the functions of the various embodiments of the embodiments of the present application.
  • a program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28; such program modules 42 include, but are not limited to, an operating system, at least one application program, other program modules, and program data, each or some combination of which may include an implementation of a networking environment.
  • the program module 42 generally executes the functions and/or methods in the embodiments described in the embodiments of this application.
  • the electronic device 12 can also communicate with at least one external device 14 (such as a keyboard, a pointing device, a display 24, etc.), with at least one device that enables a user to interact with the electronic device 12, and/or with any device (e.g. a network card or a modem) that enables the electronic device 12 to communicate with at least one other computing device. This communication can be performed through an input/output (Input/Output, I/O) interface 22. Moreover, the electronic device 12 can also communicate with at least one network (such as a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN) and/or a public network such as the Internet) through the network adapter 20.
  • Network adapter 20 communicates with other modules of electronic device 12 over bus 18, as shown. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, Redundant Arrays of Independent Disks (RAID) systems, tape drives, and data backup storage systems.
  • the processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28 , for example, realizing the method for determining the target position provided by the embodiment of the present application.
  • Embodiment 6 of the present application also provides a computer-readable storage medium on which a computer program (or called computer-executable instructions) is stored.
  • when the program is executed by a processor, it is used to execute the method for determining a target location provided by the embodiments of the present application, the method including:
  • determining the target area of the target object according to the image data of the target object in the scene where the vehicle is located; determining the candidate point cloud of the target object according to the first point cloud data of the target object and the target area; determining the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud; and determining the position of the target object according to the target point cloud.
  • the computer storage medium in the embodiments of the present application may use any combination of one or more computer-readable media.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a data signal carrying computer readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • the program code contained on the computer readable medium can be transmitted by any appropriate medium, including but not limited to wireless, electric wire, optical cable, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
  • Computer program code for performing the operations of the embodiments of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g. via the Internet using an Internet Service Provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to the technical field of automatic driving, and discloses a target location determining method and apparatus, an electronic device, and a storage medium. The method comprises: determining, according to image data of a target object in an environment where a vehicle is located, a target area of the target object; determining candidate point clouds of the target object according to first point cloud data of the target object in the environment where the vehicle is located and the target area; determining a target point cloud from the candidate point clouds according to degrees of matching between the target area and the candidate point clouds; and determining the location of the target object according to the target point cloud.

Description

Target location determination method and apparatus, electronic device, and storage medium
This application claims priority to the Chinese patent application No. 202111093181.3 filed with the China Patent Office on September 17, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the technical field of automatic driving, for example, to a method, an apparatus, an electronic device, and a storage medium for determining a target position.
Background Art
Driving safety requires autonomous driving systems to have extremely high-precision perception and full sensor field-of-view coverage, so that dangerous traffic conditions and traffic behaviors can be perceived effectively. The camera is one of the earliest sensors used in automatic driving systems and is the sensor of choice for car manufacturers and automotive researchers, but it is strongly affected by changes in the external environment: night, rainfall, heavy fog and the like significantly degrade its perception. Lidar irradiates objects with many dense laser beams and receives the laser reflected by the objects to obtain distance information; compared with millimeter-wave radar it provides a larger and denser data volume and higher accuracy, but its adaptability to extreme environments is much lower and it is much more expensive. Improvements are therefore urgently needed.
Summary
The present application provides a target position determination method and apparatus, an electronic device, and a storage medium, so as to improve the accuracy of determining the positions of target objects around a vehicle.
In a first aspect, an embodiment of the present application provides a method for determining a target position, the method including:
determining a target area of the target object according to image data of the target object in the scene where the vehicle is located;
determining a candidate point cloud of the target object according to first point cloud data of the target object in the scene where the vehicle is located and the target area;
determining a target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud; and
determining the position of the target object according to the target point cloud.
In a second aspect, an embodiment of the present application further provides an apparatus for determining a target position, the apparatus including:
a target area determination module, configured to determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located;
a candidate point cloud determination module, configured to determine the candidate point cloud of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area;
a target point cloud determination module, configured to determine the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud; and
a position determination module, configured to determine the position of the target object according to the target point cloud.
In a third aspect, an embodiment of the present application further provides an electronic device, the electronic device including:
at least one processor; and
a memory configured to store at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is made to implement the method for determining a target position provided in any embodiment of the present application.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method for determining a target position provided in any embodiment of the present application is implemented.
Description of Drawings
FIG. 1 is a flow chart of a target position determination method provided in Embodiment 1 of the present application;
FIG. 2 is a flow chart of a target position determination method provided in Embodiment 2 of the present application;
FIG. 3 is a flow chart of a target position determination method provided in Embodiment 3 of the present application;
FIG. 4 is a schematic structural diagram of a target position determination apparatus provided in Embodiment 4 of the present application;
FIG. 5 is a schematic structural diagram of an electronic device provided in Embodiment 5 of the present application.
Detailed Description
The present application is described in detail below with reference to the accompanying drawings and embodiments.
Embodiment 1
FIG. 1 is a flow chart of a target position determination method provided in Embodiment 1 of the present application. This embodiment is applicable to determining the position of target objects around a vehicle in automatic driving, especially in extreme environments (bad weather or darkness). The method can be executed by a target position determination apparatus, which can be implemented in software and/or hardware and can be integrated into an electronic device carrying the target position determination function, such as a vehicle-mounted controller.
As shown in FIG. 1, the method may include:
S110. Determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located.
The target object refers to an object in the scene where the vehicle is located, such as another vehicle. The so-called target area refers to the area where the target object is located in the image data of the scene around the vehicle collected by the image acquisition device. The image acquisition device may be a camera installed on the vehicle, for example four groups of cameras that collect image data in the front, rear, left, and right directions of the vehicle.
In this embodiment, the image data of the target object in the scene where the vehicle is located is acquired by the image acquisition device, and the target area of the target object in the image data is then obtained based on a target detection model. The target detection model may be a YOLO-v3 model.
It should be noted that, after the image data of the target object is acquired, the image data may also be corrected based on preset correction parameters, which are set by those skilled in the art according to actual conditions. Exemplarily, target object detection is performed on the corrected image data based on the target detection model to obtain the target area of the target object.
S120. Determine the candidate point cloud of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area.
The first point cloud data refers to the point cloud data of the target object in the scene where the vehicle is located, collected by the millimeter-wave radar. It should be noted that four groups of millimeter-wave radars are installed on the vehicle to collect point cloud data of target objects in the front, rear, left, and right directions of the vehicle; in addition, the acquisition times of the millimeter-wave radar and the image acquisition device need to be synchronized in advance so that each frame of point cloud data collected by the millimeter-wave radar corresponds to a frame of image data collected by the image acquisition device. The so-called candidate point cloud refers to the point cloud data related to the target area.
In this embodiment, the first point cloud data of the target object in the scene where the vehicle is located, collected by the millimeter-wave radar, is acquired; then, based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, the first point cloud data is projected onto the image plane coordinate system to obtain second point cloud data. The candidate point cloud of the target object is determined according to the positional relationship between the second point cloud data and the target area. Exemplarily, the point cloud data in the second point cloud data that falls into the target area is used as the candidate point cloud of the target object.
S130. Determine the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud.
Optionally, the distance between the candidate point cloud and the target area can be used as the matching degree between the target area and the candidate point cloud, and the target point cloud is then determined from the candidate point cloud according to the matching degree.
The target point cloud refers to the point cloud data that best matches the target area.
Exemplarily, the matching degrees may be sorted, and the target point cloud is determined from the candidate point cloud according to the sorting result and a set threshold. For example, the matching degrees are sorted in ascending order, and the candidate point clouds whose matching degree is greater than the set threshold in the sorting result are used as the target point cloud. The set threshold can be set by those skilled in the art according to actual conditions.
S140. Determine the position of the target object according to the target point cloud.
In this embodiment, according to the target point cloud, the basic information of the target object corresponding to the target point cloud is obtained from the millimeter-wave radar, where the basic information includes information such as the position and speed of the target object. Exemplarily, a Kalman filter algorithm is used to determine the position of the target object according to the basic information of the target object.
In the technical solution of this embodiment, the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located; the candidate point cloud of the target object is then determined according to the first point cloud data of the target object and the target area; the target point cloud is determined from the candidate point cloud according to the matching degree between the target area and the candidate point cloud; and finally the position of the target object is determined according to the target point cloud. The above technical solution combines image data with point cloud data to determine the position of the target object in the scene where the vehicle is located, which improves the accuracy of position determination and provides a new approach to determining the positions of target objects around the vehicle in automatic driving.
Embodiment 2
FIG. 2 is a flow chart of a target position determination method provided in Embodiment 2 of the present application. On the basis of the above embodiment, this embodiment refines "determining the candidate point cloud of the target object according to the first point cloud data of the target object and the target area" and provides an optional implementation.
As shown in FIG. 2, the method may include:
S210. Determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located.
S220. Perform three-dimensional conversion on the target area to obtain a point cloud cone.
In this embodiment, the target area can be three-dimensionally converted based on a three-dimensional conversion model to obtain the point cloud cone corresponding to the target area.
S230. Based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, project the first point cloud data onto the image plane coordinate system to obtain the second point cloud data.
In this embodiment, the first point cloud data is clustered to obtain clustered point cloud data. Exemplarily, the first point cloud data can be clustered based on the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm to obtain the clustered point cloud data.
Exemplarily, based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, the clustered point cloud data is projected onto the image plane coordinate system to obtain the second point cloud data.
It can be understood that, by clustering the first point cloud data, irrelevant point cloud data can be filtered out, which makes the subsequent determination of the target position more accurate.
S240. Determine the candidate point cloud of the target object according to the positional relationship between the second point cloud data and the point cloud cone.
In this embodiment, the point cloud data in the second point cloud data that falls inside the point cloud cone is used as the candidate point cloud of the target object, and the second point cloud data falling outside the point cloud cone is discarded.
S250. Determine the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud.
S260. Determine the position of the target object according to the target point cloud.
In the technical solution of this embodiment, the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located, and the target area is three-dimensionally converted to obtain a point cloud cone. Based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, the first point cloud data is projected onto the image plane coordinate system to obtain the second point cloud data; the candidate point cloud of the target object is then determined according to the positional relationship between the second point cloud data and the point cloud cone; the target point cloud is determined from the candidate point cloud according to the matching degree between the target area and the candidate point cloud; and finally the position of the target object is determined according to the target point cloud. The above technical solution combines image data with point cloud data to determine the position of the target object in the scene where the vehicle is located, which improves the accuracy of position determination and provides a new approach to determining the positions of target objects around the vehicle in automatic driving.
实施例三Embodiment Three
FIG. 3 is a flowchart of a target position determining method provided in Embodiment 3 of the present application. On the basis of the above embodiments, this embodiment refines the step of "determining the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud" and provides an optional implementation.
如图3所示,该方法可以包括:As shown in Figure 3, the method may include:
S310、根据车辆所处场景中目标物体的图像数据,确定目标物体的目标区域。S310. Determine a target area of the target object according to the image data of the target object in the scene where the vehicle is located.
S320、根据车辆所处场景中目标物体的第一点云数据和目标区域,确定目标物体的候选点云。S320. Determine a candidate point cloud of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area.
S330、针对每一候选点云,根据该候选点云的点云坐标、目标区域的基本信息和采集设备的焦距,确定该候选点云与目标区域的匹配度。S330. For each candidate point cloud, determine the matching degree between the candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area, and the focal length of the acquisition device.
其中,目标区域的基本信息包括目标区域的尺寸信息和目标区域的中心点坐标;尺寸信息包括目标区域的宽度和高度。采集设备(即图像采集设备)的焦距包括水平方向焦距和垂直方向焦距。Wherein, the basic information of the target area includes the size information of the target area and the coordinates of the center point of the target area; the size information includes the width and height of the target area. The focal length of the acquisition device (that is, the image acquisition device) includes a focal length in a horizontal direction and a focal length in a vertical direction.
可选的,根据该候选点云的点云坐标,确定目标物体与车辆的第一距离。 示例的,可以将点云坐标的横坐标的平方与纵坐标的平方相加,将相加后的结果作为目标物体与车辆的第一距离。Optionally, the first distance between the target object and the vehicle is determined according to the point cloud coordinates of the candidate point cloud. For example, the square of the abscissa and the square of the ordinate of the point cloud coordinates may be added, and the added result may be used as the first distance between the target object and the vehicle.
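Written out symbolically (the symbols $x_p$ and $y_p$ for the point cloud coordinates are editorial, not part of the original text), the first distance described above is

$$r = x_p^{2} + y_p^{2}$$

If a Euclidean range were intended instead, the square root of this sum would be taken; the description above uses the sum of squares directly.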
在确定第一距离后,可以根据第一距离、尺寸信息中的宽度或高度、以及采集设备的焦距,确定目标物体的估计高度和估计宽度。After the first distance is determined, an estimated height and an estimated width of the target object may be determined according to the first distance, the width or height in the size information, and the focal length of the acquisition device.
示例性的,可以根据第一距离、尺寸信息中的高度、以及采集设备的垂直方向焦距,确定目标物体的估计高度,例如可以通过如下公式确定:Exemplarily, the estimated height of the target object can be determined according to the first distance, the height in the size information, and the vertical focal length of the acquisition device, for example, it can be determined by the following formula:
$$h = \frac{|y_2 - y_1| \cdot r}{f_y}$$

where $h$ is the estimated height of the target object, $f_y$ is the vertical focal length of the acquisition device, $|y_2 - y_1|$ is the height in the size information, and $r$ is the first distance.
示例性的,可以根据第一距离、尺寸信息中的宽度、以及采集设备的水平方向焦距,确定目标物体的估计宽度,例如可以通过如下公式确定:Exemplarily, the estimated width of the target object can be determined according to the first distance, the width in the size information, and the horizontal focal length of the acquisition device, for example, it can be determined by the following formula:
$$w = \frac{|x_2 - x_1| \cdot r}{f_x}$$

where $w$ is the estimated width of the target object, $f_x$ is the horizontal focal length of the acquisition device, $|x_2 - x_1|$ is the width in the size information, and $r$ is the first distance.
在确定目标物体的估计高度和估计宽度后,可以根据目标物体的估计宽度或估计高度、采集设备的焦距、以及尺寸信息中的宽度或高度,确定目标物体与车辆之间的第二距离。After determining the estimated height and estimated width of the target object, the second distance between the target object and the vehicle may be determined according to the estimated width or estimated height of the target object, the focal length of the acquisition device, and the width or height in the size information.
示例性的,若目标物体的高度已知,则根据目标物体的高度、采集设备的垂直方向焦距、以及尺寸信息中的高度,确定目标物体与车辆之间的第二距离。例如,可以通过如下公式确定:Exemplarily, if the height of the target object is known, the second distance between the target object and the vehicle is determined according to the height of the target object, the vertical focal length of the collection device, and the height in the size information. For example, it can be determined by the following formula:
$$r' = \frac{h' \cdot f_y}{|y_2 - y_1|}$$

where $r'$ is the second distance between the target object and the vehicle, $h'$ is the height of the target object, $|y_2 - y_1|$ is the height in the size information, and $f_y$ is the vertical focal length of the acquisition device.
示例性的,若目标物体的宽度已知,则根据目标物体的宽度、采集设备的 水平方向焦距、以及尺寸信息中的宽度,确定目标物体与车辆的第二距离。例如可以通过如下公式确定:Exemplarily, if the width of the target object is known, the second distance between the target object and the vehicle is determined according to the width of the target object, the horizontal focal length of the acquisition device, and the width in the size information. For example, it can be determined by the following formula:
$$r' = \frac{w' \cdot f_x}{|x_2 - x_1|}$$

where $r'$ is the second distance between the target object and the vehicle, $w'$ is the width of the target object, $|x_2 - x_1|$ is the width in the size information, and $f_x$ is the horizontal focal length of the acquisition device.
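The four relations above can be collected into one small routine, sketched below for illustration. The (x1, y1, x2, y2) box layout and all variable names are assumptions; r is the first distance obtained from the candidate point cloud, and known_h / known_w stand for an optionally known real height or width of the target object.

```python
# Illustrative sketch collecting the pinhole relations above; the box layout
# and parameter names are assumptions.
def estimate_size_and_second_distance(box, fx, fy, r, known_h=None, known_w=None):
    """Return (estimated height, estimated width, second distance)."""
    x1, y1, x2, y2 = box
    box_w, box_h = abs(x2 - x1), abs(y2 - y1)

    est_h = box_h * r / fy            # estimated height of the target object
    est_w = box_w * r / fx            # estimated width of the target object

    if known_h is not None:           # second distance from a known height
        r2 = known_h * fy / box_h
    elif known_w is not None:         # second distance from a known width
        r2 = known_w * fx / box_w
    else:
        r2 = None                     # no prior size available
    return est_h, est_w, r2
```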
可选的,将第一距离、估计高度、估计宽度、候选点云中心点坐标、第二距离和基本信息输入至神经网络模型中,得到该候选点云与目标区域的匹配度。其中,神经网络模型可以是径向基神经网络。Optionally, the first distance, the estimated height, the estimated width, the coordinates of the center point of the candidate point cloud, the second distance and basic information are input into the neural network model to obtain the matching degree between the candidate point cloud and the target area. Wherein, the neural network model may be a radial basis neural network.
Exemplarily, the first distance, the estimated height, the estimated width, the coordinates of the center point of the candidate point cloud, the second distance and the basic information are input into the neural network model, and after processing by the neural network model, the matching degree between the candidate point cloud and the target area is obtained.
可以理解的是,通过神经网络模型确定候选点云与目标区域的匹配度,可以增强匹配度的准确性。It can be understood that the accuracy of the matching degree can be enhanced by determining the matching degree of the candidate point cloud and the target area through the neural network model.
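As an illustration of such a matching model, the sketch below implements a small Gaussian radial basis function network over the feature vector described above. The network size, the feature ordering and the sigmoid output squashing are assumptions; the centres, widths and weights would come from training and are not given in this disclosure.

```python
# Illustrative RBF matching-degree sketch; the network shape, feature ordering
# and sigmoid output are assumptions, and all parameters come from training.
import numpy as np

class RBFMatcher:
    def __init__(self, centers, widths, weights, bias=0.0):
        self.centers = np.asarray(centers)   # (M, D) RBF centres
        self.widths = np.asarray(widths)     # (M,) per-centre widths
        self.weights = np.asarray(weights)   # (M,) output-layer weights
        self.bias = bias

    def match_degree(self, features) -> float:
        """features: (D,) vector such as [r, est_h, est_w, cx, cy, r2, box info]."""
        d2 = np.sum((self.centers - np.asarray(features)) ** 2, axis=1)
        phi = np.exp(-d2 / (2.0 * self.widths ** 2))   # Gaussian basis responses
        score = float(phi @ self.weights + self.bias)
        return 1.0 / (1.0 + np.exp(-score))            # squash to (0, 1)
```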
S340、根据至少一个候选点云与目标区域之间的匹配度,从所述至少一个候选点云中确定目标点云。S340. Determine the target point cloud from the at least one candidate point cloud according to the matching degree between the at least one candidate point cloud and the target area.
S350、根据目标点云,确定目标物体的位置。S350. Determine the position of the target object according to the target point cloud.
In the technical solution of this embodiment of the present application, the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located; the candidate point cloud of the target object is then determined according to the first point cloud data of the target object in that scene and the target area; for each candidate point cloud, the matching degree between that candidate point cloud and the target area is determined according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition device; the target point cloud is then determined from the at least one candidate point cloud according to the matching degree between the at least one candidate point cloud and the target area; and finally the position of the target object is determined according to the target point cloud. By combining image data with point cloud data, the above technical solution determines the position of the target object in the scene where the vehicle is located, improves the accuracy of that position determination, and provides a new approach for determining the positions of target objects in the vehicle's surroundings in automatic driving.
实施例四Embodiment four
FIG. 4 is a schematic structural diagram of a target position determining apparatus provided in Embodiment 4 of the present application. This embodiment is applicable to determining the positions of target objects around a vehicle in automatic driving, and is especially applicable to such determination in extreme environments (severe weather conditions or dark environments). The apparatus may be implemented in software and/or hardware, and may be integrated into an electronic device carrying the target position determining function, such as a vehicle-mounted controller.
如图4所示,该装置可以包括目标区域确定模块410、候选点云确定模块420、目标点云确定模块430和位置确定模块440,其中,As shown in Figure 4, the device may include a target area determination module 410, a candidate point cloud determination module 420, a target point cloud determination module 430 and a position determination module 440, wherein,
目标区域确定模块410,设置为根据车辆所处场景中目标物体的图像数据,确定目标物体的目标区域;The target area determination module 410 is configured to determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located;
候选点云确定模块420,设置为根据车辆所处场景中目标物体的第一点云数据和目标区域,确定目标物体的候选点云;The candidate point cloud determination module 420 is configured to determine the candidate point cloud of the target object according to the first point cloud data and the target area of the target object in the scene where the vehicle is located;
目标点云确定模块430,设置为根据目标区域与候选点云之间的匹配度,从候选点云中确定目标点云;The target point cloud determining module 430 is configured to determine the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud;
位置确定模块440,设置为根据目标点云,确定目标物体的位置。The position determination module 440 is configured to determine the position of the target object according to the target point cloud.
In the technical solution of this embodiment of the present application, the target area of the target object is determined according to the image data of the target object in the scene where the vehicle is located; the candidate point cloud of the target object is then determined according to the first point cloud data of the target object in that scene and the target area; the target point cloud is further determined from the candidate point cloud according to the matching degree between the target area and the candidate point cloud; and finally the position of the target object is determined according to the target point cloud. By combining image data with point cloud data, the above technical solution determines the position of the target object in the scene where the vehicle is located, improves the accuracy of that position determination, and provides a new approach for determining the positions of target objects in the vehicle's surroundings in automatic driving.
Optionally, the candidate point cloud determination module 420 includes a point cloud cone obtaining unit, a second point cloud data obtaining unit and a candidate point cloud determining unit, wherein,

The point cloud cone obtaining unit is configured to perform three-dimensional conversion on the target area to obtain a point cloud cone;
第二点云数据得到单元,设置为基于雷达平面坐标系和图像平面坐标系之间的转换关系,将第一点云数据投影到图像平面坐标系,得到第二点云数据;The second point cloud data obtaining unit is configured to project the first point cloud data to the image plane coordinate system based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system to obtain the second point cloud data;
候选点云确定单元,设置为根据第二点云数据与点云锥之间的位置关系,确定目标物体的候选点云。The candidate point cloud determining unit is configured to determine the candidate point cloud of the target object according to the positional relationship between the second point cloud data and the point cloud cone.
可选地,所述候选点云的数量为至少一个,目标点云确定模块430包括匹配度确定单元和目标点云确定单元,其中,Optionally, the number of candidate point clouds is at least one, and the target point cloud determination module 430 includes a matching degree determination unit and a target point cloud determination unit, wherein,
The matching degree determination unit is configured to, for each candidate point cloud, determine the matching degree between that candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition device; the basic information includes the size information of the target area and the coordinates of the center point of the target area;
目标点云确定单元,设置为根据至少一个候选点云与目标区域之间的匹配度,从所述至少一个候选点云中确定目标点云。The target point cloud determining unit is configured to determine the target point cloud from the at least one candidate point cloud according to the matching degree between the at least one candidate point cloud and the target area.
可选地,匹配度确定单元包括第一距离确定子单元、估计信息确定子单元、第二距离确定子单元和匹配度确定子单元,其中,Optionally, the matching degree determining unit includes a first distance determining subunit, an estimated information determining subunit, a second distance determining subunit, and a matching degree determining subunit, wherein,
第一距离确定子单元,设置为根据该候选点云的点云坐标,确定目标物体与车辆的第一距离;The first distance determination subunit is configured to determine the first distance between the target object and the vehicle according to the point cloud coordinates of the candidate point cloud;
估计信息确定子单元,设置为根据第一距离、目标区域的尺寸信息中的宽度或高度、以及采集设备的焦距,确定目标物体的估计高度和估计宽度;The estimated information determination subunit is configured to determine the estimated height and estimated width of the target object according to the first distance, the width or height in the size information of the target area, and the focal length of the acquisition device;
第二距离确定子单元,设置为根据目标物体的估计宽度或估计高度、采集设备的焦距、以及目标区域的尺寸信息中的宽度或高度,确定目标物体与车辆之间的第二距离;The second distance determination subunit is configured to determine the second distance between the target object and the vehicle according to the estimated width or estimated height of the target object, the focal length of the acquisition device, and the width or height in the size information of the target area;
匹配度确定子单元,设置为将第一距离、估计高度、估计宽度、候选点云中心点坐标、第二距离和基本信息输入至神经网络模型中,得到该候选点云与目标区域的匹配度。The matching degree determination subunit is configured to input the first distance, estimated height, estimated width, coordinates of the center point of the candidate point cloud, the second distance and basic information into the neural network model to obtain the degree of matching between the candidate point cloud and the target area .
Optionally, the candidate point cloud determination module 420 further includes a cluster point cloud determination unit, where the cluster point cloud determination unit is configured to:
对第一点云数据进行聚类,得到聚类后的点云数据;Clustering the first point cloud data to obtain clustered point cloud data;
Correspondingly, the second point cloud data obtaining unit is further configured to:
基于雷达平面坐标系和图像平面坐标系之间的转换关系,将聚类后的点云数据投影到图像平面坐标系,得到第二点云数据。Based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, the clustered point cloud data is projected to the image plane coordinate system to obtain the second point cloud data.
可选地,目标点云确定模块430设置为:Optionally, the target point cloud determination module 430 is set to:
对匹配度进行排序,根据排序结果和设定阈值,从候选点云中确定目标点云。The matching degree is sorted, and the target point cloud is determined from the candidate point cloud according to the sorting result and the set threshold.
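A minimal sketch of this ranking-and-threshold selection is given below; the threshold value and the choice of keeping only the top-ranked candidate are assumptions, since the text only states that the sorting result and a set threshold are used.

```python
# Illustrative selection sketch; the threshold value is an assumption.
def select_target_point_cloud(candidates, match_degrees, threshold=0.5):
    """Pick the candidate point cloud with the highest matching degree above the threshold."""
    if not candidates:
        return None
    best = max(range(len(candidates)), key=lambda i: match_degrees[i])
    return candidates[best] if match_degrees[best] >= threshold else None
```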
可选地,该装置还包括校正处理模块,该校正处理模块设置为:Optionally, the device also includes a correction processing module, the correction processing module is configured to:
基于预设校正参数,对图像数据进行校正处理。Correction processing is performed on the image data based on preset correction parameters.
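One common form of such correction is lens distortion correction. The sketch below uses OpenCV's undistort as an illustration, treating the preset correction parameters as a camera matrix and distortion coefficients; this mapping is an assumption, since the disclosure does not specify the parameters' form.

```python
# Illustrative correction sketch; interpreting the preset correction parameters
# as a camera matrix plus distortion coefficients is an assumption.
import cv2
import numpy as np

def correct_image(image: np.ndarray,
                  camera_matrix: np.ndarray,
                  dist_coeffs: np.ndarray) -> np.ndarray:
    """Undistort the captured image using preset calibration parameters."""
    return cv2.undistort(image, camera_matrix, dist_coeffs)
```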
上述目标位置确定装置可执行本申请任意实施例所提供的目标位置确定方法,具备执行方法相应的功能模块。The above-mentioned device for determining a target position can execute the method for determining a target position provided in any embodiment of the present application, and has corresponding functional modules for executing the method.
实施例五Embodiment five
FIG. 5 is a schematic structural diagram of an electronic device provided in Embodiment 5 of the present application, showing a block diagram of an exemplary device suitable for implementing the embodiments of the present application. The device shown in FIG. 5 is only an example, and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
如图5所示,电子设备12以通用计算设备的形式表现。电子设备12的组件可以包括但不限于:至少一个处理器或者处理单元16,系统存储器28,连接不同系统组件(包括系统存储器28和处理单元16)的总线18。As shown in FIG. 5, electronic device 12 takes the form of a general-purpose computing device. Components of the electronic device 12 may include, but are not limited to, at least one processor or processing unit 16 , a system memory 28 , and a bus 18 connecting various system components including the system memory 28 and the processing unit 16 .
总线18表示几类总线结构中的至少一种,包括存储器总线或者存储器控制器,外围总线,图形加速端口,处理器或者使用多种总线结构中的任意总线结构的局域总线。举例来说,这些体系结构包括但不限于工业标准体系结构(Industry Standard Architecture,ISA)总线,微通道体系结构(Micro Channel Architecture,MCA)总线,增强型ISA总线、视频电子标准协会(Video Electronics Standards Association,VESA)局域总线以及外围组件互连(Peripheral Component Interconnect,PCI)总线。 Bus 18 represents at least one of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus structures. For example, these architectures include but are not limited to Industry Standard Architecture (Industry Standard Architecture, ISA) bus, Micro Channel Architecture (Micro Channel Architecture, MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (Video Electronics Standards Association, VESA) local bus and peripheral component interconnect (Peripheral Component Interconnect, PCI) bus.
电子设备12典型地包括多种计算机系统可读介质。这些介质可以是任何能 够被电子设备12访问的可用介质,包括易失性和非易失性介质,可移动的和不可移动的介质。 Electronic device 12 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by electronic device 12 and include both volatile and nonvolatile media, removable and non-removable media.
System memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk") may be provided, as well as an optical disc drive for reading from and writing to a removable non-volatile optical disc (such as a compact disc read-only memory (CD-ROM), a digital video disc read-only memory (DVD-ROM), or other optical media). In these cases, each drive may be connected to the bus 18 via at least one data medium interface. The system memory 28 may include at least one program product having a set (for example, at least one) of program modules configured to perform the functions of the embodiments of the present application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28. Such program modules 42 include, but are not limited to, an operating system, at least one application program, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in this application.
The electronic device 12 may also communicate with at least one external device 14 (such as a keyboard, a pointing device, a display 24, etc.), with at least one device that enables a user to interact with the electronic device 12, and/or with any device (such as a network card, a modem, etc.) that enables the electronic device 12 to communicate with at least one other computing device. Such communication may be performed through an input/output (I/O) interface 22. Moreover, the electronic device 12 may also communicate with at least one network (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through the network adapter 20. As shown, the network adapter 20 communicates with the other modules of the electronic device 12 over the bus 18. It should be appreciated that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, redundant arrays of independent disks (RAID) systems, tape drives, and data backup storage systems.
处理单元16通过运行存储在系统存储器28中的程序,从而执行各种功能应用以及数据处理,例如实现本申请实施例所提供的目标位置确定方法。The processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28 , for example, realizing the method for determining the target position provided by the embodiment of the present application.
实施例六Embodiment six
Embodiment 6 of the present application further provides a computer-readable storage medium on which a computer program (also referred to as computer-executable instructions) is stored. When executed by a processor, the program performs the target position determining method provided by the embodiments of the present application, the method including:
根据车辆所处场景中目标物体的图像数据,确定目标物体的目标区域;Determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located;
根据车辆所处场景中目标物体的第一点云数据和目标区域,确定目标物体的候选点云;Determine the candidate point cloud of the target object according to the first point cloud data and the target area of the target object in the scene where the vehicle is located;
根据目标区域与候选点云之间的匹配度,从候选点云中确定目标点云;Determine the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud;
根据目标点云,确定目标物体的位置。According to the target point cloud, determine the position of the target object.
The computer storage medium of the embodiments of the present application may use any combination of one or more computer-readable media. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with at least one lead, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。A computer readable signal medium may include a data signal carrying computer readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device. .
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于无线、电线、光缆、射频(Radio Frequency,RF)等等,或者上述的任意合适的组合。The program code contained on the computer readable medium can be transmitted by any appropriate medium, including but not limited to wireless, electric wire, optical cable, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
Computer program code for performing the operations of the embodiments of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

Claims (10)

  1. 一种目标位置确定方法,包括:A method for determining a target location, comprising:
    根据车辆所处场景中目标物体的图像数据,确定目标物体的目标区域;Determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located;
    根据车辆所处场景中所述目标物体的第一点云数据和所述目标区域,确定所述目标物体的候选点云;determining a candidate point cloud of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area;
    根据所述目标区域与所述候选点云之间的匹配度,从所述候选点云中确定目标点云;determining a target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud;
    根据所述目标点云,确定所述目标物体的位置。According to the target point cloud, the position of the target object is determined.
  2. 根据权利要求1所述的方法,其中,所述根据所述目标物体的第一点云数据和所述目标区域,确定所述目标物体的候选点云,包括:The method according to claim 1, wherein said determining the candidate point cloud of the target object according to the first point cloud data of the target object and the target area comprises:
    将所述目标区域进行三维转换,得到点云锥;performing three-dimensional transformation on the target area to obtain a point cloud cone;
    基于雷达平面坐标系和图像平面坐标系之间的转换关系,将所述第一点云数据投影到图像平面坐标系,得到第二点云数据;Based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, projecting the first point cloud data to the image plane coordinate system to obtain the second point cloud data;
    根据所述第二点云数据与所述点云锥之间的位置关系,确定所述目标物体的候选点云。A candidate point cloud of the target object is determined according to a positional relationship between the second point cloud data and the point cloud cone.
  3. The method according to claim 1, wherein the number of the candidate point cloud is at least one, and determining the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud comprises:
    for each candidate point cloud, determining the matching degree between that candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition device, wherein the basic information comprises the size information of the target area and the coordinates of the center point of the target area;
    根据至少一个候选点云与所述目标区域之间的匹配度,从所述至少一个候选点云中确定目标点云。The target point cloud is determined from the at least one candidate point cloud according to the matching degree between the at least one candidate point cloud and the target area.
  4. 根据权利要求3所述的方法,其中,所述根据该候选点云的点云坐标、所述目标区域的基本信息和采集设备的焦距,确定该候选点云与所述目标区域的匹配度,包括:The method according to claim 3, wherein the matching degree of the candidate point cloud and the target area is determined according to the point cloud coordinates of the candidate point cloud, the basic information of the target area, and the focal length of the acquisition device, include:
    根据该候选点云的点云坐标,确定所述目标物体与所述车辆的第一距离;determining a first distance between the target object and the vehicle according to the point cloud coordinates of the candidate point cloud;
    根据所述第一距离、所述目标区域的尺寸信息中的宽度或高度、以及所述采集设备的焦距,确定所述目标物体的估计高度和估计宽度;determining an estimated height and an estimated width of the target object according to the first distance, the width or height in the size information of the target area, and the focal length of the collection device;
    根据所述目标物体的估计宽度或估计高度、所述采集设备的焦距、以及所述目标区域的尺寸信息中的宽度或高度,确定所述目标物体与所述车辆之间的第二距离;determining a second distance between the target object and the vehicle according to the estimated width or estimated height of the target object, the focal length of the acquisition device, and the width or height in the size information of the target area;
    inputting the first distance, the estimated height, the estimated width, the coordinates of the center point of the candidate point cloud, the second distance and the basic information into a neural network model to obtain the matching degree between that candidate point cloud and the target area.
  5. The method according to claim 2, wherein before projecting the first point cloud data to the image plane coordinate system based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system to obtain the second point cloud data, the method further comprises:
    对所述第一点云数据进行聚类,得到聚类后的点云数据;Clustering the first point cloud data to obtain clustered point cloud data;
    相应的,基于雷达平面坐标系和图像平面坐标系之间的转换关系,将所述第一点云数据投影到图像平面坐标系,得到第二点云数据,包括:Correspondingly, based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, the first point cloud data is projected to the image plane coordinate system to obtain the second point cloud data, including:
    基于雷达平面坐标系和图像平面坐标系之间的转换关系,将所述聚类后的点云数据投影到图像平面坐标系,得到第二点云数据。Based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, the clustered point cloud data is projected onto the image plane coordinate system to obtain the second point cloud data.
  6. 根据权利要求1所述的方法,其中,所述根据所述目标区域与所述候选点云之间的匹配度,从所述候选点云中确定目标点云,包括:The method according to claim 1, wherein said determining the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud comprises:
    对所述匹配度进行排序,根据排序结果和设定阈值,从所述候选点云中确定目标点云。The matching degree is sorted, and the target point cloud is determined from the candidate point cloud according to the sorting result and the set threshold.
  7. 根据权利要求1所述的方法,还包括:The method according to claim 1, further comprising:
    基于预设校正参数,对所述图像数据进行校正处理。Correction processing is performed on the image data based on preset correction parameters.
  8. 一种目标位置确定装置,包括:A device for determining a target location, comprising:
    目标区域确定模块,设置为根据车辆所处场景中目标物体的图像数据,确定目标物体的目标区域;The target area determination module is configured to determine the target area of the target object according to the image data of the target object in the scene where the vehicle is located;
    候选点云确定模块,设置为根据车辆所处场景中所述目标物体的第一点云 数据和所述目标区域,确定所述目标物体的候选点云;The candidate point cloud determination module is configured to determine the candidate point cloud of the target object according to the first point cloud data and the target area of the target object in the scene where the vehicle is located;
    目标点云确定模块,设置为根据所述目标区域与所述候选点云之间的匹配度,从所述候选点云中确定目标点云;The target point cloud determination module is configured to determine the target point cloud from the candidate point cloud according to the matching degree between the target area and the candidate point cloud;
    位置确定模块,设置为根据所述目标点云,确定所述目标物体的位置。A position determining module, configured to determine the position of the target object according to the target point cloud.
  9. 一种电子设备,包括:An electronic device comprising:
    至少一个处理器;at least one processor;
    存储器,设置为存储至少一个程序;a memory configured to store at least one program;
    当所述至少一个程序被所述至少一个处理器执行,使得所述至少一个处理器实现如权利要求1-7中任一项所述的目标位置确定方法。When the at least one program is executed by the at least one processor, the at least one processor is made to implement the target position determination method according to any one of claims 1-7.
  10. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1-7中任一项所述的目标位置确定方法。A computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for determining a target position according to any one of claims 1-7 is implemented.
PCT/CN2022/117770 2021-09-17 2022-09-08 Target location determining method and apparatus, electronic device, and storage medium WO2023040737A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111093181.3 2021-09-17
CN202111093181.3A CN113838125B (en) 2021-09-17 2021-09-17 Target position determining method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023040737A1 true WO2023040737A1 (en) 2023-03-23

Family

ID=78959810

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/117770 WO2023040737A1 (en) 2021-09-17 2022-09-08 Target location determining method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN113838125B (en)
WO (1) WO2023040737A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116938960A (en) * 2023-08-07 2023-10-24 北京斯年智驾科技有限公司 Sensor data processing method, device, equipment and computer readable storage medium
CN117806218A (en) * 2024-02-28 2024-04-02 厦门市广和源工贸有限公司 Method and device for determining position of electrical equipment, electronic equipment and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838125B (en) * 2021-09-17 2024-10-22 中国第一汽车股份有限公司 Target position determining method, device, electronic equipment and storage medium
CN115641567B (en) * 2022-12-23 2023-04-11 小米汽车科技有限公司 Target object detection method and device for vehicle, vehicle and medium
CN116299534A (en) * 2023-02-21 2023-06-23 广西柳工机械股份有限公司 Method, device, equipment and storage medium for determining vehicle pose

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190171212A1 (en) * 2017-11-24 2019-06-06 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for outputting information of autonomous vehicle
CN110456363A (en) * 2019-06-17 2019-11-15 北京理工大学 The target detection and localization method of three-dimensional laser radar point cloud and infrared image fusion
CN111612841A (en) * 2020-06-22 2020-09-01 上海木木聚枞机器人科技有限公司 Target positioning method and device, mobile robot and readable storage medium
CN111815707A (en) * 2020-07-03 2020-10-23 北京爱笔科技有限公司 Point cloud determining method, point cloud screening device and computer equipment
CN111862337A (en) * 2019-12-18 2020-10-30 北京嘀嘀无限科技发展有限公司 Visual positioning method and device, electronic equipment and computer readable storage medium
CN112700552A (en) * 2020-12-31 2021-04-23 华为技术有限公司 Three-dimensional object detection method, three-dimensional object detection device, electronic apparatus, and medium
CN113838125A (en) * 2021-09-17 2021-12-24 中国第一汽车股份有限公司 Target position determining method and device, electronic equipment and storage medium

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10365650B2 (en) * 2017-05-25 2019-07-30 GM Global Technology Operations LLC Methods and systems for moving object velocity determination
CN109145680B (en) * 2017-06-16 2022-05-27 阿波罗智能技术(北京)有限公司 Method, device and equipment for acquiring obstacle information and computer storage medium
CN108509918B (en) * 2018-04-03 2021-01-08 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image
CN109345510A (en) * 2018-09-07 2019-02-15 百度在线网络技术(北京)有限公司 Object detecting method, device, equipment, storage medium and vehicle
US10867430B2 (en) * 2018-11-19 2020-12-15 Intel Corporation Method and system of 3D reconstruction with volume-based filtering for image processing
WO2020155159A1 (en) * 2019-02-02 2020-08-06 深圳市大疆创新科技有限公司 Method for increasing point cloud sampling density, point cloud scanning system, and readable storage medium
CN112154445A (en) * 2019-09-19 2020-12-29 深圳市大疆创新科技有限公司 Method and device for determining lane line in high-precision map
WO2021075058A1 (en) * 2019-10-18 2021-04-22 日本電信電話株式会社 Station position selection assisting method and station position selection assisting device
KR20210100777A (en) * 2020-02-06 2021-08-18 엘지전자 주식회사 Apparatus for determining position of vehicle and operating method thereof
KR102338665B1 (en) * 2020-03-02 2021-12-10 건국대학교 산학협력단 Apparatus and method for classficating point cloud using semantic image
CN111487641B (en) * 2020-03-19 2022-04-22 福瑞泰克智能系统有限公司 Method and device for detecting object by using laser radar, electronic equipment and storage medium
CN111709988B (en) * 2020-04-28 2024-01-23 上海高仙自动化科技发展有限公司 Method and device for determining characteristic information of object, electronic equipment and storage medium
CN111899302A (en) * 2020-06-23 2020-11-06 武汉闻道复兴智能科技有限责任公司 Point cloud data-based visual detection method, device and system
CN111881827B (en) * 2020-07-28 2022-04-26 浙江商汤科技开发有限公司 Target detection method and device, electronic equipment and storage medium
CN112489427B (en) * 2020-11-26 2022-04-15 招商华软信息有限公司 Vehicle trajectory tracking method, device, equipment and storage medium
CN112907746B (en) * 2021-03-25 2024-10-29 上海商汤临港智能科技有限公司 Electronic map generation method and device, electronic equipment and storage medium
CN113156421A (en) * 2021-04-07 2021-07-23 南京邮电大学 Obstacle detection method based on information fusion of millimeter wave radar and camera
CN113297958A (en) * 2021-05-24 2021-08-24 驭势(上海)汽车科技有限公司 Automatic labeling method and device, electronic equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190171212A1 (en) * 2017-11-24 2019-06-06 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for outputting information of autonomous vehicle
CN110456363A (en) * 2019-06-17 2019-11-15 北京理工大学 The target detection and localization method of three-dimensional laser radar point cloud and infrared image fusion
CN111862337A (en) * 2019-12-18 2020-10-30 北京嘀嘀无限科技发展有限公司 Visual positioning method and device, electronic equipment and computer readable storage medium
CN111612841A (en) * 2020-06-22 2020-09-01 上海木木聚枞机器人科技有限公司 Target positioning method and device, mobile robot and readable storage medium
CN111815707A (en) * 2020-07-03 2020-10-23 北京爱笔科技有限公司 Point cloud determining method, point cloud screening device and computer equipment
CN112700552A (en) * 2020-12-31 2021-04-23 华为技术有限公司 Three-dimensional object detection method, three-dimensional object detection device, electronic apparatus, and medium
CN113838125A (en) * 2021-09-17 2021-12-24 中国第一汽车股份有限公司 Target position determining method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116938960A (en) * 2023-08-07 2023-10-24 北京斯年智驾科技有限公司 Sensor data processing method, device, equipment and computer readable storage medium
CN117806218A (en) * 2024-02-28 2024-04-02 厦门市广和源工贸有限公司 Method and device for determining position of electrical equipment, electronic equipment and storage medium
CN117806218B (en) * 2024-02-28 2024-05-28 厦门市广和源工贸有限公司 Method and device for determining position of electrical equipment, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113838125B (en) 2024-10-22
CN113838125A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
WO2023040737A1 (en) Target location determining method and apparatus, electronic device, and storage medium
CN110163930B (en) Lane line generation method, device, equipment, system and readable storage medium
EP3627180B1 (en) Sensor calibration method and device, computer device, medium, and vehicle
JP6866439B2 (en) Position detection method and its devices, equipment, storage media and vehicles
EP3623838A1 (en) Method, apparatus, device, and medium for determining angle of yaw
CN110226186B (en) Method and device for representing map elements and method and device for positioning
CN111563450B (en) Data processing method, device, equipment and storage medium
CN109849930B (en) Method and device for calculating speed of adjacent vehicle of automatic driving automobile
WO2021207954A1 (en) Target identification method and device
WO2023142816A1 (en) Obstacle information determination method and apparatus, and electronic device and storage medium
WO2020259506A1 (en) Method and device for determining distortion parameters of camera
CN113297881B (en) Target detection method and related device
CN112650300B (en) Unmanned aerial vehicle obstacle avoidance method and device
WO2020215254A1 (en) Lane line map maintenance method, electronic device and storage medium
WO2022078342A1 (en) Dynamic occupancy grid estimation method and apparatus
CN115147328A (en) Three-dimensional target detection method and device
CN109635868B (en) Method and device for determining obstacle type, electronic device and storage medium
JP7418476B2 (en) Method and apparatus for determining operable area information
CN114662600B (en) Lane line detection method, device and storage medium
WO2023005797A1 (en) Data processing method and apparatus
WO2021218346A1 (en) Clustering method and device
CN113313654B (en) Laser point cloud filtering denoising method, system, equipment and storage medium
WO2023138331A1 (en) Method and apparatus for constructing semantic map
CN112835063B (en) Method, device, equipment and storage medium for determining dynamic and static properties of object
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22869106

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22869106

Country of ref document: EP

Kind code of ref document: A1