
CN112424568A - System and method for constructing high-definition map

System and method for constructing high-definition map

Info

Publication number
CN112424568A
CN112424568A (application CN201880095637.XA)
Authority
CN
China
Prior art keywords
landmark
data
vehicle
processor
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201880095637.XA
Other languages
Chinese (zh)
Other versions
CN112424568B (en)
Inventor
马腾
杨晟
朱晓玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd
Publication of CN112424568A
Application granted
Publication of CN112424568B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 - Creation or updating of map data
    • G01C21/3859 - Differential updating map data
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 - Creation or updating of map data
    • G01C21/3833 - Creation or updating of map data characterised by the source of data
    • G01C21/3848 - Data obtained from both position sensors and additional sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

Systems and methods for updating high-definition maps. The system includes a communication interface (202) configured to receive, over a network, sensor (140/150) data (203) of a target area acquired by at least one sensor (140/150) mounted on a vehicle (100) as the vehicle (100) travels along a trajectory. The system also includes a memory configured to store a high-definition map, and at least one processor (204). The at least one processor (204) is configured to identify at least two frames of data associated with a landmark, each frame of data corresponding to a pose of the vehicle (100) in the trajectory. The at least one processor (204) is further configured to determine a set of landmark parameters in each identified data frame, to associate the set of parameters with the pose of the vehicle (100) corresponding to each data frame, and to construct a high-definition map based on the set of parameters and the associated poses.

Description

System and method for constructing high-definition map
Technical Field
The present application relates to systems and methods for constructing High Definition (HD) maps, and more particularly, to systems and methods for constructing high definition maps based on integrating point cloud data of the same landmarks acquired from different poses.
Background
Autonomous driving technology relies heavily on accurate maps. For example, the accuracy of navigation maps is critical to functions of autonomous vehicles such as localization, environment recognition, decision making, and control. High-definition maps may be obtained by aggregating images and information acquired by various sensors, detectors, and other devices equipped on vehicles. For example, a vehicle may be equipped with multiple integrated sensors, such as a LiDAR, a Global Positioning System (GPS) receiver, one or more Inertial Measurement Unit (IMU) sensors, and one or more cameras, to capture features of the road on which the vehicle travels and of the surrounding objects. The captured data may include, for example, the center-line or border-line coordinates of a lane, and the coordinates and images of objects such as buildings, other vehicles, landmarks, pedestrians, or traffic signs. The point cloud data acquired by the integrated sensors may be affected by errors from the sensors themselves (e.g., laser ranging errors, GPS positioning errors, IMU attitude measurement errors, etc.). For example, when the GPS signal is weak, the error in the pose information increases significantly.
Some solutions have been developed to improve the accuracy of point cloud data acquisition. For example, one solution based on Kalman filtering integrates a LiDAR unit and a navigation unit (e.g., a GPS/IMU unit) to estimate the pose of the vehicle. Another solution iteratively optimizes the pose information through a set of constraints, such as point cloud matching, using, e.g., the Gauss-Newton method. While these solutions may mitigate errors to some extent, they are not robust and are still susceptible to noise in the image coordinates. Accordingly, there is a need for improved systems and methods for updating high-definition maps based on optimization techniques.
Embodiments of the present application address the above-mentioned problems by a method and system for constructing a high-definition map by integrating point cloud data of the same landmarks acquired from different poses.
Disclosure of Invention
The embodiment of the application provides a method for constructing a high-definition map. The method may include receiving, via the communication interface, sensor data of a target area acquired by at least one sensor mounted on a vehicle as the vehicle travels along a trajectory. The method may further include identifying, by the at least one processor, at least two frames of data associated with the landmark, each frame of data corresponding to a pose of the vehicle in the trajectory. The method may further include determining, by the at least one processor, a set of parameters for the landmark within each identified data frame and associating the set of parameters with a pose of the vehicle corresponding to each data frame. The method may further include constructing, by at least one processor, a high-definition map based on the set of parameters and the associated pose.
The embodiment of the application also provides a system for constructing the high-definition map. The system may include a communication interface configured to receive, over a network, sensor data of a target area acquired by at least one sensor mounted on a vehicle as the vehicle travels along a trajectory. The system may further include a memory configured to store a high definition map. The system may also include at least one processor. The at least one processor may be configured to identify at least two frames of data associated with a landmark, each frame of data corresponding to a pose of the vehicle in the trajectory. The at least one processor may be further configured to determine a set of parameters for the landmark within each identified data frame and associate the set of parameters with a pose of the vehicle corresponding to each data frame. The at least one processor may also be configured to construct a high-definition map based on the set of parameters and the associated pose.
Embodiments of the present application also provide a non-transitory computer-readable medium having stored thereon instructions, which, when executed by one or more processors, cause the one or more processors to perform a method for updating a high-definition map. The method may include receiving sensor data of a target area acquired by at least one sensor mounted on a vehicle as the vehicle travels along a trajectory. The method may further include identifying at least two frames of data associated with the landmark, each frame of data corresponding to a pose of the vehicle in the trajectory. The method may further include determining a set of parameters for the landmark within each identified data frame and associating the set of parameters with the pose of the vehicle corresponding to each data frame. The method may also include constructing, by at least one processor, a high-definition map based on the set of parameters and the associated pose.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
FIG. 1 is a schematic illustration of an exemplary vehicle equipped with sensors, shown in accordance with an embodiment of the present application.
Fig. 2 is an exemplary block diagram of a system for building a high definition map, shown in accordance with an embodiment of the present application.
FIG. 3 illustrates an exemplary method for optimizing a set of parameters of a landmark and poses of a vehicle, according to an embodiment of the present application.
Fig. 4 is a flowchart illustrating an exemplary method for building a high definition map according to an embodiment of the present application.
Fig. 5 shows exemplary point cloud frames before and after applying the RANSAC algorithm, according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the examples illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
FIG. 1 is a schematic illustration of an exemplary vehicle 100 having at least two sensors 140 and 150, shown in accordance with an embodiment of the present application. Consistent with some embodiments, vehicle 100 may be a survey vehicle configured to acquire data for building high-definition maps or three-dimensional (3D) city modeling. It is contemplated that vehicle 100 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, or a conventional internal combustion engine vehicle. The vehicle 100 may have a body 110 and at least one wheel 120. Body 110 may be any body type, such as a sports vehicle, coupe, sedan, pick-up truck, recreational vehicle, Sport Utility Vehicle (SUV), minivan, or conversion van. In some embodiments, the vehicle 100 may include a pair of front wheels and a pair of rear wheels as shown in FIG. 1. However, it is contemplated that the vehicle 100 may have fewer wheels or equivalent structures that enable the vehicle 100 to move around. The vehicle 100 may be configured for all-wheel drive (AWD), front-wheel drive (FWD), or rear-wheel drive (RWD). In some embodiments, the vehicle 100 may be configured to be operated by an operator occupying the vehicle, remotely controlled, and/or operated autonomously.
As shown in fig. 1, the vehicle 100 may be equipped with various sensors 140 and 150. The sensor 140 may be mounted to the body 110 via the mounting structure 130. The mounting structure 130 may be an electromechanical device mounted or otherwise attached to the body 110 of the vehicle 100. In some embodiments, the mounting structure 130 may use screws, adhesives, or other mounting mechanisms. Vehicle 100 may be additionally equipped with sensors 150 inside or outside body 110 using any suitable mounting mechanism. It is contemplated that the manner in which the sensors 140 or 150 may be mounted on the vehicle 100 is not limited by the example shown in fig. 1 and may be modified depending on the type of sensor 140/150 and/or vehicle 100 to achieve the desired sensing performance.
Consistent with some embodiments, the sensors 140 and 150 may be configured to capture data as the vehicle 100 travels along a trajectory. For example, the sensor 140 may be a LiDAR scanner configured to scan the surroundings and acquire a point cloud. LiDAR measures distance to a target by illuminating the target with a pulsed laser and measuring the reflected pulses with a sensor. The difference in laser return time and wavelength can then be used to construct a digital 3D representation of the target. The light used for LiDAR scanning may be ultraviolet, visible, or near infrared. LiDAR scanners are particularly well suited for high-definition map measurements because a narrow laser beam can map physical features with very high resolution. In some embodiments, a LiDAR scanner may capture a point cloud.
The sensors 140 may continuously capture data as the vehicle 100 travels along the trajectory. Each set of scene data captured within a particular time frame is referred to as a data frame. For example, point cloud data captured by LiDAR may include multiple frames of point cloud data corresponding to different time ranges. Each data frame also corresponds to the pose of the vehicle along the trajectory. In some embodiments, the scene may include landmarks, and thus the plurality of data frames of the captured scene may include data associated with the landmarks. Because the frames of data are captured in different vehicle poses, the data in each frame contains landmark features observed from different angles and distances. However, these features may be matched and correlated between different data frames in order to build a high-definition map.
As shown in fig. 1, the vehicle 100 may additionally be equipped with sensors 150, which may include sensors used in a navigation unit, such as a GPS receiver and one or more IMU sensors. GPS is a global navigation satellite system that provides geolocation and time information to a GPS receiver. An IMU is an electronic device that measures and provides a vehicle's specific force and angular rate, and sometimes the magnetic field surrounding the vehicle, using various inertial sensors such as accelerometers and gyroscopes, and sometimes magnetometers. By combining the GPS receiver and the IMU sensors, the sensors 150 can provide real-time pose information of the vehicle 100 as the vehicle 100 travels, including the position and orientation (e.g., Euler angles) of the vehicle 100 at each point in time.
In some embodiments, the point cloud data acquired by the LiDAR unit of the sensor 140 may be initially in the local coordinate system of the LiDAR unit and may need to be converted to a global coordinate system (e.g., longitude/latitude coordinates) for later processing. The real-time pose information of the vehicle 100 collected by the sensors 150 of the navigation unit may be used to transform the point cloud data from the local coordinate system to the global coordinate system through point cloud data registration, e.g., based on the pose of the vehicle 100 at the time of acquiring each frame of point cloud data. To register the point cloud data with the matching real-time pose information, the sensors 140 and 150 may be integrated into an integrated sensing system so that the point cloud data may be aligned by registration with the pose information when the data is collected. The integrated sensing system may be calibrated relative to a calibration target to reduce integration errors, including but not limited to mounting angle errors and mounting vector errors of the sensors 140 and 150.
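As an illustration of this registration step, the following minimal NumPy sketch applies a per-frame rigid transform, assuming the vehicle pose is available as a 3x3 rotation matrix and a translation vector (the function name and pose representation are illustrative, not from the patent):

```python
import numpy as np

def lidar_to_global(points_local, R_pose, t_pose):
    """Transform an (N, 3) point cloud frame from the LiDAR's local
    coordinate system into global coordinates, given the vehicle pose at
    acquisition time as a 3x3 rotation matrix and a 3-vector translation
    (the pose representation is an assumption for illustration)."""
    return points_local @ R_pose.T + t_pose

# Usage: register one frame captured at a known pose.
frame = np.random.rand(1000, 3)            # stand-in for a point cloud frame
R = np.eye(3)                              # identity rotation, for illustration
t = np.array([442000.0, 4427000.0, 50.0])  # e.g., UTM easting/northing/height
frame_global = lidar_to_global(frame, R, t)
```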
Consistent with the present application, sensors 140 and 150 may communicate with server 160. In some embodiments, server 160 may be a local physical server, a cloud server (as shown in fig. 1), a virtual server, a distributed server, or any other suitable computing device. Consistent with the present application, server 160 may build a high-definition map. In some embodiments, a high-definition map may be constructed using point cloud data acquired by LiDAR. LiDAR measures distance to a target by illuminating the target with a pulsed laser and measuring the reflected pulses with a sensor. The difference in laser return time and wavelength can then be used to construct a digital 3D representation of the target. The light used for LiDAR scanning may be ultraviolet, visible, or near infrared. LiDAR is particularly well suited for high-definition map surveys because a narrow laser beam can map physical features with very high resolution.
Consistent with the present application, sensors 140 and 150 may communicate with server 160. In some embodiments, server 160 may be a local physical server, a cloud server (as shown in fig. 1), a virtual server, a distributed server, or any other suitable computing device. Consistent with the present application, server 160 may build a high-definition map. In some embodiments, a high-definition map may be constructed using point cloud data acquired by LiDAR. LiDAR measures distance to a target by illuminating the target with a pulsed laser and measuring the reflected pulses with a sensor. The difference in laser return time and wavelength can then be used to construct a digital 3D representation of the target. The light used for LiDAR scanning may be ultraviolet, visible, or near infrared. LiDAR is particularly well suited for high-definition map surveys because a narrow laser beam can map physical features with very high resolution. Server 160 may communicate with the sensors 140, 150 and/or other components of the vehicle 100 via a network, such as a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), a wireless network such as radio waves, a cellular network, a satellite communication network, and/or a local or short-range wireless network (e.g., Bluetooth™).
For example, fig. 2 is a block diagram illustrating an exemplary server 160 for building a high definition map according to an embodiment of the present application. Consistent with the present application, server 160 may receive sensor data 203 from sensors 140 and vehicle pose information 205 from sensors 150. Based on the sensor data 203, the server 160 may identify data frames associated with landmarks, determine sets of parameters within the data frames, associate them with the pose of the vehicle when the respective data frames are acquired, and construct a high-definition map based on the sets of parameters.
In some embodiments, as shown in fig. 2, server 160 may include a communication interface 202, a processor 204, a memory 206, and a storage 208. In some embodiments, the server 160 may have different modules in a single device, such as an Integrated Circuit (IC) chip (implemented as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA)), or separate devices with dedicated functions. In some embodiments, one or more components of the server 160 may be located in the cloud, or may alternatively be in a single location (such as inside the vehicle 100 or in a mobile device) or distributed across locations. The components of server 160 may be in an integrated device or distributed at different locations but communicate with each other through a network (not shown).
Communication interface 202 may send data to and receive data from components such as sensors 140 and 150 via a communication cable, a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), a wireless network such as radio waves, a cellular network, and/or a local or short-range wireless network (e.g., Bluetooth™), or other communication methods. In some embodiments, communication interface 202 may be an Integrated Services Digital Network (ISDN) card, a cable modem, a satellite modem, or a modem to provide a data communication connection. As another example, communication interface 202 may be a Local Area Network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented through the communication interface 202. In such implementations, communication interface 202 may send and receive electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information via a network.
Consistent with some embodiments, the communication interface 202 may receive sensor data 203, such as point cloud data captured by the sensors 140, and pose information 205 captured by the sensors 150. The communication interface may also provide the received data to storage 208 for storage or to the processor 204 for processing. The communication interface 202 may also receive the point cloud generated by the processor 204 and provide the point cloud over a network to any local component in the vehicle 100 or to any remote device.
The processor 204 may comprise any suitable type of general or special purpose microprocessor, digital signal processor, or microcontroller. The processor 204 may be configured as a separate processor module dedicated to building high-definition maps. Alternatively, the processor 204 may be configured as a shared processor module that also performs other functions unrelated to high-definition map construction.
As shown in fig. 2, the processor 204 may include a plurality of modules, such as a landmark feature extraction unit 210, a landmark feature matching unit 212, a landmark parameter determination unit 214, a high definition map construction unit 216, and the like. These modules (and any corresponding sub-modules or sub-units) may be hardware units (e.g., portions of an integrated circuit) of the processor 204 designed for use with other components, or software units implemented by the processor 204 through executing at least part of a program. The program may be stored on a computer-readable medium and, when executed by the processor 204, may perform one or more functions. Although FIG. 2 shows units 210 through 216 all within one processor 204, it is contemplated that these units may be distributed among multiple processors located near or remote from each other. For example, modules related to landmark feature extraction, such as the landmark feature extraction unit 210, the landmark feature matching unit 212, and the landmark parameter determination unit 214, may be within a processor on the vehicle 100. Modules related to building a high-definition map, such as the high definition map construction unit 216, may be within a processor on a remote server.
The landmark feature extraction unit 210 may be configured to extract landmark features from the sensor data 203. In some embodiments, the landmark features may be geometric features of the landmark. Different extraction methods may be used depending on the type of landmark. For example, a landmark may be a road marker (e.g., a lane or pedestrian marking) or a standing object (e.g., a tree or a road sign).
The processor 204 may determine the type of landmark. In some embodiments, if the landmark is determined to be a road marker, the landmark feature extraction unit 210 may identify the landmark based on its point cloud intensity. For example, the landmark feature extraction unit 210 may segment the point cloud data associated with the road surface on which the vehicle travels using a random sample consensus (RANSAC) method. Since road markers are generally made of special marking materials that produce high-intensity returns, the landmark feature extraction unit 210 may extract the features of a road marker based on intensity, for example using a region-growing or clustering method. In some other embodiments, if the landmark is determined to be a standing object, the landmark feature extraction unit 210 may extract the landmark features based on Principal Component Analysis (PCA). For example, the landmark feature extraction unit 210 may analyze the neighborhood of the landmark using PCA to identify the geometric features of the landmark, and may determine the landmark features using a combination of those geometric features.
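As a rough sketch of the road-marker branch described above, the following plain-NumPy example fits a road plane with RANSAC and keeps the high-intensity points lying on it; the thresholds and function names are assumptions for illustration, not details from the patent:

```python
import numpy as np

def ransac_ground_plane(pts, n_iter=200, tol=0.05, seed=0):
    """Plain-NumPy RANSAC plane fit used as a stand-in for the road-surface
    segmentation step; pts is an (N, 3) array. Returns a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # degenerate sample, skip
            continue
        dist = np.abs((pts - p0) @ (normal / norm))
        mask = dist < tol                  # points within tol of the plane
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

def road_marking_points(pts, intensity, intensity_thresh=0.7):
    """Road markers are painted with highly reflective material, so keep
    high-intensity returns lying on the fitted road plane."""
    on_road = ransac_ground_plane(pts)
    return pts[on_road & (intensity > intensity_thresh)]
```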
The landmark feature matching unit 212 may be configured to divide the sensor data 203 into subsets. For example, the landmark feature matching unit 212 may divide the sensor data 203 into data frames based on the time points at which the sensor data were captured. The data frames include point cloud data associated with the same landmark, captured at different vehicle poses along the trajectory. The landmark feature matching unit 212 may be further configured to match landmark features across the subsets and identify the data frames associated with a landmark.
In some embodiments, landmark features may be matched using a learning model trained based on known sample landmark features associated with the same landmark. For example, the landmark feature matching unit 212 may use landmark features such as the type, set attributes, and/or geometric features of landmarks as sample landmark features and combine the features with associated vehicle poses to identify landmarks within different subsets. The landmark feature matching unit 212 may then train a learning model (e.g., a rule-based machine learning method) based on sample landmark features associated with the same landmark. The trained model may then be applied to find matching landmark features.
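A toy version of such matching logic might look as follows; the feature keys, weights, and threshold are hypothetical stand-ins for what a trained model would provide:

```python
import numpy as np

def match_score(feat_a, feat_b):
    """Score how well two landmark observations from different data frames
    agree; 'type', 'centroid' (global coords), and 'principal_axis' (unit
    vector) are hypothetical feature keys, not names from the patent."""
    if feat_a["type"] != feat_b["type"]:
        return 0.0
    position_score = np.exp(-np.linalg.norm(feat_a["centroid"] - feat_b["centroid"]))
    axis_score = abs(float(feat_a["principal_axis"] @ feat_b["principal_axis"]))
    return 0.5 * position_score + 0.5 * axis_score  # illustrative weights

def same_landmark(feat_a, feat_b, threshold=0.8):
    """Two frames are associated with the same landmark when the match
    score clears a predetermined threshold (cf. step S410)."""
    return match_score(feat_a, feat_b) > threshold
```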
The landmark parameter determination unit 214 may be configured to determine a set of parameters of the landmark based on the matched landmark features. In some embodiments, the set of parameters may be determined based on the type of landmark. For example, if the landmark is a line-segment-type object (e.g., a street light pole), it may be represented with 4 or 6 degrees of freedom, including line direction (2 degrees of freedom), tangent position (2 degrees of freedom), and endpoints (0 or 2 degrees of freedom). As another example, if the landmark is a symmetric object (e.g., a tree or a billboard), it may be represented with 5 degrees of freedom, including a normal vector (2 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom). Landmarks that are neither type of object can be represented with 6 degrees of freedom, including Euler angles (3 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom).
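The degree-of-freedom breakdown above can be captured in a small data structure, sketched below (field names are illustrative, not from the patent):

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class LandmarkParams:
    """One landmark's parameter set, following the degree-of-freedom
    breakdown above; field names are illustrative assumptions."""
    kind: str                               # "segment", "symmetric", or "general"
    position: Optional[np.ndarray] = None   # spatial location (3 DOF)
    direction: Optional[np.ndarray] = None  # segment: line direction (2 DOF)
    tangent: Optional[np.ndarray] = None    # segment: tangent position (2 DOF)
    endpoints: Optional[np.ndarray] = None  # segment: endpoints (0 or 2 DOF)
    normal: Optional[np.ndarray] = None     # symmetric: normal vector (2 DOF)
    euler: Optional[np.ndarray] = None      # general: Euler angles (3 DOF)

# e.g., a street light pole modeled as a line-segment-type object:
pole = LandmarkParams(kind="segment",
                      direction=np.array([0.0, 0.0, 1.0]),
                      tangent=np.array([3.2, 7.5]),
                      endpoints=np.array([0.0, 4.5]))
```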
The high definition map construction unit 216 may be configured to construct a high-definition map based on the set of parameters. In some embodiments, the high-definition map may be constructed using an optimization method. The matched landmark features obtained by the landmark feature matching unit 212 and the sets of parameters determined by the landmark parameter determination unit 214 provide additional constraints that may be used during the optimization of high-definition map construction. In some embodiments, bundle adjustment may be added as an auxiliary component of the optimization to improve the robustness of map construction. For example, a bundle adjustment method may be applied in addition to a conventional map optimization method (e.g., by adding constraints). The extended map optimization (e.g., with bundle adjustment constraints added) is more robust in optimizing the vehicle pose information and the sets of landmark parameters, and can thus improve the accuracy of high-definition map construction (e.g., even when GPS positioning accuracy is at the decimeter level, a high-definition map can still be constructed with centimeter-level accuracy).
In some embodiments, the processor 204 may additionally include a sensor calibration unit (not shown) configured to determine one or more calibration parameters associated with the sensor 140 or 150. In some embodiments, the sensor calibration unit may alternatively be within the vehicle 100, in a mobile device, or otherwise remotely located from the processor 204. For example, sensor calibration may be used to calibrate LiDAR scanners and positioning sensors.
Memory 206 and storage 208 may comprise any suitable type of mass storage provided to store any type of information that processor 204 may need to operate. The memory 206 and storage 208 may be volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of storage devices or tangible (i.e., non-transitory) computer-readable media, including but not limited to ROM, flash memory, dynamic RAM, and static RAM. The memory 206 and/or storage 208 may be configured to store one or more computer programs that may be executed by the processor 204 to perform the high-definition map construction functions disclosed herein. For example, the memory 206 and/or storage 208 may be configured to store a program that may be executed by the processor 204 to construct a high-definition map based on sensor data captured by the sensors 140 and 150.
Memory 206 and/or storage 208 may further be configured to store information and data used by processor 204. For example, the memory 206 and/or storage 208 may be configured to store various types of sensor data (e.g., frames of point cloud data, pose information, etc.) captured by the sensors 140, 150 and high-definition maps. The memory 206 and/or storage 208 may also store intermediate data, such as a machine learning model, landmark features, a set of parameters associated with landmarks, and so forth. Various types of data may be stored permanently, removed periodically, or discarded immediately after each data frame is processed.
Fig. 3 illustrates an exemplary method of optimizing a set of parameters of a landmark and the poses of the vehicle at which the data frames of the landmark were acquired, according to an embodiment of the present application. As shown in FIG. 3, P_i is the pose information of the vehicle 100 at time point i, when sensor data F_i and V_i are acquired. In some embodiments, P_i may comprise [S_i T_i R_i], which represents the vehicle pose at the time sensor data F_i and V_i are acquired. For example, [S_i T_i R_i] may be parameters indicating the pose of the vehicle 100 in global coordinates. Sensor data F_i is the set of parameters of the observed landmark (e.g., the line direction, tangent position, and endpoint of a line-segment object), and sensor data V_i is the difference between the vehicle pose information P_i and the set of observed parameters F_i. For example, P_1 is the pose information of the vehicle 100 at the time point when sensor data F_1 and V_1 are acquired, F_1 is a set of parameters of the observed landmark, and V_1 is the vector from P_1 to F_1.
In some embodiments, {F_1, F_2, ..., F_n} and {V_1, V_2, ..., V_n} can be divided into subsets of sensor data based on the time points at which the sensor data are collected (e.g., {F_1, F_2, ..., F_n} can be divided into {F_1, F_2, ..., F_k} and {F_(k+1), F_(k+2), ..., F_n}; {V_1, V_2, ..., V_n} can be divided into {V_1, V_2, ..., V_k} and {V_(k+1), V_(k+2), ..., V_n}). [C_c T_c R_c] represents the set of parameters of a landmark in global coordinates (e.g., if the landmark is a line-segment object, C_c, T_c, and R_c may represent the line direction, tangent position, and endpoint of the landmark, respectively). d_i represents an observation error, equal to the difference between the sensor data F_i acquired by sensors 140 and 150 and the set of parameters [C_c T_c R_c] of the landmark in global coordinates.
In some embodiments, the disclosed method first extracts landmark features from the sensor data {F_1, F_2, ..., F_i} and {V_1, V_2, ..., V_i}. The method then divides the sensor data into subsets. For example, sensor data {F_1, F_2, ..., F_i} and {V_1, V_2, ..., V_i} can be divided into subsets {F_1, F_2, ..., F_m}, {F_(m+1), F_(m+2), ..., F_i} and {V_1, V_2, ..., V_m}, {V_(m+1), V_(m+2), ..., V_i}. The method may also include matching landmark features in the subsets and identifying at least two data frames associated with the landmark. The method then determines a set of parameters of the landmark within each identified data frame and associates the set of parameters with the vehicle pose P_i corresponding to each data frame. Finally, the poses and the set of parameters are optimized simultaneously. For example, an optimization method can be used to find the optimal {T_i, R_i, P_i} that minimizes the sum of the observation errors, sum_i ||d_i||^2.
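To make the optimization concrete, below is a compressed sketch of the joint pose-and-landmark adjustment, reduced to a 2D toy problem with one landmark observed from three poses and solved with scipy.optimize.least_squares; the pose parameterization (x, y, heading), the prior weight, and all numbers are assumptions for illustration, not values from the patent:

```python
import numpy as np
from scipy.optimize import least_squares

# Measured vehicle poses (x, y, heading) and the landmark position observed
# in each frame's local coordinates -- synthetic stand-ins for P_i and F_i.
poses_meas = np.array([[0.0, 0.0, 0.00],
                       [5.0, 0.1, 0.02],
                       [10.0, -0.1, -0.01]])
obs_local = np.array([[12.0, 3.0], [7.1, 2.8], [2.0, 3.1]])

def unpack(x):
    return x[:9].reshape(3, 3), x[9:]   # per-frame poses, landmark position c

def residuals(x):
    poses, c = unpack(x)
    res = []
    for (px, py, th), z in zip(poses, obs_local):
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        res.extend(R @ z + np.array([px, py]) - c)  # observation error d_i
    # Prior terms keep the optimized poses near the measured P_i
    # (the 0.1 weight is illustrative).
    res.extend(0.1 * (poses - poses_meas).ravel())
    return np.array(res)

x0 = np.concatenate([poses_meas.ravel(), [12.0, 3.0]])  # start at measurements
sol = least_squares(residuals, x0)                      # minimizes sum of squares
poses_opt, landmark_opt = unpack(sol.x)
```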
Fig. 4 shows a flowchart of an exemplary method for building a high definition map according to an embodiment of the present application. In some embodiments, method 400 may be implemented by a high definition mapping system that includes, among other things, server 160 and sensors 140 and 150. However, the method 400 is not limited to this exemplary embodiment. The method 400 may include steps S402-S416 as described below. It should be understood that some steps may be optional to perform the present application as provided herein. Further, some steps may be performed simultaneously, or in a different order than shown in fig. 4.
In step S402, one or more sensors 140 and 150 may be calibrated. In some embodiments, the vehicle 100 may be dispatched through a calibration trip to collect data for calibrating the sensor parameters. Calibration may be performed before actual investigation to build and/or update the map. Point cloud data captured by LiDAR (as an example of sensor 140) and pose information acquired by positioning devices such as GPS receivers and one or more IMU sensors may be calibrated.
At step S404, as the vehicle 100 travels along the trajectory, the sensors 140 and 150 may capture sensor data 203 and pose information 205. In some embodiments, the sensor data 203 of the target area may be point cloud data. The vehicle 100 may be equipped with a sensor 140, such as a LiDAR laser scanner. As the vehicle 100 travels along the trajectory, the sensor 140 may continuously capture frames of sensor data 203 at different points in time in the form of point cloud data frames. The vehicle 100 may also be equipped with sensors 150, such as a GPS receiver and one or more IMU sensors. The sensors 140 and 150 may form an integrated sensing system. In some embodiments, the sensors 150 may capture real-time pose information of the vehicle 100 as the vehicle 100 travels along the trajectory in a natural scene and the sensor 140 captures the point cloud dataset representing the target area.
In some embodiments, the captured data (including, for example, sensor data 203 and pose information 205) may be sent from the sensors 140/150 to the server 160 in real-time. For example, data may be streamed as it becomes available. The real-time transmission of data enables server 160 to process frames of data in real-time while capturing subsequent frames. Alternatively, the data may be transferred in batches after a portion or the entire survey is completed.
At step S406, the processor 204 may extract landmark features from the sensor data. In some embodiments, landmark features may be extracted based on the type of landmark. For example, the processor 204 may determine whether a landmark is a road marker (e.g., a traffic lane) or a standing object (e.g., a tree or a billboard). In some embodiments, if the landmark is determined to be a road marker, the processor 204 may identify the landmark based on its point cloud intensity. For example, the landmark feature extraction unit 210 may segment the sensor data using the RANSAC algorithm. Based on the segmentation, the processor 204 may further identify the landmark based on the point cloud intensity.
For example, fig. 5 shows exemplary point clouds 510 and 520 of the same object (e.g., a road marker) before and after intensity-based landmark identification, respectively, according to an embodiment of the present application. The point cloud 510 of the road marking is the data collected by the sensors 140 and 150 before identification. In contrast, the point cloud 520 of the same road marking is regenerated after intensity-based identification (e.g., using the RANSAC algorithm to segment the sensor data). After the sensor data collected by sensors 140 and 150 are filtered by the RANSAC method to reduce noise, the landmark (road marking) is more distinguishable in point cloud 520 than in point cloud 510.
In some other embodiments, if the landmark is determined to be a standing object, the processor 204 may identify the landmark based on a principal component analysis (PCA) method. For example, the processor 204 may use an orthogonal transformation to convert observations of a set of possibly correlated variables (e.g., the point cloud data in the vicinity of the landmark) into values of linearly uncorrelated variables.
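One common PCA recipe, assumed here for illustration since the patent does not spell out the exact computation, eigen-decomposes the neighborhood covariance and reads the structure off the eigenvalue ratios:

```python
import numpy as np

def pca_geometric_features(neighborhood):
    """Eigen-decompose the covariance of an (N, 3) neighborhood of a landmark;
    the orthogonal transform yields linearly uncorrelated components whose
    eigenvalue ratios separate pole-like, planar, and scattered structure."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / max(len(neighborhood) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    l3, l2, l1 = eigvals                         # so l1 >= l2 >= l3
    l1 = max(l1, 1e-12)                          # guard against degenerate input
    linearity = (l1 - l2) / l1                   # high for poles and lamp posts
    planarity = (l2 - l3) / l1                   # high for signs and billboards
    return linearity, planarity, eigvecs[:, -1]  # plus the principal direction
```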
At step S408, the processor 204 may divide the sensor data into subsets. For example, the processor 204 may divide the sensor data 203 into data frames based on the time points at which the sensor data were captured. The data frames include point cloud data associated with the same landmark, captured at different vehicle poses along the trajectory. The processor 204 may be further configured to match landmark features across the subsets and identify the data frames associated with a landmark.
In some embodiments, landmark features may be matched using a learning model trained based on known sample landmark features associated with the same landmark. For example, the processor 204 may use landmark features, such as type, aggregate attributes, and/or geometric features, as sample landmark features and combine the features with associated vehicle poses to identify landmarks within different subsets. The processor 204 may then train a learning model based on the sample landmark features of the matched landmarks (e.g., using a rule-based machine learning approach). The trained model may be applied to match landmark features associated with the same landmark.
Based on the matching results, the processor 204 may identify at least two data frames associated with the landmark at step S410. For example, the processor 204 may associate at least two data frames in the different subset with a landmark if the match results for the at least two data frames are above a predetermined threshold level corresponding to a sufficient level of match.
At step S412, the processor 204 may determine a set of parameters associated with the landmark. In some embodiments, a set of parameters for a landmark may be determined based on the type of landmark.
For example, if the landmark is a line-segment-type object (e.g., a street light pole), it may be represented with 4 or 6 degrees of freedom, including line direction (2 degrees of freedom), tangent position (2 degrees of freedom), and endpoints (0 or 2 degrees of freedom). As another example, if the landmark is a symmetric object (e.g., a tree or a billboard), it may be represented with 5 degrees of freedom, including a normal vector (2 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom). Landmarks that are neither type of object can be represented with 6 degrees of freedom, including Euler angles (3 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom).
At step S414, the processor 204 may associate the set of parameters with the pose of the vehicle corresponding to each data frame. For example, each set of parameters may be associated with pose information 205 of the vehicle 100 at a point in time when a data frame is acquired.
At step S416, the processor 204 may construct a high-definition map based on the set of parameters and the associated pose information. In some embodiments, the high-definition map may be constructed using an optimization method. The matched landmark features obtained in step S410 and the set of parameters determined in step S412 may provide additional constraints used during the optimization of high-definition map construction. In some embodiments, bundle adjustment may be added as an auxiliary component of the optimization to improve the robustness of map construction. For example, a bundle adjustment method may be applied in addition to a conventional map optimization method (e.g., by adding constraints). The extended map optimization (e.g., with bundle adjustment constraints added) is more robust in optimizing the vehicle poses and the sets of landmark parameters, and may thus improve the accuracy of high-definition map construction.
Another aspect of the application relates to a non-transitory computer-readable medium storing instructions that, when executed, cause one or more processors to perform a method as described above. The computer-readable medium includes volatile or nonvolatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable medium or computer-readable storage device. For example, a computer-readable medium as in the present application may be a storage device or a storage module having stored thereon computer instructions. In some embodiments, the computer readable medium may be a disk or flash drive having computer instructions stored thereon.
It will be apparent that various modifications and variations can be made in the system and related methods of the present application by those of ordinary skill in the art. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the system and associated method of the present application.
It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.

Claims (20)

1. A method for constructing a high-definition map, comprising:
receiving, via a communication interface, sensor data of a target area acquired by at least one sensor mounted on a vehicle as the vehicle travels along a trajectory;
identifying, by at least one processor, at least two frames of data associated with a landmark, each said frame of data corresponding to a pose of the vehicle in the trajectory;
determining, by the at least one processor, a set of parameters for the landmark in each identified data frame;
associating the set of parameters with the pose of the vehicle corresponding to each data frame; and
constructing, by the at least one processor, a high-definition map based on the set of parameters and the associated pose.
2. The method of claim 1, wherein identifying at least two frames of data associated with the landmark further comprises:
extracting landmark features from the sensor data;
dividing the sensor data into subsets; and
matching the landmark features in the subsets to identify the at least two data frames associated with the landmark.
3. The method of claim 2, wherein extracting the landmark features further comprises:
segmenting the sensor data; and
identifying the landmark based on the segmented sensor data.
4. The method of claim 3, wherein the sensor data is segmented based on the RANSAC algorithm.
5. The method of claim 4, wherein identifying the landmark further comprises: determining a material characteristic of the landmark based on the point cloud intensity.
6. The method of claim 3, wherein the identifying the landmark further comprises determining a geometric characteristic of the landmark based on a PCA method.
7. The method of claim 1, wherein the at least one sensor comprises LiDAR and the sensor data comprises point cloud data.
8. The method of claim 2, wherein matching the landmark feature in the subsets further comprises calculating a match rating for the landmark feature between two of the subsets.
9. The method of claim 1, wherein the set of parameters for the landmark includes a direction, a tangent, and an endpoint for the landmark.
10. The method of claim 1, wherein the set of parameters for the landmark includes an euler angle and a spatial location of the landmark within the target region.
11. The method of claim 1, wherein the set of parameters for the landmark includes a normal vector and a spatial location of the landmark.
12. The method of claim 1, wherein constructing a high definition map further comprises optimizing the pose and the set of parameters simultaneously.
13. A system for building a high definition map, comprising:
a communication interface configured to receive, over a network, sensor data of a target area acquired by at least one sensor mounted on a vehicle as the vehicle travels along a trajectory;
a memory configured to store a high definition map; and
at least one processor configured for:
identifying at least two frames of data associated with a landmark, each of the frames of data corresponding to a pose of the vehicle in the trajectory;
determining a set of parameters for the landmark in each identified data frame;
associating the set of parameters with the pose of the vehicle corresponding to each data frame; and
constructing a high-definition map based on the set of parameters and the associated pose.
14. The system of claim 13, wherein to identify at least two frames of data associated with a landmark, the at least one processor is further configured to:
extract landmark features from the sensor data;
divide the sensor data into subsets; and
match the landmark features in the subsets to identify the at least two data frames associated with the landmark.
15. The system according to claim 14, wherein to match the landmark features in the subsets, the at least one processor is further configured for calculating a match rating for the landmark features between two of the subsets.
16. The system according to claim 14, wherein to extract the landmark features, the at least one processor is further configured to:
segment the sensor data; and
identify the landmark based on the segmented sensor data.
17. The system of claim 16, wherein the at least one processor is further configured to segment the sensor data based on a RANSAC algorithm.
18. The system of claim 13, wherein the at least one processor is further configured for determining geometric features of the landmark based on a PCA method.
19. The system of claim 13, wherein the at least one processor is further configured to optimize the pose and the set of parameters simultaneously.
20. A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform a method for constructing a high-definition map, the method comprising:
receiving sensor data of a target area acquired by at least one sensor mounted on a vehicle as the vehicle travels along a trajectory;
identifying at least two frames of data associated with a landmark, each of the frames of data corresponding to a pose of the vehicle in the trajectory;
determining a set of parameters for the landmark in each identified data frame;
associating the set of parameters with the pose of the vehicle corresponding to each data frame; and
constructing a high-definition map based on the set of parameters and the associated pose.
CN201880095637.XA 2018-12-04 2018-12-04 System and method for constructing high-definition map Active CN112424568B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/119199 WO2020113425A1 (en) 2018-12-04 2018-12-04 Systems and methods for constructing high-definition map

Publications (2)

Publication Number Publication Date
CN112424568A 2021-02-26
CN112424568B (en) 2024-09-03

Family

ID=70973697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880095637.XA Active CN112424568B (en) 2018-12-04 2018-12-04 System and method for constructing high-definition map

Country Status (2)

Country Link
CN (1) CN112424568B (en)
WO (1) WO2020113425A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712561B (en) * 2021-01-05 2024-11-01 北京三快在线科技有限公司 Picture construction method and device, storage medium and electronic equipment
US20240212204A1 (en) * 2021-08-31 2024-06-27 Intel Corporation Hierarchical segment-based map optimization for localization and mapping system
CN113984071B (en) * 2021-09-29 2023-10-13 云鲸智能(深圳)有限公司 Map matching method, apparatus, robot, and computer-readable storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103782247A (en) * 2011-09-07 2014-05-07 克朗设备有限公司 Method and apparatus for using pre-positioned objects to localize an industrial vehicle
CN104833370A (en) * 2014-02-08 2015-08-12 本田技研工业株式会社 System and method for mapping, localization and pose correction
US20150269734A1 (en) * 2014-03-20 2015-09-24 Electronics And Telecommunications Research Institute Apparatus and method for recognizing location of object
CN107438754A (en) * 2015-02-10 2017-12-05 御眼视觉技术有限公司 Sparse map for autonomous vehicle navigation
CN107607107A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of Slam method and apparatus based on prior information
US20180023960A1 (en) * 2016-07-21 2018-01-25 Mobileye Vision Technologies Ltd. Distributing a crowdsourced sparse map for autonomous vehicle navigation
US20180045519A1 (en) * 2016-08-09 2018-02-15 Nauto, Inc. System and method for precision localization and mapping
US20180188041A1 (en) * 2016-12-30 2018-07-05 DeepMap Inc. Detection of misalignment hotspots for high definition maps for navigating autonomous vehicles
CN108280866A (en) * 2016-12-30 2018-07-13 乐视汽车(北京)有限公司 Road Processing Method of Point-clouds and system
CN108351218A (en) * 2015-11-25 2018-07-31 大众汽车有限公司 Method and system for generating numerical map

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3032221B1 (en) * 2014-12-09 2022-03-30 Volvo Car Corporation Method and system for improving accuracy of digital map data utilized by a vehicle
DE102016210495A1 (en) * 2016-06-14 2017-12-14 Robert Bosch Gmbh Method and apparatus for creating an optimized location map and method for creating a location map for a vehicle
DE102017207257A1 (en) * 2017-04-28 2018-10-31 Robert Bosch Gmbh Method and apparatus for creating and providing a high accuracy card

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103782247A (en) * 2011-09-07 2014-05-07 克朗设备有限公司 Method and apparatus for using pre-positioned objects to localize an industrial vehicle
CN104833370A (en) * 2014-02-08 2015-08-12 本田技研工业株式会社 System and method for mapping, localization and pose correction
US20150269734A1 (en) * 2014-03-20 2015-09-24 Electronics And Telecommunications Research Institute Apparatus and method for recognizing location of object
CN107438754A (en) * 2015-02-10 2017-12-05 御眼视觉技术有限公司 Sparse map for autonomous vehicle navigation
CN108351218A (en) * 2015-11-25 2018-07-31 大众汽车有限公司 Method and system for generating numerical map
US20180023960A1 (en) * 2016-07-21 2018-01-25 Mobileye Vision Technologies Ltd. Distributing a crowdsourced sparse map for autonomous vehicle navigation
US20180045519A1 (en) * 2016-08-09 2018-02-15 Nauto, Inc. System and method for precision localization and mapping
US20180188041A1 (en) * 2016-12-30 2018-07-05 DeepMap Inc. Detection of misalignment hotspots for high definition maps for navigating autonomous vehicles
CN108280866A (en) * 2016-12-30 2018-07-13 乐视汽车(北京)有限公司 Road Processing Method of Point-clouds and system
CN107607107A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of Slam method and apparatus based on prior information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIHUI SHEN; HANXU SUN; PING YE: "Research of large-scale offline map management in visual SLAM", 2017 4TH INTERNATIONAL CONFERENCE ON SYSTEMS AND INFORMATICS (ICSAI), 8 January 2018 (2018-01-08) *
AI Qinglin; YU Jie; HU Keyong; CHEN Qi: "Robot SLAM implementation based on ORB key frame matching algorithm", Journal of Mechanical & Electrical Engineering, vol. 33, no. 05, 31 May 2016 (2016-05-31) *
XU Zening; GAO Xiaolu: "Identification of urban built-up area boundaries based on points of interest from electronic maps", Acta Geographica Sinica, vol. 71, no. 6, 30 June 2016 (2016-06-30) *

Also Published As

Publication number Publication date
WO2020113425A1 (en) 2020-06-11
CN112424568B (en) 2024-09-03

Similar Documents

Publication Publication Date Title
CN110859044B (en) Integrated sensor calibration in natural scenes
CN111436216B (en) Method and system for color point cloud generation
CN110832275B (en) System and method for updating high-resolution map based on binocular image
CN111448478B (en) System and method for correcting high-definition maps based on obstacle detection
CN112136021B (en) System and method for constructing landmark-based high definition map
CN112005079B (en) System and method for updating high-definition map
CN111656136A (en) Vehicle positioning system using laser radar
JP2015194397A (en) Vehicle location detection device, vehicle location detection method, vehicle location detection computer program and vehicle location detection system
CN112424568B (en) System and method for constructing high-definition map
CN113874681B (en) Evaluation method and system for point cloud map quality
AU2018102199A4 (en) Methods and systems for color point cloud generation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant