
WO2021005073A1 - Method for aligning crowd-sourced data to provide an environmental data model of a scene - Google Patents

Method for aligning crowd-sourced data to provide an environmental data model of a scene

Info

Publication number
WO2021005073A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
vehicle
server
data sets
feature
Prior art date
Application number
PCT/EP2020/069150
Other languages
French (fr)
Inventor
Bingtao Gao
Christian Thiel
Paul Barnard
Original Assignee
Continental Automotive Gmbh
Priority date
Filing date
Publication date
Application filed by Continental Automotive Gmbh filed Critical Continental Automotive Gmbh
Priority to EP20745105.5A (EP4022255A1)
Publication of WO2021005073A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/3811 Point data, e.g. Point of Interest [POI]
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/3815 Road data
    • G01C21/3819 Road shape data, e.g. outline of a route
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3848 Data obtained from both position sensors and additional sensors

Definitions

  • The length of the vehicle path sections may be in a range from a few metres to some hundred metres, and the sections may follow each other with a gap of a few metres to some hundred metres, which may further depend on the vehicle's speed or the currently available processing power.
  • the evaluation of different information may be made independently. For example, visible camera images may be evaluated in real time, while laser scanned images may be evaluated at a later time.
  • a certain level of evaluation of information may be done close to real-time, following the vehicle at speed, while a more detailed evaluation of information may only be done if differences are detected.
  • the at least one vehicle aligns the collected ambient data with at least one of the first data sets and generates second data sets describing new and/or amended feature reference points. Such second data sets may include feature descriptors or further information about the features to which the feature reference points refer.
  • the vehicle forwards the second data sets to the server database.
  • Communication with the server database may be done according to the availability of a communication network. So communication may, for example, be done when a vehicle is parked or stops at a traffic light and Wi-Fi is available. Important and/or urgent data may always be exchanged via a mobile communications network.
  • Since the second data sets mainly comprise feature reference data of detected features, they are comparatively compact and much smaller than images taken of the road or the objects. This significantly simplifies communication and reduces network load. Only in very special cases may larger image data be transferred to the server database. For example, if a traffic sign is not readable, an image of this traffic sign may be forwarded to the server database for further evaluation.
  • All or a selection of the received second data sets may be stored by the server database.
  • the server tries to fit the received second data sets to data sets in the server database.
  • the data sets in the database may be refined by information contained in the second data sets. For example, the information contained in the second data sets may be combined with the data sets in the database by weighted averaging, as in the sketch below.
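A minimal sketch of how such a weighted averaging might look; the data layout, the confidence weighting and all names are assumptions for illustration, not part of the patent:

    from dataclasses import dataclass

    @dataclass
    class StoredPoint:
        position: tuple      # (x, y, z) in map coordinates
        confidence: float    # accumulated confidence weight

    def refine(stored: StoredPoint, reported: tuple, weight: float) -> StoredPoint:
        # Confidence-weighted average of the stored and the reported position.
        total = stored.confidence + weight
        new_position = tuple(
            (s * stored.confidence + r * weight) / total
            for s, r in zip(stored.position, reported)
        )
        return StoredPoint(position=new_position, confidence=total)

    # Example: a point stored with confidence 4.0 is refined by one new report.
    point = refine(StoredPoint((10.0, 5.0, 0.0), 4.0), (10.2, 5.1, 0.0), 1.0)
    print(point.position)  # approximately (10.04, 5.02, 0.0)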
  • the improved data may be stored in the server database and forwarded to at least one vehicle. Such data may be received by a vehicle as first data sets. This may result in an iteration until the server and/or vehicles consider the data as good and/or reliable enough, such that further iterations of alignment are not necessary.
  • a server database has the necessary means to perform the method of precision alignment of vehicle path data related to the same part of the road.
  • the method is performed by a computer or a plurality of computers having a computer program in memory.
  • the steps mentioned above may either be performed in sequence, processing a certain quantity of data sets step by step or by processing individual data sets.
  • data of a certain section of the vehicle's path may be collected and processed step by step after the path section has been passed by the vehicle.
  • data processing may be made continuously while the vehicle is passing a certain section of its path.
  • data may be continuously forwarded to the server database or in bursts.
  • a combination may be possible.
  • second data sets may be generated by aligning feature data while other first data sets are received from the server.
  • the vehicle may have at least one sensor, which may be a camera, or a plurality of sensors from at least one of: imaging sensors like cameras, radar sensors, infrared or RF sensors; position and movement sensors like GNSS (global navigation satellite system), IMU (inertial measurement unit), wheel-ticks, or steering angle; and environmental sensors like outside temperature or rain (e.g. from the windshield wiper).
  • the vehicle may further include a data processing means like a computer which may include further components like a memory.
  • the vehicle may also include a communication means to exchange data (receive and send data) with a remote server and/or a server database.
  • a further embodiment relates to a system for aligning data provided by a plurality of vehicles, the system comprising a vehicle of claim 5 and further including a server and a server database remote from the vehicle.
  • Figure 1 shows a general scenario with self-driving vehicles driving on roads.
  • Figure 2 shows basic vehicle components.
  • Figure 3 shows a precision vehicle position detection.
  • Figure 4 shows basic ambient data set acquisition.
  • Figures 5a, 5b and 5c show features of different images taken by a vehicle proceeding along a road.
  • Figure 6 shows a feature map of figure 5b.
  • Figure 7 shows the basic flow of data handling.
  • a road network 100 comprises a plurality of roads 101.
  • the roads have specific properties like the width, the direction, the curvature, the number of lanes in each direction, the width of lanes, or the surface structure. There may be further specific details like a kerb stone, a centreline, even a single dash of a dashed centreline or other markings like a crosswalk.
  • Close to the roads are driving relevant geographic objects 140.
  • a geographic object may be any stationary object like but not limited to a building, a tree, a river, a lake, or a mountain.
  • road furniture objects 150 which may comprise objects and pieces of equipment installed on streets and roads for various purposes, such as benches, traffic barriers, bollards, post boxes, phone boxes, streetlamps, traffic lights, traffic signs, bus stops, tram stops, taxi stands, public sculptures, and waste receptacles.
  • There may be further objects which do not fall into one of the above categories but are also relevant for driving.
  • Such objects may be green-belts, trees, stones, walls, or other obstacles close to a road. They may also comprise structures like trenches, plane surfaces and others which may be considered for planning alternate, emergency exit or collision avoidance paths.
  • a server 120 preferably has at least a database hub 121 connected to a server database 122.
  • a communication network 130 is provided for the communication between the server 120 and the vehicles 110.
  • the communication network may be a mobile communications network, Wi-Fi, or any other type of network.
  • this network is based on an Internet protocol.
  • the embodiments generally relate to vehicles, which may be cars, trucks, motorbikes, or any other means for traveling on a road. For simplicity in the figures cars are shown.
  • a vehicle comprises at least one of environmental sensors 210, vehicle status sensors 220, position sensors 230 and a communication system 240. These are preferably connected by at least one communication bus 259 to a processor system 250.
  • This communication bus may comprise a single bus or a plurality of buses.
  • the processor system 250 may comprise a plurality of individual processors 251 which preferably are connected to each other by a bus 252. Furthermore, they are preferably connected to a vehicle database or storage 253.
  • Environmental sensors 210 collect information about the environment of the vehicle. They may comprise a camera system like a CCD camera which may be suitable for capturing visible and/or infrared images. Preferably a simple mono- camera is provided. Alternatively, a stereo camera, which may have two imaging sensors mounted distant from each other may be used. There may be further sensors including at least one of imaging sensors like cameras, radar sensors, infrared or RF sensors; position and movement sensors like GNSS (global navigation satellite system), IMU (inertial measurement unit), wheel-ticks, steering angle; environmental sensors like outside temperature, rain (e.g. from windshield wiper); and more. The sensors may be used for scanning and detecting outside objects. The sensors may also be used for generating 2D images or 3D road models in which features and their feature reference points may be detected.
  • the status sensors 220 collect information about the vehicle and its internal states. Such sensors may detect the status of driving, driving speed and steering direction.
  • the position sensors 230 collect information about the vehicle's position.
  • Such position sensors preferably comprise a GNSS system. Any positioning system like GPS, GLONASS or Galileo may be used. Herein, the terms GPS and GNSS are used to indicate any such positioning system.
  • There may further be a wheel-dependent distance sensor like a rotation sensor, which may also be used by further systems like an anti-lock braking system (ABS), for measuring driven distances, and/or a steering sensor for detecting the driving direction.
  • the position sensors 230 may at least partially interact with or use signals from the status sensors 220.
  • the communication system 240 is used for communication with devices outside of the vehicle.
  • Preferably, the communication system is based on a Wi-Fi (wireless local area network) and/or mobile communications network.
  • the communication system may use different communication networks and/or communication protocols dependent on their availability. For example, in a parking position of the vehicle, Wi-Fi may be available; in such positions, Wi-Fi may be the preferred network. Wi-Fi may be made available at intersections, close to traffic lights and close to road sections with low traffic speeds or regular congestion. If Wi-Fi is not available, any other communication network like a mobile communications network may be used for communication. Furthermore, the communication system 240 may have a buffer to delay communication until a suitable network is available, as in the sketch below. For example, road property data which has been collected during driving may be forwarded to a server when the vehicle has arrived in a parking position and Wi-Fi is available.
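A sketch of this buffering behaviour, assuming a simple in-memory queue; the class and method names are invented for illustration and are not part of the patent:

    from collections import deque

    class UploadBuffer:
        def __init__(self, send):
            self.send = send        # callable that transmits one data set
            self.pending = deque()  # data sets waiting for a suitable network

        def submit(self, data_set, urgent: bool, wifi_available: bool):
            # Urgent data is always sent; other data waits for Wi-Fi.
            if urgent or wifi_available:
                self.send(data_set)
            else:
                self.pending.append(data_set)

        def on_wifi_connected(self):
            # Flush the deferred data sets, e.g. once parked with Wi-Fi.
            while self.pending:
                self.send(self.pending.popleft())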
  • the processor system 250 may be a processor system which is already available in a large number of vehicles. In modern vehicles, many processors are used for controlling different tasks like engine management, driving control, automated driver assistance, navigation, and entertainment. Generally, there is significant processor power already available. Such already available processors or additional processors may be used for performing the tasks described herein.
  • the processors preferably are connected by a bus system which may comprise a plurality of different buses like CAN, MOST, Ethernet or FlexRay.
  • First data sets including at least one of a feature reference point, an object reference point, and an object of a scene are received and/or loaded 501 from the server database 122 by a vehicle into the vehicle's database 253. Further, the vehicle collects ambient data represented by environmental sensor data 502. The steps of loading and collecting may be exchanged in their sequence or performed simultaneously.
  • the vehicle aligns 503 the collected ambient data with at least one of the first data sets, detects the differences 504 between the collected ambient data and at least one of the first data sets and generates second data sets describing new and/or amended feature reference points.
  • These are transferred or transmitted 505 to the server and may be stored in the server database 122.
  • the server may refine the server database by using the data included in the second data sets.
  • the refined data may be transferred 507 to a vehicle.
  • the vehicle may be the same or a different vehicle as from steps 501 to 505 or a plurality of vehicles.
  • In figure 4, the basic acquisition of an ambient data set and the detection/evaluation of features and the related feature reference points, object reference points, and objects of a scene are shown in more detail, together with figures 5a, 5b and 5c.
  • the term ambient data set is used herein for a data set generated by a plurality of sensors which may comprise at least one optical image sensor and/or radar sensor and/or infrared sensor and/or other vehicle movement and/or environmental sensors. From the combination and/or correlation of these sensor signals, an ambient data set from the perspective of a vehicle is collected, out of which at least one 2D image or 3D road model may be generated.
  • a vehicle 110 is driving on a road 300.
  • the road has the right limit 310 and the left limit 311 which may be marked by limit lines, a kerb or any other limitation or marking means.
  • There is a centreline, which in this embodiment is a dashed line having a plurality of dashes 320, 330, 340, 350. Each dash has a beginning and an end.
  • the beginnings are marked as features 321, 331, 341, and 351 whereas the ends are marked as features 322, 332, 342 and 352.
  • road furniture which may be traffic signs 370, 371, 372.
  • the vehicle preferably identifies features which may for example be the beginnings 321, 331, 341, 351 and/or the ends 322, 332, 342, 352 of the dashes. Further features may be the peak 375 of the arrow on the traffic sign or the bottom 376 of the pole of the traffic sign.
  • an ambient data set may give a 360 degree representation of the vehicle's environment
  • Shown is a viewing sector 380, which may correspond to a front camera having an approximately 90° viewing angle.
  • the corresponding front image is shown in figure 5a.
  • the left side of the image shows centreline dash 330 and part of the following centreline dash 340. This view gives a clear image of the road objects and features related to the right lane, where the vehicle is driving, but provides little information about the road objects and features of the left lane, parts of which can be seen in the top left corner of the image.
  • As the vehicle proceeds along the road through positions 111 and 112, it captures images according to the viewing sectors 381 and 382.
  • the corresponding images together with the road objects and their features are shown in figures 5b and 5c.
  • A resulting feature map is shown in figure 6. This feature map is based on figure 5b; only the detected features and the related feature reference points 331, 332, 341, 342, 375, 376 are shown.
  • In step 550, the server forwards data sets to at least one vehicle.
  • ambient data may be collected in step 551 by the vehicle.
  • the vehicle then aligns the collected ambient data with the data sets received from the server in step 552.
  • the vehicle detects the differences between the collected ambient data and the data sets received from the server.
  • the vehicle may process 554 the alignment data and it may generate second data sets which may be forwarded to the server in step 555. Further processing of the data is done by the server, which receives and stores the second data sets 556 and refines the database in step 557 by using the data included in the second data sets.
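The round trip of steps 550 to 557 could be summarized as in the following sketch; all object and method names are placeholders for illustration, not part of the patent:

    def crowd_sourcing_round(server, vehicle):
        first_data_sets = server.forward_data_sets(vehicle)                  # step 550
        ambient = vehicle.collect_ambient_data()                             # step 551
        aligned = vehicle.align(ambient, first_data_sets)                    # step 552
        differences = vehicle.detect_differences(aligned, first_data_sets)   # step 553
        second_data_sets = vehicle.process_alignment(differences)            # step 554
        vehicle.forward(second_data_sets, server)                            # step 555
        server.store(second_data_sets)                                       # step 556
        server.refine_database(second_data_sets)                             # step 557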

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method for aligning data to provide an environmental data model of a scene includes providing, by a remote server, a server database with a plurality of data sets. Each data set has at least one of a feature reference point, an object reference point, and an object of a scene. A vehicle receives first data sets from the server database, where the data sets have at least one feature reference point. The vehicle collects ambient data and generates feature reference points from the collected ambient data. It further aligns the feature reference points of the collected ambient data with at least one of the first data sets and generates second data sets describing new and/or amended feature reference points. Finally, the vehicle transmits the second data sets to the server, which refines the server database by using the data included in the second data sets and forwards the refined data sets to vehicles for further alignments.

Description

Method for aligning crowd-sourced data to provide an environmental data model of a scene
Field of the invention
The embodiments relate to a method and a system for aligning crowd-sourced data to provide an environmental data model of a scene which may be used as a database for autonomous vehicle navigation and localization.
Description of the related art
For enhancing the operation of vehicles and for providing new features and functions, an increasing number of sensors and processors or computers are integrated in modern vehicles. In US 6,353,785 B1, a method and system for in-vehicle computer architecture is disclosed. The computing architecture includes a data network comprised of a plurality of interconnected processors, a first group of sensors responsive to environmental conditions around the vehicle, a second group of sensors responsive to the vehicle's hardware systems, and a map database containing data that represent geographic features in the geographic area around the vehicle. The vehicle operations programming applications may include adaptive cruise control, automated mayday, and obstacle and collision warning systems, among others. In this embodiment, the vehicle needs a continuous communication over a data network which may sometimes be interrupted under real road conditions. Furthermore, a precise map database is required. EP 1219928 A1 shows a method for generating road segments for a digital map.
A vehicle equipped with a camera is driving over a road section. The captured images are correlated with GNSS information to produce a digital map.
To generate road maps, highly specialized vehicles are used. They have expensive equipment for scanning roads which may cost significantly more than the vehicle itself. After scanning the roads, manual processing of the acquired information is required. This is expensive, labour intensive and prone to errors. Therefore, only comparatively long update cycles can be achieved.
In EP 3 130 945 A1, a method and a system for precision vehicle positioning is disclosed. This method improves positioning accuracy by using first data sets from a database and generating second data sets by its sensors during driving.
Then the first and second data sets are aligned in the vehicle to derive a higher-accuracy position value of the vehicle. This method does not improve the accuracy of the database but is instead based only on internal processing by the vehicle.
US 2018/0023961 A1 discloses a system and a method for aligning crowdsourced sparse map data. Here, vehicles located in a scene collect environmental sensing data captured by respective environmental sensor units of the mobile entities.
The raw data from the sensor units may be used by a respective processor unit of the mobile entities to generate an initial 3D model of the scene, for example being construed as a 3D point cloud. Each of the mobile entities sends snippets with reduced point cloud data to a remote server to align a crowd-sourced database. However, in many cases the remote server does not find enough matches to align and merge the snippets received from the various mobile entities. As a result, the alignment of the crowd-sourced database may be performed on the server side with very limited features.
Summary of the invention
The problem to be solved by the invention is to improve the known vehicles, databases and methods for building databases which further provide sufficient information to vehicles to enable Highly Automated Driving (HAD) or autonomous driving or self-driving.
Solutions of the problem are described in the independent claims. The dependent claims relate to further improvements of the invention.
Tests and simulations with HAD and self-driving vehicles have shown that a reported road segment including information about road surface, line features and sparse landmarks generated from a single vehicle with low-cost sensors, for example monocular camera, IMU, GPS etc., is not very accurate, so that an environmental data model generated on the server side by evaluating the reported data from the vehicles can seriously suffer from accumulated error and scale ambiguity issues. Furthermore, using bigger landmarks such as traffic signs and traffic lights leads to greater localization error. The size of a reported road segment is usually greatly optimized to reduce the transmission size of data to be transmitted to the server side. Thus, the server side does not have the raw sensor data available on the side of the mobile entities to do proper landmark matching and alignment.
Therefore, the alignment process on the server side based on the evaluation of transmitted data from the mobile entities tends to generate many incorrect matches. Moreover, the number of sparse landmarks is very limited on the server side, which causes some alignment failures. In particular, the aligning of a crowd-sourced database on the server side may not perform well with processing an environmental data model describing unstructured roads, such as rural roads. The basic concept of an embodiment of generating and updating a precision road property database is a two-step feature data processing, whereby a first feature data processing step is done by a vehicle and a second feature data processing step is done by a server database. These steps may be repeated multiple times until the database is sufficiently accurate. The term database may be understood as a structured and/or organized memory to store and retrieve data. The database may have at least one index for faster access to selected data.
The processed data may include at least a plurality of feature reference points and/or object reference points and/or objects of a scene. A feature reference point indicates a feature in a 2D image or in a 3D road model and relates to a position of an optical feature e.g. corners or edges which may be parts of an object. Such features may be clusters of pixels, which may be detected by a simple filter. A feature reference point is a point in 3D space indicating the position of a feature in 3D space. A feature reference point may not indicate an object itself. It merely may indicate a visual or otherwise detectable feature, which may be part of an object. The recognition of features is easier and requires less computational power than identifying and classifying objects in images. A feature descriptor may be attached to a feature reference point. A feature reference point may be assigned to an object and may then become an object reference point.
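For illustration only (the patent prescribes no concrete data format), a feature reference point with an attached descriptor could be represented like this; all field names are hypothetical:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FeatureReferencePoint:
        x: float                            # position of the feature in 3D space
        y: float
        z: float
        feature_type: str                   # e.g. "corner" or "edge"
        descriptor: Optional[bytes] = None  # optional attached feature descriptor
        object_id: Optional[int] = None     # set once assigned to an object, which
                                            # makes it an object reference point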
The method may use at least one of the following data types: a feature reference point of a scene, an object reference point of a scene, or an object of a scene. It may use only one of these data types, two of them, or all three. For example, only feature reference points may be used. In a more complex processing, object reference points may be used in addition.
In more detail, this method includes the steps of: providing a server database by a remote server, the server database including a plurality of data sets, each data set including at least one of a feature reference point, an object reference point, and an object of a scene, and conducting at least once or repeating the sequence of steps a) to f):
a) at least one vehicle receiving at least one first data set from the server database via the server, the at least one first data set including at least one of a feature reference point, an object reference point, and an object of a scene,
b) the at least one vehicle collecting ambient data and generating at least one of a feature reference point, an object reference point, and an object of a scene from the collected ambient data,
c) the at least one vehicle aligning the collected ambient data with the first data sets based on at least one of a pair of feature reference points, a pair of object reference points, and a pair of objects of a scene,
d) detecting differences between the collected ambient data and the first data sets and generating second data sets related to at least one of a feature reference point, an object reference point, and an object of a scene,
e) the at least one vehicle transmitting at least one of the second data sets to the server, and
f) the server refining the server database by using the data included in the second data sets.
These steps may be repeated multiple times until a limit set by the server and/or the vehicles is reached. Such a limit may be a predetermined or dynamic deviation and/or tolerance between the feature reference points and/or the positions of the feature reference points. Step c) of aligning may further include the following steps of:
  • processing or generating a 3D (3-dimensional) model for the at least one of the first feature reference points, first object reference points, and first objects of a scene of the first data sets, and generating a 3D model for the at least one of the second feature reference points, second object reference points, and second objects of the collected ambient data,
  • comparing at least one of the first and second feature reference points, object reference points, and objects to find at least one of a matching pair of first and second feature reference points, a matching pair of first and second object reference points, and a matching pair of first and second objects, and
  • aligning the at least one of a second feature reference point, a second object reference point, and a second object of the ambient data set with the at least one of the first feature reference point, first object reference point, and first object by minimizing the overall matching error between the matching pairs.
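The patent does not prescribe a specific solver for minimizing the overall matching error. One common choice for matched pairs of 3D reference points is a rigid least-squares fit (Kabsch algorithm), sketched here as an assumption rather than the patented method:

    import numpy as np

    def align_points(first: np.ndarray, second: np.ndarray):
        """first, second: (N, 3) arrays of matched reference points.
        Returns R, t such that second @ R.T + t approximates first."""
        mu_f, mu_s = first.mean(axis=0), second.mean(axis=0)
        H = (second - mu_s).T @ (first - mu_f)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_f - R @ mu_s
        return R, t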
Step c) may start with processing or generating a 3D (3-dimensional) model based on the first data sets. Such a 3D model may be generated if it is not contained within the first data sets. If the first data sets already provide a 3D model, it need not be generated but can simply be processed instead.
Comparing at least one of the first and second feature reference points, object reference points, and objects means finding at least one pair of the same data type having one member from a first data set and one member from the collected ambient data. The at least one second data set may include at least one of:
  • data already existing in the first data sets,
  • confirmation or rejection of data already existing in the first data sets,
  • new, amended, moved or changed data related to at least one of feature reference points, object reference points, and objects of a scene.
Further, the at least one second data set may include at least one of feature reference points, object reference points, and objects of a scene.
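As an illustration of the possible contents listed above, a second data set entry could be encoded as a compact delta against the first data sets; all type and field names here are hypothetical:

    from dataclasses import dataclass
    from enum import Enum

    class Change(Enum):
        CONFIRMED = "confirmed"  # data in the first data sets was observed again
        REJECTED = "rejected"    # data in the first data sets was not observed
        NEW = "new"              # newly detected reference point or object
        AMENDED = "amended"      # existing entry observed with changed attributes
        MOVED = "moved"          # existing entry observed at a changed position

    @dataclass
    class SecondDataSetEntry:
        change: Change
        reference_id: int        # id of the affected entry in the first data sets
        position: tuple = ()     # new or updated 3D position, if any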
In an embodiment, a vehicle, which may be a standard production vehicle, uses its sensors to collect a plurality of ambient data sets, processes them and communicates with a precision road property database on a server to improve the quality of its collected data and to provide the server database with updated ambient data. Generally, the information is mostly feature-based, as processing feature-based data is simpler, quicker and requires less processing power.
In addition, object-based data related to driving-relevant objects like a road object, a road furniture object, a geographic object, or a further object may be used. Object data may include multiple objects. A road object preferably is related to a road itself. It may describe basic characteristics of a road like the width, the direction, the curvature, the number of lanes in each direction, the width of lanes, or the surface structure. It may further describe specific details like a kerb stone, a centreline, even a single dash of a dashed centreline, or other markings like a crosswalk or a stop line. A road furniture object is related to road furniture, also called street furniture. Road furniture may comprise objects and pieces of equipment installed on streets and roads for various purposes. It may include benches, traffic barriers, bollards, post boxes, phone boxes, streetlamps, traffic lights, traffic signs, bus stops, tram stops, taxi stands, public sculptures, and waste receptacles. A geographic object may be any stationary object like, but not limited to, a building, a river, a lake, or a mountain. Further objects may be objects which do not fall into one of the above categories but are also relevant for driving. Such objects may be trees, stones, walls, or other obstacles close to a road. They may also comprise structures like trenches, plane surfaces and others which may be considered for planning alternate, emergency-exit or collision-avoidance paths. Such objects may be very complex in their shape and structure, such that recognizing and processing them is very demanding with respect to computing power, memory requirements and sensor resolution.
In contrast to the object-based data, feature-based data is much easier to handle. Feature-based data only refers to optical features, which may be detected within 2D or 3D sensor data. Such features may for example be corners of an object, e.g. a traffic sign, or points, e.g. the start of a single dash of a dashed centreline. Certain parts of such a feature are assigned to a feature reference point. For example, a corner of a traffic sign may be assigned a feature reference point. The vehicle does not need to identify the other corners of the traffic sign, nor does it need to identify that the corner belongs to a traffic sign. Finding such a single corner is much easier than identifying a whole traffic sign. This is further beneficial if only parts of an object are visible, e.g. other corners of the traffic sign may be hidden. Further information may be attached and/or connected to a feature reference point. Such information may be the type of a feature, e.g. an edge, a corner, a point, or a mathematical descriptor. One possible way to detect such features is sketched below.
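As one example of such a "simple filter", corner-like features could be extracted from a 2D camera image with OpenCV's Shi-Tomasi detector; the patent does not mandate this or any specific detector, and the parameter values below are assumptions:

    import cv2

    def detect_corner_features(image_path: str, max_corners: int = 200):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            raise FileNotFoundError(image_path)
        corners = cv2.goodFeaturesToTrack(
            gray, maxCorners=max_corners, qualityLevel=0.01, minDistance=10
        )
        # Each entry is the 2D pixel position of a corner-like feature, e.g.
        # the corner of a traffic sign or the start of a centreline dash.
        return [] if corners is None else [tuple(c.ravel()) for c in corners]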
In an embodiment, at least one of the second data sets includes an object and/or an object reference point. Alternatively, second data sets including only an object and/or an object reference point may be provided. The server may provide a better refinement if both feature reference points and object reference points can be used. An object may, for example, allow classification of features. If, for example, the server knows that certain feature reference points belong to a tree, it may consider that these feature reference points are not visible all year. Feature reference points of a movable object may require different handling than those of a stationary object. If the feature reference points and object reference points are from the same vehicle, preferably from the same pass of a road, their correlation is much easier and requires less computing power than if they are from different vehicles.
The server and the server database may be provided by a service provider and may be hosted on a plurality of host computer systems. It may be a central database, but it may also be split into a plurality of sub-databases, which for example may be divided by geographic regions. Preferably, the server database has a suitable network connection like an Internet connection to communicate with the vehicle. In an alternate embodiment, the communication with the vehicles may be performed via third-party communication systems like back ends of vehicle manufacturers. The bandwidth should be large enough to communicate with a large number of vehicles simultaneously. It is preferred if the communication is compressed and/or encrypted.
Preferably, the server database comprises a physical data storage, which may be in specific server locations, in a cloud-based shared infrastructure, a combination of approaches or a similar embodiment, together with software for managing data storage and data flow. Herein, the term server database is used to mean all related systems in a broad sense. This may include a computer system, a physical storage or a cloud-based shared infrastructure, a database engine, a database manager and further software for processing data in the database, or a similar embodiment. It is preferred if the server database supplies auxiliary information to the vehicle as part of the first data sets. Such auxiliary information may be a server-database-generated confidence level or other confidence-related information (herein called confidence level), an importance level, an urgency level, or statistical information like average values and standard deviations. This auxiliary information may be supplied for each individual data set or for a group of data sets.
At least one vehicle is further configured to generate feature-based data by analysing ambient data sets collected by its sensors. Under normal road and driving conditions, it may be preferred if vehicles sample or collect information by their sensors, preferably also by their image sensors, while driving, while participating in traffic or at least while being on the road or in proximity to a road, and compare and/or correlate the sampled information with the previously downloaded or stored information on a regular or periodic basis. The frequency or time intervals between such collections, which may also be called the sampling rate, may be determined by the sensors in the vehicle, their resolution and/or the processing power in the vehicle. Instead of periodic sampling, random sampling may be chosen. The frequency of sampling may also be adapted to the environment. In the case of a curvy road, a higher sampling rate may be desired compared to a straight road. Different sampling rates may also be requested by the server database.
The sampled ambient data sets may be recorded and stored over a certain time as a sequence of ambient data sets. This allows the stored ambient data sets to be processed as a background task and allows moving back and forth through the stored data sets for more sophisticated processing, in contrast to real-time processing, which has to keep pace with the vehicle. Storage of a sequence of sampled ambient data sets allows the vehicle, specifically when driving, to monitor a feature's appearance over time and/or distance. This allows for a more precise alignment of features, as the angle under which a feature is detected changes with distance. Sampling may be triggered based on confidence and/or importance and/or urgency levels. If a first data set in the vehicle database shows a comparatively low confidence level or a high importance and/or urgency level at a certain location, it would be preferred if a large number of or almost all vehicles passing or staying at that location collected information by their sensors and forwarded updates to the server database. Over time, an increasing amount of data is collected, which leads to a higher confidence level. As it may not be necessary to rescan features with a higher confidence level, the number of rescanned features decreases with time. Accordingly, the network traffic and the required processing power in the vehicle and the server database may decrease.
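A possible rescan decision, sketched in Python and continuing the FirstDataSet layout assumed above; the threshold values are illustrative assumptions:

```python
def should_rescan(data_set: FirstDataSet,
                  conf_threshold: float = 0.8,
                  importance_threshold: float = 0.5,
                  urgency_threshold: float = 0.5) -> bool:
    """Decide whether a passing vehicle should re-sample a feature.
    Low confidence or high importance/urgency triggers a rescan, so
    rescans naturally die down as the server's confidence grows."""
    if data_set.confidence_level < conf_threshold:
        return True
    return (data_set.importance_level > importance_threshold
            or data_set.urgency_level > urgency_threshold)
```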
The processing power in some vehicles may not be sufficient to do a real-time comparison of data provided by the vehicle database and the data acquired by the vehicle's sensors for the complete vehicle path driven, at least not with the high frame rates which may be adequate for vehicles driving at high speed. A vehicle which is driving at 120 km/h proceeds at 33.3 m/s. To get a spatial resolution of a sequence of ambient data sets of about 1 m, 33.3 ambient data sets must be processed per second. Comparing the vehicle database information with the ambient data sets in parallel to other driving-related tasks may overburden most vehicles' computers, in particular as a single ambient data set may comprise a large amount of information from a plurality of sensors. Basically, there may be multiple groups of sensors: imaging sensors like cameras, radar sensors, infrared or RF sensors; position and movement sensors like GNSS (global navigation satellite system), IMU (inertial measurement unit), wheel-ticks, steering angle; environmental sensors like outside temperature, rain (e.g. from the windshield wiper); and more. Therefore, the comparison is preferably done in larger, discontinuous sections of the vehicle's path, as each vehicle can handle.
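The arithmetic behind the figures above, as a short Python sketch:

```python
def required_sampling_rate(speed_kmh: float, resolution_m: float) -> float:
    """Ambient data sets per second needed to achieve a given
    spatial resolution along the vehicle path."""
    speed_ms = speed_kmh / 3.6  # km/h to m/s
    return speed_ms / resolution_m

# 120 km/h at a spacing of 1 m: 120 / 3.6 = 33.3 data sets per second
print(required_sampling_rate(120.0, 1.0))  # ~33.33
```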
The length of the vehicle path sections may be in a range from a few metres to some hundred metres, and the sections may follow each other with a gap of a few metres to some hundred metres, which may further depend on a vehicle's speed or the currently available processing power; a sketch of such a sectioning is given below. In a further embodiment, the evaluation of different information may be performed independently. For example, visible camera images may be evaluated in real time, while laser-scanned images may be evaluated at a later time. In a further embodiment, a certain level of evaluation of information may be done close to real time, following the vehicle at speed, while a more detailed evaluation of information may only be done if differences are detected.
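A minimal sketch, assuming Python, of scheduling such discontinuous path sections; the section and gap lengths are illustrative, speed-dependent assumptions:

```python
def path_sections(path_length_m: float,
                  section_m: float = 100.0,
                  gap_m: float = 50.0):
    """Yield (start, end) offsets of discontinuous path sections to be
    evaluated, leaving gaps so processing keeps pace with the vehicle."""
    start = 0.0
    while start < path_length_m:
        yield start, min(start + section_m, path_length_m)
        start += section_m + gap_m

# Example: a 500 m path yields (0, 100), (150, 250), (300, 400), (450, 500)
print(list(path_sections(500.0)))
```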
The at least one vehicle aligns the collected ambient data with at least one of the first data sets and generates second data sets describing new and/or amended feature reference points. Such second data sets may include feature descriptors or further information about the feature to which the feature reference points refer.
Further, the vehicle forwards the second data sets to the server database.
Communication with the server database may be done according to the availability of a communication network. So, communication may, for example, be done when a vehicle is parked or stops at a traffic light and Wi-Fi is available. Important and/or urgent data may always be exchanged via a mobile
communications network, while other data may be exchanged at a later time when Wi-Fi is available. As the second data sets mainly comprise feature reference data of detected features, they are comparatively compact and require less space than images taken of the road or the objects. This significantly simplifies communication and reduces network load. Only in very special cases may larger image data be transferred to the server database. For example, if a traffic sign is not readable, an image of this traffic sign may be forwarded to the server database for further evaluation.
All or a selection of the received second data sets may be stored by the server database. The server tries to fit the received second data sets to data sets in the server database. The data sets in the database may be refined by information contained in the second data sets. For example, the data sets in the database may be combined with the information contained in the second data sets by weighted averaging. The improved data may be stored in the server database and forwarded to at least one vehicle. Such data may be received by a vehicle as first data sets. This may result in an iteration until the server and/or the vehicles consider the data good and/or reliable enough, such that further iterations of alignment are not necessary.
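One conceivable refinement step, sketched in Python with NumPy; treating the server's confidence as an averaging weight is an illustrative assumption, not the prescribed bookkeeping:

```python
import numpy as np

def refine_reference_point(server_pos, server_weight, reports):
    """Weighted averaging of a stored reference point with new vehicle
    reports; each report is a (position, weight) pair."""
    positions = [np.asarray(server_pos)] + [np.asarray(p) for p, _ in reports]
    weights = [server_weight] + [w for _, w in reports]
    refined = np.average(positions, axis=0, weights=weights)
    new_weight = float(sum(weights))  # confidence grows with evidence
    return refined, new_weight
```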
According to a further embodiment, a server database has the necessary means to perform the method of precision alignment of vehicle path data related to the same part of the road. Preferably, the method is performed by a computer or a plurality of computers having a computer program in memory.
The steps mentioned above may either be performed in sequence, processing a certain quantity of data sets step by step, or by processing individual data sets. For example, data of a certain section of the vehicle's path may be collected and processed step by step after the path section has been passed by the vehicle. Instead, data processing may be performed continuously while the vehicle is passing a certain section of its path. Furthermore, data may be forwarded to the server database continuously or in bursts. A combination is also possible. For example, there may be continuous real-time processing while the vehicle is passing a certain section of its path, while all relevant sensor data is recorded, allowing a view back as described above if the vehicle detects an important feature or object. There may also be a plurality of interleaved processes. For example, second data sets may be generated by aligning feature data while other first data sets are received from the server.
Another embodiment relates to a vehicle configured to perform the method mentioned above. The vehicle may have at least one sensor or a plurality of sensors, for example imaging sensors like cameras, radar sensors, infrared or RF sensors; position and movement sensors like GNSS (global navigation satellite system), IMU (inertial measurement unit), wheel-ticks, steering angle; and environmental sensors like outside temperature or rain (e.g. from the windshield wiper). The vehicle may further include a data processing means like a computer, which may include further components like a memory. The vehicle may also include a communication means to exchange data (receive and send data) with a remote server and/or a server database. A further embodiment relates to a system for aligning data provided by a plurality of vehicles, the system comprising a vehicle of claim 8 and further including a server and a server database remote from the vehicle.
Description of Drawings

In the following, the invention will be described by way of example, without limitation of the general inventive concept, on examples of embodiment with reference to the drawings.
Figure 1 shows a general scenario with self-driving vehicles driving on roads.
Figure 2 shows basic vehicle components.

Figure 3 shows the basic data flow of a method for aligning data.
Figure 4 shows basic ambient data set acquisition.
Figures 5a, 5b and 5c show features of different images taken by a vehicle proceeding along a road.
Figure 6 shows a feature map of figure 5b.
Figure 7 shows the basic flow of data handling.
In figure 1, a general scenario with self-driving vehicles driving on roads is shown. A road network 100 comprises a plurality of roads 101. The roads have specific properties like the width, the direction, the curvature, the number of lanes in each direction, the width of lanes, or the surface structure. There may be further specific details like a kerb stone, a centreline, even a single dash of a dashed centreline or other markings like a crosswalk. Close to the roads are driving-relevant geographic objects 140. A geographic object may be any stationary object like, but not limited to, a building, a tree, a river, a lake, or a mountain. Furthermore, there is a plurality of road furniture objects 150, which may comprise objects and pieces of equipment installed on streets and roads for various purposes, such as benches, traffic barriers, bollards, post boxes, phone boxes, streetlamps, traffic lights, traffic signs, bus stops, tram stops, taxi stands, public sculptures, and waste receptacles. There may be further objects which do not fall into one of the above categories but are also relevant for driving. Such objects may be green belts, trees, stones, walls, or other obstacles close to a road. They may also comprise structures like trenches, plane surfaces and others which may be considered for planning alternate, emergency exit or collision avoidance paths. Furthermore, a server 120 preferably has at least a database host 121 connected to a server database 122. For the communication between the server 120 and the vehicles 110, a communication network 130 is provided. Such a
communication network may be a mobile communications network, Wi-Fi, or any other type of network. Preferably, this network is based on an Internet protocol. The embodiments generally relate to vehicles, which may be cars, trucks, motorbikes, or any other means for traveling on a road. For simplicity, cars are shown in the figures.
In figure 2, basic vehicle components 200 are shown. Preferably a vehicle comprises at least one of environmental sensors 210, vehicle status sensors 220, position sensors 230 and a communication system 240. These are preferably connected by at least one communication bus 259 to a processor system 250.
This communication bus may comprise a single bus or a plurality of buses. The processor system 250 may comprise a plurality of individual processors 251 which preferably are connected to each other by a bus 252. Furthermore, they are preferably connected to a vehicle database or storage 253.
Environmental sensors 210 collect information about the environment of the vehicle. They may comprise a camera system like a CCD camera, which may be suitable for capturing visible and/or infrared images. Preferably, a simple mono-camera is provided. Alternatively, a stereo camera, which may have two imaging sensors mounted at a distance from each other, may be used. There may be further sensors, including at least one of imaging sensors like cameras, radar sensors, infrared or RF sensors; position and movement sensors like GNSS (global navigation satellite system), IMU (inertial measurement unit), wheel-ticks, steering angle; environmental sensors like outside temperature, rain (e.g. from the windshield wiper); and more. The sensors may be used for scanning and detecting outside objects. The sensors may also be used for generating 2D images or 3D road models in which features and their feature reference points may be detected.
The status sensors 220 collect information about the vehicle and its internal states. Such sensors may detect the status of driving, driving speed and steering direction.
The position sensors 230 collect information about the vehicle's position. Such position sensors preferably comprise a GNSS system. Any positioning system like GPS, GLONASS or Galileo may be used. Herein, only the term GPS or GNSS is used, indicating any positioning system. There may further be a gyroscope, a yaw sensor, and/or an accelerometer for determining movements of the vehicle. It is further preferred to have a wheel-dependent distance sensor like a rotation sensor, which may also be used by further systems like an anti-lock braking system (ABS), for measuring driven distances, and/or a steering sensor for detecting the driving direction. There may be further sensors like an altimeter or other altitude or slope sensors for precision detection of altitude changes. The position sensors 230 may at least partially interact with or use signals from the status sensors 220.
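For illustration, a minimal dead-reckoning sketch in Python, combining wheel-tick distance and steering angle in a kinematic bicycle model; the tick length and wheelbase are assumed parameters, and a real system would fuse this with GNSS and IMU data:

```python
import math

def dead_reckon_step(x, y, heading, ticks, tick_length_m,
                     steering_angle_rad, wheelbase_m=2.7):
    """Advance a 2D pose estimate by one wheel-tick measurement
    using a kinematic bicycle model (illustrative only)."""
    distance = ticks * tick_length_m
    heading += (distance / wheelbase_m) * math.tan(steering_angle_rad)
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading
```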
The communication system 240 is used for communication with devices outside of the vehicle. Preferably, the communication system is based on a
communication network, which may be a mobile communications network, Wi-Fi, or any other type of network. The communication system may use different communication networks and/or communication protocols dependent on their availability. For example, in a parking position of the vehicle, Wi-Fi may be available. Therefore, in such positions Wi-Fi may be the preferred
communication network. Furthermore, Wi-Fi may be made available at intersections, close to traffic lights and close to road sections with low traffic speeds or regular congestion. If Wi-Fi is not available, any other communication network like a mobile communications network may be used for communication. Furthermore, the communication system 240 may have a buffer to delay communication until a suitable network is available. For example, road property data which has been collected during driving may be forwarded to a server when the vehicle has arrived in a parking position and Wi-Fi is available.
The processor system 250 may be a processor system which is already available in a large number of vehicles. In modern vehicles, many processors are used for controlling different tasks like engine management, driving control, automated driver assistance, navigation, and entertainment. Generally, there is significant processor power already available. Such already available processors or additional processors may be used for performing the tasks described herein.
The processors preferably are connected by a bus system which may comprise a plurality of different buses like CAN, MOST, Ethernet or FlexRay.
In figure 3, the basic data flow of a method for aligning data is shown. In a first step, first data sets, including at least one of a feature reference point, an object reference point, and an object of a scene, are received and/or loaded 501 by a vehicle from the server database 122 via the server into the vehicle's database 253. Further, the vehicle collects ambient data represented by environmental sensor data 502. The steps of loading and collecting may be exchanged in their sequence or performed simultaneously.
Then, the vehicle aligns 503 the collected ambient data with at least one of the first data sets, detects the differences 504 between the collected ambient data and at least one of the first data sets and generates second data sets describing new and/or amended feature reference points. These are transferred or transmitted 505 to the server and may be stored in the server database 122. In step 506, the server may refine the server database by using the data included in the second data sets. Finally, the refined data may be transferred 507 to a vehicle. This may be the same vehicle as in steps 501 to 505, a different vehicle, or a plurality of vehicles.
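The vehicle-side part of this flow, condensed into a Python sketch; the server, database and sensor interfaces as well as the helper functions are hypothetical placeholders for the steps named above:

```python
def vehicle_alignment_cycle(server, vehicle_db, sensors):
    """One pass of the vehicle-side flow of figure 3 (steps 501 to 505).
    All objects and helpers here are assumed placeholder interfaces."""
    first_data_sets = server.download_first_data_sets()   # step 501
    vehicle_db.store(first_data_sets)
    ambient = sensors.collect_ambient_data()              # step 502
    aligned = align(ambient, first_data_sets)             # step 503 (hypothetical helper)
    diffs = detect_differences(aligned, first_data_sets)  # step 504 (hypothetical helper)
    second_data_sets = build_second_data_sets(diffs)      # hypothetical helper
    server.upload_second_data_sets(second_data_sets)      # step 505
```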
In figure 4, the basic acquisition of an ambient data set and the detection and evaluation of features and of the related at least one of a feature reference point, an object reference point, and an object of a scene is shown in more detail, together with figures 5a, 5b and 5c. Generally, the term ambient data set is used herein for a data set generated by a plurality of sensors, which may comprise at least one optical image sensor and/or radar sensor and/or infrared sensor and/or other vehicle movement and/or environmental sensors. From the combination and/or correlation of these sensor signals, an ambient data set from the perspective of a vehicle is collected, out of which at least one 2D image or 3D road model may be generated. A vehicle 110 is driving on a road 300. The road has the right limit 310 and the left limit 311, which may be marked by limit lines, a kerb or any other limitation or marking means. At the centre of the road there is a centreline, which is a dashed line in this embodiment, having a plurality of dashes 320, 330, 340,
350 and 360. Each dash has a beginning and an end. The beginnings are marked as features 321, 331, 341, and 351, whereas the ends are marked as features 322, 332, 342 and 352. There may be any other type of centreline or even no centreline. Furthermore, there is some road furniture, which may be traffic signs 370, 371, 372. From the captured ambient data set the vehicle preferably identifies features, which may for example be the beginnings 321, 331, 341, 351 and/or the ends 322, 332, 342, 352 of the dashes. Further features may be the peak 375 of the arrow on the traffic sign or the bottom 376 of the pole of the traffic sign. Although an ambient data set may give a 360-degree representation of the vehicle's environment, here a more limited view is shown by referring to a viewing sector 380, which may correspond to a front camera having an approximately 90° viewing angle. The corresponding front image is shown in figure 5a. Here, only part of the road objects and their features can be seen: the right limit 310 and the traffic sign 370 at the right side of the image. The left side of the image shows centreline dash 330 and part of the following centreline dash 340. It is obvious that this view gives a clear image of the road objects and features related to the right lane of the road, where the vehicle is driving, but cannot provide too much information about the road objects and features at the left lane, of which parts can be seen in the top left corner of the image. When the vehicle proceeds along the road through positions 111 and 112, it captures images according to the viewing sectors 381 and 382. The corresponding images, together with the road objects and their features, are shown in figures 5b and 5c.
A resulting feature map is shown in figure 6. This feature map is based on figure 5b. Here, only the detected features and the related feature reference points 331, 332, 341, 342, 375 and 376 are shown.
Assembling this sequence of road objects will give a continuous representation of the road. This assembling cannot be compared to image stitching, which is known from panorama cameras or panorama software. There, only common marks in adjacent images have to be identified and the images must be scaled accordingly. For the assembling of road objects, a spatial transformation of the driving-relevant object data sets must be performed. Such a transformation may be a linear displacement, a scaling or even a complex nonlinear transformation to align captured data. Details of such a transformation will be explained later. These transformations are partially facilitated by the road objects themselves, which may allow automatic recalibration of sensors when consistent errors are found between detected road objects in second data sets and first data sets.
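As one concrete instance of such an alignment, a least-squares rigid transform (rotation plus translation) between matched 2D feature reference points can be estimated as sketched below in Python with NumPy; this covers the linear case only and is an illustrative stand-in for the more general, possibly nonlinear, transformations mentioned above:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid alignment of matched 2D feature reference
    points (rows of src map onto rows of dst), minimising the overall
    matching error between the matching pairs."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    rotation = vt.T @ u.T
    if np.linalg.det(rotation) < 0:  # guard against a reflection solution
        vt[-1] *= -1
        rotation = vt.T @ u.T
    translation = dst.mean(axis=0) - rotation @ src.mean(axis=0)
    return rotation, translation
```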
In figure 7, the basic flow of data is shown. The left column shows operations by a vehicle, while the right column shows operations by a remote server. In step 550, the server forwards data sets to at least one vehicle. Before, at the same time or later, ambient data may be collected in step 551 by the vehicle. The vehicle then aligns the collected ambient data with the data sets received from the server in step 552. In step 553, the vehicle detects the differences between the collected ambient data and the data sets received from the server. Then, the vehicle may process 554 the alignment data and generate second data sets, which may be forwarded to the server in step 555. Further processing of the data is done by the server, which receives and stores the second data sets 556 and refines the database in step 557 by using the data included in the second data sets.
List of reference numerals
100 road network
101 road
110 vehicle
111 vehicle in distant position
112 vehicle in further distant position
120 server
121 database host
122 server database
123 database manager
130 communication network
140 geographic object
150 road furniture
200 vehicle components
210 environmental sensors
220 vehicle status sensors
230 position sensors
240 communication system
250 processor system
251 individual processor
252 bus
253 vehicle database
259 communication bus
300 road
310 - 311 road limit line
320, 330, 340, 350, 360 centreline dash
321, 331, 341, 351, 361 first end feature of centreline dash
322, 332, 342, 352, 362 second end feature of centreline dash
370 - 372 road sign
375 peak of arrow on road sign
376 bottom of pole of road sign
501 load feature description
502 environmental sensor data
503 data alignment
504 difference detection
505 transfer of aligned data
506 database refinement
507 transfer of refined data to a vehicle
550 forward data sets to vehicle
551 collect ambient data
552 alignment of data
553 difference detection
554 data processing, generate second data sets
555 forward second data sets to server
556 store second data sets
557 database refinement

Claims
1. A method for aligning data to provide an environmental data model of a scene, comprising a sequence of steps:
providing a server database (122) by a remote server (120), the server database (122) including a plurality of data sets, each data set including at least one of a feature reference point, an object reference point, and an object of a scene, and
conducting at least once or repeating the sequence of steps a) to f):
a) at least one vehicle (110) receiving (501) at least one first data set by the server (120) from the server database (122), the at least one first data set including at least one of a feature reference point, an object reference point, and an object of a scene,
b) the at least one vehicle (110) collecting ambient data and generating at least one of a feature reference point, an object reference point, and an object of a scene of the collected ambient data,
c) the at least one vehicle (110) aligning the collected ambient data with the first data sets based on at least one of a pair of feature reference points, a pair of object reference points, and a pair of objects of a scene,
d) the at least one vehicle (110) detecting differences between the collected ambient data and the first data sets and generating second data sets related to at least one of a feature reference point, an object reference point, and an object of a scene,
e) the at least one vehicle (110) transmitting at least one of the second data sets to the server,
f) the server (120) refining the server database (122) by using the data included in the second data sets.
2. The method according to the previous claim,
characterized in, that
the steps a) and b) are exchanged in their sequence or are performed simultaneously.
3. The method according to any one of the preceding claims,
characterized in, that
step c) includes the further steps of:
processing or generating a 3D (3-dimensional) model for the at least one of the first feature reference points, first object reference points, and first objects of a scene of the first data sets and generating a 3D model for the at least one of the second feature reference points, second object reference points, and second objects of the collected ambient data,
comparing at least one of the first and second feature reference points, object reference points, and objects to find at least one of a matching pair of first and second feature reference points, a matching pair of first and second object reference points, and a matching pair of first and second objects, and
aligning the at least one of a second feature reference point, a second object reference point, and a second object of the ambient data set with the at least one of the first feature reference point, first object reference point, and first object by minimizing the overall matching error between the matching pairs.
4. The method according to any one of the preceding claims,
characterized in, that
at least one second data set includes at least one of:
data existing already in the first data sets,
confirmation or rejection of data existing already in the first data sets,
new, amended, moved or changed data related to at least one of feature reference points, object reference points, and objects of a scene.
5. The method according to any one of the preceding claims,
characterized in, that
the at least one vehicle (110) of step a) is the same or a different vehicle as in a previous iteration of the steps a) to e).
6. The method according to any one of the preceding claims,
characterized in, that
in step b), collecting ambient data is performed by using at least one of an image sensor like a CCD sensor, an infrared sensor, a laser scanner, an ultrasound transducer, a radar sensor, an RF channel sensor, position and movement sensors like GNSS, IMU (inertial measurement unit), wheel-ticks sensor, steering angle sensor, and environmental sensors like outside temperature, rain.
7. The method according to any one of the preceding claims,
characterized in, that
the server and/or the server database is hosted by at least one stationary server external to the vehicle (110).
8. A vehicle (110) including means configured for aligning data,
the vehicle having means configured to receive first data sets by a server from a server database, the data sets including at least one feature reference point,
the vehicle (110) having means configured to collect ambient data,
the vehicle (110) including a computer configured for aligning the collected ambient data with at least one of the first data sets and generating second data sets describing new and/or amended feature reference points,
the vehicle (110) being configured for transmitting at least one of the second data sets to the server.
9. The vehicle (110) according to the previous claim,
characterized in, that
the vehicle (110) comprises at least one of an image sensor like a CCD sensor, an infrared sensor, a laser scanner, an ultrasound transducer, a radar sensor, an RF channel sensor, position and movement sensors like GNSS, IMU (inertial measurement unit), wheel-ticks sensor, steering angle sensor, and environmental sensors like outside temperature, rain, to collect at least one ambient data set.
10. A system for aligning data provided by a plurality of vehicles (110), comprising a vehicle (110) of claim 8 and further including a server and a server database remote from the vehicle (110).
11. A system for aligning data provided by a plurality of vehicles (110) accord ing to the previous claim,
characterized in that
the server is configured for refining the server database by using the data included in the second data sets sent by the plurality of vehicles.
PCT/EP2020/069150 2019-07-08 2020-07-07 Method for aligning crowd-sourced data to provide an environmental data model of a scene WO2021005073A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20745105.5A EP4022255A1 (en) 2019-07-08 2020-07-07 Method for aligning crowd-sourced data to provide an environmental data model of a scene

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102019210034 2019-07-08
DE102019210034.1 2019-07-08

Publications (1)

Publication Number Publication Date
WO2021005073A1 true WO2021005073A1 (en) 2021-01-14

Family

ID=71783996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/069150 WO2021005073A1 (en) 2019-07-08 2020-07-07 Method for aligning crowd-sourced data to provide an environmental data model of a scene

Country Status (2)

Country Link
EP (1) EP4022255A1 (en)
WO (1) WO2021005073A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11605233B2 (en) 2021-06-03 2023-03-14 Here Global B.V. Apparatus and methods for determining state of visibility for a road object in real time


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6353785B1 (en) 1999-03-12 2002-03-05 Navagation Technologies Corp. Method and system for an in-vehicle computer architecture
EP1219928A2 (en) 2000-12-28 2002-07-03 Robert Bosch Gmbh Method and device for generating road segments for a digital map
US20020085095A1 (en) * 2000-12-28 2002-07-04 Holger Janssen Method and device for producing road and street data for a digital map
EP3130945A1 (en) 2015-08-11 2017-02-15 Continental Automotive GmbH System and method for precision vehicle positioning
US20180239032A1 (en) * 2015-08-11 2018-08-23 Continental Automotive Gmbh System and method for precision vehicle positioning
US20180023961A1 (en) 2016-07-21 2018-01-25 Mobileye Vision Technologies Ltd. Systems and methods for aligning crowdsourced sparse map data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4022255A1


Also Published As

Publication number Publication date
EP4022255A1 (en) 2022-07-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20745105

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020745105

Country of ref document: EP

Effective date: 20220208