CN108196535B - Automatic driving system based on reinforcement learning and multi-sensor fusion - Google Patents
- Publication number
- CN108196535B CN108196535B CN201711317899.XA CN201711317899A CN108196535B CN 108196535 B CN108196535 B CN 108196535B CN 201711317899 A CN201711317899 A CN 201711317899A CN 108196535 B CN108196535 B CN 108196535B
- Authority
- CN
- China
- Prior art keywords
- vehicles
- laser radar
- real
- data
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G05D1/0221 — Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory involving a learning process
- G05D1/0231 — Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means
- G05D1/0257 — Control of position or course in two dimensions specially adapted to land vehicles, using a radar
- G05D1/0278 — Control of position or course in two dimensions specially adapted to land vehicles, using satellite positioning signals, e.g. GPS, provided by a source external to the vehicle

(All under G—Physics › G05—Controlling; Regulating › G05D—Systems for controlling or regulating non-electric variables › G05D1/00 › G05D1/02 › G05D1/021.)
Abstract
The invention discloses an automatic driving system based on reinforcement learning and multi-sensor fusion, comprising a perception system, a control system, and an execution system. The perception system efficiently processes data from the lidar, the camera, and the GPS navigator through a deep learning network to recognize and understand vehicles, pedestrians, lane lines, traffic signs, and signal lights around the driving vehicle in real time; it matches and fuses the lidar data with panoramic images using reinforcement learning to form a real-time three-dimensional street view map and determine the drivable area, and combines this with the GPS navigator to realize real-time navigation. The control system processes the information collected by the perception system through a reinforcement learning network, predicts the people, vehicles, and objects around the vehicle, pairs the vehicle body state data with records of driver actions, and selects the current optimal action, which the execution system then carries out. The disclosed method fuses lidar data with video to identify the drivable area and optimally plan the target path.
Description
Technical Field
The invention belongs to the technical field of artificial intelligence and automatic driving, and particularly relates to an automatic driving system based on reinforcement learning and multi-sensor fusion.
Background
Automatic driving technology has attracted attention from automobile manufacturers, internet companies, and university research institutions worldwide, all of which are actively promoting its development. Automakers represented by Mercedes-Benz and Audi have realized human-vehicle interaction, vehicle-vehicle interaction, and vehicle-road cooperation by applying advanced technologies such as ultrasonic sensors, radar, night vision devices, stereo cameras, and LEDs. However, the field of automatic driving started later in China and has produced fewer breakthrough achievements; continuous innovation combined with new technologies is needed to make breakthroughs in this field.
An automatic driving vehicle requires a complete perception system to replace the driver's senses and provide information about the surrounding environment; such a system must continuously improve the comprehensiveness, accuracy, and efficiency with which it obtains that information, and it forms the foundation of automatic driving. Multiple sensors are therefore integrated to perceive the vehicle's surroundings from multiple angles and directions. This generates a large amount of data that must be processed promptly, so the automatic driving system is equipped with a control system that integrates intelligent algorithms with high-performance hardware, replacing the driver's brain to issue driving instructions and plan the driving path. The present invention is made accordingly.
Disclosure of Invention
To solve the above technical problems, the invention provides an automatic driving system based on reinforcement learning and multi-sensor fusion, which trains a reinforcement learning network model on the experience data of professional drivers, processes environment information from the perception system, fuses lidar data with video, identifies the drivable area, and optimally plans the target path.
The technical scheme of the invention is as follows:
An automatic driving system based on reinforcement learning and multi-sensor fusion comprises a perception system, a control system, and an execution system. The perception system processes data from a lidar, an image acquisition module, and a GPS navigator through a deep learning network; it recognizes and understands vehicles, pedestrians, lane lines, traffic signs, and signal lights around the vehicle in real time, matches and fuses the lidar data with the images to form a real-time three-dimensional street view map, and determines the drivable area;
the control system uses a reinforcement learning network to process the information collected by the perception system, predicts the people, vehicles, and objects around the vehicle, and pairs the vehicle body state data with records of driver actions to select an action;
the execution system executes the corresponding operations according to the instructions of the control system and feeds the results back to the control system.
Preferably, the perception system trains on picture samples with a convolutional neural network: each network layer is trained to locally extract a subset of the feature points of vehicles, pedestrians, lane lines, traffic lights, and traffic signs generated by the previous layer, and each layer predicts an explicit geometric-constraint correction to its input. Through the combination of coarse-to-fine cascading and geometric optimization, the vehicles, pedestrians, lane lines, traffic signs, and traffic lights around the vehicle are located and recognized in real time.
Preferably, the method by which the perception system forms the real-time three-dimensional street view map and determines the drivable area comprises:
extracting feature points from the three-dimensional point cloud data acquired by the lidar with the multilayer perceptron of a PointNet network, obtaining the contour lines of feature objects, and splicing the contour lines to construct an object model;
producing picture-object contour feature labels for the images acquired by the image acquisition module;
taking the feature-object contours obtained by the lidar and the picture labels as input, matching them through a convolutional neural network, and outputting the color and texture information of the object model;
collecting lidar data and video images at the same field of view, converting the lidar data into a lidar depth map, traversing single-frame rays to form the drivable-area boundary, marking that boundary in color at the same positions on the video image, analyzing the obstacles around the vehicle, and using a random-field optimization algorithm that approaches the real image boundary through multiple iterations to realize drivable-area detection.
Preferably, for targets moving from near to far, the control system uses a reinforcement learning network to initialize the region and performs online detection, learning, and tracking; for targets moving from far to near, it uses a reverse-order tracking method to track the detected targets continuously; the relative speed of vehicles and pedestrians is obtained through distance estimation, realizing the prediction of the people, vehicles, and objects around the vehicle.
Compared with the prior art, the invention has the advantages that:
1. The system trains the reinforcement learning network model on the experience data of professional drivers, realizing accurate positioning of the vehicle, accurate understanding of the surrounding environment, sustainable learning optimization, and automatic evolution of the driving system.
2. The lidar data and images are matched and fused to form a real-time three-dimensional street view map; the boundary of the real drivable area is approached to realize drivable-area detection, plan the drivable region of the road, and accurately judge and plan the next action.
Drawings
The invention is further described with reference to the following figures and examples:
fig. 1 is a block diagram of an automatic driving system based on reinforcement learning and multi-sensor fusion.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
Example:
the preferred embodiments of the present invention will be further described with reference to the accompanying drawings.
As shown in fig. 1, the automatic driving system of the present invention comprises three subsystems: a perception system, a control system, and an execution system. Using deep learning and reinforcement learning, an automatic driving system is constructed that can interact with the environment, automatically judge the external surroundings, and make driving decisions; it continuously evolves while exploring the dynamic behavior of vehicle driving, improving the vehicle's environment perception and decision-making capability.
The perception system processes the camera, lidar, and GPS information through a deep learning network model, acquiring real-time vehicle and pedestrian information and a three-dimensional street view map.
The control system processes the environmental information from the perception system through the reinforcement learning network model, identifies the drivable area, optimally plans the target path, predicts the people, vehicles, and objects around the vehicle, matches the vehicle body state data with records of driver actions, and selects the current optimal action, including instructions for acceleration, braking, lane changing, and turning.
The execution system receives the control system's instructions for the steering wheel, accelerator, and brake, executes the corresponding operations, and feeds the results back to the control system.
The camera acquires pictures around the driving vehicle, and a convolutional neural network (CNN) is trained on the picture samples: each network layer is trained to locally extract a subset of the feature points of vehicles, pedestrians, lane lines, traffic lights, and signs generated by the previous layer, and each layer predicts an explicit geometric-constraint correction to its input. The combination of coarse-to-fine cascading and geometric optimization realizes lane-line recognition and classification as well as traffic-light and sign understanding, so the system can locate large numbers of vehicles, pedestrians, lane lines, traffic lights, and signs. For targets moving from near to far, a deep-learning-initialized region is used for online detection, learning, and tracking; for targets moving from far to near, a reverse-order tracking method is used to enhance the automatic labeling capability. Detected targets are tracked continuously, and the relative speed of vehicles and pedestrians is obtained through distance estimation, realizing the prediction of the people, vehicles, and objects around the vehicle.
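The relative-speed estimate described above can be sketched as follows: once a target has been tracked across frames, its distance history yields the relative velocity. All names here (`Track`, `update`, `relative_speed`) are illustrative, not from the patent.

```python
class Track:
    """Distance history of one tracked target (vehicle or pedestrian)."""

    def __init__(self):
        self.samples = []  # list of (timestamp_s, distance_m)

    def update(self, timestamp_s, distance_m):
        self.samples.append((timestamp_s, distance_m))

    def relative_speed(self):
        """Relative speed in m/s; negative means the target is approaching."""
        if len(self.samples) < 2:
            return 0.0
        (t0, d0), (t1, d1) = self.samples[-2], self.samples[-1]
        return (d1 - d0) / (t1 - t0)

track = Track()
track.update(0.0, 50.0)   # target 50 m ahead
track.update(0.5, 48.0)   # half a second later, 48 m ahead
print(track.relative_speed())  # -4.0 m/s: closing at 4 m/s
```

A production tracker would of course smooth these finite differences (e.g. with a Kalman filter) rather than differencing two raw samples.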
First, the lidar scans the scene to obtain three-dimensional point cloud data; the raw point cloud is denoised and filtered, and the preprocessed point cloud is output.
The three-dimensional point cloud is then fed into a PointNet network, whose multilayer perceptron extracts feature points; contour lines of the feature objects are obtained and spliced to construct the object model.
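The per-point feature extraction above follows the PointNet idea: a shared MLP applied to every point independently, followed by a symmetric max-pool that yields an order-invariant global descriptor. The tiny network below is a hedged sketch of that idea with random placeholder weights, not the patented model.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)    # placeholder weights
W2, b2 = rng.normal(size=(64, 128)), np.zeros(128)

def pointnet_global_feature(points):
    """points: (N, 3) array of lidar returns -> (128,) global descriptor."""
    h = np.maximum(points @ W1 + b1, 0.0)   # shared per-point MLP, layer 1
    h = np.maximum(h @ W2 + b2, 0.0)        # shared per-point MLP, layer 2
    return h.max(axis=0)                    # order-invariant max pooling

cloud = rng.normal(size=(1024, 3))
feat = pointnet_global_feature(cloud)
# The descriptor is identical under any permutation of the points:
assert np.allclose(feat, pointnet_global_feature(cloud[::-1]))
print(feat.shape)  # (128,)
```

The max-pool is the key design choice: because it is symmetric, the network's output does not depend on the order in which the lidar returns arrive.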
Second, the camera simultaneously captures pictures of the scene, and picture-object contour feature labels (buildings, people, vehicles, and the like) are produced;
the feature-object contours obtained from the lidar and the picture labels are taken as input, their feature contours are matched by a CNN, and the color and texture information of the object model is output.
Third, the drivable area is segmented from the lidar and video images: lidar data and camera data are collected at the same field of view, the lidar data are converted into a lidar depth map, and single-frame ray traversal forms the drivable-area boundary, which is marked in a striking color at the same positions on the video image; the obstacles around the moving vehicle are analyzed, and a random-field optimization algorithm approaches the real image boundary through multiple iterations, realizing drivable-area detection and completing the fusion of lidar data and video.
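The single-frame ray traversal step can be sketched as a polar sweep: for each azimuth bin of the (already ground-filtered) lidar returns, the nearest obstacle return bounds the drivable area along that ray. The bin count and maximum range are illustrative parameters, not values from the patent.

```python
import math

def drivable_boundary(obstacle_points, n_rays=360, max_range=50.0):
    """obstacle_points: (x, y) obstacle returns in the vehicle frame.
    Returns per-ray free distances (the drivable-area boundary)."""
    boundary = [max_range] * n_rays
    for x, y in obstacle_points:
        r = math.hypot(x, y)
        # Map the point's azimuth to a ray index in [0, n_rays).
        ray = int((math.atan2(y, x) % (2 * math.pi)) / (2 * math.pi) * n_rays)
        boundary[ray] = min(boundary[ray], r)  # nearest return wins
    return boundary

# One obstacle 10 m straight ahead (+x): that ray's free distance drops to 10.
b = drivable_boundary([(10.0, 0.0)])
print(b[0], b[180])  # 10.0 50.0
```

The resulting polygon of free distances is what would then be projected onto the video image and refined by the random-field optimization.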
According to the vehicle's own state, the state data of vehicles and pedestrians within a certain driving range, and the traffic-light conditions at intersections, the optimal driving path is planned in combination with the GPS navigator, and navigation guidance is used to judge whether the vehicle accurately enters the corresponding lane.
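The patent does not name a planning algorithm for this step; as a hedged sketch, the road network from the GPS navigator's map can be modeled as a weighted graph and the optimal path found with Dijkstra's algorithm. The graph, node names, and costs below are invented for illustration.

```python
import heapq

def shortest_path(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]}. Returns (cost, [nodes])."""
    frontier = [(0.0, start, [start])]  # min-heap ordered by path cost
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []  # goal unreachable

# Toy road graph: intersections A..D with travel costs on each road segment.
roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
}
print(shortest_path(roads, "A", "D"))  # (4.0, ['A', 'B', 'C', 'D'])
```

Real traffic conditions (signals, moving vehicles) would enter as time-varying edge costs rather than the static weights used here.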
The reinforcement learning network processes the information collected by the perception system: the system automatically pairs the perceived information about the vehicle's surroundings with records of driver actions, models the driving behavior as a Markov process, makes Q-learning-based steering-wheel angle control decisions for the automatically driven vehicle, and finally converts the perceived information into instructions controlling the steering wheel, accelerator, brake, and the like.
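The Q-learning decision above can be illustrated with a tabular update: the state is a coarse lane-offset bucket, the actions are steering-angle deltas, and the reward favors staying centered. All discretizations and hyperparameters are assumptions for the sketch, not values from the patent.

```python
ACTIONS = (-5, 0, 5)           # change in steering-wheel angle, degrees
ALPHA, GAMMA = 0.5, 0.9        # learning rate, discount factor
Q = {}                         # (state, action) -> estimated value

def q_update(state, action, reward, next_state):
    """One Bellman backup: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def step(state, action):
    """Toy lane dynamics: steering left/right shifts the offset bucket."""
    return max(-1, min(1, state + (1 if action > 0 else -1 if action < 0 else 0)))

# Deterministic sweep over all state-action pairs in place of real episodes;
# the reward -|offset| penalizes leaving the lane center.
for _ in range(50):
    for state in (-1, 0, 1):
        for action in ACTIONS:
            next_state = step(state, action)
            q_update(state, action, -abs(next_state), next_state)

greedy = max(ACTIONS, key=lambda a: Q.get((0, a), 0.0))
print(greedy)  # 0 -- when centered in the lane, keep the wheel straight
```

In the patented system the state would instead encode the fused perception output paired with recorded driver actions, but the update rule is the same.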
If an obstacle is detected ahead of the vehicle, the vehicle automatically controls its distance to the obstacle, reduces speed appropriately, judges the drivable area on its own, and decides whether to overtake or follow. If the distance between the vehicle and the obstacle falls below the safe distance, the vehicle performs emergency braking.
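The overtake/follow/brake behavior just described can be sketched as a simple decision rule. All thresholds are invented for illustration; the patent gives no numeric safe distances.

```python
def longitudinal_decision(gap_m, closing_speed_mps, overtake_lane_free,
                          safe_gap_m=20.0, brake_gap_m=8.0):
    """Pick a longitudinal maneuver from the gap to the obstacle ahead."""
    if gap_m < brake_gap_m:
        return "emergency_brake"           # below the hard safety margin
    if gap_m < safe_gap_m and closing_speed_mps > 0:
        # Too close and still closing: overtake if a drivable lane is free,
        # otherwise slow down and follow.
        return "overtake" if overtake_lane_free else "follow_and_slow"
    return "cruise"

print(longitudinal_decision(5.0, 2.0, False))   # emergency_brake
print(longitudinal_decision(15.0, 1.0, True))   # overtake
print(longitudinal_decision(15.0, 1.0, False))  # follow_and_slow
print(longitudinal_decision(40.0, 0.0, False))  # cruise
```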
It is to be understood that the above-described embodiments merely illustrate or explain the principles of the invention and are not to be construed as limiting it. Therefore, any modification, equivalent replacement, improvement, or the like made without departing from the spirit and scope of the present invention shall fall within its protection scope. Further, the appended claims are intended to cover all such variations and modifications as fall within their scope and boundaries, or the equivalents of such scope and boundaries.
Claims (2)
1. An automatic driving system based on reinforcement learning and multi-sensor fusion, comprising a perception system, a control system, and an execution system, characterized in that the perception system processes data from a lidar, an image acquisition module, and a GPS navigator through a deep learning network, recognizes and understands vehicles, pedestrians, lane lines, traffic signs, and signal lights around the vehicle in real time, matches and fuses the lidar data with the images to form a real-time three-dimensional street view map, and determines the drivable area;
the control system uses a reinforcement learning network to process the information collected by the perception system and predicts the people, vehicles, and objects around the vehicle; it pairs the vehicle body state data with records of driver actions, models the driving behavior as a Markov process to make Q-learning-based steering-wheel angle control decisions for the automatically driven vehicle, and finally converts the perceived information into instructions controlling the steering wheel, accelerator, brake, and the like to select an action; for targets moving from near to far, the control system uses the reinforcement learning network to initialize the region and performs online detection, learning, and tracking; for targets moving from far to near, it uses a reverse-order tracking method; by continuously tracking the detected targets and estimating their distance, it obtains the relative speed of vehicles and pedestrians, thereby predicting the people, vehicles, and objects around the vehicle;
the execution system executes the corresponding operations according to the instructions of the control system and feeds the results back to the control system;
the method by which the perception system forms the real-time three-dimensional street view map and determines the drivable area comprises:
extracting feature points from the three-dimensional point cloud data acquired by the lidar with the multilayer perceptron of a PointNet network, obtaining the contour lines of feature objects, and splicing the contour lines to construct an object model;
producing picture-object contour feature labels for the images acquired by the image acquisition module;
taking the feature-object contours obtained by the lidar and the picture labels as input, matching them through a convolutional neural network, and outputting the color and texture information of the object model;
collecting lidar data and video images at the same field of view, converting the lidar data into a lidar depth map, traversing single-frame rays to form the drivable-area boundary, marking that boundary in color at the same positions on the video image, analyzing the obstacles around the vehicle, and using a random-field optimization algorithm that approaches the real image boundary through multiple iterations to realize drivable-area detection.
2. The automatic driving system based on reinforcement learning and multi-sensor fusion according to claim 1, characterized in that the perception system trains on picture samples with a convolutional neural network: each network layer is trained to locally extract a subset of the feature points of vehicles, pedestrians, lane lines, traffic lights, and traffic signs generated by the previous layer, and each layer predicts an explicit geometric-constraint correction to its input; through the combination of coarse-to-fine cascading and geometric optimization, the vehicles, pedestrians, lane lines, traffic signs, and traffic lights around the vehicle are located and recognized in real time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711317899.XA CN108196535B (en) | 2017-12-12 | 2017-12-12 | Automatic driving system based on reinforcement learning and multi-sensor fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108196535A CN108196535A (en) | 2018-06-22 |
CN108196535B true CN108196535B (en) | 2021-09-07 |
Family
ID=62574175
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711317899.XA Active CN108196535B (en) | 2017-12-12 | 2017-12-12 | Automatic driving system based on reinforcement learning and multi-sensor fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108196535B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12140446B2 (en) | 2023-08-25 | 2024-11-12 | Motional Ad Llc | Automatic annotation of environmental features in a map during navigation of a vehicle |
Families Citing this family (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108803604A (en) * | 2018-06-06 | 2018-11-13 | 深圳市易成自动驾驶技术有限公司 | Vehicular automatic driving method, apparatus and computer readable storage medium |
EP3612857A1 (en) * | 2018-06-25 | 2020-02-26 | Beijing Didi Infinity Technology and Development Co., Ltd. | A high-definition map acquisition system |
CN109099901B (en) * | 2018-06-26 | 2021-09-24 | 中科微易(苏州)智能科技有限公司 | Full-automatic road roller positioning method based on multi-source data fusion |
CN108648457B (en) * | 2018-06-28 | 2021-07-13 | 苏州大学 | Method, device and computer readable storage medium for speed prediction |
WO2020010517A1 (en) * | 2018-07-10 | 2020-01-16 | 深圳大学 | Trajectory prediction method and apparatus |
CN108960183B (en) * | 2018-07-19 | 2020-06-02 | 北京航空航天大学 | Curve target identification system and method based on multi-sensor fusion |
CN109035309B (en) * | 2018-07-20 | 2022-09-27 | 清华大学苏州汽车研究院(吴江) | Stereoscopic vision-based pose registration method between binocular camera and laser radar |
CN108875844A (en) * | 2018-07-20 | 2018-11-23 | 清华大学苏州汽车研究院(吴江) | The matching process and system of lidar image and camera review |
CN108957413A (en) * | 2018-07-20 | 2018-12-07 | 重庆长安汽车股份有限公司 | Sensor target positional accuracy test method |
US20200033869A1 (en) * | 2018-07-27 | 2020-01-30 | GM Global Technology Operations LLC | Systems, methods and controllers that implement autonomous driver agents and a policy server for serving policies to autonomous driver agents for controlling an autonomous vehicle |
CN110824912B (en) * | 2018-08-08 | 2021-05-18 | 华为技术有限公司 | Method and apparatus for training a control strategy model for generating an autonomous driving strategy |
CN110376594B (en) * | 2018-08-17 | 2022-02-01 | 北京京东叁佰陆拾度电子商务有限公司 | Intelligent navigation method and system based on topological graph |
CN110908366B (en) * | 2018-08-28 | 2023-08-25 | 大陆智行科技(上海)有限公司 | Automatic driving method and device |
CN109358614A (en) * | 2018-08-30 | 2019-02-19 | 深圳市易成自动驾驶技术有限公司 | Automatic Pilot method, system, device and readable storage medium storing program for executing |
CN109405824A (en) * | 2018-09-05 | 2019-03-01 | 武汉契友科技股份有限公司 | A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile |
CN109318803A (en) * | 2018-09-06 | 2019-02-12 | 武汉乐庭软件技术有限公司 | A kind of automatic vehicle control system based on Multi-sensor Fusion |
CN109050525A (en) * | 2018-09-10 | 2018-12-21 | 武汉乐庭软件技术有限公司 | A kind of automatic vehicle control system merged based on millimeter radar and camera |
CN109271924A (en) * | 2018-09-14 | 2019-01-25 | 盯盯拍(深圳)云技术有限公司 | Image processing method and image processing apparatus |
CN109410238B (en) * | 2018-09-20 | 2021-10-26 | 中国科学院合肥物质科学研究院 | Wolfberry identification and counting method based on PointNet + + network |
CN109146333A (en) * | 2018-09-29 | 2019-01-04 | 百度在线网络技术(北京)有限公司 | Navigation algorithm appraisal procedure and device |
CN110969178B (en) * | 2018-09-30 | 2023-09-12 | 毫末智行科技有限公司 | Data fusion system and method for automatic driving vehicle and automatic driving system |
DK180774B1 (en) * | 2018-10-29 | 2022-03-04 | Motional Ad Llc | Automatic annotation of environmental features in a map during navigation of a vehicle |
US10940863B2 (en) * | 2018-11-01 | 2021-03-09 | GM Global Technology Operations LLC | Spatial and temporal attention-based deep reinforcement learning of hierarchical lane-change policies for controlling an autonomous vehicle |
CN109543600A (en) * | 2018-11-21 | 2019-03-29 | 成都信息工程大学 | A kind of realization drivable region detection method and system and application |
CN109657556B (en) * | 2018-11-22 | 2021-04-13 | 北京工业大学 | Method and system for classifying road and surrounding ground objects thereof |
CN109375187B (en) * | 2018-11-28 | 2021-09-10 | 北京行易道科技有限公司 | Method and device for determining radar target |
CN111238494B (en) * | 2018-11-29 | 2022-07-19 | 财团法人工业技术研究院 | Carrier, carrier positioning system and carrier positioning method |
CN109583415B (en) * | 2018-12-11 | 2022-09-30 | 兰州大学 | Traffic light detection and identification method based on fusion of laser radar and camera |
CN111413957B (en) * | 2018-12-18 | 2021-11-02 | 北京航迹科技有限公司 | System and method for determining driving actions in autonomous driving |
CN111338333B (en) * | 2018-12-18 | 2021-08-31 | 北京航迹科技有限公司 | System and method for autonomous driving |
US10955853B2 (en) | 2018-12-18 | 2021-03-23 | Beijing Voyager Technology Co., Ltd. | Systems and methods for autonomous driving |
CN109508017A (en) * | 2018-12-28 | 2019-03-22 | 镇江市高等专科学校 | Intelligent carriage control method |
CN109407679B (en) * | 2018-12-28 | 2022-12-23 | 百度在线网络技术(北京)有限公司 | Method and device for controlling an unmanned vehicle |
CN109720275A (en) * | 2018-12-29 | 2019-05-07 | 重庆集诚汽车电子有限责任公司 | Multi-sensor Fusion vehicle environmental sensory perceptual system neural network based |
CN109887123B (en) * | 2019-01-03 | 2021-09-07 | 北京百度网讯科技有限公司 | Data processing method and device, black box system and vehicle |
CN109878512A (en) * | 2019-01-15 | 2019-06-14 | 北京百度网讯科技有限公司 | Automatic Pilot control method, device, equipment and computer readable storage medium |
US11468690B2 (en) * | 2019-01-30 | 2022-10-11 | Baidu Usa Llc | Map partition system for autonomous vehicles |
US10627823B1 (en) * | 2019-01-30 | 2020-04-21 | StradVision, Inc. | Method and device for performing multiple agent sensor fusion in cooperative driving based on reinforcement learning |
US10503174B1 (en) * | 2019-01-31 | 2019-12-10 | StradVision, Inc. | Method and device for optimized resource allocation in autonomous driving on the basis of reinforcement learning using data from lidar, radar, and camera sensor |
CN113412506B (en) * | 2019-02-13 | 2023-06-13 | 日立安斯泰莫株式会社 | Vehicle control device and electronic control system |
CN109961509B (en) * | 2019-03-01 | 2020-05-05 | 北京三快在线科技有限公司 | Three-dimensional map generation and model training method and device and electronic equipment |
CN110045729B (en) * | 2019-03-12 | 2022-09-13 | 北京小马慧行科技有限公司 | Automatic vehicle driving method and device |
US11402220B2 (en) * | 2019-03-13 | 2022-08-02 | Here Global B.V. | Maplets for maintaining and updating a self-healing high definition map |
CN109991978B (en) * | 2019-03-19 | 2021-04-02 | 莫日华 | Intelligent automatic driving method and device based on network |
CN110134124B (en) * | 2019-04-29 | 2022-04-29 | 北京小马慧行科技有限公司 | Vehicle running control method and device, storage medium and processor |
TWI794486B (en) * | 2019-04-30 | 2023-03-01 | 先進光電科技股份有限公司 | Mobile vehicle assistance system and processing method thereof |
CN109855646B (en) * | 2019-04-30 | 2020-02-28 | 奥特酷智能科技(南京)有限公司 | Distributed centralized autopilot system and method |
CN110162040B (en) * | 2019-05-10 | 2022-06-17 | 重庆大学 | Low-speed automatic driving trolley control method and system based on deep learning |
CN110091875A (en) * | 2019-05-14 | 2019-08-06 | 长沙理工大学 | Deep learning type intelligent driving context aware systems based on Internet of Things |
US11467591B2 (en) * | 2019-05-15 | 2022-10-11 | Baidu Usa Llc | Online agent using reinforcement learning to plan an open space trajectory for autonomous vehicles |
CN111986472B (en) * | 2019-05-22 | 2023-04-28 | 阿里巴巴集团控股有限公司 | Vehicle speed determining method and vehicle |
CN110345959B (en) * | 2019-06-10 | 2023-11-03 | 同济人工智能研究院(苏州)有限公司 | Path planning method based on gate point |
CN110281949B (en) * | 2019-06-28 | 2020-12-18 | 清华大学 | Unified hierarchical decision-making method for automatic driving |
US11994862B2 (en) * | 2019-07-06 | 2024-05-28 | Huawei Technologies Co., Ltd. | Method and system for training reinforcement learning agent using adversarial sampling |
CN112208539A (en) * | 2019-07-09 | 2021-01-12 | 奥迪股份公司 | System, vehicle, method, and medium for autonomous driving of a vehicle |
CN110347043B (en) * | 2019-07-15 | 2023-03-10 | 武汉天喻信息产业股份有限公司 | Intelligent driving control method and device |
CN110598743A (en) * | 2019-08-12 | 2019-12-20 | 北京三快在线科技有限公司 | Target object labeling method and device |
CN110427034B (en) * | 2019-08-13 | 2022-09-02 | 浙江吉利汽车研究院有限公司 | Target tracking system and method based on vehicle-road cooperation |
CN111144211B (en) | 2019-08-28 | 2023-09-12 | 华为技术有限公司 | Point cloud display method and device |
CN110525342A (en) * | 2019-08-30 | 2019-12-03 | 的卢技术有限公司 | AR-HUD vehicle-mounted driving assistance method and system based on deep learning |
KR102664123B1 (en) * | 2019-09-09 | 2024-05-09 | 현대자동차주식회사 | Apparatus and method for generating vehicle data, and vehicle system |
CN110687905A (en) * | 2019-09-11 | 2020-01-14 | 珠海市众创芯慧科技有限公司 | Unmanned intelligent vehicle based on integration of multiple sensing technologies |
CN110598637B (en) * | 2019-09-12 | 2023-02-24 | 齐鲁工业大学 | Unmanned system and method based on vision and deep learning |
CN112578781B (en) * | 2019-09-29 | 2022-12-30 | 华为技术有限公司 | Data processing method, device, chip system and medium |
JP7259685B2 (en) * | 2019-09-30 | 2023-04-18 | トヨタ自動車株式会社 | Driving control device for automatic driving vehicle, stop target, driving control system |
CN110634297B (en) * | 2019-10-08 | 2020-08-07 | 交通运输部公路科学研究所 | Signal lamp state identification and passing control system based on vehicle-mounted laser radar |
CN110717007A (en) * | 2019-10-15 | 2020-01-21 | 财团法人车辆研究测试中心 | Map data positioning system and method applying roadside feature identification |
CN110710852B (en) * | 2019-10-30 | 2020-11-17 | 广州铁路职业技术学院(广州铁路机械学校) | Meal delivery method, system, medium and intelligent device based on meal delivery robot |
CN110758243B (en) * | 2019-10-31 | 2024-04-02 | 的卢技术有限公司 | Surrounding environment display method and system in vehicle running process |
CN110764507A (en) * | 2019-11-07 | 2020-02-07 | 舒子宸 | Artificial-intelligence autonomous driving system based on reinforcement learning and information fusion |
CN110716552A (en) * | 2019-11-11 | 2020-01-21 | 朱云 | Novel driving system for automobile, train, subway and airplane |
CN110991489B (en) * | 2019-11-11 | 2023-10-10 | 苏州智加科技有限公司 | Marking method, device and system for driving data |
WO2021134357A1 (en) * | 2019-12-30 | 2021-07-08 | 深圳元戎启行科技有限公司 | Perception information processing method and apparatus, computer device and storage medium |
KR20210087271A (en) * | 2020-01-02 | 2021-07-12 | 삼성전자주식회사 | Apparatus and method for displaying navigation information of three dimention augmented reality |
CN113156911A (en) * | 2020-01-22 | 2021-07-23 | 重庆金康新能源汽车有限公司 | Combined virtual and real-world environment for automated driving vehicle planning and control testing |
JP7248957B2 (en) * | 2020-02-10 | 2023-03-30 | トヨタ自動車株式会社 | vehicle controller |
CN111208821B (en) * | 2020-02-17 | 2020-11-03 | 润通智能科技(郑州)有限公司 | Automobile automatic driving control method and device, automatic driving device and system |
US11592570B2 (en) * | 2020-02-25 | 2023-02-28 | Baidu Usa Llc | Automated labeling system for autonomous driving vehicle lidar data |
CN111427349A (en) * | 2020-03-27 | 2020-07-17 | 齐鲁工业大学 | Vehicle navigation obstacle avoidance method and system based on laser and vision |
CN111591306B (en) * | 2020-03-30 | 2022-07-12 | 浙江吉利汽车研究院有限公司 | Driving track planning method of automatic driving vehicle, related equipment and storage medium |
CN111695504A (en) * | 2020-06-11 | 2020-09-22 | 重庆大学 | Fusion type automatic driving target detection method |
CN111857132B (en) * | 2020-06-19 | 2024-04-19 | 深圳宏芯宇电子股份有限公司 | Central control type automatic driving method and system and central control system |
EP4141663A4 (en) * | 2020-07-17 | 2023-05-31 | Huawei Technologies Co., Ltd. | Data processing method and apparatus, and intelligent vehicle |
CN112068574A (en) * | 2020-10-19 | 2020-12-11 | 中国科学技术大学 | Control method and system for unmanned vehicle in dynamic complex environment |
CN113160400A (en) * | 2021-03-12 | 2021-07-23 | 榆林神华能源有限责任公司 | Underground terrain positioning method, storage medium and system |
CN113359722A (en) * | 2021-05-26 | 2021-09-07 | 上海联知智能科技有限公司 | Automatic driving automobile |
CN113610970A (en) * | 2021-08-30 | 2021-11-05 | 上海智能网联汽车技术中心有限公司 | Automatic driving system, device and method |
CN113963096B (en) * | 2021-09-01 | 2022-07-05 | 泰瑞数创科技(北京)有限公司 | Artificial intelligence-based city three-dimensional map video stream interaction method and system |
CN113867334B (en) * | 2021-09-07 | 2023-05-05 | 华侨大学 | Unmanned path planning method and system for mobile machinery |
CN114056352B (en) * | 2021-12-24 | 2024-07-02 | 上海海积信息科技股份有限公司 | Automatic driving control device and vehicle |
CN114179835B (en) * | 2021-12-30 | 2024-01-05 | 清华大学苏州汽车研究院(吴江) | Automatic driving vehicle decision training method based on reinforcement learning in real scene |
CN114419572B (en) * | 2022-03-31 | 2022-06-17 | 国汽智控(北京)科技有限公司 | Multi-radar target detection method and device, electronic equipment and storage medium |
CN114581748B (en) * | 2022-05-06 | 2022-09-23 | 南京大学 | Multi-agent perception fusion system based on machine learning and implementation method thereof |
CN115630335B (en) * | 2022-10-28 | 2023-06-27 | 北京中科东信科技有限公司 | Road information generation method based on multi-sensor fusion and deep learning model |
CN115727855A (en) * | 2022-11-29 | 2023-03-03 | 长城汽车股份有限公司 | Intelligent driving path learning method, automatic cruising method, related equipment and vehicle |
CN116101275A (en) * | 2023-04-12 | 2023-05-12 | 禾多科技(北京)有限公司 | Obstacle avoidance method and system based on automatic driving |
CN116259028A (en) * | 2023-05-06 | 2023-06-13 | 杭州宏景智驾科技有限公司 | Abnormal scene detection method for laser radar, electronic device and storage medium |
CN116902003B (en) * | 2023-07-31 | 2024-02-06 | 合肥海普微电子有限公司 | Unmanned method based on laser radar and camera mixed mode |
CN117809275B (en) * | 2024-02-28 | 2024-07-05 | 江苏天一航空工业股份有限公司 | Environment sensing method and system based on 360-degree circular system of civil aviation vehicle |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103824050B (en) * | 2014-02-17 | 2017-03-15 | 北京旷视科技有限公司 | Cascade-regression-based face key point localization method |
CN106575367B (en) * | 2014-08-21 | 2018-11-06 | 北京市商汤科技开发有限公司 | Method and system for multi-task-based face key point detection |
GB201616095D0 (en) * | 2016-09-21 | 2016-11-02 | Univ Oxford Innovation Ltd | A neural network and method of using a neural network to detect objects in an environment |
CN107066935B (en) * | 2017-01-25 | 2020-11-24 | 网易(杭州)网络有限公司 | Hand posture estimation method and device based on deep learning |
CN107226087B (en) * | 2017-05-26 | 2019-03-26 | 西安电子科技大学 | Structured-road autonomous driving transport vehicle and control method |
CN107235044B (en) * | 2017-05-31 | 2019-05-28 | 北京航空航天大学 | Method for reconstructing road traffic scenes and driver driving behavior from multi-sensor data |
- 2017-12-12 CN CN201711317899.XA patent/CN108196535B/en active Active
Non-Patent Citations (2)
Title |
---|
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation; R. Qi Charles et al.; 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017-11-09; pp. 77-85 * |
Research on Behavior Decision-Making Methods for Unmanned Vehicles in Uncertain Urban Environments; Geng Xinli; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; 2017-11-15 (No. 11); main text pp. 10-12 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12140446B2 (en) | 2023-08-25 | 2024-11-12 | Motional Ad Llc | Automatic annotation of environmental features in a map during navigation of a vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN108196535A (en) | 2018-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108196535B (en) | Automatic driving system based on reinforcement learning and multi-sensor fusion | |
US11774261B2 (en) | Automatic annotation of environmental features in a map during navigation of a vehicle | |
US11885910B2 (en) | Hybrid-view LIDAR-based object detection | |
JP7341864B2 (en) | System and method for registering 3D data with 2D image data | |
US10108867B1 (en) | Image-based pedestrian detection | |
US11682137B2 (en) | Refining depth from an image | |
Jebamikyous et al. | Autonomous vehicles perception (avp) using deep learning: Modeling, assessment, and challenges | |
EP3647734A1 (en) | Automatic generation of dimensionally reduced maps and spatiotemporal localization for navigation of a vehicle | |
CN113313154A (en) | Integrated multi-sensor autonomous-driving intelligent sensing device |
EP3647733A1 (en) | Automatic annotation of environmental features in a map during navigation of a vehicle | |
Deepika et al. | Obstacle classification and detection for vision based navigation for autonomous driving | |
KR102595886B1 (en) | Multi-modal segmentation network for enhanced semantic labeling in mapping | |
GB2609060A (en) | Machine learning-based framework for drivable surface annotation | |
CN115951326A (en) | Object detection method, system and storage medium | |
CN115705693A (en) | Method, system and storage medium for annotation of sensor data | |
CN115713687A (en) | Method, system, and medium for determining dynamic parameters of an object | |
Nuhel et al. | Developing a self-driving autonomous car using artificial intelligence algorithm | |
KR20230167694A (en) | Automatic lane marking extraction and classification from lidar scans | |
Memon et al. | Self-driving car using lidar sensing and image processing | |
Sanberg et al. | From stixels to asteroids: Towards a collision warning system using stereo vision | |
Charaya | LiDAR for Object Detection in Self Driving Cars | |
Shan et al. | Experimental Study of Multi-Camera Infrastructure Perception for V2X-Assisted Automated Driving in Highway Merging | |
Feng et al. | A survey to applications of deep learning in autonomous driving | |
CN117152579A (en) | System and computer-implemented method for a vehicle | |
KR20230020932A (en) | Scalable and realistic camera blockage dataset generation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||