
WO2021232274A1 - Sensor module, and automatic driving system and vehicle including same - Google Patents

Sensor module, and automatic driving system and vehicle including same

Info

Publication number
WO2021232274A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
sensor module
sensor
sub-sensor
Prior art date
Application number
PCT/CN2020/091220
Other languages
English (en)
French (fr)
Inventor
阳一斌
Original Assignee
深圳元戎启行科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳元戎启行科技有限公司
Priority to CN202080007594.2A (CN113287076A)
Priority to PCT/CN2020/091220 (WO2021232274A1)
Publication of WO2021232274A1

Links

Images

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0238: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D 1/024: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D 1/0214: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D 1/0223: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D 1/0251: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0257: Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0276: Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Definitions

  • This application relates to a sensor module and an automatic driving system and a vehicle including the sensor module.
  • the sensor layout adopted by the existing L4 level driverless cars mainly includes two parts: the roof sensor module and the body sensor module.
  • the sensors located on the roof are arranged relatively densely, so the roof sensor module as a whole is relatively tall and its volume relatively large.
  • installing the body sensors requires considerable modification and redesign of the vehicle itself, so the design cost is relatively high and the installation is difficult.
  • the existing designs often lead to the need to increase the number of sensors to monitor the omni-directional environment around the vehicle.
  • a sensor module for an automatic driving system and an automatic driving system including the sensor module are provided.
  • a sensor module for an automatic driving system may include:
  • the first sub-sensor module which is arranged on the top of the vehicle, is used to obtain images of environmental objects around the vehicle and the first distance between the objects around the vehicle and the vehicle, and generate image information and first distance information;
  • the second sub-sensor module which is detachably arranged on both sides of the vehicle, is used to obtain the second distance between the objects around the vehicle and the vehicle, and generate second distance information;
  • the third sub-sensor module which is arranged in the center of the front of the vehicle, is used to obtain the third distance of objects around the vehicle from the vehicle, and generate third distance information
  • the first sub-sensor module includes a plurality of first sensors, and the plurality of first sensors are arranged radially at equal angular intervals.
  • a plurality of first sensors may be horizontally arranged in the same plane with respect to the ground, and used to obtain images of environmental objects around the vehicle to generate the first image information in the image information
  • the first sub-sensor module may further include:
  • a second sensor which is arranged at the center of the first sub-sensor module, and is used to generate first distance information
  • At least one third sensor which is arranged in the middle of the front side of the first sub-sensor module and faces the front of the vehicle, is used to obtain an image of a traffic light in front of the vehicle to generate second image information in the image information.
  • the third sensor may be arranged to be inclined upward by about 3° to 7° with respect to the horizontal plane in the plane where the plurality of first sensors 11 are arranged.
  • the third sensor may be arranged to be inclined upward by 5° with respect to the horizontal plane.
  • the second sub-sensor module can be installed on the rearview mirrors on both sides of the vehicle through a mounting bracket that can adjust its mounting angle.
  • the number of first sensors may be four.
  • the number of first sensors may be six.
  • the second sensor may be a mechanical lidar.
  • the first installation angle of the sensors of the second sub-sensor module relative to the horizontal plane may range from 0° to -15°;
  • the second installation angle of the sensor of the third sub-sensor module relative to the horizontal plane may range from -15° to -25°.
  • an automatic driving system may include:
  • the sensor module collects image information of environmental objects around the vehicle and distance information of objects around the vehicle from the vehicle;
  • an information processing unit, which includes an image data processing center and a point cloud data processing center;
  • the image data processing center obtains image information from the sensor module for processing;
  • the point cloud data processing center obtains distance information from the sensor module for processing;
  • a graphics processing unit which obtains processed image information and processed distance information from the image data processing center and the cloud data processing center respectively, and performs fusion processing to generate a real-time three-dimensional environment model around the vehicle;
  • the control center obtains the driving parameters of the vehicle according to the generated real-time three-dimensional environment model around the vehicle to control the driving of the vehicle.
  • a vehicle including the sensor module described above.
  • Fig. 1 is a perspective view of a vehicle equipped with a sensor module according to an embodiment of the present application.
  • Fig. 2 is a front view of a vehicle equipped with a sensor module according to an embodiment of the present application.
  • Fig. 3 is a top view of a vehicle equipped with a sensor module according to an embodiment of the present application.
  • Fig. 4 is a top view of a vehicle equipped with a sensor module according to an embodiment of the present application with the upper housing of the first sub-sensor module removed.
  • Fig. 5 is a modification of the first sub-sensor module according to the embodiment of the present application.
  • Fig. 6 is a side view of a detection area of a sensor module according to an embodiment of the present application.
  • Fig. 7 is a front view of a detection area of a sensor module according to an embodiment of the present application.
  • Fig. 8 is a top view of a detection area of a sensor module according to an embodiment of the present application.
  • Fig. 9 is a side view of a detection area of a sensor module according to a comparative example.
  • Fig. 10 is a rear view of a detection area of a sensor module according to a comparative example.
  • Fig. 1 is a perspective view of a vehicle adopting a sensor module according to an example of the present application.
  • the sensor module 100 includes: a first sub-sensor module 1, which is arranged on the top of the vehicle and is used to obtain images of environmental objects around the vehicle and a first distance of objects around the vehicle from the vehicle, and to generate image information and first distance information; a second sub-sensor module 2, which is detachably installed on both sides of the vehicle and is used to obtain a second distance of objects around the vehicle from the own vehicle, and to generate second distance information; and a third sub-sensor module 3, which is arranged in the center of the front of the vehicle and is used to obtain a third distance of objects around the vehicle from the own vehicle, and to generate third distance information.
  • the first sub-sensor module 1 includes a plurality of first sensors 11, and the plurality of first sensors 11 may be arranged radially at equal angular intervals.
  • the autonomous driving system can process the image information, the first distance information, the second distance information, and the third distance information through an information processing unit and a graphics processing unit (GPU) to generate a three-dimensional environment model around the vehicle, and can determine environmental features around the vehicle from the generated real-time three-dimensional environment model, for example, whether there are pedestrians, other vehicles, obstacles, etc.
  • the automatic driving system can then generate corresponding instructions through the control center, so that the vehicle avoids pedestrians, other vehicles, or obstacles, changes lanes, and so on, ensuring the safety of the vehicle during automatic driving.
  • the multiple first sensors 11 of the first sub-sensor module 1 may be horizontally arranged in the same plane with respect to the ground, and are used to obtain images of environmental objects around the vehicle (for example, lanes, zebra crossings, other vehicles, pedestrians, obstacles, etc.) to generate the first image information in the image information.
  • the first sub-sensor module 1 may further include: a second sensor 12, arranged in the center of the first sub-sensor module 1, for generating the first distance information; and at least one third sensor 13, arranged in the middle of the front side of the first sub-sensor module 1 and facing the front of the vehicle, for obtaining an image of a traffic light in front of the vehicle to generate the second image information in the image information.
  • the first sensors 11 are image sensors. Since the plurality of first sensors 11 are arranged radially at equal angular intervals, they can sequentially acquire images of environmental objects around the vehicle within a short period of time (for example, within 10 ms) to generate the first image information (i.e., 360° environmental image information), and transmit the acquired data related to the first image information to the image data processing center of the information processing unit of the automatic driving system for data stitching, thereby generating 360° panoramic image information of the vehicle's surroundings.
  • the number of first sensors 11 may be six. In this case, the six first sensors 11 may be arranged radially at equal angular intervals of 60°, the recognition distance of the first sensors 11 may be about 300 m, their horizontal field of view may be about 70°, and their vertical field of view may be about 45°. In another example, as shown in FIG. 5, the number of first sensors 11 may be four. In this case, the four first sensors 11 may be arranged radially at equal angular intervals of 90°, and their horizontal field of view may be about 100° or 120°. However, the number and arrangement of the first sensors 11 are not limited to these; as long as the function of acquiring a 360° panoramic image around the vehicle is satisfied, those skilled in the art can change the number and arrangement of the first sensors 11 according to actual needs.
  • the first sensors 11 may have a first annular field-of-view boundary 11C, as shown in FIGS. 6 and 8. Depending on the installation height of the first sensors 11 and the length of the body of the vehicle on which they are installed, the first blind-zone distance d1 from the first annular field-of-view boundary 11C to the vehicle is at most 1.8 m. Images of all environmental objects (also called environmental images) in the area beyond the first annular field-of-view boundary 11C can be acquired by the first sensors 11. It should be noted that the first annular field-of-view boundary 11C shown in FIGS. 6 and 8 is only an example; the first blind-zone distance d1 can change with the installation height of the first sensors 11 on the vehicle and the length of the vehicle body, and should therefore not be construed as limiting the scope of this application.
  • the second sensor 12 is a distance sensor, for example, a mechanical lidar, which adopts a mechanical rotating structure and scans the environment around the vehicle by rotating its laser transmitter module and receiver module, thereby obtaining the first distance of objects (for example, other vehicles, pedestrians, obstacles, etc.) within a 360° range around the vehicle and generating the first distance information.
  • the recognition distance of the second sensor 12 may be about 200m
  • the horizontal field of view angle of the second sensor 12 may be about 180°
  • the vertical field of view angle of the second sensor 12 may be about -25° to +15°. Therefore, the first distance information may be the distance information of relatively distant objects around the vehicle from the own vehicle.
  • the second sensor 12 can be set as close as possible to the plane where the plurality of first sensors 11 are located; for example, the distance between the second sensor 12 and that plane may be about 100 mm to 140 mm, preferably 120 mm.
  • the second sensor 12 may have a second annular field-of-view boundary 12C, as shown in FIGS. 6 to 8. Depending on the installation height of the second sensor 12 on the vehicle and the size (length × width) of the first sub-sensor module, the second blind-zone distance d2 from the second annular field-of-view boundary 12C to the vehicle is at most 4 m. The area beyond the second annular field-of-view boundary 12C is the effective detection range of the second sensor 12.
  • the second sensor 12 may be a sensor with a larger field of view to further extend its detection range. It should be noted that the second annular field-of-view boundary 12C shown in FIGS. 6 to 8 is only an example; the second blind-zone distance d2 can change with the installation height of the second sensor 12 on the vehicle and the size of the first sub-sensor module, and should therefore not be construed as limiting the scope of the application.
  • the third sensor 13 is an image sensor, and may be arranged, in the plane where the plurality of first sensors 11 are located, to be inclined upward by about 3° to 7°, preferably 5°, relative to the horizontal plane, so as to obtain images of objects at a specific height in front of the vehicle, for example, an image of a traffic light in front of the vehicle, to generate the second image information.
  • the third sensor 13 can capture an image and generate second image information at short intervals (for example, every 10 ms), and send the data related to the second image information to the image data processing center for traffic light recognition, to ensure that the autonomous vehicle complies with traffic rules.
  • the number of the third sensor 13 may be one or two.
  • the recognition distance of the third sensor 13 may be about 100 m
  • the horizontal field angle of the third sensor 13 may be about 35°
  • the vertical field angle of the third sensor 13 may be about 15°. Therefore, the third sensor 13 can obtain all information about traffic lights in front of the road during the automatic driving of the vehicle, so as to ensure that the vehicle can always abide by the traffic rules during the automatic driving.
  • FIG. 4 shows an example layout of six first sensors 11 and two third sensors 13 according to the present application, and FIG. 5 shows an example layout of four first sensors 11 and one third sensor 13 according to the present application; these are only examples, however, and this application is not limited to them. As long as the functions of fully acquiring the image of the traffic light in front of the autonomous vehicle and the 360° panoramic image around the vehicle are satisfied, those skilled in the art can make various changes to the layout of the first sensors 11 and the third sensors 13 according to the actual situation.
  • by arranging the plurality of first sensors 11 horizontally in the same plane, arranging the second sensor 12 as close as possible to the plane where the plurality of first sensors 11 are located, and arranging the third sensor 13 in that same plane, the height of the first sub-sensor module 1 applied to the roof of unmanned vehicles of level L4 and higher can be greatly reduced.
  • the height of the roof sensor module in the prior art is usually about 400 mm, while the height of the first sub-sensor module 1 according to the present application may be only 270 mm.
  • the size of the first sub-sensor module 1 can also be reduced, from (1200 mm-1500 mm) × (1000 mm-1200 mm) (length × width) in the prior art to 800 mm × 800 mm (length × width), which greatly reduces the overall volume of the first sub-sensor module 1 located on the top of the vehicle, thereby reducing the forward resistance of the self-driving vehicle and improving its appearance.
  • the detection ranges and detection targets of the different sensors in the first sub-sensor module 1 can be fully utilized without the sensors interfering with or affecting each other, detecting the environmental information around the vehicle from far to near. Therefore, the first sub-sensor module 1 according to the present application can make maximum use of its own limited space while ensuring effective detection by multiple sensors.
  • the second sub-sensor module 2 may include a fourth sensor 21 and a fifth sensor 22.
  • the fourth sensor 21 and the fifth sensor 22 may be distance sensors, for example, mechanical lidars or solid-state lidars, preferably mechanical lidars, and are detachably installed on the rearview mirrors on both sides of the vehicle through mounting brackets.
  • the fourth sensor 21 and the fifth sensor 22 can each adjust their angle relative to the horizontal plane through their mounting bracket, so that the second sub-sensor module 2 can achieve the detection range required by different vehicles.
  • the structure of the second sub-sensor module 2 according to the present application is relatively simple and does not involve a complex vehicle modification process; it is also conducive to the installation, integration, and subsequent maintenance of the automatic driving system, and can save design, manufacturing, and maintenance costs.
  • the mounting bracket according to the present application can be of any type, as long as the fourth sensor 21 and the fifth sensor 22 can be mounted on the rearview mirrors of the vehicle and their installation angle relative to the horizontal plane can be adjusted.
  • the recognition distance of the fourth sensor 21 and the fifth sensor 22 may be about 20m
  • the horizontal field of view angle of the fourth sensor 21 and the fifth sensor 22 may be about 180°, and their vertical field of view angle may be approximately -15° to +15°. Therefore, the fourth sensor 21 and the fifth sensor 22 can generate the corresponding second distance information according to their first installation angle relative to the horizontal plane.
  • the range of the first installation angle may be between 0° and -15°
  • the second distance information may be the distance information of close objects at the sides of the autonomous vehicle from the own vehicle, with a third field-of-view boundary 2C, as shown in FIG. 7. Depending on the first installation angle of the fourth sensor 21 and the fifth sensor 22 on the vehicle, the third blind-zone distance d3 from the third field-of-view boundary 2C to the vehicle is at most 0.5 m.
  • the area larger than the boundary 2C of the third field of view area is the effective detection range of the fourth sensor 21 and the fifth sensor 22.
  • the fourth sensor 21 and the fifth sensor 22 may be sensors with a larger field of view to further increase the effective detection range of the second sub-sensor module 2. It should be noted that the third field of view area boundary 2C shown in FIG. 7 is only an example, and should not be construed as limiting the scope of the present application to this.
  • compared with a comparative example in which the second sub-sensor module is arranged in the roof sensor module, the present application detachably and flexibly installs the second sub-sensor module 2 at the rearview mirrors of the vehicle, which not only simplifies the vehicle modification design and reduces the design cost, but also saves space in the first sub-sensor module 1 and thereby reduces its height, in turn reducing the blind-zone ranges of the various sensors of the second sub-sensor module 2 and the first sub-sensor module 1.
  • for example, when a second sub-sensor module 2' is arranged in the roof sensor module, the height of the roof sensor module increases accordingly, so that the first blind-zone distance d'1 from the first annular field-of-view boundary 11C' of the first sensor 11' to the vehicle is about 2.5 m, the second blind-zone distance d'2 from the second annular field-of-view boundary 12C' of the second sensor 12' to the vehicle is about 6 m, and the third blind-zone distance d'3 from the third field-of-view boundary 2C' of the second sub-sensor module 2' to the vehicle is about 1.2 m; according to the present application, by contrast, d1 is at most 1.8 m, d2 is at most 4 m, and d3 is at most 0.5 m, which greatly reduces the blind-zone range of the sensor module.
  • the third sub-sensor module 3 may include a sixth sensor 31.
  • the sixth sensor 31 may be a distance sensor, for example, a solid-state lidar or a mechanical lidar, preferably a solid-state lidar.
  • the sixth sensor 31 may be fixed at the center of a bumper at the front of the host vehicle, for example.
  • the recognition distance of the sixth sensor 31 may be about 20m
  • the horizontal field of view angle of the sixth sensor 31 may be about 180°, and its vertical field of view angle may be about -45° to +45°. Therefore, the sixth sensor 31 can generate the corresponding third distance information according to its second installation angle relative to the horizontal plane.
  • the range of the second installation angle may be between -15° and -25°
  • the third distance information may be the distance information of close objects in front of the autonomous vehicle, with a fourth field-of-view boundary 3C, as shown in FIG. 6.
  • depending on the second installation angle of the sixth sensor 31 on the vehicle, the fourth blind-zone distance from the fourth field-of-view boundary 3C to the vehicle is at most 0.2 m.
  • the area larger than the boundary 3C of the fourth field of view area is the effective detection range of the sixth sensor 31.
  • the sixth sensor 31 may be a sensor with a larger field of view to further increase the detection range of the third sub-sensor module 3. It should be noted that the fourth field of view area boundary 3C shown in FIG. 6 is only an example, and should not be construed as limiting the scope of the present application to this.
  • the second sensor 12 of the first sub-sensor module 1, the fourth sensor 21 and the fifth sensor 22 of the second sub-sensor module 2, and the sixth sensor 31 of the third sub-sensor module 3 together generate, within their respective effective detection ranges, the distance information (for example, the first distance information, the second distance information, and the third distance information) of almost all near and far objects around the vehicle, providing an effective safety guarantee for the autonomous vehicle and avoiding blind spots during driving to the greatest extent.
  • since the vehicle does not reverse while driving, and since the image information of environmental objects generated by the first sensors 11 of the first sub-sensor module 1 (i.e., the first image information) and the object distance information generated by the second sensor 12 (i.e., the first distance information) can satisfy the actual driving needs of the autonomous vehicle, no additional sensor needs to be arranged at the rear of the vehicle; the present application is not necessarily limited to this, however, and additional sensors can be arranged at the rear according to actual needs.
  • the second sensor 12 of the first sub-sensor module 1, the fourth sensor 21 and the fifth sensor 22 of the second sub-sensor module 2, and the sixth sensor 31 of the third sub-sensor module 3 acquire the distances of surrounding objects within a short period of time (for example, within 10 ms), generate the corresponding distance information, and transmit the generated distance-information point cloud data to the point cloud data processing center of the information processing unit for data filtering and stitching. Since the image data processing center and the point cloud data processing center of the information processing unit are two different sub-processing centers, they can collect information from the corresponding sensors at the same time.
  • after the image data processing center processes the raw data related to the first image information from the first sensors 11, and the point cloud data processing center processes the raw data related to the first distance information from the second sensor 12, the raw data related to the second distance information from the fourth sensor 21 and the fifth sensor 22, and the raw data related to the third distance information from the sixth sensor 31, the image data processing center and the point cloud data processing center respectively send the processed data to the graphics processing unit (GPU). The GPU fuses the two kinds of processed data to generate a real-time three-dimensional environment model with color information around the vehicle, so that the control center of the autonomous driving system can obtain the driving parameters of the autonomous vehicle, such as road characteristics, driving characteristics, and environmental characteristics, to control the speed of the autonomous vehicle in real time and determine whether and when it changes lanes or turns. The autonomous vehicle can thus avoid other vehicles, pedestrians, or obstacles while observing traffic rules, and drastic changes in its speed during driving can be avoided, improving passenger comfort.
  • all sensors can generate the environmental information required for the area where they are installed, and the different sensors complement each other. Although the detection areas of different sensors partially overlap, this overlap contributes to detection redundancy and safety and improves the utilization of sensor resources.
  • the sensor module according to the present application uses a smaller number of sensors to realize the monitoring of 360° environmental information around the vehicle.
  • the sensor module according to the present application has a simple structure, low design, manufacturing, and maintenance costs, and can improve the aesthetics of an autonomous vehicle.
  • the sensor module described herein may be applied to various automatic driving systems, for example, driving assistance systems (for example, advanced driving assistance systems), unmanned driving systems, and the like.
  • a vehicle which includes the sensor module 100 as described above.
  • an automatic driving system includes: the sensor module 100 described above, which collects image information of environmental objects around the vehicle and distance information of objects around the vehicle from the vehicle; an information processing unit, which includes an image data processing center and a point cloud data processing center, the image data processing center acquiring image information from the sensor module 100 for processing and the point cloud data processing center acquiring distance information from the sensor module 100 for processing; a graphics processing unit, which acquires the processed image information and the processed distance information from the image data processing center and the point cloud data processing center respectively and fuses them to generate a real-time three-dimensional environment model around the vehicle; and a control center, which obtains the driving parameters of the vehicle, such as road characteristics, driving characteristics, and environmental characteristics, according to the generated real-time three-dimensional environment model around the vehicle, to control the driving of the vehicle, for example, the speed of the vehicle on a flat road and the speed and timing of lane changes and turns, to ensure the safety and comfort of the vehicle during automatic driving.
  • FIGS. 1 to 8 are only examples, and do not constitute a limitation on the application of the solution of the present application to other movable devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

A sensor module (100) for an automatic driving system, and an automatic driving system and a vehicle including the sensor module (100). The sensor module (100) includes: a first sub-sensor module (1), which is arranged on the top of the vehicle and is used to acquire images of environmental objects around the vehicle and a first distance of objects around the vehicle from the vehicle, and to generate image information and first distance information; a second sub-sensor module (2), which is detachably arranged on both sides of the vehicle and is used to acquire a second distance of objects around the vehicle from the vehicle, and to generate second distance information; and a third sub-sensor module (3), which is arranged in the center of the front of the vehicle and is used to acquire a third distance of objects around the vehicle from the vehicle, and to generate third distance information, wherein the first sub-sensor module (1) includes a plurality of first sensors (11), and the plurality of first sensors (11) are arranged radially at equal angular intervals.

Description

Sensor module, and automatic driving system and vehicle including same
Technical Field
This application relates to a sensor module, and to an automatic driving system and a vehicle including the sensor module.
Background
The sensor layout adopted by existing L4-level driverless cars mainly consists of two parts: a roof sensor module and a body sensor module. Generally, because the detection ranges of the different roof sensors must be balanced against one another, the sensors on the roof are arranged relatively densely, so that the roof sensor module as a whole is relatively tall and bulky. In addition, installing the body sensors requires considerable modification and redesign of the vehicle itself, which makes the design cost relatively high and the installation difficult. Moreover, as for the number of sensors, limitations of the sensors themselves often force existing designs to increase the sensor count in order to monitor the omnidirectional environment around the vehicle.
Summary
To overcome the problems existing in the prior art, according to the various embodiments disclosed in the present application, a sensor module for an automatic driving system and an automatic driving system including the sensor module are provided.
According to one aspect of the present application, a sensor module for an automatic driving system may include:
a first sub-sensor module, which is arranged on the top of a vehicle and is used to acquire images of environmental objects around the vehicle and a first distance of objects around the vehicle from the vehicle, and to generate image information and first distance information;
a second sub-sensor module, which is detachably arranged on both sides of the vehicle and is used to acquire a second distance of objects around the vehicle from the vehicle, and to generate second distance information; and
a third sub-sensor module, which is arranged in the center of the front of the vehicle and is used to acquire a third distance of objects around the vehicle from the vehicle, and to generate third distance information,
wherein the first sub-sensor module includes a plurality of first sensors, and the plurality of first sensors are arranged radially at equal angular intervals.
In one example, the plurality of first sensors may be arranged horizontally in the same plane relative to the ground and used to acquire images of environmental objects around the vehicle to generate first image information in the image information, and the first sub-sensor module may further include:
a second sensor, which is arranged at the center of the first sub-sensor module and is used to generate the first distance information; and
at least one third sensor, which is arranged in the middle of the front side of the first sub-sensor module and faces the front of the vehicle, and is used to acquire an image of a traffic light in front of the vehicle to generate second image information in the image information.
In one example, the third sensor may be arranged to be inclined upward by about 3° to 7° relative to the horizontal plane, in the plane in which the plurality of first sensors 11 are arranged.
In one example, the third sensor may be arranged to be inclined upward by 5° relative to the horizontal plane.
In one example, the second sub-sensor module may be arranged on the rearview mirrors on both sides of the vehicle through mounting brackets whose mounting angle is adjustable.
In one example, the number of the first sensors may be four.
In one example, the number of the first sensors may be six.
In one example, the second sensor may be a mechanical lidar.
In one example, the first installation angle of the sensors of the second sub-sensor module relative to the horizontal plane may range from 0° to -15°, and the second installation angle of the sensor of the third sub-sensor module relative to the horizontal plane may range from -15° to -25°.
According to another aspect of the present disclosure, an automatic driving system may include:
the sensor module described above, which collects image information of environmental objects around a vehicle and distance information of objects around the vehicle from the vehicle;
an information processing unit, which includes an image data processing center and a point cloud data processing center, the image data processing center acquiring image information from the sensor module for processing, and the point cloud data processing center acquiring distance information from the sensor module for processing;
a graphics processing unit, which acquires the processed image information and the processed distance information from the image data processing center and the point cloud data processing center, respectively, and fuses them to generate a real-time three-dimensional environment model around the vehicle; and
a control center, which obtains driving parameters of the vehicle according to the generated real-time three-dimensional environment model around the vehicle, to control the driving of the vehicle.
According to yet another aspect of the present disclosure, a vehicle including the sensor module described above is provided.
The details of one or more embodiments of the present application are set forth in the drawings and the description below. Other features and advantages of the present application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings required for the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a perspective view of a vehicle equipped with a sensor module according to an embodiment of the present application.
Fig. 2 is a front view of a vehicle equipped with a sensor module according to an embodiment of the present application.
Fig. 3 is a top view of a vehicle equipped with a sensor module according to an embodiment of the present application.
Fig. 4 is a top view of a vehicle equipped with a sensor module according to an embodiment of the present application, with the upper housing of the first sub-sensor module removed.
Fig. 5 shows a modification of the first sub-sensor module according to an embodiment of the present application.
Fig. 6 is a side view of a detection area of a sensor module according to an embodiment of the present application.
Fig. 7 is a front view of a detection area of a sensor module according to an embodiment of the present application.
Fig. 8 is a top view of a detection area of a sensor module according to an embodiment of the present application.
Fig. 9 is a side view of a detection area of a sensor module according to a comparative example.
Fig. 10 is a rear view of a detection area of a sensor module according to a comparative example.
Detailed Description
To make the technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application and not to limit it.
In the following, the sensor module is described, by way of example and not limitation, as applied to a vehicle. Fig. 1 is a perspective view of a vehicle adopting a sensor module according to an example of the present application. Referring to Fig. 1, the sensor module 100 includes: a first sub-sensor module 1, which is arranged on the top of the vehicle and is used to acquire images of environmental objects around the vehicle and a first distance of objects around the vehicle from the own vehicle, and to generate image information and first distance information; a second sub-sensor module 2, which is detachably arranged on both sides of the vehicle and is used to acquire a second distance of objects around the vehicle from the own vehicle, and to generate second distance information; and a third sub-sensor module 3, which is arranged in the center of the front of the vehicle and is used to acquire a third distance of objects around the vehicle from the own vehicle, and to generate third distance information. The first sub-sensor module 1 includes a plurality of first sensors 11, and the plurality of first sensors 11 may be arranged radially at equal angular intervals. The automatic driving system can process the image information, the first distance information, the second distance information, and the third distance information through an information processing unit and a graphics processing unit (GPU) to generate a three-dimensional environment model around the own vehicle, and can determine environmental features around the own vehicle from the generated real-time three-dimensional environment model, for example, whether there are pedestrians, other vehicles, or obstacles around the own vehicle and their distances from it, the state of traffic lights, and so on. The automatic driving system can then generate corresponding instructions through a control center, so that the own vehicle avoids pedestrians, other vehicles, or obstacles, changes lanes, and so on while obeying traffic rules, ensuring the safety of the own vehicle during automatic driving.
In one embodiment, the plurality of first sensors 11 of the first sub-sensor module 1 may be arranged horizontally in the same plane relative to the ground and used to acquire images of environmental objects around the own vehicle (for example, lanes, zebra crossings, other vehicles, pedestrians, obstacles, etc.) to generate first image information in the image information. In addition, the first sub-sensor module 1 may further include: a second sensor 12, which is arranged at the center of the first sub-sensor module 1 and is used to generate the first distance information; and at least one third sensor 13, which is arranged in the middle of the front side of the first sub-sensor module 1 and faces the front of the vehicle, and is used to acquire an image of a traffic light in front of the own vehicle to generate second image information in the image information.
In some examples, referring to Figs. 2 to 4, the first sensors 11 are image sensors. Because the plurality of first sensors 11 are arranged radially at equal angular intervals, they can successively acquire images of environmental objects around the own vehicle within a short period of time (for example, within 10 ms) to generate the first image information (i.e., 360° environmental image information), and transmit the acquired data related to the first image information to the image data processing center of the information processing unit of the automatic driving system for data stitching, thereby generating 360° panoramic image information of the surroundings of the own vehicle.
In some examples, as shown in Figs. 2 to 4, the number of first sensors 11 may be six. In this case, the six first sensors 11 may be arranged radially at equal angular intervals of 60°, the recognition distance of the first sensors 11 may be about 300 m, their horizontal field of view may be about 70°, and their vertical field of view may be about 45°. In another example, as shown in Fig. 5, the number of first sensors 11 may be four. In this case, the four first sensors 11 may be arranged radially at equal angular intervals of 90°, and the horizontal field of view of the first sensors 11 may be about 100° or 120°. However, the number and arrangement of the first sensors 11 are not limited to these; as long as the function of acquiring a 360° panoramic image around the vehicle is satisfied, those skilled in the art can change the number and arrangement of the first sensors 11 according to actual needs.
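As a quick plausibility check on the quoted angles (a sketch, not part of the patent): a radial, equiangular ring of cameras covers 360° horizontally exactly when each camera's horizontal field of view is at least the angular spacing, so six ~70° cameras at 60° spacing leave about 10° of overlap at every seam.

```python
# Sketch: 360° coverage condition for a radial equiangular camera ring.
# Uses only the angles quoted above; not code from the patent.

def ring_covers_360(num_cameras: int, horizontal_fov_deg: float) -> bool:
    """True if each camera's horizontal FOV is at least the angular spacing."""
    spacing_deg = 360.0 / num_cameras
    return horizontal_fov_deg >= spacing_deg

print(ring_covers_360(6, 70.0))    # True: 10° overlap per seam
print(ring_covers_360(4, 100.0))   # True: 10° overlap per seam
print(ring_covers_360(4, 85.0))    # False: a 5° gap would remain per seam
```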
The first sensors 11 may have a first annular field-of-view boundary 11C, as shown in Figs. 6 and 8. Depending on the installation height of the first sensors 11 and the length of the body of the vehicle on which they are installed, the first blind-zone distance d1 from the first annular field-of-view boundary 11C to the vehicle is at most 1.8 m. Images of all environmental objects (also called environmental images) in the region beyond the first annular field-of-view boundary 11C can be acquired by the first sensors 11. It should be noted that the first annular field-of-view boundary 11C shown in Figs. 6 and 8 is only an example; the first blind-zone distance d1 can change with the installation height of the first sensors 11 on the vehicle and the length of the vehicle body, and should therefore not be construed as limiting the scope of the present application.
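The patent states only the resulting bound (d1 of at most 1.8 m), not the geometry behind it. Under a simple flat-ground model, which is an assumption made here for illustration, the blind ring of a roof-mounted sensor follows from its mounting height and the steepest downward ray its vertical field of view reaches; the mounting height used below is likewise hypothetical. The same relation drives the d2 bound quoted later for the center lidar.

```python
import math

# Sketch of a flat-ground blind-zone model; an illustrative assumption,
# since the patent only states resulting bounds such as d1 <= 1.8 m.

def ground_blind_distance(mount_height_m: float, steepest_down_deg: float) -> float:
    """Horizontal distance from the sensor to the nearest visible ground
    point, given the steepest ray angle below horizontal that it can see."""
    return mount_height_m / math.tan(math.radians(steepest_down_deg))

# Hypothetical roof camera ~1.9 m up; a ~45° vertical FOV centered on the
# horizon reaches ~22.5° downward, so ground is visible from ~4.6 m outward.
# Measured from the vehicle contour rather than the sensor, the blind
# ring is correspondingly shorter.
print(round(ground_blind_distance(1.9, 22.5), 2))  # 4.59
```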
In some examples, the second sensor 12 is a distance sensor, for example, a mechanical lidar, which adopts a mechanical rotating structure and scans the environment around the vehicle by rotating its laser transmitter module and receiver module, thereby acquiring the first distance of objects (for example, other vehicles, pedestrians, obstacles, etc.) within a 360° range around the own vehicle and generating the first distance information.
In some examples, the recognition distance of the second sensor 12 may be about 200 m, its horizontal field of view may be about 180°, and its vertical field of view may be about -25° to +15°. The first distance information may therefore be distance information of relatively distant objects around the vehicle from the own vehicle. In some examples, while keeping the height of the first sub-sensor module 1 low and ensuring that the sensors in the first sub-sensor module 1 work effectively without interfering with one another, the second sensor 12 may be arranged as close as possible to the plane in which the plurality of first sensors 11 are located; for example, the distance between the second sensor 12 and that plane may be about 100 mm to 140 mm, preferably 120 mm. The second sensor 12 may have a second annular field-of-view boundary 12C, as shown in Figs. 6 to 8. Depending on the installation height of the second sensor 12 on the vehicle and the size (length × width) of the first sub-sensor module, the second blind-zone distance d2 from the second annular field-of-view boundary 12C to the vehicle is at most 4 m. The region beyond the second annular field-of-view boundary 12C is the effective detection range of the second sensor 12. The second sensor 12 may be a sensor with a larger field of view to further extend its detection range. It should be noted that the second annular field-of-view boundary 12C shown in Figs. 6 to 8 is only an example; the second blind-zone distance d2 can change with the installation height of the second sensor 12 on the vehicle and the size of the first sub-sensor module, and should therefore not be construed as limiting the scope of the present application.
In some examples, the third sensor 13 is an image sensor and may be arranged, in the plane in which the plurality of first sensors 11 are located, to be inclined upward by about 3° to 7°, preferably 5°, relative to the horizontal plane, so as to acquire images of objects at a specific height in front of the own vehicle, for example, an image of a traffic light in front of the own vehicle, to generate the second image information. While the vehicle is driving, the third sensor 13 can capture an image and generate second image information at short intervals (for example, every 10 ms), and send data related to the second image information to the image data processing center for traffic light recognition, thereby ensuring that the autonomous vehicle obeys traffic rules.
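A back-of-the-envelope check (not from the patent; the mounting height is assumed) shows why a few degrees of upward tilt are enough for traffic lights: at the quoted ~100 m recognition distance, a ~15° vertical field of view tilted up 5° sweeps a height band far taller than any signal mast.

```python
import math

# Sketch: vertical height band covered at range r by a camera mounted at
# height h, tilted up by tilt_deg, with vertical FOV vfov_deg.
# h is an assumed value; the tilt and FOV are the figures quoted above.

def height_band_at_range(h: float, r: float, tilt_deg: float, vfov_deg: float):
    lo = h + r * math.tan(math.radians(tilt_deg - vfov_deg / 2.0))
    hi = h + r * math.tan(math.radians(tilt_deg + vfov_deg / 2.0))
    return lo, hi

lo, hi = height_band_at_range(1.9, 100.0, 5.0, 15.0)
print(round(lo, 1), round(hi, 1))  # ≈ -2.5 .. 24.1 m: ample for traffic lights
```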
The number of third sensors 13 may be one or two. In some examples, the recognition distance of the third sensor 13 may be about 100 m, its horizontal field of view may be about 35°, and its vertical field of view may be about 15°. The third sensor 13 can therefore acquire all information about traffic lights ahead on the road while the vehicle is driving automatically, ensuring that the vehicle always obeys traffic rules during automatic driving.
Although Fig. 4 shows an example layout of six first sensors 11 and two third sensors 13 according to the present application, and Fig. 5 shows an example layout of four first sensors 11 and one third sensor 13 according to the present application, these are only examples and the present application is not limited thereto. As long as the functions of fully acquiring images of traffic lights in front of the autonomous vehicle and a 360° panoramic image around the vehicle are achieved, those skilled in the art can make various changes to the layout of the first sensors 11 and the third sensors 13 according to the actual situation.
According to embodiments of the present application, by arranging the plurality of first sensors 11 horizontally in the same plane, arranging the second sensor 12 as close as possible to the plane of the plurality of first sensors 11, and arranging the third sensor 13 in that same plane, the height of the first sub-sensor module 1 applied to the roof of driverless vehicles of level L4 and above can be greatly reduced. For example, the height of a roof sensor module in the prior art is usually about 400 mm, whereas the height of the first sub-sensor module 1 according to the present application can be as little as 270 mm. Moreover, the design of the first sub-sensor module 1 according to the present application not only lowers its height but also shrinks its footprint: for example, the size of the first sub-sensor module 1 can be reduced from (1200 mm-1500 mm) × (1000 mm-1200 mm) (length × width) in the prior art to 800 mm × 800 mm (length × width). This greatly reduces the overall volume of the first sub-sensor module 1 on the top of the vehicle, thereby reducing the forward resistance of the autonomous vehicle and improving its appearance.
Furthermore, arranging the first sub-sensor module 1 in the above manner makes full use of the detection ranges and detection targets of the different sensors in the first sub-sensor module 1 without their interfering with or affecting one another, detecting the environmental information around the own vehicle from far to near. The first sub-sensor module 1 according to the present application can therefore make maximum use of its own limited space while ensuring effective detection by multiple sensors.
In some embodiments, the second sub-sensor module 2 may include a fourth sensor 21 and a fifth sensor 22. The fourth sensor 21 and the fifth sensor 22 may be distance sensors, for example, mechanical lidars or solid-state lidars, preferably mechanical lidars, and are detachably mounted on the rearview mirrors on both sides of the own vehicle through mounting brackets. The fourth sensor 21 and the fifth sensor 22 can each adjust their angle relative to the horizontal plane through their mounting brackets, so that the second sub-sensor module 2 can achieve the detection range required by different vehicles. Because the fourth sensor 21 and the fifth sensor 22 are detachably mounted on the rearview mirrors through mounting brackets, the structure of the second sub-sensor module 2 according to the present application is, compared with the prior art, relatively simple and does not involve a complex vehicle modification process; it also facilitates the installation, integration, and subsequent maintenance of the automatic driving system, and can save design, manufacturing, and maintenance costs. Furthermore, the mounting bracket according to the present application can be of any type, as long as it can mount the fourth sensor 21 and the fifth sensor 22 on the rearview mirrors of the vehicle and allow their installation angle relative to the horizontal plane to be adjusted.
In some examples, the recognition distance of the fourth sensor 21 and the fifth sensor 22 may be about 20 m, their horizontal field of view may be about 180°, and their vertical field of view may be about -15° to +15°. The fourth sensor 21 and the fifth sensor 22 can therefore generate the corresponding second distance information according to their first installation angle relative to the horizontal plane.
In some examples, the first installation angle may range from 0° to -15°, and the second distance information may be distance information of close objects at the sides of the autonomous vehicle from the own vehicle, with a third field-of-view boundary 2C, as shown in Fig. 7. Depending on the first installation angle of the fourth sensor 21 and the fifth sensor 22 on the vehicle, the third blind-zone distance d3 from the third field-of-view boundary 2C to the vehicle is at most 0.5 m. The region beyond the third field-of-view boundary 2C is the effective detection range of the fourth sensor 21 and the fifth sensor 22. Moreover, the fourth sensor 21 and the fifth sensor 22 may be sensors with a larger field of view to further extend the effective detection range of the second sub-sensor module 2. It should be noted that the third field-of-view boundary 2C shown in Fig. 7 is only an example and should not be construed as limiting the scope of the present application.
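The trend behind the first installation angle can be seen with the same hedged flat-ground model used above for d1: tilting the mirror-mounted lidars further downward steepens the lowest visible ray and shrinks the near blind ring. The mounting height below is an assumption, and the patent's d3 of at most 0.5 m also depends on where the boundary is measured relative to the vehicle contour, so the numbers are illustrative only.

```python
import math

# Sketch: how downward tilt shrinks the near blind ring of a mirror lidar.
# Mount height ~1.0 m is assumed; the ±15° vertical FOV and the 0° to -15°
# installation-angle range are the figures quoted above.
for tilt_down_deg in (0.0, 7.5, 15.0):
    steepest_deg = tilt_down_deg + 15.0   # lower FOV edge below horizontal
    d = 1.0 / math.tan(math.radians(steepest_deg))
    print(f"tilt -{tilt_down_deg:0.1f}°: nearest visible ground ≈ {d:.2f} m")
```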
As an example, compared with a comparative example in which the second sub-sensor module is arranged in the roof sensor module, the present application detachably and flexibly arranges the second sub-sensor module 2 at the rearview mirrors of the vehicle. This not only simplifies the modification design of the vehicle and reduces the design cost, but also saves space in the first sub-sensor module 1 and thereby lowers its height, which in turn reduces the blind-zone ranges of the various sensors of the second sub-sensor module 2 and the first sub-sensor module 1. For example, referring to Figs. 9 and 10, when a second sub-sensor module 2' is arranged in the roof sensor module, the height of the roof sensor module increases accordingly, so that the first blind-zone distance d'1 from the first annular field-of-view boundary 11C' of the first sensor 11' to the vehicle is about 2.5 m, the second blind-zone distance d'2 from the second annular field-of-view boundary 12C' of the second sensor 12' to the vehicle is about 6 m, and the third blind-zone distance d'3 from the third field-of-view boundary 2C' of the second sub-sensor module 2' to the vehicle is about 1.2 m; according to the present application, by contrast, d1 is at most 1.8 m, d2 is at most 4 m, and d3 is at most 0.5 m, which greatly reduces the blind-zone range of the sensor module 100.
In some embodiments, the third sub-sensor module 3 may include a sixth sensor 31. The sixth sensor 31 may be a distance sensor, for example, a solid-state lidar or a mechanical lidar, preferably a solid-state lidar. The sixth sensor 31 may, for example, be fixed at the center of the bumper at the front of the own vehicle.
In some examples, the recognition distance of the sixth sensor 31 may be about 20 m, its horizontal field of view may be about 180°, and its vertical field of view may be about -45° to +45°. The sixth sensor 31 can therefore generate the corresponding third distance information according to its second installation angle relative to the horizontal plane. The second installation angle may range from -15° to -25°, and the third distance information may be distance information of close objects in front of the autonomous vehicle, with a fourth field-of-view boundary 3C, as shown in Fig. 6. Depending on the second installation angle of the sixth sensor 31 on the vehicle, the fourth blind-zone distance from the fourth field-of-view boundary 3C to the vehicle is at most 0.2 m. The region beyond the fourth field-of-view boundary 3C is the effective detection range of the sixth sensor 31. The sixth sensor 31 may be a sensor with a larger field of view to further extend the detection range of the third sub-sensor module 3. It should be noted that the fourth field-of-view boundary 3C shown in Fig. 6 is only an example and should not be construed as limiting the scope of the present application.
Regarding distance information around the vehicle, the second sensor 12 of the first sub-sensor module 1, the fourth sensor 21 and the fifth sensor 22 of the second sub-sensor module 2, and the sixth sensor 31 of the third sub-sensor module 3 can, within their respective effective detection ranges, together generate distance information (for example, the first distance information, the second distance information, and the third distance information) of almost all near and far objects around the own vehicle, providing an effective safety guarantee for the autonomous vehicle and avoiding blind zones during driving to the greatest extent. In addition, because the vehicle does not reverse during driving, and because the image information of environmental objects generated by the first sensors 11 of the first sub-sensor module 1 (i.e., the first image information) and the object distance information generated by the second sensor 12 (i.e., the first distance information) can satisfy the actual driving needs of the autonomous vehicle, no additional sensor need be arranged at the rear of the own vehicle; the present application is not necessarily limited thereto, however, and additional sensors can be arranged at the rear of the vehicle according to actual needs.
As an example, while the vehicle is driving, the second sensor 12 of the first sub-sensor module 1, the fourth sensor 21 and the fifth sensor 22 of the second sub-sensor module 2, and the sixth sensor 31 of the third sub-sensor module 3 successively acquire the distances of surrounding objects within a short period of time (for example, within 10 ms), generate the corresponding distance information, and transmit the generated distance-information point cloud data to the point cloud data processing center of the information processing unit for data filtering and stitching. Because the image data processing center and the point cloud data processing center of the information processing unit are two different sub-processing centers, they can collect information from the corresponding sensors simultaneously. After the image data processing center has processed the raw data related to the first image information from the first sensors 11, and the point cloud data processing center has processed the raw data related to the first distance information from the second sensor 12, the raw data related to the second distance information from the fourth sensor 21 and the fifth sensor 22, and the raw data related to the third distance information from the sixth sensor 31, the two centers respectively send the processed data to the graphics processing unit (GPU). The GPU fuses the two kinds of processed data to generate a real-time three-dimensional environment model with color information around the own vehicle, from which the control center of the automatic driving system can obtain the driving parameters of the autonomous vehicle, for example, road characteristics, driving characteristics, and environmental characteristics, so as to, for example, control the speed of the autonomous vehicle in real time and determine whether and when it changes lanes or turns. The autonomous vehicle can thus avoid other vehicles, pedestrians, or obstacles while obeying traffic rules, and drastic changes in its speed during driving can be avoided, improving passenger comfort.
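The data path just described (camera frames into the image data processing center, lidar point clouds into the point cloud data processing center, both results fused on the GPU, and driving parameters derived by the control center) can be summarized in a minimal structural sketch. All class and method names are hypothetical; the patent defines no software interface.

```python
# Minimal structural sketch of the processing chain described above.
# All names are hypothetical; the patent defines no software interface.
from dataclasses import dataclass

@dataclass
class Frame:              # output of a first/third sensor (camera)
    pixels: bytes

@dataclass
class PointCloud:         # output of a second/fourth/fifth/sixth sensor (lidar)
    points: list

class ImageDataCenter:
    def process(self, frames):
        # stitch per-camera frames into 360° panoramic image information
        return {"panorama": frames}

class PointCloudDataCenter:
    def process(self, clouds):
        # filter and splice per-lidar clouds into one set of distance data
        return {"distances": clouds}

class GpuFusion:
    def fuse(self, image_info, distance_info):
        # fuse stitched images with spliced clouds into a real-time 3D
        # model around the vehicle carrying color information
        return {"model_3d": (image_info, distance_info)}

class ControlCenter:
    def drive(self, model_3d):
        # derive driving parameters (speed, lane changes, turns) from the model
        return {"speed_mps": 0.0, "change_lane": False}

# One ~10 ms tick of the loop, with six cameras and four lidars as above:
image_info = ImageDataCenter().process([Frame(b"")] * 6)
distance_info = PointCloudDataCenter().process([PointCloud([])] * 4)
command = ControlCenter().drive(GpuFusion().fuse(image_info, distance_info))
print(command)
```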
According to the structural design of the sensor module of the present application, all of the sensors can generate the environmental information required for the area where they are installed, and the different sensors complement one another. Although the detection areas of different sensors partially overlap, this overlap contributes to detection redundancy and safety and improves the utilization of sensor resources.
Furthermore, compared with the prior art, the sensor module according to the present application monitors the 360° environmental information around the vehicle with a smaller number of sensors. Relative to the prior art, the sensor module according to the present application has a simple structure and low design, manufacturing, and maintenance costs, and can improve the appearance of an autonomous vehicle.
According to various embodiments of the present disclosure, the sensor module described herein can be applied to various automatic driving systems, for example, driving assistance systems (for example, advanced driver assistance systems), unmanned driving systems, and the like.
Furthermore, according to another embodiment of the present application, a vehicle is provided, which includes the sensor module 100 described above.
Furthermore, according to yet another embodiment of the present application, an automatic driving system is provided. The automatic driving system includes: the sensor module 100 described above, which collects image information of environmental objects around the own vehicle and distance information of objects around the own vehicle from the own vehicle; an information processing unit, which includes an image data processing center and a point cloud data processing center, the image data processing center acquiring image information from the sensor module 100 for processing and the point cloud data processing center acquiring distance information from the sensor module 100 for processing; a graphics processing unit, which acquires the processed image information and the processed distance information from the image data processing center and the point cloud data processing center, respectively, and fuses them to generate a real-time three-dimensional environment model around the own vehicle; and a control center, which obtains the driving parameters of the own vehicle, for example, road characteristics, driving characteristics, and environmental characteristics, according to the generated real-time three-dimensional environment model around the own vehicle, to control the driving of the own vehicle, for example, the vehicle's speed on a flat road, the speed and timing of lane changes, and the speed and timing of turns, so as to ensure the safety and comfort of the vehicle during automatic driving.
Those skilled in the art will understand that the structures shown in Figs. 1 to 8 are only examples and do not limit the application of the solution of the present application to other movable devices.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments have been described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the scope of the invention patent. It should be pointed out that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. The protection scope of this patent application shall therefore be subject to the appended claims.

Claims (11)

  1. A sensor module, the sensor module being used for an automatic driving system, comprising:
    a first sub-sensor module, which is arranged on the top of a vehicle and is used to acquire images of environmental objects around the vehicle and a first distance of objects around the vehicle from the vehicle, and to generate image information and first distance information;
    a second sub-sensor module, which is detachably arranged on both sides of the vehicle and is used to acquire a second distance of objects around the vehicle from the vehicle, and to generate second distance information; and
    a third sub-sensor module, which is arranged in the center of the front of the vehicle and is used to acquire a third distance of objects around the vehicle from the vehicle, and to generate third distance information,
    wherein the first sub-sensor module comprises a plurality of first sensors, and the plurality of first sensors are arranged radially at equal angular intervals.
  2. The sensor module according to claim 1, wherein the plurality of first sensors are arranged horizontally in the same plane relative to the ground and are used to acquire images of environmental objects around the vehicle to generate first image information in the image information, and the first sub-sensor module further comprises:
    a second sensor, which is arranged at the center of the first sub-sensor module and is used to generate the first distance information; and
    at least one third sensor, which is arranged in the middle of the front side of the first sub-sensor module and faces the front of the vehicle, and is used to acquire an image of a traffic light in front of the vehicle to generate second image information in the image information.
  3. The sensor module according to claim 2, wherein the third sensor is arranged to be inclined upward by about 3° to 7° relative to the horizontal plane, in the plane in which the plurality of first sensors 11 are arranged.
  4. The sensor module according to claim 3, wherein the third sensor is arranged to be inclined upward by 5° relative to the horizontal plane.
  5. The sensor module according to claim 1, wherein the second sub-sensor module is arranged on the rearview mirrors on both sides of the vehicle through mounting brackets whose mounting angle is adjustable.
  6. The sensor module according to claim 1, wherein the number of the first sensors is four.
  7. The sensor module according to claim 1, wherein the number of the first sensors is six.
  8. The sensor module according to claim 2, wherein the second sensor is a mechanical lidar.
  9. The sensor module according to claim 1, wherein a first installation angle of the sensors of the second sub-sensor module relative to the horizontal plane ranges from 0° to -15°, and a second installation angle of the sensor of the third sub-sensor module relative to the horizontal plane ranges from -15° to -25°.
  10. An automatic driving system, comprising:
    the sensor module according to claim 1, which collects image information of environmental objects around a vehicle and distance information of objects around the vehicle from the vehicle;
    an information processing unit, which comprises an image data processing center and a point cloud data processing center, the image data processing center acquiring image information from the sensor module for processing, and the point cloud data processing center acquiring distance information from the sensor module for processing;
    a graphics processing unit, which acquires the processed image information and the processed distance information from the image data processing center and the point cloud data processing center, respectively, and fuses them to generate a real-time three-dimensional environment model around the vehicle; and
    a control center, which obtains driving parameters of the vehicle according to the generated real-time three-dimensional environment model around the vehicle, to control the driving of the vehicle.
  11. A vehicle, comprising the sensor module according to claim 1.
PCT/CN2020/091220 2020-05-20 2020-05-20 Sensor module, and automatic driving system and vehicle including same WO2021232274A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080007594.2A 2020-05-20 2020-05-20 Sensor module, and automatic driving system and vehicle including same
PCT/CN2020/091220 2020-05-20 2020-05-20 Sensor module, and automatic driving system and vehicle including same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/091220 WO2021232274A1 (zh) 2020-05-20 2020-05-20 传感器模块及包括其的自动驾驶系统和车辆

Publications (1)

Publication Number Publication Date
WO2021232274A1 (zh)

Family

ID=77275571

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/091220 WO2021232274A1 (zh) 2020-05-20 2020-05-20 Sensor module, and automatic driving system and vehicle including same

Country Status (2)

Country Link
CN (1) CN113287076A (zh)
WO (1) WO2021232274A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114719867A (zh) * 2022-05-24 2022-07-08 北京捷升通达信息技术有限公司 Sensor-based vehicle navigation method and system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114261340A (zh) * 2021-12-02 2022-04-01 智己汽车科技有限公司 Automobile solid-state lidar rearview mirror and automobile

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103655A (zh) * 2017-05-23 2017-08-29 郑州云海信息技术有限公司 Cloud-computing-based automobile driving mutual assistance system and method
US10114117B2 (en) * 2013-09-10 2018-10-30 Scania Cv Ab Detection of an object by use of a 3D camera and a radar
CN109116846A (zh) * 2018-08-29 2019-01-01 五邑大学 Automatic driving method and apparatus, computer device and storage medium
CN109515448A (zh) * 2018-12-12 2019-03-26 安徽江淮汽车集团股份有限公司 Automatic driving sensor arrangement method and structure
CN110386073A (zh) * 2019-07-12 2019-10-29 深圳元戎启行科技有限公司 Driverless car roof sensor integration device and driverless car
CN209955874U (zh) * 2019-03-08 2020-01-17 深圳市大疆创新科技有限公司 Vehicle and side rearview mirror for mounting on vehicle
US20200058987A1 (en) * 2018-08-17 2020-02-20 Metawave Corporation Multi-layer, multi-steering antenna system for autonomous vehicles
CN210554536U (zh) * 2019-06-25 2020-05-19 白犀牛智达(北京)科技有限公司 Automatic driving automobile

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10114117B2 (en) * 2013-09-10 2018-10-30 Scania Cv Ab Detection of an object by use of a 3D camera and a radar
CN107103655A (zh) * 2017-05-23 2017-08-29 郑州云海信息技术有限公司 Cloud-computing-based automobile driving mutual assistance system and method
US20200058987A1 (en) * 2018-08-17 2020-02-20 Metawave Corporation Multi-layer, multi-steering antenna system for autonomous vehicles
CN109116846A (zh) * 2018-08-29 2019-01-01 五邑大学 Automatic driving method and apparatus, computer device and storage medium
CN109515448A (zh) * 2018-12-12 2019-03-26 安徽江淮汽车集团股份有限公司 Automatic driving sensor arrangement method and structure
CN209955874U (zh) * 2019-03-08 2020-01-17 深圳市大疆创新科技有限公司 Vehicle and side rearview mirror for mounting on vehicle
CN210554536U (zh) * 2019-06-25 2020-05-19 白犀牛智达(北京)科技有限公司 Automatic driving automobile
CN110386073A (zh) * 2019-07-12 2019-10-29 深圳元戎启行科技有限公司 Driverless car roof sensor integration device and driverless car

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114719867A (zh) * 2022-05-24 2022-07-08 北京捷升通达信息技术有限公司 Sensor-based vehicle navigation method and system

Also Published As

Publication number Publication date
CN113287076A (zh) 2021-08-20

Similar Documents

Publication Publication Date Title
US10331963B2 (en) Camera apparatus and in-vehicle system capturing images for vehicle tasks
US20210325897A1 (en) Sensor array for an autonomously operated utility vehicle and method for surround-view image acquisition
US20180372875A1 (en) Sensor configuration for an autonomous semi-truck
JP6409680B2 (ja) Driving assistance device and driving assistance method
JP6332384B2 (ja) Vehicle target detection system
US10366512B2 (en) Around view provision apparatus and vehicle including the same
CN107650908A (zh) Unmanned vehicle environment perception system
WO2021232274A1 (zh) Sensor module, and automatic driving system and vehicle including same
WO2018180579A1 (ja) Imaging control device, control method for imaging control device, and mobile body
US20220373645A1 (en) Sensor Validation and Calibration
US20190235519A1 (en) Intermediate mounting component and sensor system for a mansfield bar of a cargo trailer
US20200341118A1 (en) Mirrors to extend sensor field of view in self-driving vehicles
US10882464B2 (en) Vehicle-mounted camera, vehicle-mounted camera apparatus, and method of supporting vehicle-mounted camera
US20210099622A1 (en) Imaging system and vehicle window used for the same
US11252338B2 (en) Infrared camera system and vehicle
CN112109716A (zh) Perception system for automatic driving tractor, and automatic driving tractor
CN115218888A (zh) System and method for updating high-definition map
WO2018021487A1 (ja) Optical transmission and reception device, communication system, optical transmission and reception method, and autonomous driving vehicle parking lot
CN207274661U (zh) Unmanned vehicle environment perception system
JP2024508812A (ja) Assisted driving method, parking slot, chip, electronic device, and storage medium
JP2008165610A (ja) Road lane marking recognition apparatus
US20210325900A1 (en) Swarming for safety
CN214492889U (zh) Environment perception system for automobile, and automobile
US20240336197A1 (en) Driving assistance system and vehicle
US20230093035A1 (en) Information processing device and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20936315

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/04/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20936315

Country of ref document: EP

Kind code of ref document: A1