WO2024161481A1 - Vehicle control device - Google Patents
- Publication number
- WO2024161481A1 (PCT/JP2023/002942)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dimensional object
- vehicle
- camera
- target
- vehicle control
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
Definitions
- The present invention relates to a vehicle control device.
- While millimeter wave radar is generally cheaper than cameras, it has low detection accuracy, so there is a high risk of the system failing to operate if the vehicle is controlled based only on the sensing results of the millimeter wave radar. For example, when the vehicle turns at an intersection, the millimeter wave radar may erroneously group a pedestrian crossing the sidewalk in the same direction with a (stationary) three-dimensional object in the vicinity, which could result in the pedestrian not being detected correctly and a collision with the pedestrian.
- As a countermeasure, attention has been focused on camera × millimeter wave radar fusion technology, which uses a millimeter wave radar and a camera together to improve detection accuracy.
- For example, the technology of the following Patent Document 1 has been disclosed.
- Patent Document 1 discloses a technology that aims to improve the obstacle detection rate by determining whether a target is moving on the road surface based on target information obtained by radar and vehicle driving information.
- Patent Document 1 is based on the assumption that the detection results of the target information obtained by the radar are correct, and does not take into account erroneous grouping by the radar. Therefore, if the radar erroneously groups moving targets such as pedestrians with stationary targets such as posts, there is a risk that the moving speed of the moving target will be detected as lower than it actually is.
- The object of the present invention is to provide a vehicle control device capable of highly accurate sensing in driving assistance technology by suppressing erroneous grouping by the millimeter wave radar using a camera × millimeter wave radar algorithm.
- To solve the above problem, the vehicle control device according to the present invention has a camera that recognizes three-dimensional objects around the host vehicle and a millimeter wave radar that recognizes targets in the vicinity of the host vehicle, and is characterized in that the three-dimensional object is set as an independent target based on information about the three-dimensional object recognized by the camera, and in that, when the millimeter wave radar recognizes the three-dimensional object and other targets as the surrounding targets, the three-dimensional object is recognized as a target different from the other targets.
- According to the present invention, a vehicle control device that can suppress erroneous grouping by the millimeter wave radar can be provided.
- FIG. 1 is a configuration explanatory diagram showing an example of a hardware configuration of a system including a controller (vehicle control device) according to a first embodiment of the present invention.
- FIG. 2 is a flowchart showing the process flow of a fusion program executed by the controller (vehicle control device) in the first embodiment of the present invention.
- FIG. 3 is an overhead view of the host vehicle and its surrounding objects at a certain time t0 in the first embodiment of the present invention.
- FIG. 4 is an overhead view of the host vehicle and its surrounding objects at a certain time t1 after a certain time has elapsed from time t0 in the first embodiment of the present invention.
- FIG. 5 is an explanatory diagram of the radar detection point cloud viewed on the coordinate system fixed at the time (time t1) of FIG. 4 in the first embodiment of the present invention.
- FIG. 6 is an overhead view of the host vehicle and its surrounding objects at a certain time t2 after a certain time has elapsed from time t1 in the first embodiment of the present invention.
- FIG. 7 is a flowchart showing the process flow of a fusion program executed by the controller (vehicle control device) according to a second embodiment of the present invention.
- FIG. 8 is an overhead view of the host vehicle and its surrounding objects at a certain time t3 in the second embodiment of the present invention.
- FIG. 9 is an overhead view of the host vehicle and its surrounding objects at time t3 in the second embodiment of the present invention, in which a dummy target is placed in the camera occlusion region of a stationary three-dimensional object O_OBJ.
- FIG. 10 is an overhead view of the host vehicle and its surrounding objects at a certain time t4 after a certain time has elapsed from time t3 in the second embodiment of the present invention.
- FIG. 11 is an explanatory diagram of the radar detection point cloud viewed on the fixed coordinate system at the time (time t4) of FIG. 10 in the second embodiment of the present invention.
- FIG. 12 is an overhead view of the host vehicle and its surrounding objects at a certain time t5 after a certain time has elapsed from time t4 in the second embodiment of the present invention.
- FIG. 13 is a flowchart showing the process flow of a fusion program executed by the controller (vehicle control device) according to a third embodiment of the present invention.
- FIG. 14 is an overhead view of the host vehicle and its surrounding objects at a certain time t6 in the third embodiment of the present invention.
- FIG. 15 is an overhead view of the host vehicle and its surrounding objects at time t6 in the third embodiment of the present invention, in which a dummy target is placed in the camera occlusion area of a pedestrian O_PED.
- FIG. 16 is an overhead view of the host vehicle and its surrounding objects at a certain time t7 after a certain time has elapsed from time t6 in the third embodiment of the present invention.
- FIG. 17 is an explanatory diagram of the radar detection point cloud viewed on the fixed coordinate system at the time (time t7) of FIG. 16 in the third embodiment of the present invention.
- FIG. 18 is an overhead view of the host vehicle and its surrounding objects at a certain time t8 after a certain time has elapsed from time t7 in the third embodiment of the present invention.
- the vehicle coordinate system is set with the vehicle's longitudinal (length) direction as the X-axis, the vehicle's lateral (width) direction as the Y-axis, the center point of the front end of the vehicle body as the origin, and angles set at 0 degrees in the forward direction of the vehicle and positive in the counterclockwise direction.
- FIG. 1 is a configuration diagram illustrating an example of the hardware configuration of a system including a controller (vehicle control device) in the first embodiment of the present invention.
- This system includes an external information acquisition unit H1, which is composed of a radar device (hereinafter, sometimes simply referred to as a radar or millimeter wave radar) H11 that detects the left and right front or sides, a camera device (hereinafter, sometimes simply referred to as a camera) H12 that detects the front, and the like.
- a vehicle information acquisition unit H2 including a speed sensor H21, a steering angle sensor H22, a yaw rate sensor H23, etc.
- a controller H3 that calculates fusion target information that integrates the detection results of the radar and the camera based on the external environment information from the external environment information acquisition unit H1 and the host vehicle information acquired by the host vehicle information acquisition unit H2, and outputs, as necessary, a warning command to the driver to avoid a collision with the fusion target, a brake command to stop the host vehicle, a steering command to turn the host vehicle, etc.
- and a vehicle control unit H4 composed of an alarm device H41 that performs alarm control based on the alarm command calculated by the controller H3, a brake system H42 that performs brake control based on the brake command calculated by the controller H3, and a steering system H43 that performs steering control based on the steering command calculated by the controller H3.
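To make the information flow between H1, H2, H3, and H4 concrete, the following is a minimal sketch of the data involved; every class and field name here is an illustrative assumption, not something defined in the patent.

```python
# Minimal sketch of the data exchanged between the units in FIG. 1 (names are assumptions).
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraObject:            # detection from the camera device H12
    x: float                   # X position [m], vehicle coordinate system
    y: float                   # Y position [m]
    width: float               # object width [m]
    attribute: str             # e.g. "stationary_3d_object", "pedestrian"

@dataclass
class RadarPoint:              # detection point from the radar device H11
    x: float
    y: float
    intensity: float           # radar reflection intensity
    groupable: bool = True     # cleared for points excluded from grouping (process S110)

@dataclass
class EgoInfo:                 # host vehicle information acquisition unit H2
    speed: float               # speed sensor H21
    steering_angle: float      # steering angle sensor H22
    yaw_rate: float            # yaw rate sensor H23

@dataclass
class ControlCommands:         # outputs of controller H3 toward vehicle control unit H4
    warn: bool = False                     # alarm device H41
    brake_request: Optional[float] = None  # brake system H42
    steer_request: Optional[float] = None  # steering system H43
```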
- Figure 2 shows the processing flow of the fusion program executed by the controller H3.
- First, in process S101, the camera detection results (X position, Y position, width, target attributes, etc.) are obtained from the external information acquisition unit H1.
- Next, in process S102, the radar detection point information (X position, Y position, radar reflection intensity, etc.) is obtained from the external information acquisition unit H1.
- Next, in process S103, the time axes of the radar detection point information and the camera detection results obtained in processes S101 and S102 are unified, and the radar detection point information and camera detection results at the same time are calculated.
- Next, in process S104, the position coordinates of the radar detection point information and the camera detection results at the same time obtained in process S103 are unified to the center of the front end of the vehicle body.
- Next, in process S105, it is determined whether or not a stationary three-dimensional object is present in the camera detection results obtained in process S104.
- If it is determined that a stationary three-dimensional object is present in the camera detection results (process S105: Yes), the coordinate origin is fixed in process S106, and the X position, Y position, and width of the stationary three-dimensional object detected by the camera are stored.
- Next, in process S108, the X and Y positions of the host vehicle relative to the fixed coordinate origin are estimated, together with the yaw angle measured with the yaw angle at the time of coordinate fixing taken as 0 degrees. Furthermore, the coordinate origin of the camera and radar detection results obtained in process S104 is moved to the fixed coordinate origin and rotated by the yaw angle.
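A minimal sketch of the process S108 transformation follows, assuming the estimated ego pose (x_ego, y_ego, yaw) is expressed in the fixed coordinate frame with yaw = 0 at the moment of fixing; the function and variable names are illustrative.

```python
import math

def to_fixed_frame(x_v, y_v, x_ego, y_ego, yaw):
    """Convert a detection (x_v, y_v) expressed in the current vehicle frame
    (origin at the center of the front end of the vehicle body) into the
    coordinate frame fixed in process S106, given the estimated ego pose
    (x_ego, y_ego, yaw) in that fixed frame (process S108)."""
    x_f = x_ego + x_v * math.cos(yaw) - y_v * math.sin(yaw)
    y_f = y_ego + x_v * math.sin(yaw) + y_v * math.cos(yaw)
    return x_f, y_f
```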
- On the other hand, if it is determined in process S105 that no stationary three-dimensional object is present in the camera detection results (process S105: No), and it is then determined in process S107 that the coordinate origin is currently fixed (process S107: Yes), process S108 is executed.
- Next, if it is determined in process S109 that the host vehicle has not passed the stationary three-dimensional object stored in process S106 (the stationary three-dimensional object detected by the camera) (process S109: No), then in process S110 the radar detection points near that stationary three-dimensional object are excluded from grouping.
- On the other hand, if it is determined in process S109 that the host vehicle has passed the stationary three-dimensional object stored in process S106 (process S109: Yes), the coordinate fixing is released in process S111.
- Next, in process S112, grouping processing is performed on the radar detection points other than those excluded from grouping. This is a process in which radar detection points that are close to each other are regarded as the same target and combined into a single representative detection point, as sketched below.
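The following is a minimal sketch of the process S112 idea; the greedy single-linkage strategy, the 1.0 m threshold, and the centroid representative point are assumptions, since the patent does not specify them. The points are assumed to carry the `groupable` flag from the RadarPoint sketch above.

```python
import math

def group_radar_points(points, max_dist=1.0):
    """Process S112 sketch: radar points closer than max_dist to a group member
    join that group; each group is reduced to a single representative point
    (here: the centroid). Points flagged as non-groupable are skipped."""
    groups = []
    for p in points:
        if not p.groupable:          # excluded in process S110
            continue
        for g in groups:
            if any(math.hypot(p.x - q.x, p.y - q.y) <= max_dist for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return [(sum(q.x for q in g) / len(g), sum(q.y for q in g) / len(g))
            for g in groups]
```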
- On the other hand, if it is determined in process S105 that no stationary three-dimensional object is present in the camera detection results (process S105: No), and it is then determined in process S107 that the coordinate origin is not fixed (process S107: No), process S112 is executed.
- FIG. 3 is an overhead view of the vehicle and its surrounding objects at a certain time t0 .
- In FIG. 3, a stationary three-dimensional object O_OBJ exists in the area where the detection area A_L of the radar device H11 and the detection area A_CAM of the camera device H12 overlap, and a pedestrian O_PED exists in the detection area A_L of the radar device H11.
- If the stationary three-dimensional object O_OBJ can be identified as a stationary three-dimensional object by the camera, the coordinate origin is fixed in process S106, and the X position, Y position, and width of the stationary three-dimensional object are recorded.
- Fig. 4 is an overhead view of the host vehicle and its surrounding objects at a certain time t1 after a certain time has elapsed from time t0 .
- A stationary three-dimensional object O_OBJ and a pedestrian O_PED are present in the detection area A_L of the radar device H11, outside the detection area A_CAM of the camera device H12.
- Fig. 5 shows the radar detection point cloud as viewed on the coordinates fixed in Fig. 4.
- In process S110, all radar detection points within a circle of diameter W_OBJ centered on the position (X'_OBJ(t0), Y'_OBJ(t0)) of the stationary three-dimensional object recorded in process S106 are assigned the non-grouping attribute.
- Next, in process S112, the grouping process is performed on the radar detection points other than those with the non-grouping attribute; a minimal sketch of the process S110 exclusion test is shown below.
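This sketch assumes the non-grouping attribute is represented by a boolean flag on each radar point; the function and parameter names are illustrative.

```python
import math

def mark_non_groupable(points, x_obj, y_obj, w_obj):
    """Process S110 sketch: give the non-grouping attribute to every radar
    detection point lying within the circle of diameter w_obj centered on the
    stored position (x_obj, y_obj) of the camera-detected stationary object."""
    for p in points:
        if math.hypot(p.x - x_obj, p.y - y_obj) <= w_obj / 2.0:
            p.groupable = False
    return points
```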
- FIG. 6 is an overhead view of the vehicle and its surrounding objects at a certain time t2 after a certain time has elapsed since time t1 .
- A stationary three-dimensional object O_OBJ and a pedestrian O_PED are present in the detection area A_L of the radar device H11, outside the detection area A_CAM of the camera device H12.
- FIG. 6 shows an example of a scene in which it is determined in process S109 that the vehicle has passed the stationary three-dimensional object recorded in process S106.
- The coordinates (X'_OBJ(t0), Y'_OBJ(t0)) of the stationary three-dimensional object recorded in process S106 are converted into the coordinate system whose X axis is the vehicle longitudinal direction, whose Y axis is the vehicle lateral direction, whose origin is the center point of the front end of the vehicle body, and in which the angle is 0 degrees in the vehicle forward direction and positive counterclockwise; the converted coordinates are denoted (X''_OBJ(t0), Y''_OBJ(t0)).
- When the overall length of the vehicle is L_OVERALL, if formula (1) holds, it is determined that the host vehicle has passed the stationary three-dimensional object.
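Formula (1) itself is not reproduced in this text. A plausible form, assuming the object is considered passed once its converted longitudinal coordinate lies behind the rear end of the host vehicle, would be:

```latex
% Assumed form of condition (1); the original equation is not reproduced here.
X''_{\mathrm{OBJ}}(t_0) < -L_{\mathrm{OVERALL}}
```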
- In the above description of Example 1, there is only one stationary three-dimensional object in order to avoid cluttering the diagram; if there are multiple stationary three-dimensional objects, the processing of Example 1 is performed on all stationary three-dimensional objects detected by the camera.
- Next, a second embodiment (Example 2) of the present invention will be described with reference to FIGS. 7 to 12.
- In the first embodiment, the scene was one in which the pedestrian, starting from a position apart from the three-dimensional object, moves close to the three-dimensional object; in the second embodiment, the processing will be described using an example of a scene in which the three-dimensional object and the pedestrian are already close to each other at the time of detection by the camera device and the radar device. Note that the same parts and processes in the second embodiment and the first embodiment are given the same reference numerals, and their description will not be repeated.
- the vehicle coordinate system is set with the vehicle's longitudinal (length) direction as the X-axis, the vehicle's lateral (width) direction as the Y-axis, the center point of the front end of the vehicle body as the origin, and angles set at 0 degrees in the forward direction of the vehicle and positive in the counterclockwise direction.
- FIG. 7 shows a flowchart of the fusion program processing executed by the controller H3 in Example 2.
- In process S201, it is determined whether the area, height, or width of the stationary three-dimensional object detected by the camera in process S106 is equal to or greater than a threshold value.
- If it is determined that the area, height, or width of the stationary three-dimensional object detected by the camera is equal to or greater than the threshold (process S201: Yes), the azimuth angle of the stationary three-dimensional object detected by the camera is calculated in process S202.
- The azimuth angle takes the camera installation position as its origin.
- In process S203, a dummy target is placed a certain distance beyond the position of the stationary three-dimensional object detected by the camera, in the azimuth direction obtained in process S202. This is equivalent to assuming that the camera cannot detect a target in the blind spot of the stationary three-dimensional object, in other words, that a target may be present in the camera occlusion area of the stationary three-dimensional object.
- Next, in process S204, the dummy target set in process S203 is retained as a grouping object for the radar. Then process S109 is performed, and if the determination in process S109 is No, process S110 is performed.
- Since processes S109 and S110 are the same as in Example 1, their explanation is omitted.
- On the other hand, if it is determined that the host vehicle has passed the stationary three-dimensional object (process S109: Yes), process S111 is carried out, and then in process S205 the dummy target is deleted from the radar's grouping objects.
- Process S111 is the same as in Example 1, so a description thereof will be omitted.
- step S112 is carried out, but as this is the same as in Example 1, a detailed explanation will be omitted.
- In process S206, it is determined whether a grouping object obtained in process S112 is present near the dummy target.
- If it is determined that a grouping object obtained in process S112 is present near the dummy target (process S206: Yes), the dummy target is removed from the radar's grouping objects in process S205, and the process ends.
- If not (process S206: No), the process ends.
- FIG. 8 is an overhead view of the vehicle and its surrounding objects at a certain time t3 .
- In FIG. 8, a stationary three-dimensional object O_OBJ and a pedestrian O_PED are present in the area where the detection area A_L of the radar device H11 and the detection area A_CAM of the camera device H12 overlap.
- If the stationary three-dimensional object O_OBJ can be identified as a stationary three-dimensional object by the camera, the coordinate origin is fixed and the X position, Y position, and width of the stationary three-dimensional object are recorded in process S106.
- The above-mentioned processing is performed while the camera can detect and identify the stationary three-dimensional object.
- In process S201, it is determined whether the area, height, or width of the stationary three-dimensional object is equal to or greater than a threshold value, based on equations (2), (3), and (4).
- The threshold values for area, height, and width are S_TH, H_TH, and W_TH, respectively.
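Equations (2) to (4) are not reproduced in this text. A plausible reading, assuming S_OBJ, H_OBJ, and W_OBJ denote the area, height, and width of the stationary three-dimensional object measured by the camera, is that the check passes when any of the following holds:

```latex
% Assumed forms of (2), (3), (4); the original equations are not reproduced here.
S_{\mathrm{OBJ}} \geq S_{\mathrm{TH}} \quad (2), \qquad
H_{\mathrm{OBJ}} \geq H_{\mathrm{TH}} \quad (3), \qquad
W_{\mathrm{OBJ}} \geq W_{\mathrm{TH}} \quad (4)
```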
- In process S202, the azimuth angle θ_OBJ(t3) of the stationary three-dimensional object O_OBJ, with the camera installation position as the origin, is calculated based on formula (5).
- Here, L_FCAM is the distance between the front end of the vehicle body and the camera.
- Next, in process S203, a dummy target is placed at a position (X'_DUM(t3), Y'_DUM(t3)) that is a distance L0 away from the position (X'_OBJ(t3), Y'_OBJ(t3)) of the stationary three-dimensional object in the direction of the azimuth angle θ_OBJ(t3).
- Radar grouping processing is a process in which nearby radar detection points are regarded as a single object, and the grouping range is generally defined as a circle. Therefore, L0 is calculated from the width W_OBJ of the stationary three-dimensional object detected by the camera and the diameter R_DUM of the radar grouping range using equation (6).
- X'_DUM(t3) and Y'_DUM(t3) are calculated using equations (7) and (8), respectively.
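The following sketch shows one plausible implementation of processes S202 and S203, assuming equation (5) is the arctangent of the object position as seen from the camera (mounted L_FCAM behind the front-end origin), equation (6) is L0 = W_OBJ/2 + R_DUM/2, and equations (7) and (8) extend the camera-to-object ray by L0. These exact forms are assumptions; the patent's equations are not reproduced here.

```python
import math

def place_dummy_target(x_obj, y_obj, w_obj, r_dum, l_fcam):
    """Sketch of processes S202/S203: place a dummy target in the camera
    occlusion region behind a stationary three-dimensional object.
    The formulas are plausible readings of equations (5)-(8), not verbatim."""
    # (5): azimuth of the object with the camera installation position as origin,
    #      assuming the camera sits l_fcam behind the front-end origin on the X axis
    theta_obj = math.atan2(y_obj, x_obj + l_fcam)
    # (6): offset so the dummy lands just beyond the object's grouping circle
    l0 = w_obj / 2.0 + r_dum / 2.0
    # (7), (8): move l0 further along the camera-to-object direction
    x_dum = x_obj + l0 * math.cos(theta_obj)
    y_dum = y_obj + l0 * math.sin(theta_obj)
    return x_dum, y_dum
```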
- In process S204, it is assumed that the dummy target has been detected by the radar, and the dummy target is treated as a grouping object for the radar.
- Fig. 10 is an overhead view of the host vehicle and its surrounding objects at a certain time t4 after a certain time has elapsed from time t3 .
- In FIG. 10, a stationary three-dimensional object O_OBJ, a pedestrian O_PED, and a dummy target O_DUM are present in the detection area A_L of the radar device H11, outside the detection area A_CAM of the camera device H12.
- Fig. 11 shows the radar detection point cloud as viewed on the coordinates fixed in Fig. 10.
- In process S110, all radar detection points within a circle of diameter W_OBJ centered on the position (X'_OBJ(t3), Y'_OBJ(t3)) of the stationary three-dimensional object recorded in process S106 are assigned the non-grouping attribute.
- Next, in process S112, the grouping process is performed on the radar detection points other than those with the non-grouping attribute.
- In process S206, if a grouping object obtained in process S112 exists within a circle of diameter R_DUM centered on the position (X'_DUM(t3), Y'_DUM(t3)) of the dummy target obtained in process S203, it is determined that the dummy target actually exists, and the dummy target is erased in process S205.
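A minimal sketch of the process S206 check follows, under the assumption that "near the dummy target" means within the grouping circle of diameter R_DUM; the names are illustrative.

```python
import math

def dummy_confirmed(group_reps, x_dum, y_dum, r_dum):
    """Process S206 sketch: True if any grouped representative point lies within
    the circle of diameter r_dum around the dummy target position, meaning a
    real target was found where the dummy had been assumed, so the dummy can
    be erased in process S205."""
    return any(math.hypot(x - x_dum, y - y_dum) <= r_dum / 2.0
               for (x, y) in group_reps)
```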
- Fig. 12 is an overhead view of the vehicle and its surrounding objects at a certain time t5 , a certain time after the time t4 .
- A stationary three-dimensional object O_OBJ and a pedestrian O_PED are present in the detection area A_L of the radar device H11, outside the detection area A_CAM of the camera device H12.
- Fig. 12 shows an example of a scene in which it is determined in step S109 that the vehicle has passed the stationary three-dimensional object recorded in step S106, as in Fig. 6, and therefore a detailed description of the process will be omitted.
- In the above description of Example 2, there is only one stationary three-dimensional object in order to avoid cluttering the diagram; if there are multiple stationary three-dimensional objects, the processing of Example 2 is performed on all stationary three-dimensional objects detected by the camera.
- the vehicle coordinate system is set with the vehicle's longitudinal (length) direction as the X-axis, the vehicle's lateral (width) direction as the Y-axis, the center point of the front end of the vehicle body as the origin, and angles set at 0 degrees in the forward direction of the vehicle and positive in the counterclockwise direction.
- FIG. 13 shows a flowchart of the fusion program processing executed by the controller H3 in Example 3.
- In process S301, it is determined whether or not a pedestrian (or two-wheeled vehicle) is present in the camera detection results obtained in process S104.
- If a pedestrian (or two-wheeled vehicle) is present (process S301: Yes), the azimuth angle of the pedestrian (or two-wheeled vehicle) detected by the camera is calculated in process S302.
- The azimuth angle takes the camera installation position as its origin.
- In process S303, a dummy target is placed a certain distance beyond the position of the pedestrian (or two-wheeled vehicle) detected by the camera, in the azimuth direction obtained in process S302. This is equivalent to assuming that the camera cannot detect a target in the blind spot of the pedestrian (or two-wheeled vehicle), in other words, that a target may be present in the camera occlusion area of the pedestrian (or two-wheeled vehicle).
- In process S304, the dummy target placed in process S303 is retained as a grouping object for the radar.
- Next, in process S305, the coordinate origin is fixed and the X and Y positions of the dummy target are recorded; then process S108 is performed, but since process S108 is the same as in Example 1, a detailed explanation is omitted.
- Next, in process S306, it is determined whether the host vehicle has passed the dummy target recorded in process S305.
- If the host vehicle has not passed the dummy target (process S306: No), the radar detection points near the dummy target are excluded from grouping in process S307.
- On the other hand, if the host vehicle has passed the dummy target (process S306: Yes), process S111 is carried out, and then process S205 is carried out.
- Process S111 is the same as in Example 1, and process S205 is the same as in Example 2, so a description thereof will be omitted.
- step S107 is performed. Step S107 is the same as in Example 1, so a description thereof will be omitted.
- step S112 is carried out, but as this is the same as in Example 1, a detailed explanation will be omitted.
- FIG. 14 is an overhead view of the vehicle and its surrounding objects at a certain time t6 .
- In FIG. 14, a stationary three-dimensional object O_OBJ and a pedestrian O_PED exist in the area where the detection area A_L of the radar device H11 and the detection area A_CAM of the camera device H12 overlap.
- Since the stationary three-dimensional object O_OBJ exists in the camera occlusion area of the pedestrian O_PED, the stationary three-dimensional object O_OBJ cannot be detected by the camera device H12.
- FIG. 15 is an overhead view of the host vehicle and its surrounding objects at time t6; in order to prevent the diagram from becoming too complicated, the stationary three-dimensional object O_OBJ that cannot be detected by the camera is omitted, and a dummy target is placed in the camera occlusion area of the pedestrian O_PED.
- the dummy target placement process will now be described.
- If the pedestrian O_PED can be identified as a pedestrian by the camera, then in process S302 the azimuth angle θ_PED(t6) of the pedestrian O_PED, with the camera installation position as the origin, is calculated based on equation (9).
- Next, in process S303, a dummy target is placed at a position (X'_DUM(t6), Y'_DUM(t6)) that is a distance L_P away in the direction of the azimuth angle θ_PED(t6) from the pedestrian position (X'_PED(t6), Y'_PED(t6)).
- L_P is calculated from the width W_PED of the pedestrian detected by the camera and the diameter R_DUM of the radar grouping range using equation (10), as in process S203 of the second embodiment.
- X'_DUM(t6) and Y'_DUM(t6) are calculated using equations (11) and (12), respectively.
- In process S304, it is assumed that the dummy target has been detected by the radar, and the dummy target is treated as a grouping object for the radar.
- Next, in process S305, the coordinate origin is fixed, and the X and Y positions of the dummy target are recorded.
- The above process is performed while the camera is able to detect and identify the pedestrian, and in FIG. 15 the coordinates of the dummy target just before the camera is no longer able to detect and identify the pedestrian are stored as (X'_DUM(t6), Y'_DUM(t6)).
- Fig. 16 is an overhead view of the host vehicle and its surrounding objects at a certain time t7 , which is a certain time after the time t6 .
- In FIG. 16, a stationary three-dimensional object O_OBJ, a pedestrian O_PED, and a dummy target O_DUM are present in the detection area A_L of the radar device H11, outside the detection area A_CAM of the camera device H12.
- Fig. 17 shows the radar detection point cloud as viewed on the coordinates fixed in Fig. 16.
- In process S307, all radar detection points within a circle of diameter R_DUM centered on the position (X'_DUM(t6), Y'_DUM(t6)) of the dummy target recorded in process S305 are assigned the non-grouping attribute.
- Next, in process S112, the grouping process is performed on the radar detection points other than those with the non-grouping attribute.
- In process S206, if a grouping object obtained in process S112 exists within the circle of diameter R_DUM centered on the position (X'_DUM(t6), Y'_DUM(t6)) of the dummy target recorded in process S305, it is determined that the dummy target actually exists, and the dummy target is erased in process S205.
- Fig. 18 is an overhead view of the host vehicle and its surrounding objects at a certain time t8 after a certain time has elapsed from time t7 .
- A stationary three-dimensional object O_OBJ and a pedestrian O_PED are present in the detection area A_L of the radar device H11, outside the detection area A_CAM of the camera device H12.
- Fig. 18 shows an example of a scene in which it is determined in step S306 that the host vehicle has passed the dummy object recorded in step S305, as in Fig. 6, and therefore a detailed description of the process will be omitted.
- In the above description of Example 3, there is only one stationary object in order to avoid cluttering the diagram; if there are multiple stationary objects, the processing of Example 3 is performed on all stationary objects detected by the camera.
- In this embodiment, when a moving object such as a pedestrian is detected by the camera and a stationary object cannot be detected by the camera due to camera occlusion by the pedestrian or the like, a dummy target is placed in the camera occlusion area of the pedestrian or the like (the area near the moving object that is blocked by the moving object), this dummy target is treated as a stationary object, and that stationary object is given the attribute of not being subject to grouping.
- This makes it possible to detect the stationary object separately (as a different target) in the radar detection area outside the camera detection area, without erroneously grouping the stationary object with a moving object such as a pedestrian nearby.
- Although only one radar device is described in this embodiment, multiple radar devices can also be used.
- As described above, the controller (vehicle control device) H3 of this embodiment has a camera that recognizes three-dimensional objects around the host vehicle and a millimeter wave radar that recognizes targets in the vicinity of the host vehicle, sets the three-dimensional object as an independent target based on the information of the three-dimensional object recognized by the camera (by giving the three-dimensional object an attribute that is not subject to grouping (by the millimeter wave radar)), and, when the millimeter wave radar recognizes the three-dimensional object and other targets as the surrounding targets, recognizes the three-dimensional object as a target different from the other targets.
- In addition, a first virtual target (dummy target) is set in an area near the three-dimensional object where the millimeter wave radar has difficulty detecting the three-dimensional object, and (by giving the three-dimensional object an attribute that is not subject to grouping (by the millimeter wave radar)) when the first virtual target is detected by the millimeter wave radar due to the movement of the host vehicle, the three-dimensional object and the first virtual target are recognized as different targets.
- If the area, height, or width of the three-dimensional object recognized by the camera is equal to or greater than a predetermined value (threshold), the first virtual target is placed, and if it is less than the predetermined value (threshold), the first virtual target is not placed.
- When the camera recognizes a moving target such as a pedestrian, a second virtual target (dummy target) is placed in the area near the moving target that is blocked by the moving target, and (by giving the second virtual target an attribute that is not subject to grouping (by the millimeter wave radar)) when the second virtual target is detected by the millimeter wave radar due to the movement of the host vehicle, the moving target and the second virtual target are recognized as different targets.
- In other words, this embodiment targets a vehicle equipped with an external information acquisition device that is composed of a millimeter wave radar and a camera and acquires information on the vehicle's traveling position and traveling environment. In a scene where surrounding obstacles present in front of and to the sides of the vehicle are detected, any target detected by the camera as an arbitrary three-dimensional object is excluded from grouping, and fusion is performed based on the grouping results of the radar detection points other than those excluded from grouping (Embodiment 1). In addition to the above-mentioned processing, a dummy target is placed in the camera occlusion area of the arbitrary three-dimensional object according to its height, width, area, etc. (Embodiment 2), or in the camera occlusion area of a moving object such as a pedestrian detected by the camera (Embodiment 3).
- As a result, a vehicle control device that can suppress erroneous grouping by the millimeter wave radar can be provided.
- The present invention is not limited to the above-mentioned embodiments, but includes various modifications.
- The above-mentioned embodiments are described in detail to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to having all of the configurations described.
- The above-mentioned configurations, functions, processing units, processing means, etc. may be realized in hardware, in part or in whole, for example by designing them as integrated circuits.
- The above-mentioned configurations, functions, etc. may also be realized in software, by a processor interpreting and executing a program that realizes each function.
- Information on the programs, tables, files, etc. that realize each function can be stored in a memory, a recording device such as a hard disk or SSD (Solid State Drive), or a recording medium such as an IC card, SD card, or DVD.
- H1: External information acquisition unit, H11: Radar device (millimeter wave radar), H12: Camera device (camera), H2: Vehicle information acquisition unit, H3: Controller (vehicle control device), H4: Vehicle control unit
Abstract
Provided is a vehicle control device capable of highly accurate sensing, by reducing incorrect grouping of millimeter-wave radars by using a camera x millimeter-wave radar algorithm, in driving assistance technologies. This vehicle control device comprises: a camera that recognizes a three-dimensional object around a host vehicle; and a millimeter-wave radar that recognizes peripheral targets of the host vehicle, sets the three-dimensional object as an independent target on the basis of the information on the three-dimensional object recognized by the camera, and when the millimeter-wave radar recognizes the three-dimensional object and another target as peripheral targets, recognizes the three-dimensional object as a target different from the other target.
Description
The present invention relates to a vehicle control device.
In automotive driving assistance control, systems that use forward sensors consisting of cameras and millimeter-wave radar to detect obstacles ahead and automatically apply the brakes to avoid a collision or mitigate damage if there is a risk of collision are becoming widespread.
In recent years, in order to improve safety, there has been a need to properly detect obstacles and apply the brakes even when an obstacle (pedestrian, bicycle, etc.) suddenly appears in front of the vehicle from outside the detection range of the forward sensor at intersections and crosswalks. Also, automobile assessments (such as EURO NCAP) have introduced protocols such as AEB pedestrian forward intersection protocols, which avoid collisions with pedestrians crossing the sidewalk in the same direction when the vehicle turns at an intersection.
These systems need to be able to detect obstacles well before a collision occurs. However, detection using forward sensors consisting of cameras and millimeter-wave radar is insufficient for obstacles that suddenly appear in front of the vehicle from outside the forward sensor's detection range.
Currently, due to cost considerations, cameras are not installed on the sides, and in most cases sensing is done using millimeter wave radar alone. However, while millimeter wave radar is generally cheaper than cameras, it has low detection accuracy, so there is a high risk of it not working if the vehicle is controlled based only on the sensing results of the millimeter wave radar. For example, when the vehicle turns at an intersection, the millimeter wave radar may mistakenly group a pedestrian crossing the sidewalk in the same direction with a (stationary) three-dimensional object in the vicinity, which could result in the pedestrian not being detected correctly and a collision with the pedestrian.
As a countermeasure, attention has been focused on camera x millimeter wave radar fusion technology, which uses millimeter wave radar and a camera to improve detection accuracy. For example, the following Patent Document 1 is disclosed.
Patent Document 1 discloses a technology that aims to improve the obstacle detection rate by determining whether a target is moving on the road surface based on target information obtained by radar and vehicle driving information.
Patent Document 1 is based on the assumption that the detection results of the target information obtained by the radar are correct, and does not take into account erroneous grouping by the radar. Therefore, if the radar erroneously groups moving targets such as pedestrians with stationary targets such as posts, there is a risk that the moving speed of the moving target will be detected as lower than it actually is.
The object of the present invention is to provide a vehicle control device capable of highly accurate sensing in driving assistance technology by suppressing erroneous grouping of millimeter wave radar using a camera x millimeter wave radar algorithm.
In order to solve the above problems, the vehicle control device according to the present invention has a camera that recognizes three-dimensional objects around the vehicle, and a millimeter wave radar that recognizes targets in the vicinity of the vehicle, and is characterized in that the three-dimensional object is set as an independent target based on information about the three-dimensional object recognized by the camera, and when the millimeter wave radar recognizes the three-dimensional object and other targets as the surrounding targets, it recognizes the three-dimensional object as a target different from the other targets.
The present invention provides a vehicle control device that can suppress erroneous grouping of millimeter wave radar.
Problems, configurations, and advantages other than those mentioned above will become clear from the description of the embodiments below.
The following describes an embodiment of the present invention with reference to the drawings.
[Example 1]
First, a first embodiment of the present invention will be described with reference to FIGS. 1 to 6.
In this embodiment, the vehicle coordinate system is set with the vehicle's longitudinal (length) direction as the X-axis, the vehicle's lateral (width) direction as the Y-axis, the center point of the front end of the vehicle body as the origin, and angles set at 0 degrees in the forward direction of the vehicle and positive in the counterclockwise direction.
FIG. 1 is a configuration diagram illustrating an example of the hardware configuration of a system including a controller (vehicle control device) in the first embodiment of the present invention.
This system includes an external information acquisition unit H1, which is composed of a radar device (hereinafter, sometimes simply referred to as a radar or millimeter wave radar) H11 that detects the left and right front or sides, a camera device (hereinafter, sometimes simply referred to as a camera) H12 that detects the front, and the like;
a vehicle information acquisition unit H2 including a speed sensor H21, a steering angle sensor H22, a yaw rate sensor H23, etc.;
a controller H3 that calculates fusion target information that integrates the detection results of the radar and the camera based on the external environment information from the external environment information acquisition unit H1 and the host vehicle information acquired by the host vehicle information acquisition unit H2, and outputs, as necessary, a warning command to the driver to avoid a collision with the fusion target, a brake command to stop the host vehicle, a steering command to turn the host vehicle, etc.;
The vehicle control unit H4 is composed of an alarm device H41 which performs alarm control based on an alarm command calculated by the controller H3, a brake system H42 which performs brake control based on a brake command calculated by the controller H3, and a steering system H43 which performs steering control based on a steering command calculated by the controller H3.
Figure 2 shows the processing flow of the fusion program executed by the controller H3.
First, in step S101, the camera detection results (X position, Y position, width, target attributes, etc.) are obtained from the external information acquisition unit H1.
Next, in process S102, radar detection point information (X position, Y position, radar reflection intensity, etc.) obtained from the external information acquisition unit H1 is acquired.
Next, in process S103, the time axes of the radar detection point information and camera detection results obtained in processes S101 and S102 are unified, and the radar detection point information and camera detection results at the same time are calculated.
Next, in process S104, the position coordinates of the radar detection point information and camera detection results obtained at the same time in process S103 are unified to the center of the front end of the vehicle body.
Next, in process S105, it is determined whether or not a stationary three-dimensional object is present in the camera detection results obtained in process S104.
If it is determined that a stationary three-dimensional object is present based on the camera's detection results (Process S105: Yes), the coordinate origin is fixed in Process S106, and the X position, Y position, and width of the stationary three-dimensional object detected by the camera are stored.
Next, in process S108, the X and Y positions of the host vehicle relative to the fixed coordinate origin are estimated, together with the yaw angle measured with the yaw angle at the time of coordinate fixing taken as 0 degrees. Furthermore, the coordinate origin of the camera and radar detection results obtained in process S104 is moved to the fixed coordinate origin and rotated by the yaw angle.
On the other hand, if it is determined in step S105 that the camera's detection results indicate that no stationary three-dimensional object exists (step S105: No), and then in step S107 it is determined that the coordinate origin is fixed (step S107: Yes), step S108 is executed.
Next, in process S109, if it is determined that the vehicle has not passed the stationary three-dimensional object stored in process S106 (stationary three-dimensional object detected by the camera) (process S109: No), in process S110, the radar detection points near the stationary three-dimensional object stored in process S106 (stationary three-dimensional object detected by the camera) are excluded from grouping.
On the other hand, if it is determined in step S109 that the vehicle has passed the stationary three-dimensional object stored in step S106 (step S109: Yes), the coordinate fix is released in step S111.
Next, in step S112, grouping processing is performed on radar detection points other than those that are not subject to grouping. This is a process in which radar detection points that are close to each other are regarded as the same target and are treated as a single detection representative point.
On the other hand, if it is determined in process S105 that the camera detection results indicate that no stationary three-dimensional object exists (process S105: No), and then in process S107 it is determined that the coordinate origin is not fixed (process S107: No), process S112 is executed.
The process is now complete.
A scene example for explaining the processing of the first embodiment more specifically is shown in the figures. As an example, the following scene is described in chronological order: before the host vehicle enters an intersection, a stationary three-dimensional object exists at the corner of the intersection and is detected by the radar device H11 and the camera device H12; a pedestrian is walking on the sidewalk on the left side of the host vehicle in the same traveling direction as the host vehicle; and the pedestrian is near the stationary three-dimensional object when the host vehicle turns at the intersection. FIG. 3 is an overhead view of the host vehicle and its surrounding objects at a certain time t0. In FIG. 3, a stationary three-dimensional object O_OBJ exists in the area where the detection area A_L of the radar device H11 and the detection area A_CAM of the camera device H12 overlap, and a pedestrian O_PED exists in the detection area A_L of the radar device H11. In FIG. 3, if the stationary three-dimensional object O_OBJ can be identified as a stationary three-dimensional object by the camera, the coordinate origin is fixed in process S106, and the X position, Y position, and width of the stationary three-dimensional object are recorded. The above-mentioned processing is performed while the camera can detect and identify the stationary three-dimensional object, and in FIG. 3 the coordinates of the stationary three-dimensional object at the time just before the camera can no longer detect and identify it are stored as (X'_OBJ(t0), Y'_OBJ(t0)) and its width as W_OBJ.
FIG. 4 is an overhead view of the host vehicle and its surrounding objects at a certain time t1 after a certain time has elapsed from time t0. In FIG. 4, a stationary three-dimensional object O_OBJ and a pedestrian O_PED are present in the detection area A_L of the radar device H11, outside the detection area A_CAM of the camera device H12.
FIG. 5 shows the radar detection point cloud as viewed on the coordinates fixed in FIG. 4. In FIG. 5, in process S110, all radar detection points within a circle of diameter W_OBJ centered on the position (X'_OBJ(t0), Y'_OBJ(t0)) of the stationary three-dimensional object recorded in process S106 are assigned the non-grouping attribute. Next, in process S112, the grouping process is performed on the radar detection points other than those with the non-grouping attribute.
FIG. 6 is an overhead view of the host vehicle and its surrounding objects at a certain time t2 after a certain time has elapsed from time t1. In FIG. 6, a stationary three-dimensional object O_OBJ and a pedestrian O_PED are present in the detection area A_L of the radar device H11, outside the detection area A_CAM of the camera device H12. FIG. 6 shows an example of a scene in which it is determined in process S109 that the host vehicle has passed the stationary three-dimensional object recorded in process S106. The coordinates (X'_OBJ(t0), Y'_OBJ(t0)) of the stationary three-dimensional object recorded in process S106 are converted into the coordinate system whose X axis is the vehicle longitudinal direction, whose Y axis is the vehicle lateral direction, whose origin is the center point of the front end of the vehicle body, and in which the angle is 0 degrees in the vehicle forward direction and positive counterclockwise; the converted coordinates are denoted (X''_OBJ(t0), Y''_OBJ(t0)). In this case, when the overall length of the vehicle is L_OVERALL, if formula (1) holds, it is determined that the host vehicle has passed the stationary three-dimensional object.
In the above explanation of Example 1, there is only one stationary three-dimensional object to avoid cluttering the diagram, but if there are multiple stationary three-dimensional objects, the processing of Example 1 is performed on all stationary three-dimensional objects detected by the camera.
In this embodiment, by assigning the attribute of non-grouping to stationary three-dimensional objects detected by the camera, it is possible to detect each separately (as different targets) without mistakenly grouping stationary three-dimensional objects with moving objects such as pedestrians nearby in the radar detection area outside the camera detection area. Also, although only one radar device is described in this embodiment, it is also applicable to multiple radar devices.
[Example 2]
Next, a second embodiment of the present invention will be described with reference to Figures 7 to 12. In the first embodiment, the scene is one in which the pedestrian moves from a state in which the pedestrian and the three-dimensional object are located apart to a state in which the pedestrian moves close to the three-dimensional object, but in the second embodiment, the processing will be described using an example of a scene in which the three-dimensional object and the pedestrian are already close to each other at the time of detection by the camera device and the radar device. Note that the same parts and processing in the second embodiment and the first embodiment are given the same reference numerals, and the description thereof will not be repeated.
本実施例では、車両前後(長さ)方向をX軸、車両左右(幅)方向をY軸、車両ボディ前端中心点を原点、角度に関しては車両前方方向を0度として反時計周りにプラスとして車両座標系を設定する。
In this embodiment, the vehicle coordinate system is set with the vehicle's longitudinal (length) direction as the X-axis, the vehicle's lateral (width) direction as the Y-axis, the center point of the front end of the vehicle body as the origin, and angles set at 0 degrees in the forward direction of the vehicle and positive in the counterclockwise direction.
図7に、実施例2における前記コントローラH3にて実行されるフュージョンプログラムの処理のフローチャートを示している。
FIG. 7 shows a flowchart of the fusion program processing executed by the controller H3 in Example 2.
始めの処理S101から処理S108までは実施例1と同じであるため、説明は割愛する。
The initial steps S101 to S108 are the same as in Example 1, so a detailed explanation will be omitted.
次に処理S201にて、処理S106でカメラで検知した静止立体物の面積または高さまたは幅が閾値以上であるか否かを判断する。
Next, in process S201, it is determined whether the area, height, or width of the stationary three-dimensional object detected by the camera in process S106 is equal to or greater than a threshold value.
カメラで検知した静止立体物の面積または高さまたは幅が閾値以上であると判断した場合(処理S201:Yes)、処理S202にて、カメラで検知した静止立体物の方位角を算出する。方位角はカメラ設置位置を原点とする。
If it is determined that the area, height, or width of the stationary three-dimensional object detected by the camera is equal to or greater than the threshold (Process S201: Yes), the azimuth angle of the stationary three-dimensional object detected by the camera is calculated in Process S202. The azimuth angle is determined based on the camera installation position.
次に処理S203にて、カメラで検知した静止立体物の位置から処理S202で得られた方位角方向に対して一定距離遠方にダミー物標を配置する。これは、静止立体物の死角にいる物標をカメラで検知できていない、つまり静止立体物のカメラオクルージョン領域に物標がいるかもしれないと仮定することに等しい。
Next, in process S203, a dummy target is placed a certain distance away from the position of the stationary three-dimensional object detected by the camera in the azimuth direction obtained in process S202. This is equivalent to assuming that the camera is unable to detect a target in the blind spot of the stationary three-dimensional object, in other words, that the target may be in the camera occlusion area of the stationary three-dimensional object.
次に処理S204にて、処理S203で設定したダミー物標をレーダのグルーピング物体として保持する。
Next, in process S204, the dummy target set in process S203 is retained as a grouping object for the radar.
次に処理S109を実施し、処理S109の判断がNoである場合、処理S110を実施するが、処理S109と処理S110は実施例1と同じであるため、説明は割愛する。
Next, step S109 is performed, and if the determination in step S109 is No, step S110 is performed. However, since steps S109 and S110 are the same as in Example 1, the explanation will be omitted.
一方、自車両が静止立体物を通り過ぎたと判断した場合(処理S109:Yes)、処理S111を実施し、次に処理S205にて、ダミー物標をレーダのグルーピング物体から消去する。処理S111は実施例1と同じであるため、説明は割愛する。
On the other hand, if it is determined that the vehicle has passed a stationary three-dimensional object (process S109: Yes), process S111 is carried out, and then in process S205, the dummy target is deleted from the radar's grouping objects. Process S111 is the same as in Example 1, so a description thereof will be omitted.
次に処理S112を実施するが、実施例1と同じであるため、説明は割愛する。
Next, step S112 is carried out, but as this is the same as in Example 1, a detailed explanation will be omitted.
次に処理S206にて、ダミー物標近傍に処理S112で得られたグルーピング物体が存在するか否かを判断する。
Next, in process S206, it is determined whether the grouping object obtained in process S112 is present near the dummy target.
ダミー物標近傍に処理S112で得られたグルーピング物体が存在すると判断した場合(処理S206:Yes)、処理S205にて、ダミー物標をレーダのグルーピング物体から消去し、処理を終了する。
If it is determined that the grouping object obtained in step S112 is present near the dummy target (step S206: Yes), the dummy target is removed from the radar's grouping objects in step S205, and the process ends.
一方、ダミー物標近傍に処理S112で得られたグルーピング物体が存在しないと判断した場合(処理S206:No)、そのまま処理を終了する。
On the other hand, if it is determined that the grouping object obtained in process S112 does not exist near the dummy target (process S206: No), the process ends.
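Since the flowchart of FIG. 7 is not reproduced here, the following small Python sketch reconstructs only the dummy-target lifecycle of the Example 2 specific steps (S201, S203/S204, S205, S206) from the text; the helper inputs stand in for checks computed elsewhere in the flow, and all names and values are illustrative rather than taken from the source.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FusionState:
    dummy: Optional[Tuple[float, float]] = None  # dummy target position, if any

def example2_dummy_lifecycle(state: FusionState,
                             object_is_large: bool,                 # result of S201
                             dummy_candidate: Tuple[float, float],  # from S202/S203
                             vehicle_passed_object: bool,           # result of S109
                             group_found_near_dummy: bool           # result of S206
                             ) -> FusionState:
    """One cycle of the Example 2 dummy-target handling, reconstructed from
    the text.  A dummy is kept while the occluding object is large, and it is
    discarded either once the vehicle has passed the object or once a real
    radar group is confirmed at the dummy position."""
    if object_is_large:
        state.dummy = dummy_candidate      # S203/S204: hold as a radar grouping object
    if vehicle_passed_object:
        state.dummy = None                 # S109 Yes -> S111/S205
    elif state.dummy is not None and group_found_near_dummy:
        state.dummy = None                 # S206 Yes -> S205
    return state

# Illustrative cycle: a large object creates a dummy, and a radar group later
# appears at the dummy position, so the dummy is dropped.
s = FusionState()
s = example2_dummy_lifecycle(s, True, (11.4, 4.9), False, True)
print(s.dummy)  # None
```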
より具体的に実施例2の処理を説明するためのシーン例を図で示す。例として自車両が交差点に進入する前において、交差点の角に静止立体物と歩行者が近傍位置に存在し、自車が交差点旋回時に歩行者が自車方向に近づいてくるようなシーンを時系列で説明する。図8は、ある時刻t3における自車両およびその周辺物体の俯瞰図である。図8では、レーダ装置H11の検知エリアALとカメラ装置H12の検知エリアACAMが重畳する領域において静止立体物OOBJおよび歩行者OPEDが存在している。ただし、歩行者OPEDは静止立体物OOBJのカメラオクルージョン領域に存在するため、カメラ装置H12では歩行者OPEDは検知できていないものとする。図8において、静止立体物OOBJをカメラで静止立体物であると識別できている場合、処理S106にて、座標原点を固定し、静止立体物のX位置、Y位置、幅を記録する。前述の処理はカメラで静止立体物を検知および識別できているときに実施され、図8では、カメラで静止立体物を検知および識別できなくなる直前の時刻における静止立体物の座標を(X'OBJ(t3),Y'OBJ(t3))、幅をWOBJ、高さをHOBJとして記憶する。
A scene example is shown in the figures to explain the processing of the second embodiment more specifically. As an example, the following scene is described in chronological order: before the host vehicle enters an intersection, a stationary three-dimensional object and a pedestrian are present close to each other at the corner of the intersection, and the pedestrian approaches the host vehicle while the vehicle turns at the intersection. FIG. 8 is an overhead view of the host vehicle and its surrounding objects at a certain time t3 . In FIG. 8, a stationary three-dimensional object O OBJ and a pedestrian O PED are present in an area where the detection area A L of the radar device H11 and the detection area A CAM of the camera device H12 overlap. However, since the pedestrian O PED is in the camera occlusion area of the stationary three-dimensional object O OBJ , it is assumed that the camera device H12 cannot detect the pedestrian O PED . In FIG. 8, if the camera can identify the stationary three-dimensional object O OBJ as a stationary three-dimensional object, the coordinate origin is fixed and the X position, Y position, and width of the stationary three-dimensional object are recorded in process S106. The above processing is performed while the camera can detect and identify the stationary three-dimensional object; in FIG. 8, the coordinates of the stationary three-dimensional object at the time immediately before the camera can no longer detect and identify it are stored as ( X' OBJ ( t3 ), Y' OBJ ( t3 )), its width as W OBJ , and its height as H OBJ .
図9は、時刻t3における自車両及びその周辺物体の俯瞰図であり、図が煩雑になることを防ぐため、カメラで検知できていない歩行者OPEDを消去し、静止立体物OOBJのカメラオクルージョン領域にダミー物標を配置した図である。ダミー物標配置処理について説明する。始めに処理S201にて、式(2)、(3)、(4)に基づいて静止立体物の面積または高さまたは幅が閾値以上であるかを判断する。面積、高さ、幅の各閾値はSTH、HTH、WTHとする。
FIG. 9 is an overhead view of the vehicle and its surrounding objects at time t3 , and in order to prevent the diagram from becoming too complicated, the pedestrian O PED that has not been detected by the camera is removed, and a dummy target is placed in the camera occlusion area of the stationary three-dimensional object O OBJ . The dummy target placement process will be described. First, in process S201, it is determined whether the area, height, or width of the stationary three-dimensional object is equal to or greater than a threshold value based on equations (2), (3), and (4). The threshold values for area, height, and width are S TH , H TH , and W TH .
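A minimal sketch of the S201 size check follows; equations (2)-(4) are not reproduced in the text, so the form below (each of area, height, and width compared against its own threshold, with any one comparison sufficing) is inferred from the verbal description, and the threshold values are illustrative only.

```python
def occlusion_relevant(area_m2: float, height_m: float, width_m: float,
                       s_th: float = 2.0, h_th: float = 1.5, w_th: float = 1.0) -> bool:
    """S201: treat the stationary object as large enough to create a
    radar-relevant occlusion if any one of its dimensions reaches the
    corresponding threshold (assumed forms of equations (2)-(4))."""
    return area_m2 >= s_th or height_m >= h_th or width_m >= w_th

print(occlusion_relevant(3.0, 1.2, 0.8))  # True: the area alone exceeds S_TH
```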
次に式(2)、(3)、(4)のいずれかが成立した場合、処理S202にて、(5)式に基づきカメラ設置位置を原点とした静止立体物OOBJの方位角θOBJ(t3)を算出する。
ここで、LFCAMは車両ボディ前端-カメラ間距離とする。
Next, if any of the formulas (2), (3), and (4) is satisfied, in step S202, the azimuth angle θ OBJ (t 3 ) of the stationary three-dimensional object O OBJ with the camera installation position as the origin is calculated based on the formula (5).
Here, L FCAM is the distance between the front end of the vehicle body and the camera.
次に処理S203にて、静止立体物の位置(X'OBJ(t3),Y'OBJ(t3))から方位角θOBJ(t3)方向にある距離LO遠方の位置(X'DUM(t3),Y'DUM(t3))にダミー物標を配置する。前述したようにレーダのグルーピング処理は近くにあるレーダ検知点を一つの物体とみなす処理であり、そのグルーピング範囲は一般に円で決定されている。そのため、LOはカメラで検知した静止立体物の幅WOBJとレーダのグルーピング範囲の直径RDUMから(6)式により算出する。
Next, in process S203, a dummy target is placed at a position ( X' DUM ( t3 ), Y' DUM ( t3 )) a distance L O away from the position ( X' OBJ ( t3 ), Y' OBJ ( t3 )) of the stationary three-dimensional object in the direction of azimuth angle θ OBJ ( t3 ). As mentioned above, radar grouping processing is a process in which nearby radar detection points are regarded as a single object, and the grouping range is generally determined as a circle. Therefore, L O is calculated from the width W OBJ of the stationary three-dimensional object detected by the camera and the diameter R DUM of the radar grouping range using equation (6).
また、X'DUM(t3),Y'DUM(t3)はそれぞれ(7)、(8)式により算出する。
Moreover, X' DUM (t 3 ) and Y' DUM (t 3 ) are calculated using equations (7) and (8), respectively.
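To make the geometry concrete, here is a small Python sketch of the dummy-target placement in processes S202/S203. Formulas (5)-(8) are not reproduced in the text, so the atan2 form of the azimuth (with the camera assumed to sit L FCAM behind the front-end origin on the vehicle X axis), the (W OBJ + R DUM)/2 offset, and the cos/sin step along the line of sight are all assumed forms chosen to match the verbal description; the numeric values are illustrative.

```python
import math

def place_dummy_target(x_obj: float, y_obj: float, w_obj: float,
                       r_dum: float, l_fcam: float):
    """Place a dummy target behind a camera-detected stationary object,
    further along the camera line of sight (assumed forms of (5)-(8))."""
    # (5): azimuth of the object as seen from the camera, which is assumed to
    # sit l_fcam behind the front-end origin on the vehicle X axis.
    theta = math.atan2(y_obj, x_obj + l_fcam)
    # (6): offset so that the dummy's grouping circle clears the object width
    # (assumed form combining W_OBJ and R_DUM).
    l_o = (w_obj + r_dum) / 2.0
    # (7), (8): step from the object position away from the camera.
    return x_obj + l_o * math.cos(theta), y_obj + l_o * math.sin(theta)

# Illustrative values: object 10 m ahead and 4 m to the left, 1.5 m wide,
# radar grouping diameter 2 m, camera 1 m behind the front end.
print(place_dummy_target(10.0, 4.0, 1.5, 2.0, 1.0))
```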
次に処理S204にて、ダミー物標をレーダで検知したと仮定し、レーダのグルーピング物体とする。
Next, in process S204, it is assumed that the dummy target has been detected by the radar, and is treated as a grouping object for the radar.
図10は、時刻t3からある時間経過したある時刻t4における自車両及びその周辺物体の俯瞰図である。図10では、カメラ装置H12の検知エリアACAM外のレーダ装置H11の検知エリアALにおいて静止立体物OOBJおよび歩行者OPEDおよびダミー物標ODUMが存在している。
Fig. 10 is an overhead view of the host vehicle and its surrounding objects at a certain time t4 after a certain time has elapsed from time t3 . In Fig. 10, a stationary three-dimensional object O OBJ , a pedestrian O PED , and a dummy target O DUM are present in a detection area A L of the radar device H11 outside a detection area A CAM of the camera device H12.
図11は、図10のときに固定された座標上で見たレーダ検知点群を示している。図11において、処理S110にて、処理S106で記録した静止立体物の位置(X'OBJ(t3),Y'OBJ(t3))を中心とした直径WOBJの円内のレーダ検知点はすべてグルーピング対象外という属性を付与する。次に処理S112にて、グルーピング対象外属性以外のレーダ検知点でグルーピング処理を実施する。次に処理S206にて、処理S203で得られたダミー物標の位置(X'DUM(t3),Y'DUM(t3))を中心とした直径RDUMの円内に処理S112で得られたグルーピング物体が存在する場合は、ダミー物標が実際に存在していると判断し、処理S205にて、ダミー物標を消去する。
Fig. 11 shows the radar detection point cloud as viewed on the coordinates fixed in Fig. 10. In Fig. 11, in process S110, all radar detection points within a circle of diameter W OBJ centered on the position (X' OBJ (t 3 ), Y' OBJ (t 3 )) of the stationary three-dimensional object recorded in process S106 are assigned an attribute of non-grouping target. Next, in process S112, grouping process is performed on radar detection points other than those with the non-grouping target attribute. Next, in process S206, if the grouping object obtained in process S112 exists within a circle of diameter R DUM centered on the position (X' DUM (t 3 ), Y' DUM (t 3 )) of the dummy object obtained in process S203, it is determined that the dummy object actually exists, and in process S205, the dummy object is erased.
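The point-cloud handling of S110, S112, and S206 can be sketched as follows; a very simple distance-chaining cluster stands in for the radar's actual grouping step, the circle tests use the diameters W OBJ and R DUM named in the text, and the points and positions are illustrative only.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def group_points(points, grouping_diameter):
    """Very small stand-in for radar grouping: chain points closer than the
    grouping diameter into clusters and return the cluster centroids."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(dist(p, q) <= grouping_diameter for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c)) for c in clusters]

def fuse(points, obj_pos, w_obj, dummy_pos, r_dum):
    # S110: points inside the circle of diameter W_OBJ around the recorded
    # stationary object are excluded from grouping.
    kept = [p for p in points if dist(p, obj_pos) > w_obj / 2.0]
    # S112: group the remaining points.
    groups = group_points(kept, r_dum)
    # S206/S205: if a grouped object falls inside the dummy's circle of
    # diameter R_DUM, the dummy is considered confirmed and can be removed.
    dummy_confirmed = any(dist(g, dummy_pos) <= r_dum / 2.0 for g in groups)
    return groups, dummy_confirmed

# Illustrative point cloud: wall echoes near (10, 4) and a pedestrian near (11.5, 5).
pts = [(10.0, 4.0), (10.2, 3.9), (9.9, 4.1), (11.5, 5.0), (11.6, 5.1)]
print(fuse(pts, obj_pos=(10.0, 4.0), w_obj=1.5, dummy_pos=(11.4, 4.9), r_dum=2.0))
```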
図12は、時刻t4からある時間経過したある時刻t5における自車両およびその周辺物体の俯瞰図である。図12では、カメラ装置H12の検知エリアACAM外のレーダ装置H11の検知エリアALにおいて静止立体物OOBJおよび歩行者OPEDが存在している。図12は、図6と同様に、処理S109で自車両が処理S106において記録した静止立体物を通りすぎたと判断するシーン例を示しているため、具体的な処理についての説明は割愛する。
Fig. 12 is an overhead view of the vehicle and its surrounding objects at a certain time t5 , a certain time after the time t4 . In Fig. 12, a stationary three-dimensional object O OBJ and a pedestrian O PED are present in a detection area A L of the radar device H11 outside a detection area A CAM of the camera device H12. Fig. 12 shows an example of a scene in which it is determined in step S109 that the vehicle has passed the stationary three-dimensional object recorded in step S106, as in Fig. 6, and therefore a detailed description of the process will be omitted.
上記実施例2の説明では、図が煩雑になることを防ぐため、静止立体物は1つとしているが、静止立体物が複数存在する場合、実施例2の処理はカメラで検知したすべての静止立体物に対して実施する。
In the explanation of Example 2 above, only one stationary three-dimensional object is shown in order to avoid cluttering the diagram; if multiple stationary three-dimensional objects are present, the processing of Example 2 is performed on every stationary three-dimensional object detected by the camera.
本実施例では、カメラで検知した静止立体物にはグルーピング対象外という属性を付与することおよび静止立体物が大きい場合にはそのカメラオクルージョン領域(立体物によってミリ波レーダの検知が困難となる立体物の近傍領域)に物標が存在すると仮定することで、カメラの検知エリアから外れたレーダ検知エリアにおいて静止立体物とその近傍にいる歩行者等の移動物体とを誤ってグルーピングすることなく、各々を分離して(異なる物標として)検知することが可能である。また、本実施例では、レーダ装置は1つしか記載していないが、レーダ装置が複数でも適用可能である。
In this embodiment, by assigning an attribute to stationary three-dimensional objects detected by the camera that indicates that they are not subject to grouping, and by assuming that the target exists in the camera occlusion area (the area near the object that makes it difficult for the millimeter wave radar to detect it) if the stationary three-dimensional object is large, it is possible to detect each separately (as different targets) in a radar detection area outside the camera detection area, without mistakenly grouping stationary three-dimensional objects and moving objects such as pedestrians nearby. Also, although only one radar device is described in this embodiment, multiple radar devices can also be used.
[実施例3]
[Example 3]
次に、本発明の実施例3を図13~図18を参照しながら説明する。実施例1、2とは異なるシーンとして、歩行者等の移動物体が静止立体物と自車両との間に存在し、静止立体物をカメラで検知できないシーンを例にとり処理を説明する。なお、実施例3と上記実施例1、2とで同じ部位や同じ処理には同じ符号を付し、その説明は繰り返さない。
Next, a third embodiment of the present invention will be described with reference to Figures 13 to 18. As a scene different from those of the first and second embodiments, a scene in which a moving object such as a pedestrian exists between a stationary three-dimensional object and the vehicle itself, and the stationary three-dimensional object cannot be detected by the camera, will be taken as an example to describe the processing. Note that the same parts and processing in the third embodiment and those in the first and second embodiments will be given the same reference numerals, and the description thereof will not be repeated.
本実施例では、車両前後(長さ)方向をX軸、車両左右(幅)方向をY軸、車両ボディ前端中心点を原点、角度に関しては車両前方方向を0度として反時計周りにプラスとして車両座標系を設定する。
In this embodiment, the vehicle coordinate system is set with the vehicle's longitudinal (length) direction as the X-axis, the vehicle's lateral (width) direction as the Y-axis, the center point of the front end of the vehicle body as the origin, and angles set at 0 degrees in the forward direction of the vehicle and positive in the counterclockwise direction.
図13に、実施例3における前記コントローラH3にて実行されるフュージョンプログラムの処理のフローチャートを示している。
FIG. 13 shows a flowchart of the fusion program processing executed by the controller H3 in Example 3.
始めの処理S101から処理S104までは実施例1と同じであるため、説明は割愛する。
The initial steps S101 to S104 are the same as in Example 1, so a detailed explanation will be omitted.
次に処理S301にて、処理S104で得られたカメラの検知結果に歩行者(または2輪車)が存在するか否かを判断する。
Next, in process S301, it is determined whether or not a pedestrian (or two-wheeled vehicle) is present in the camera detection results obtained in process S104.
カメラの検知結果に歩行者(または2輪車)が存在すると判断した場合(処理S301:Yes)、処理S302にて、カメラで検知した歩行者(または2輪車)の方位角を算出する。方位角はカメラ設置位置を原点とする。
If it is determined that a pedestrian (or two-wheeled vehicle) is present in the camera detection results (Process S301: Yes), the azimuth angle of the pedestrian (or two-wheeled vehicle) detected by the camera is calculated in Process S302. The azimuth angle is set to originate from the camera installation position.
次に処理S303にて、カメラで検知した歩行者(または2輪車)の位置から処理S302で得られた方位角方向に対して一定距離遠方にダミー物標を配置する。これは、歩行者(または2輪車)の死角にいる物標をカメラで検知できていない、つまり歩行者(または2輪車)のカメラオクルージョン領域に物標がいるかもしれないと仮定することに等しい。
Next, in process S303, a dummy target is placed a certain distance away from the position of the pedestrian (or motorcycle) detected by the camera in the azimuth direction obtained in process S302. This is equivalent to assuming that the camera is unable to detect a target in the blind spot of the pedestrian (or motorcycle), in other words, that the target may be in the camera occlusion area of the pedestrian (or motorcycle).
次に処理S304にて、処理S303で配置したダミー物標をレーダのグルーピング物体として保持する。
Next, in process S304, the dummy target placed in process S303 is retained as a grouping object for the radar.
次にS305で、座標原点を固定し、ダミー物標の位置を記憶する。
Next, in S305, the coordinate origin is fixed and the position of the dummy target is stored.
次に処理S108を実施するが、処理S108は実施例1と同じであるため、説明は割愛する。
Next, step S108 is performed, but since step S108 is the same as in Example 1, a detailed explanation will be omitted.
次に処理S306で、自車両が処理S305で得られたダミー物標を通り過ぎたか否かを判断する。
Next, in process S306, it is determined whether the vehicle has passed the dummy target obtained in process S305.
自車両がダミー物標を通り過ぎていない場合(処理S306:No)、処理S307にて、ダミー物標近傍のレーダ検知点をグルーピング対象外とする。
If the vehicle has not passed the dummy target (step S306: No), the radar detection points near the dummy target are excluded from grouping in step S307.
一方、自車両がダミー物標を通り過ぎたと判断した場合(処理S306:Yes)、処理S111を実施し、次に処理S205を実施する。処理S111は実施例1と同じであり、処理S205は実施例2と同じであるため、説明は割愛する。
On the other hand, if it is determined that the vehicle has passed the dummy target (process S306: Yes), process S111 is carried out, and then process S205 is carried out. Process S111 is the same as in Example 1, and process S205 is the same as in Example 2, so a description thereof will be omitted.
一方、処理S301にて、カメラの検知結果に歩行者(または2輪車)が存在しないと判断した場合(処理S301:No)、処理S107を実施する。処理S107は実施例1と同じであるため、説明は割愛する。
On the other hand, if it is determined in step S301 that no pedestrian (or two-wheeled vehicle) is present in the camera detection results (step S301: No), step S107 is performed. Step S107 is the same as in Example 1, so a description thereof will be omitted.
次に処理S112を実施するが、実施例1と同じであるため、説明は割愛する。
Next, step S112 is carried out, but as this is the same as in Example 1, a detailed explanation will be omitted.
以降の処理S206、処理S205は実施例2と同じであるため、説明は割愛する。
The subsequent steps S206 and S205 are the same as those in Example 2, so the explanation will be omitted.
以上で処理を終了する。
This completes the process.
より具体的に実施例3の処理を説明するためのシーン例を図で示す。例として自車両が交差点に進入する前において、自車両の左前方の歩道を歩行者が自車両の進行方向と同一方向に移動しており、交差点の角に存在する立体物がカメラの検知領域から外れるまで常にその歩行者のカメラオクルージョン領域に存在するようなシーンを時系列で説明する。図14は、ある時刻t6における自車両およびその周辺物体の俯瞰図である。図14では、レーダ装置H11の検知エリアALとカメラ装置H12の検知エリアACAMが重畳する領域において静止立体物OOBJおよび歩行者OPEDが存在している。ただし、静止立体物OOBJは歩行者OPEDのカメラオクルージョン領域に存在するため、カメラ装置H12では静止立体物OOBJは検知できていないものとする。
A scene example is shown in the figures to explain the processing of the third embodiment more specifically. As an example, the following scene is described in chronological order: before the host vehicle enters an intersection, a pedestrian on the sidewalk ahead and to the left of the vehicle is moving in the same direction as the vehicle, and a three-dimensional object at the corner of the intersection remains in that pedestrian's camera occlusion area until it leaves the camera's detection area. FIG. 14 is an overhead view of the host vehicle and its surrounding objects at a certain time t6 . In FIG. 14, a stationary three-dimensional object O OBJ and a pedestrian O PED are present in the area where the detection area A L of the radar device H11 and the detection area A CAM of the camera device H12 overlap. However, since the stationary three-dimensional object O OBJ is in the camera occlusion area of the pedestrian O PED , it is assumed that the camera device H12 cannot detect the stationary three-dimensional object O OBJ .
図15は、時刻t6における自車両及びその周辺物体の俯瞰図であり、図が煩雑になることを防ぐため、カメラで検知できていない静止立体物OOBJを消去し、歩行者OPEDのカメラオクルージョン領域にダミー物標を配置した図である。ダミー物標配置処理について説明する。図15において、歩行者OPEDをカメラで歩行者であると識別できている場合、処理S302にて、(9)式に基づきカメラ設置位置を原点とした歩行者OPEDの方位角θPED(t6)を算出する。
Fig. 15 is an overhead view of the vehicle and its surrounding objects at time t6 , and in order to prevent the diagram from becoming too complicated, stationary three-dimensional objects O OBJ that cannot be detected by the camera are removed, and dummy targets are placed in the camera occlusion area of the pedestrian O PED . The dummy target placement process will now be described. In Fig. 15, if the pedestrian O PED can be identified as a pedestrian by the camera, in process S302, the azimuth angle θ PED ( t6 ) of the pedestrian O PED with the camera installation position as the origin is calculated based on equation (9).
次に処理S303にて、歩行者の位置(X'PED(t6),Y'PED(t6))から方位角θPED(t6)方向に距離LP遠方の位置(X'DUM(t6),Y'DUM(t6))にダミー物標を配置する。LPは、実施例2の処理S203と同様にカメラで検知した歩行者の幅WPEDとレーダのグルーピング範囲の直径RDUMから(10)式により算出する。
Next, in process S303, a dummy target is placed at a position ( X'DUM ( t6 ), Y'DUM ( t6 )) a distance L P away in the azimuth angle θ PED ( t6 ) direction from the pedestrian position ( X'PED ( t6 ), Y'PED ( t6 )). L P is calculated from the width W PED of the pedestrian detected by the camera and the diameter R DUM of the radar grouping range using equation (10), as in process S203 of the second embodiment.
また、X'DUM(t6),Y'DUM(t6)はそれぞれ(11)、(12)式により算出する。
Moreover, X' DUM (t 6 ) and Y' DUM (t 6 ) are calculated by the equations (11) and (12), respectively.
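The placement behind the pedestrian follows the same pattern as in Example 2; the short sketch below evaluates assumed forms of (9)-(12) inline (the equations are not reproduced in the text), with the camera again assumed to sit L FCAM behind the front-end origin and all numeric values illustrative.

```python
import math

# Illustrative inputs.
x_ped, y_ped = 8.0, 3.5   # (X'_PED(t6), Y'_PED(t6)) in metres
w_ped = 0.6               # pedestrian width from the camera
r_dum = 2.0               # diameter R_DUM of the radar grouping range
l_fcam = 1.0              # front-end-to-camera distance L_FCAM

theta_ped = math.atan2(y_ped, x_ped + l_fcam)   # assumed form of (9)
l_p = (w_ped + r_dum) / 2.0                     # assumed form of (10)
x_dum = x_ped + l_p * math.cos(theta_ped)       # assumed form of (11)
y_dum = y_ped + l_p * math.sin(theta_ped)       # assumed form of (12)
print(round(x_dum, 2), round(y_dum, 2))
```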
次に処理S304にて、ダミー物標をレーダで検知したと仮定し、レーダのグルーピング物体とする。
Next, in step S304, it is assumed that the dummy target has been detected by the radar, and is treated as a grouping object for the radar.
次に処理S305にて、座標原点を固定し、ダミー物標のX位置、Y位置を記録する。前述の処理はカメラで歩行者を検知および識別できているときに実施され、図15では、カメラで歩行者を検知および識別できなくなる直前の時刻におけるダミー物標の座標を(X'DUM(t6),Y'DUM(t6))として記憶する。
Next, in process S305, the coordinate origin is fixed, and the X and Y positions of the dummy object are recorded. The above process is performed when the camera is able to detect and identify pedestrians, and in Fig. 15, the coordinates of the dummy object just before the camera is no longer able to detect and identify pedestrians are stored as ( X'DUM ( t6 ), Y'DUM ( t6 )).
図16は、時刻t6からある時間経過したある時刻t7における自車両及びその周辺物体の俯瞰図である。図16では、カメラ装置H12の検知エリアACAM外のレーダ装置H11の検知エリアALにおいて静止立体物OOBJおよび歩行者OPEDおよびダミー物標ODUMが存在している。
Fig. 16 is an overhead view of the host vehicle and its surrounding objects at a certain time t7 , which is a certain time after the time t6 . In Fig. 16, a stationary three-dimensional object O OBJ , a pedestrian O PED , and a dummy target O DUM are present in a detection area A L of the radar device H11 outside a detection area A CAM of the camera device H12.
図17は、図16のときに固定された座標上で見たレーダ検知点群を示している。図17において、処理S307にて、処理S305で記録したダミー物標の位置(X'DUM(t6),Y'DUM(t6))を中心とした直径RDUMの円内のレーダ検知点はすべてグルーピング対象外という属性を付与する。次に処理S112にて、グルーピング対象外属性以外のレーダ検知点でグルーピング処理を実施する。次に処理S206にて、処理S305で得られたダミー物標の位置(X'DUM(t6),Y'DUM(t6))を中心とした直径RDUMの円内に処理S112で得られたグルーピング物体が存在する場合は、ダミー物標が実際に存在していると判断し、処理S205にて、ダミー物標を消去する。
Fig. 17 shows the radar detection point cloud as viewed on the coordinates fixed in Fig. 16. In Fig. 17, in process S307, all radar detection points within a circle of diameter R DUM centered on the position (X' DUM (t 6 ), Y' DUM (t 6 )) of the dummy target recorded in process S305 are assigned an attribute of non-grouping. Next, in process S112, grouping process is performed on radar detection points other than those with the non-grouping attribute. Next, in process S206, if the grouping object obtained in process S112 exists within the circle of diameter R DUM centered on the position (X' DUM (t 6 ), Y' DUM (t 6 )) of the dummy target obtained in process S305, it is determined that the dummy target actually exists, and the dummy target is erased in process S205.
図18は、時刻t7からある時間経過したある時刻t8における自車両およびその周辺物体の俯瞰図である。図18では、カメラ装置H12の検知エリアACAM外のレーダ装置H11の検知エリアALにおいて静止立体物OOBJおよび歩行者OPEDが存在している。図18は、図6と同様に処理S306で自車両が処理S305において記録したダミー物標を通りすぎたと判断するシーン例を示しているため、具体的な処理についての説明は割愛する。
Fig. 18 is an overhead view of the host vehicle and its surrounding objects at a certain time t8 after a certain time has elapsed from time t7 . In Fig. 18, a stationary three-dimensional object O OBJ and a pedestrian O PED are present in a detection area A L of the radar device H11 outside a detection area A CAM of the camera device H12. Fig. 18 shows an example of a scene in which it is determined in step S306 that the host vehicle has passed the dummy target recorded in step S305, as in Fig. 6, and therefore a detailed description of the process will be omitted.
上記実施例3の説明では、図が煩雑になることを防ぐため、静止立体物は1つとしているが、静止立体物が複数存在する場合、実施例3の処理はカメラで検知したすべての静止立体物に対して実施する。
In the explanation of Example 3 above, only one stationary three-dimensional object is shown in order to avoid cluttering the diagram; if multiple stationary three-dimensional objects are present, the processing of Example 3 is performed on every stationary three-dimensional object detected by the camera.
本実施例では、自車両と静止立体物の間に歩行者等の移動物体が存在し、静止立体物が歩行者等のカメラオクルージョンとなってカメラで検知できない場合には、歩行者等のカメラオクルージョン領域(移動物体により遮蔽される移動物体の近傍領域)にダミー物標を配置し、それを静止立体物として静止立体物にはグルーピング対象外という属性を付与することで、カメラの検知エリアから外れたレーダ検知エリアにおいて静止立体物とその近傍にいる歩行者等の移動物体とを誤ってグルーピングすることなく、各々を分離して(異なる物標として)検知することが可能である。また、本実施例では、レーダ装置は1つしか記載していないが、レーダ装置が複数でも適用可能である。
In this embodiment, when a moving object such as a pedestrian is present between the host vehicle and a stationary three-dimensional object and the stationary three-dimensional object cannot be detected by the camera because it lies in the camera occlusion of the pedestrian or the like, a dummy target is placed in the camera occlusion area of the pedestrian or the like (the area near the moving object that is blocked by the moving object), the dummy target is treated as a stationary three-dimensional object, and the attribute of being excluded from grouping is given to that stationary three-dimensional object. This makes it possible, in the radar detection area outside the camera detection area, to detect the stationary three-dimensional object and a nearby moving object such as a pedestrian separately (as different targets) without erroneously grouping them. Also, although only one radar device is described in this embodiment, the embodiment is also applicable when there are multiple radar devices.
以上で説明したように、本実施例のコントローラ(車両制御装置)H3は、自車両周辺にある立体物を認識するカメラと、前記自車両の周辺物標を認識するミリ波レーダと、を有し、前記カメラにより認識した前記立体物の情報に基づき(前記立体物に(ミリ波レーダの)グルーピング対象外の属性を付与することで)前記立体物を独立した物標として設定し、前記ミリ波レーダが前記立体物と他の物標を前記周辺物標として認識する際、前記立体物を前記他の物標とは異なる物標として認識する。
As described above, the controller (vehicle control device) H3 of this embodiment has a camera that recognizes three-dimensional objects around the vehicle, and a millimeter wave radar that recognizes targets in the vicinity of the vehicle, and sets the three-dimensional object as an independent target based on the information of the three-dimensional object recognized by the camera (by giving the three-dimensional object an attribute that is not subject to grouping (by the millimeter wave radar)), and when the millimeter wave radar recognizes the three-dimensional object and other targets as the surrounding targets, it recognizes the three-dimensional object as a target different from the other targets.
また、前記立体物によって前記ミリ波レーダの検知が困難となる前記立体物の近傍領域に第1の仮想物標(ダミー物標)を設定し、(前記立体物に(ミリ波レーダの)グルーピング対象外の属性を付与することで、)前記自車両の移動により前記第1の仮想物標が前記ミリ波レーダにより検知された場合、前記立体物と前記第1の仮想物標とを異なる物標として認識する。
In addition, a first virtual target (dummy target) is set in an area near the three-dimensional object where the millimeter wave radar has difficulty detecting the three-dimensional object, and (by giving the three-dimensional object an attribute that is not subject to grouping (by the millimeter wave radar)) when the first virtual target is detected by the millimeter wave radar due to the movement of the vehicle, the three-dimensional object and the first virtual target are recognized as different targets.
また、前記立体物の面積、幅、高さの少なくとも1つが所定の値(閾値)以上である場合に前記第1の仮想物標を配置し、前記所定の値(閾値)よりも小さい場合は前記第1の仮想物標を配置しない。
In addition, if at least one of the area, width, and height of the three-dimensional object is equal to or greater than a predetermined value (threshold), the first virtual target is placed, and if it is less than the predetermined value (threshold), the first virtual target is not placed.
また、前記カメラによって移動物標を認識した場合、前記移動物標により遮蔽される前記移動物標の近傍領域に第2の仮想物標(ダミー物標)を配置し、(前記第2の仮想物標に(ミリ波レーダの)グルーピング対象外の属性を付与することで、)前記自車両の移動により前記第2の仮想物標が前記ミリ波レーダにより検知された場合、前記移動物標と前記第2の仮想物標とを異なる物標として認識する。
In addition, when a moving target is recognized by the camera, a second virtual target (dummy target) is placed in the vicinity of the moving target that is blocked by the moving target, and (by giving the second virtual target an attribute that is not subject to grouping (by the millimeter wave radar)) when the second virtual target is detected by the millimeter wave radar due to the movement of the host vehicle, the moving target and the second virtual target are recognized as different targets.
すなわち、本実施例は、ミリ波レーダとカメラで構成される、自車両の走行する位置及び走行環境の情報等を取得する外界情報取得装置を備えた車両を対象とし、自車の前側方や側方に存在する周囲の障害物を検知するシーンにて、カメラで任意立体物と検知した物標はグルーピング対象外とし、グルーピング対象外以外のレーダ検知点のグルーピング結果に基づいてフュージョンすること(実施例1)、および前述の処理に加え、カメラで任意立体物と検知した物標の高さ、幅、面積等に応じて任意立体物のカメラオクルージョン領域にダミー物標を配置してフュージョンすること(実施例2)、およびカメラで歩行者あるいは2輪車と検知した物標はそのカメラオクルージョン領域にダミー物標を配置し、そのダミー物標はグルーピング対象外とし、グルーピング対象外以外のレーダ検知点のグルーピング結果に基づいてフュージョンすること(実施例3)により高精度なセンシングが可能な車両制御装置を提供する。
In other words, this embodiment targets a vehicle equipped with an external information acquisition device, composed of a millimeter wave radar and a camera, that acquires information such as the position at which the vehicle is traveling and its driving environment. In scenes where surrounding obstacles in front of or to the side of the vehicle are detected, a vehicle control device capable of highly accurate sensing is provided by: excluding from grouping any target that the camera has detected as an arbitrary three-dimensional object, and performing fusion based on the grouping results of the remaining radar detection points (Embodiment 1); in addition to the above processing, placing a dummy target in the camera occlusion area of the arbitrary three-dimensional object according to its height, width, area, and the like, and performing fusion (Embodiment 2); and, for a target that the camera has detected as a pedestrian or two-wheeled vehicle, placing a dummy target in its camera occlusion area, excluding that dummy target from grouping, and performing fusion based on the grouping results of the remaining radar detection points (Embodiment 3).
本実施例によれば、ミリ波レーダの誤グルーピングを抑制することが可能な車両制御装置を提供することができる。
According to this embodiment, a vehicle control device can be provided that can suppress erroneous grouping of millimeter wave radar.
なお、本発明は上述の実施例に限定されるものではなく、様々な変形例が含まれる。例えば、上記した実施例は本発明を分かりやすく説明するために詳細に説明したものであり、必ずしも説明した全ての構成を備えるものに限定されるものではない。
The present invention is not limited to the above-mentioned embodiment, but includes various modifications. For example, the above-mentioned embodiment is described in detail to explain the present invention in an easy-to-understand manner, and is not necessarily limited to having all of the configurations described.
また、上記の各構成、機能、処理部、処理手段等は、それらの一部又は全部を、例えば集積回路で設計する等によりハードウェアで実現してもよい。また、上記の各構成、機能等は、プロセッサがそれぞれの機能を実現するプログラムを解釈し、実行することによりソフトウェアで実現してもよい。各機能を実現するプログラム、テーブル、ファイル等の情報は、メモリや、ハードディスク、SSD(Solid State Drive)等の記録装置、または、ICカード、SDカード、DVD等の記録媒体に置くことができる。
Furthermore, the above-mentioned configurations, functions, processing units, processing means, etc. may be realized in hardware, in part or in whole, for example by designing them as integrated circuits. The above-mentioned configurations, functions, etc. may be realized in software, by a processor interpreting and executing a program that realizes each function. Information on the programs, tables, files, etc. that realize each function can be stored in a memory, a recording device such as a hard disk or SSD (Solid State Drive), or a recording medium such as an IC card, SD card, or DVD.
H1 外界情報取得部
H11 レーダ装置(ミリ波レーダ)
H12 カメラ装置(カメラ)
H2 自車両情報取得部
H3 コントローラ(車両制御装置)
H4 車両制御部
H1 External information acquisition unit
H11 Radar device (millimeter wave radar)
H12 Camera device (camera)
H2 Host vehicle information acquisition unit
H3 Controller (vehicle control device)
H4 Vehicle control unit
Claims (7)
- 自車両周辺にある立体物を認識するカメラと、前記自車両の周辺物標を認識するミリ波レーダと、を有し、前記カメラにより認識した前記立体物の情報に基づき前記立体物を独立した物標として設定し、前記ミリ波レーダが前記立体物と他の物標を前記周辺物標として認識する際、前記立体物を前記他の物標とは異なる物標として認識することを特徴とする車両制御装置。
A vehicle control device comprising a camera that recognizes three-dimensional objects around a host vehicle and a millimeter wave radar that recognizes targets around the host vehicle, wherein the three-dimensional object is set as an independent target based on information of the three-dimensional object recognized by the camera, and when the millimeter wave radar recognizes the three-dimensional object and other targets as the surrounding targets, the three-dimensional object is recognized as a target different from the other targets.
- 請求項1に記載の車両制御装置であって、前記立体物によって前記ミリ波レーダの検知が困難となる前記立体物の近傍領域に第1の仮想物標を設定し、前記自車両の移動により前記第1の仮想物標が前記ミリ波レーダにより検知された場合、前記立体物と前記第1の仮想物標とを異なる物標として認識することを特徴とする車両制御装置。
The vehicle control device according to claim 1, wherein a first virtual target is set in a region near the three-dimensional object where detection by the millimeter wave radar is made difficult by the three-dimensional object, and when the first virtual target is detected by the millimeter wave radar due to movement of the host vehicle, the three-dimensional object and the first virtual target are recognized as different targets.
- 請求項1に記載の車両制御装置であって、前記立体物の面積、幅、高さの少なくとも1つが所定の値以上である場合に前記第1の仮想物標を配置し、前記所定の値よりも小さい場合は前記第1の仮想物標を配置しないことを特徴とする車両制御装置。
The vehicle control device according to claim 1, wherein the first virtual target is placed when at least one of the area, width, and height of the three-dimensional object is equal to or greater than a predetermined value, and the first virtual target is not placed when it is smaller than the predetermined value.
- 請求項1に記載の車両制御装置であって、前記カメラによって移動物標を認識した場合、前記移動物標により遮蔽される前記移動物標の近傍領域に第2の仮想物標を配置し、前記自車両の移動により前記第2の仮想物標が前記ミリ波レーダにより検知された場合、前記移動物標と前記第2の仮想物標とを異なる物標として認識することを特徴とする車両制御装置。
The vehicle control device according to claim 1, wherein, when a moving target is recognized by the camera, a second virtual target is placed in a region near the moving target that is blocked by the moving target, and when the second virtual target is detected by the millimeter wave radar due to movement of the host vehicle, the moving target and the second virtual target are recognized as different targets.
- 請求項1に記載の車両制御装置であって、前記立体物にグルーピング対象外の属性を付与することで前記立体物を前記他の物標とは異なる物標として認識することを特徴とする車両制御装置。
The vehicle control device according to claim 1, wherein the three-dimensional object is recognized as a target different from the other targets by giving the three-dimensional object an attribute of being excluded from grouping.
- 請求項2に記載の車両制御装置であって、前記立体物にグルーピング対象外の属性を付与することで前記立体物と前記第1の仮想物標とを異なる物標として認識することを特徴とする車両制御装置。
The vehicle control device according to claim 2, wherein the three-dimensional object and the first virtual target are recognized as different targets by giving the three-dimensional object an attribute of being excluded from grouping.
- 請求項4に記載の車両制御装置であって、前記第2の仮想物標にグルーピング対象外の属性を付与することで前記移動物標と前記第2の仮想物標とを異なる物標として認識することを特徴とする車両制御装置。
The vehicle control device according to claim 4, wherein the moving target and the second virtual target are recognized as different targets by giving the second virtual target an attribute of being excluded from grouping.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2023/002942 WO2024161481A1 (en) | 2023-01-30 | 2023-01-30 | Vehicle control device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2023/002942 WO2024161481A1 (en) | 2023-01-30 | 2023-01-30 | Vehicle control device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024161481A1 true WO2024161481A1 (en) | 2024-08-08 |
Family
ID=92146192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/002942 WO2024161481A1 (en) | 2023-01-30 | 2023-01-30 | Vehicle control device |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024161481A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009231937A (en) * | 2008-03-19 | 2009-10-08 | Mazda Motor Corp | Surroundings monitoring device for vehicle |
JP2011198114A (en) * | 2010-03-19 | 2011-10-06 | Honda Motor Co Ltd | Vehicle surrounding monitoring device |
JP2011242860A (en) * | 2010-05-14 | 2011-12-01 | Toyota Motor Corp | Obstacle recognition apparatus |
WO2017170798A1 (en) * | 2016-03-31 | 2017-10-05 | 株式会社デンソー | Object recognition device and object recognition method |
- 2023
- 2023-01-30 WO PCT/JP2023/002942 patent/WO2024161481A1/en unknown
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009231937A (en) * | 2008-03-19 | 2009-10-08 | Mazda Motor Corp | Surroundings monitoring device for vehicle |
JP2011198114A (en) * | 2010-03-19 | 2011-10-06 | Honda Motor Co Ltd | Vehicle surrounding monitoring device |
JP2011242860A (en) * | 2010-05-14 | 2011-12-01 | Toyota Motor Corp | Obstacle recognition apparatus |
WO2017170798A1 (en) * | 2016-03-31 | 2017-10-05 | 株式会社デンソー | Object recognition device and object recognition method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102572784B1 (en) | Driver assistance system and control method for the same | |
WO2017014080A1 (en) | Driving assistance system | |
JP5083404B2 (en) | Pre-crash safety system | |
JP6361592B2 (en) | Vehicle control device | |
CN109572690B (en) | Vehicle control device | |
WO2017104773A1 (en) | Moving body control device and moving body control method | |
US20210394752A1 (en) | Traveling Control Device, Vehicle, and Traveling Control Method | |
CN114523963B (en) | System and method for predicting road collisions with host vehicles | |
WO2017171082A1 (en) | Vehicle control device and vehicle control method | |
WO2016052586A1 (en) | Driving assistance device | |
WO2018092590A1 (en) | Vehicle control device and vehicle control method | |
US20180144199A1 (en) | Vehicle vision | |
US11235741B2 (en) | Vehicle and control method for the same | |
CN114987455A (en) | Collision avoidance assistance device | |
US20190302771A1 (en) | Method for identifying objects in a traffic space | |
WO2019207639A1 (en) | Action selection device, action selection program, and action selection method | |
US20200242941A1 (en) | Driver assistance system, and control method the same | |
CN113511197A (en) | Method, apparatus and storage medium for predicting blind zone collision when self-vehicle turns | |
JP7340097B2 (en) | A method for tracking a remote target vehicle within a peripheral area of a motor vehicle using collision recognition means | |
CN113511198B (en) | Method, apparatus and storage medium for predicting blind zone collision when self-vehicle turns | |
WO2017104413A1 (en) | Object detection device and object detection method | |
WO2024161481A1 (en) | Vehicle control device | |
US10053092B2 (en) | Road environment recognition device, vehicle control device, and vehicle control method | |
CN114364588B (en) | Vehicle collision determination device | |
WO2024121995A1 (en) | Vehicle control device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23919625; Country of ref document: EP; Kind code of ref document: A1 |