
WO2023188685A1 - Computation processing device - Google Patents

Computation processing device

Info

Publication number
WO2023188685A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection range
vehicle
sensor
image
detection
Prior art date
Application number
PCT/JP2023/000607
Other languages
French (fr)
Japanese (ja)
Inventor
真 荒木
Original Assignee
本田技研工業株式会社
Priority date
Filing date
Publication date
Application filed by 本田技研工業株式会社
Publication of WO2023188685A1

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to an arithmetic processing device.
  • An arithmetic processing device is known that is equipped with a sensor that detects the surrounding situation of a vehicle and recognizes surrounding objects based on the detected information.
  • This type of device includes a vehicle speed sensor that detects the vehicle speed of the target vehicle, and a camera and a distance measurement sensor that detect the surrounding conditions of the target vehicle, and switches the display between an object detected from the camera image and an object detected by the distance measurement sensor according to the vehicle speed (for example, Patent Document 1).
  • The present invention has been made in view of the above circumstances, and aims to improve the detection speed of surrounding objects and to reduce the cost required for detecting objects.
  • To achieve the above object, an arithmetic processing device includes a first sensor that detects vehicle information of a vehicle and a second sensor that detects the surrounding situation of the vehicle, and recognizes surrounding objects based on the detection information of the second sensor.
  • The device includes a detection range determination unit that determines a detection range of the surrounding situation based on the detection result of the first sensor, an image generation unit that generates a recognition image consisting of an image of the detection range based on the detection information of the second sensor, and an image recognition unit that performs image recognition of an object to be detected on the recognition image.
  • The recognition image is characterized by having a smaller amount of data than the image of the surrounding situation obtained from the detection information of the second sensor.
  • FIG. 1 is a diagram showing a driving support device according to an embodiment of the arithmetic processing device of the present invention.
  • FIG. 2 is a diagram showing an example of an image taken by a camera.
  • FIG. 3 is a diagram for explaining the basic operation of the driving support device.
  • FIG. 4 is a flowchart showing the detection range determination process.
  • FIG. 5 is a diagram schematically showing a road, a photographed image, and a detection range when the vehicle is turning to the right.
  • FIG. 6 is a diagram schematically showing a road, a detection range, and a center position when the vehicle is traveling straight.
  • FIG. 7 is a diagram showing an example of the angle of view of a photographed image and the angle of view of a detection range.
  • FIG. 1 is a diagram showing a driving support device according to an embodiment of the arithmetic processing device of the present invention.
  • This driving support device 10 is a device mounted on the vehicle 1, and has a function of recognizing objects existing around the vehicle 1 and notifying the recognition results.
  • In this embodiment, the vehicle 1 is a motorcycle.
  • The vehicle 1 may, however, be a vehicle other than a motorcycle; for example, it may be any saddle-ride type vehicle, or any vehicle other than a saddle-ride type vehicle.
  • the driving support device 10 includes a surrounding detection sensor 11, a vehicle sensor 12, a calculation section 13, a storage section 14, and an output section 15.
  • the surroundings detection sensor 11 is a sensor that detects the surroundings of the vehicle 1, and includes a camera 21 that photographs at least the front of the vehicle 1.
  • FIG. 2 is a diagram showing an example of an image GC taken by the camera 21. From the perspective of photographing objects existing around the vehicle 1, the camera 21 has specifications such as the angle of view (horizontal and vertical) and resolution, and a photographing range, that are set so that it can photograph the road (road surface) G1 on which the vehicle 1 is traveling as well as other vehicles G2, facilities G3, and sidewalks G4 located ahead of the vehicle 1 and to the front left and right.
  • The vehicle sensor 12 is a sensor that detects vehicle information DM. More specifically, the vehicle sensor 12 includes a vehicle speed sensor 31 that detects the speed of the vehicle 1 (hereinafter referred to as the vehicle speed), a steering angle sensor 32 that detects the steering angle of the vehicle 1 (also referred to as the steering angle of the front wheels), and a tilt angle sensor 33 that detects the tilt angle of the vehicle 1, and outputs vehicle information DM consisting of the vehicle speed, steering angle, and tilt angle of the vehicle 1.
  • The tilt angle detected by the tilt angle sensor 33 is the tilt angle of the vehicle body to the left and right, for example, the roll angle.
  • the arithmetic unit 13 includes an arithmetic processing device capable of performing various arithmetic processes.
  • The arithmetic unit 13 functions as a detection range determination unit 41, an image generation unit 42, and an image recognition unit 43 by using the arithmetic processing device to execute the control program DP stored in the storage unit 14.
  • the detection range determination unit 41 performs a surrounding situation determination process to determine a detection range R of the surrounding situation.
  • the image generation unit 42 performs image generation processing to generate a recognition image GT consisting of an image of the detection range R from the captured image GC of the camera 21.
  • the image recognition unit 43 performs image recognition processing on the recognition image GT to recognize the object to be detected.
  • Objects to be detected are objects that may affect the running of the vehicle 1 or objects for which it is desirable to notify the occupants, such as other vehicles, obstacles, and pedestrians.
  • the storage unit 14 is a storage device that stores various data used by the driving support device 10.
  • the storage unit 14 stores detection range determination data DR in addition to the control program DP.
  • The detection range determination data DR is information for determining the detection range R based on the vehicle speed, the steering angle, and the tilt angle to the left and right (hereinafter referred to as the "left/right tilt angle") of the vehicle 1. For example, map data for determining the detection range R from the three parameters consisting of the vehicle speed, the steering angle, and the left/right tilt angle of the vehicle 1 is applied as the detection range determination data DR.
  • The detection range determination data DR satisfies at least the following three conditions.
  • <First condition>
  • The detection range determination data DR is formed such that the higher the vehicle speed is above a predetermined value, the smaller the angle of view (horizontal and vertical) corresponding to the size of the detection range R, and the lower the vehicle speed is below the predetermined value, the larger the angle of view.
  • At low speeds, the amount of information change in the captured image GC is generally relatively small, so increasing the angle of view of the detection range R as the vehicle speed decreases makes it easier to recognize objects over a wider range.
  • For example, the angle of view when the vehicle speed equals the predetermined value is taken as the threshold size SS; when the vehicle speed is greater than the predetermined value, the angle of view is reduced to the threshold size SS or less, and when the vehicle speed is smaller than the predetermined value, the angle of view is increased to the threshold size SS or more.
  • the predetermined value and the threshold size SS may be set to appropriate values as long as the speed and angle of view allow appropriate image recognition of the object.
  • <Second condition>
  • The detection range determination data DR is formed as data for correcting the left/right tilt of the detection range R according to the left/right tilt angle. Since the camera 21 tilts as the vehicle 1 leans to the left or right, the tilt of the camera 21 can be corrected by tilting the detection range R in the opposite direction by the same amount. The detection range R can thus be set so that the image does not tilt between an image captured while the vehicle 1 is leaning and the images captured before and after the lean. This makes it easier to improve the object detection accuracy and to shorten the detection time when the object to be detected is recognized from the image of the detection range R.
  • <Third condition>
  • Based on the vehicle speed, the steering angle, and the left/right tilt angle of the vehicle 1, the course of the vehicle 1 can be estimated by a known method, and it is also possible to estimate whether the vehicle 1 is turning.
  • The detection range determination data DR is formed to set the center position PA of the detection range R so as to include the course corresponding to the turning destination.
  • In this case, the center position PA is set on the road G1 on which the vehicle 1 travels.
  • When the vehicle 1 is turning, that is, when the road G1 curves, the road G1 is not located directly in front of the camera 21, and an object to be detected ahead may not be detected. Therefore, when the vehicle 1 is turning, the detection range determination data DR corrects the center position PA of the detection range R to a position that includes the course corresponding to the turning destination rather than the position directly in front of the camera 21, as shown in FIG. 5 described later. Furthermore, the detection range determination data DR rotates the detection range R in the opposite direction by the left/right tilt angle. As a result, part of the detection range R can be made to overlap the course, which makes it easier to detect an object at the turning destination when the object to be detected is recognized from the detection range R.
  • The output unit 15 is a device that receives the recognition result of the image recognition unit 43 and outputs information based on the recognition result.
  • the output unit 15 of this embodiment includes a first notification unit 51 that notifies the occupant of the vehicle 1 of the recognition result, and a second notification unit 52 that notifies other vehicles and the like of the recognition result.
  • the first notification unit 51 performs a process of causing a display device or an audio output device provided in the vehicle 1 to notify the presence of the object to be detected, for example, by display or audio as a recognition result.
  • the second notification unit 52 includes a communication device that realizes wireless communication with other vehicles existing around the vehicle 1.
  • the second notification section 52 transmits information indicating the recognition result of the image recognition section 43 to other vehicles or the like via a communication device.
  • The destination vehicle is, for example, a vehicle traveling behind the vehicle 1, but is not limited to a vehicle behind the vehicle 1. Note that either the second notification unit 52 or the first notification unit 51 may be omitted from the output unit 15. Further, the configuration of the output unit 15 and the output destination are not limited to the above and may be changed as appropriate.
  • FIG. 3 is a flowchart showing the basic operation of the driving support device 10.
  • the flow shown in FIG. 3 is a process that is repeatedly executed when the vehicle 1 is running or the like.
  • the driving support device 10 starts acquiring vehicle information DM detected by the vehicle sensor 12 and acquiring an image GC captured by the camera 21 (step S1).
  • the detection range determination unit 41 determines the detection range R based on the vehicle information DM and using the detection range determination data DR (step S2: detection range determination processing).
  • For ease of explanation, FIG. 3 illustrates a case in which the detection range determination unit 41 has determined a rectangular detection range R, smaller than the photographing range of the photographed image GC, at the center position of the photographed image GC corresponding to the front center of the vehicle 1.
  • This detection range R corresponds to an area where another vehicle G2 (see FIG. 2) traveling in front of the vehicle 1 exists.
  • the driving support device 10 uses the image generation unit 42 to generate a recognition image GT consisting of an image of the detection range R from the photographed image GC (step S3: image generation processing).
  • the image recognition unit 43 performs image recognition processing for recognizing the object to be detected on the recognition image GT (step S4).
  • a wide range of known image recognition techniques can be applied to this image recognition, and for example, the object to be detected is determined by image recognition of color and shape.
  • the recognition image GT includes another vehicle G2 running in front of the vehicle 1, so the vehicle G2 is detected as the object to be detected by the process of step S4.
  • If the object to be detected is detected (step S5: YES), the driving support device 10 performs a process of notifying the recognition result through the output unit 15 (step S6). On the other hand, if the object to be detected is not detected (step S5: NO), the driving support device 10 ends the flow shown in FIG. 3 for the time being. The above is the basic operation of the driving support device 10.
  • FIG. 4 is a flowchart showing the detection range determination process.
  • FIG. 5 is a diagram schematically showing the road G1, the photographed image GC, and the detection range R when the vehicle 1 is turning to the right. As shown in FIG. 5, when the vehicle 1 is turning to the right, the vehicle 1 leans to the right, so the photographed image GC is tilted by an angle θK (corresponding to the left/right tilt angle) with respect to the horizontal plane.
  • As shown in FIG. 4, the detection range determination unit 41 estimates the course of the vehicle 1 based on the vehicle speed, the steering angle, and the left/right tilt angle detected by the vehicle sensor 12 (step S11), and calculates the center position PA of the detection range R based on the estimated course (step S12).
  • In the case of the driving situation shown in FIG. 5, a position on the road G1 on which the vehicle 1 is traveling, shifted toward the right side (the turning direction) within the photographing range of the photographed image GC, is calculated as the center position PA.
  • As a result, a range including the turning destination location on the road G1 is set as the detection range R.
  • FIG. 6 schematically shows the road G1, the detection range R, and the center position PA when the vehicle 1 is traveling straight.
  • the detection range R and center position PA during high speed driving are shown as the detection range RH and center position PAH
  • the detection range R and center position PA during low speed driving are shown as the detection range RL and center position PAL.
  • the center position PAL during low-speed driving is a position close to the front of the vehicle 1 on the road G1 on which the vehicle 1 travels.
  • The center position PAH during high-speed travel is a forward position far from the vehicle 1 on the road G1 on which the vehicle 1 travels.
  • the detection range determination unit 41 calculates the angle ⁇ A of the detection range R based on the left and right inclination angles (step S13).
  • the angle ⁇ A is an angle that corrects the inclination of the detection range R due to the left and right inclination of the vehicle 1.
  • In the case of the driving situation shown in FIG. 5, the angle θA is the angle that cancels the left/right tilt angle θK, that is, θA = -θK.
  • Note that in the case of the driving situation shown in FIG. 6, the left/right tilt angle is zero, so the angle θA is zero.
  • FIG. 7 is a diagram showing an example of the angle of view of the captured image GC and the angle of view of the detection range R. As shown in FIG. 7, two types of angles of view (XR, YR) and (XR1, YR1) are illustrated as the angle of view (horizontal angle of view, vertical angle of view) of the detection range R. The angle of view of the detection range R is smaller than the angle of view (XS, YS) of the captured image GC.
  • the angle of view (XR, YR) is an example of the angle of view when the vehicle speed is smaller than a predetermined value
  • the angle of view (XR1, YR1) is an example of the angle of view when the vehicle speed is higher than a predetermined value.
  • the angle of view (for example, the angle of view (XR, YR)) when the vehicle speed is lower than the predetermined value is greater than or equal to the predetermined threshold size SS. Further, the angle of view (for example, the angle of view (XR1, YR1)) when the vehicle speed is higher than the predetermined value is less than or equal to the threshold size SS.
  • the angle of view of the detection range R is set to a smaller angle of view as the vehicle speed is higher than the predetermined value, and is set to be smaller than the angle of view of the captured image GC. Therefore, the recognition image GT generated by the image generation unit 42 becomes image data with a smaller amount of data than the photographed image GC.
  • Although FIG. 5 illustrates a case where the angle of view of the detection range R is similar in shape to the angle of view of the photographed image GC, it is not limited to a similar shape.
  • If the angle of view of the detection range R were simply increased as the vehicle speed falls, at very low speeds the detection range R would approach the size of the photographed image GC and the data amounts of the recognition image GT and the photographed image GC would hardly differ. The detection range determination unit 41 therefore determines whether the angle of view of the detection range R calculated based on the vehicle speed is larger than the predetermined threshold size SS (step S15), and if it is larger than the threshold size SS (step S15: YES), calculates a thinning coefficient KG so that the angle of view when the vehicle speed is smaller than the predetermined value is effectively fixed to the threshold size SS.
  • the thinning coefficient KG is a value that defines the amount of thinning of pixels in an image, and is set to a value such as 1/3, 1/4, . . . , 1/n (n is an arbitrary integer).
  • the thinning coefficient is calculated such that the larger the angle of view of the detection range R, the larger the amount of thinning of pixels in the image.
  • For example, when the thinning coefficient KG is 1/2, a recognition image GT with approximately half the number of pixels can be obtained by averaging adjacent pixel data to generate one pixel of data, so the data amount of the recognition image GT can be reduced. In this way, by thinning out the number of pixels in the detection range R, the resolution of the recognition image GT corresponding to the detection range R can be converted to a lower resolution.
  • After the thinning coefficient KG is calculated, the detection range determination unit 41 outputs detection range specification information DT, which specifies the detection range R and the thinning coefficient KG, to the image generation unit 42.
  • Based on the captured image GC and the detection range specification information DT, the image generation unit 42 generates, as the recognition image GT, the image of the detection range R in the captured image GC with its number of pixels reduced by the thinning coefficient KG.
  • Note that if the angle of view of the detection range R is smaller than the threshold size SS (step S15: NO), the thinning coefficient KG is not calculated. In that case, in step S17, the detection range determination unit 41 outputs detection range specification information DT specifying only the detection range R to the image generation unit 42, and the image generation unit 42 generates a recognition image GT in which the number of pixels is not reduced. The above is the content of the detection range determination process.
  • As described above, the driving support device 10 includes the vehicle sensor 12 that detects the vehicle information DM of the vehicle 1, the surroundings detection sensor 11 that detects the surrounding situation of the vehicle 1, the detection range determination unit 41 that determines the detection range R of the surrounding situation based on the detection result of the vehicle sensor 12, the image generation unit 42 that generates the recognition image GT consisting of an image of the detection range R based on the detection information of the surroundings detection sensor 11, and the image recognition unit 43 that performs image recognition of the object to be detected on the recognition image GT.
  • The recognition image GT is an image with a smaller amount of data than the photographed image GC obtained from the detection information of the surroundings detection sensor 11.
  • By making the detection range R of the surrounding situation smaller than the detection range of the surroundings detection sensor 11, a recognition image with a reduced amount of data can be obtained.
  • the vehicle sensor 12 corresponds to the "first sensor” of the present invention
  • the surrounding detection sensor 11 corresponds to the "second sensor” of the present invention.
  • the vehicle sensor 12 and the surroundings detection sensor 11 may be changed as appropriate.
  • the surroundings detection sensor 11 may include a radar that is one of the devices that detect the surroundings.
  • the vehicle sensor 12 includes a vehicle speed sensor 31, a steering angle sensor 32, and a tilt angle sensor 33 that detect the vehicle speed, steering angle, and left/right tilt angle of the vehicle 1, respectively.
  • The detection range R is varied based on these detection results. According to this configuration, since the detection range R is varied based on the vehicle speed, steering angle, and left/right tilt angle of the vehicle 1, the detection range R can be made small to reduce the amount of calculation during image recognition, while it also becomes easier to determine the detection range needed for object recognition according to the traveling state of the vehicle 1 and thus to recognize objects appropriately.
  • In this embodiment, the vehicle sensor 12 includes all of the vehicle speed sensor 31, the steering angle sensor 32, and the tilt angle sensor 33; however, the vehicle sensor 12 is not limited to this and may be configured to include at least one of the vehicle speed sensor 31, the steering angle sensor 32, and the tilt angle sensor 33.
  • the process of determining the detection range R may be performed based on only one of the vehicle speed, steering angle, and left/right inclination angle of the vehicle 1.
  • The vehicle sensor 12 may also include other sensors, such as a pitch sensor that detects the pitch angle of the vehicle 1, and the detection range R may be determined or corrected using the detection results of those other sensors.
  • the detection range determination unit 41 reduces the angle of view of the detection range R when the vehicle speed is higher than a predetermined value, and increases the angle of view of the detection range R when the vehicle speed is lower than the predetermined value.
  • the angle of view of the detection range R is made small, so that the detection range R can be set so that distant objects can be detected at high speeds.
  • the angle of view of the detection range R is increased, making it easier to recognize objects over a wider range.
  • the predetermined value can be changed as appropriate.
  • the size of the angle of view of the detection range R is not limited to the above-mentioned embodiment, and may be determined as appropriate.
  • The detection range determination unit 41 thins out the number of pixels in the detection range R according to the vehicle speed. When the detection range R is enlarged depending on the vehicle speed, thinning out the number of pixels in the detection range R reduces the data amount of the recognition image GT and therefore the amount of calculation.
  • In this embodiment, the detection range R becomes smaller at high speeds, so the number of pixels within the detection range R is not thinned out, whereas at low speeds the detection range R is widened and the number of pixels within it is thinned out. Therefore, the resolution of the recognition image GT can be maintained even at high speeds, where blurring and the like are likely to occur in the photographed image GC, making it easier to recognize objects appropriately.
  • The detection range determination unit 41 estimates the course of the vehicle 1 based on the detection result of the vehicle sensor 12, and overlaps a part of the detection range R on the estimated course. According to this configuration, a recognition image GT covering the course of the vehicle 1 can be obtained while the data amount of the recognition image GT is reduced by keeping the detection range R small, which makes it easier to appropriately recognize objects that affect the running of the vehicle 1.
  • the detection range R can be shifted according to the steering angle to include the course, making it easier to detect objects at the turning destination.
  • the course of the vehicle 1 is specified based on the vehicle speed, the steering angle, and the left/right inclination angle of the vehicle 1.
  • Note that some of this information, for example the left/right tilt angle, need not be used, or the course of the vehicle 1 may be estimated using detection results from other sensors.
  • Since the detection range determination unit 41 corrects the tilt of the detection range R according to the left/right tilt angle, the recognition image GT of the detection range R can be prevented from tilting even when the vehicle 1 leans to the left or right. This makes it easier to improve object detection accuracy and shorten the detection time while reducing the amount of calculation.
  • the detection range determination unit 41 sets the detection range R to include the position of the road G1 corresponding to the turning destination. Since the road G1 curves in the turning direction during a turn, there is a possibility that the position of the road G1 at the turning destination cannot be included in the detection range just by correcting the detection range according to the left/right inclination angle.
  • By setting the detection range R to include the position of the road G1 corresponding to the turning destination, objects that affect the running of the vehicle 1 can be recognized easily while the amount of calculation is kept small.
  • the position of the road G1 corresponding to the turning destination corresponds to the "course corresponding to the turning destination" of the present invention.
  • (Configuration 1) An arithmetic processing device including a first sensor that detects vehicle information of a vehicle and a second sensor that detects the surrounding situation of the vehicle, and recognizing surrounding objects based on the detection information of the first sensor and the detection information of the second sensor, the arithmetic processing device comprising: a detection range determination unit that determines a detection range of the surrounding situation based on the detection result of the first sensor; an image generation unit that generates a recognition image consisting of an image of the detection range based on the detection information of the second sensor; and an image recognition unit that recognizes an object to be detected from the recognition image, wherein the recognition image has a smaller amount of data than the image of the surrounding situation obtained from the detection information of the second sensor.
  • According to this configuration, by making the detection range of the surrounding situation smaller than the detection range of the second sensor, it is possible to obtain a recognition image with a reduced amount of data.
  • (Configuration 2) The arithmetic processing device according to configuration 1, wherein the first sensor includes at least one of a vehicle speed sensor, a steering angle sensor, and a tilt angle sensor that respectively detect a vehicle speed, a steering angle, and a left/right tilt angle of the vehicle, and the detection range determination unit varies the detection range based on the detection result of the first sensor.
  • According to this configuration, the detection range is varied based on at least one of the vehicle speed, the steering angle, and the left/right tilt angle, so the detection range can be made small to reduce the amount of calculation during image recognition, while it also becomes easier to determine the detection range needed for object recognition according to the traveling state of the vehicle and thus to recognize objects appropriately.
  • (Configuration 3) The arithmetic processing device described above, wherein the detection range determination unit reduces the angle of view of the detection range when the vehicle speed is higher than a predetermined value, and increases the angle of view of the detection range when the vehicle speed is lower than the predetermined value.
  • According to this configuration, when the vehicle speed is higher than the predetermined value, the angle of view of the detection range is reduced, so that distant objects can be detected at high speeds; when the vehicle speed is lower than the predetermined value, the angle of view of the detection range is increased, which makes it easier to recognize objects over a wider range.
  • 1 Vehicle, 10 Driving support device (computation processing device), 11 Surroundings detection sensor (second sensor), 12 Vehicle sensor (first sensor), 13 Arithmetic unit, 14 Storage unit, 15 Output unit, 21 Camera, 31 Vehicle speed sensor, 32 Steering angle sensor, 33 Tilt angle sensor, 41 Detection range determination unit, 42 Image generation unit, 43 Image recognition unit, R Detection range, GT Recognition image, GC Photographed image (image of surrounding situation)

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention can improve the speed at which a surrounding object is detected and reduce the cost required for detecting the object. A computation processing device according to the present invention recognizes the surrounding object on the basis of detection information from a first sensor (12) and detection information from a second sensor (11), the first sensor (12) detecting vehicle information of a vehicle and the second sensor (11) detecting conditions surrounding the vehicle. The computation processing device comprises: a detection range determination unit 41 that determines the range of detection of the surrounding conditions on the basis of the detection result from the first sensor (12); an image generation unit 42 that generates, on the basis of the detection information from the second sensor (11), a recognition image comprising an image of the detection range; and an image recognition unit 43 that carries out image recognition of an object to be detected in the recognition image. The amount of data of the recognition image is less than the amount of data of an image of the surrounding conditions obtained from the detection information of the second sensor (11).

Description

Arithmetic processing device
The present invention relates to an arithmetic processing device.
An arithmetic processing device is known that is equipped with a sensor that detects the surrounding situation of a vehicle and recognizes surrounding objects based on the detected information. This type of device includes a vehicle speed sensor that detects the vehicle speed of the target vehicle, and a camera and a distance measurement sensor that detect the surrounding conditions of the target vehicle, and switches the display between an object detected from the camera image and an object detected by the distance measurement sensor according to the vehicle speed (for example, Patent Document 1).
International Publication No. 2017/068692
With the increase in the number of pixels of image sensors, the amount of image data is increasing.
In conventional configurations, image recognition processing is performed on a large amount of image data, which may result in a situation where it takes time to detect an object. If hardware resources such as a high-performance arithmetic processing device and a large-capacity memory are adopted in order to shorten this time, this is disadvantageous in terms of cost reduction.
The present invention has been made in view of the above circumstances, and aims to improve the detection speed of surrounding objects and to reduce the cost required for detecting objects.
This specification includes the entire contents of Japanese Patent Application No. 2022-052644 filed on March 28, 2022.
In order to achieve the above object, an arithmetic processing device includes a first sensor that detects vehicle information of a vehicle and a second sensor that detects the surrounding situation of the vehicle, and recognizes surrounding objects based on the detection information of the second sensor. The arithmetic processing device includes a detection range determination unit that determines a detection range of the surrounding situation based on the detection result of the first sensor, an image generation unit that generates a recognition image consisting of an image of the detection range based on the detection information of the second sensor, and an image recognition unit that performs image recognition of an object to be detected on the recognition image, wherein the recognition image has a smaller amount of data than the image of the surrounding situation obtained from the detection information of the second sensor.
It becomes possible to improve the detection speed of surrounding objects and to reduce the cost required to detect objects.
FIG. 1 is a diagram showing a driving support device according to an embodiment of the arithmetic processing device of the present invention. FIG. 2 is a diagram showing an example of an image taken by the camera. FIG. 3 is a diagram for explaining the basic operation of the driving support device. FIG. 4 is a flowchart showing the detection range determination process. FIG. 5 is a diagram schematically showing the road, the photographed image, and the detection range when the vehicle is turning to the right. FIG. 6 is a diagram schematically showing the road, the detection range, and the center position when the vehicle is traveling straight. FIG. 7 is a diagram showing an example of the angle of view of the photographed image and the angle of view of the detection range.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a diagram showing a driving support device according to an embodiment of the arithmetic processing device of the present invention. This driving support device 10 is a device mounted on the vehicle 1, and has a function of recognizing objects existing around the vehicle 1 and notifying the recognition results. In this embodiment, the vehicle 1 is a motorcycle. The vehicle 1 may, however, be a vehicle other than a motorcycle; for example, it may be any saddle-ride type vehicle, or any vehicle other than a saddle-ride type vehicle.
The driving support device 10 includes a surroundings detection sensor 11, a vehicle sensor 12, an arithmetic unit 13, a storage unit 14, and an output unit 15. The surroundings detection sensor 11 is a sensor that detects the surrounding situation of the vehicle 1, and includes a camera 21 that photographs at least the area in front of the vehicle 1.
FIG. 2 is a diagram showing an example of an image GC taken by the camera 21.
From the perspective of photographing objects existing around the vehicle 1, the camera 21 has specifications such as the angle of view (horizontal and vertical) and resolution, and a photographing range, that are set so that it can photograph the road (road surface) G1 on which the vehicle 1 is traveling as well as other vehicles G2, facilities G3, and sidewalks G4 located ahead of the vehicle 1 and to the front left and right.
The vehicle sensor 12 is a sensor that detects vehicle information DM. More specifically, the vehicle sensor 12 includes a vehicle speed sensor 31 that detects the speed of the vehicle 1 (hereinafter referred to as the vehicle speed), a steering angle sensor 32 that detects the steering angle of the vehicle 1 (also referred to as the steering angle of the front wheels), and a tilt angle sensor 33 that detects the tilt angle of the vehicle 1, and outputs vehicle information DM consisting of the vehicle speed, the steering angle, and the tilt angle of the vehicle 1. The tilt angle detected by the tilt angle sensor 33 is the tilt angle of the vehicle body to the left and right, for example, the roll angle.
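As a small illustrative sketch (not part of the patent text), the vehicle information DM output by the vehicle sensor 12 could be represented as a simple record holding the three detected quantities; the field names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VehicleInfoDM:
    """Vehicle information DM: the three quantities detected by the vehicle sensor 12."""
    speed_kmh: float      # vehicle speed from the vehicle speed sensor 31
    steering_deg: float   # steering angle from the steering angle sensor 32
    lean_deg: float       # left/right tilt (roll) angle from the tilt angle sensor 33
```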
The arithmetic unit 13 includes an arithmetic processing device capable of performing various arithmetic processes. The arithmetic unit 13 functions as a detection range determination unit 41, an image generation unit 42, and an image recognition unit 43 by using the arithmetic processing device to execute the control program DP stored in the storage unit 14. The detection range determination unit 41 performs a process of determining a detection range R of the surrounding situation. The image generation unit 42 performs image generation processing to generate a recognition image GT consisting of an image of the detection range R from the captured image GC of the camera 21.
The image recognition unit 43 performs image recognition processing on the recognition image GT to recognize the object to be detected. Objects to be detected are objects that may affect the running of the vehicle 1 or objects of which it is desirable to notify the occupant, such as other vehicles, obstacles, and pedestrians.
The storage unit 14 is a storage device that stores various data used by the driving support device 10. The storage unit 14 stores detection range determination data DR in addition to the control program DP. The detection range determination data DR is information for determining the detection range R based on the vehicle speed, the steering angle, and the tilt angle to the left and right (hereinafter referred to as the "left/right tilt angle") of the vehicle 1.
For example, map data for determining the detection range R from the three parameters consisting of the vehicle speed, the steering angle, and the left/right tilt angle of the vehicle 1 is applied as the detection range determination data DR.
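A minimal sketch of one way such map data could be organized is shown below; the bin widths, keys, and values are hypothetical and purely illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class RangeEntry:
    center: tuple[float, float]  # center position PA as fractions of image width/height
    fov: tuple[float, float]     # (horizontal, vertical) angle of view of the detection range R, degrees

def quantize(value: float, step: float) -> float:
    """Snap a parameter onto the grid on which the map data DR is defined."""
    return round(value / step) * step

# Hypothetical map data DR keyed on (vehicle speed km/h, steering angle deg, left/right tilt deg).
MAP_DR: dict[tuple[float, float, float], RangeEntry] = {
    (30.0, 0.0, 0.0): RangeEntry(center=(0.50, 0.55), fov=(60.0, 40.0)),
    (60.0, 0.0, 0.0): RangeEntry(center=(0.50, 0.50), fov=(40.0, 27.0)),
    (60.0, 10.0, 20.0): RangeEntry(center=(0.62, 0.52), fov=(40.0, 27.0)),
}

def lookup_detection_range(speed: float, steering: float, lean: float) -> RangeEntry:
    key = (quantize(speed, 30.0), quantize(steering, 10.0), quantize(lean, 20.0))
    # Fall back to a straight-ahead medium-speed entry when the (sparse) example map has no key.
    return MAP_DR.get(key, MAP_DR[(30.0, 0.0, 0.0)])
```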
The detection range determination data DR satisfies at least the following three conditions.
<First condition>
The detection range determination data DR is formed such that the higher the vehicle speed is above a predetermined value, the smaller the angle of view (horizontal and vertical) corresponding to the size of the detection range R, and the lower the vehicle speed is below the predetermined value, the larger the angle of view.
When an occupant is notified of the presence of an object to be detected, there is a limit to how quickly the occupant can react after being notified. By reducing the angle of view of the detection range R as the vehicle speed increases, a distant object becomes relatively large within the detection range R, which makes it easier to recognize distant objects early.
At low speeds, the amount of information change in the captured image GC is generally relatively small. By increasing the angle of view of the detection range R as the vehicle speed decreases, it becomes easier to recognize objects over a wider range.
For example, the angle of view when the vehicle speed equals the predetermined value is taken as the threshold size SS; when the vehicle speed is greater than the predetermined value, the angle of view is reduced to the threshold size SS or less, and when the vehicle speed is smaller than the predetermined value, the angle of view is increased to the threshold size SS or more. The predetermined value and the threshold size SS may be set to any appropriate values at which objects can be appropriately image-recognized.
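The following sketch illustrates the first condition as a function of vehicle speed; the predetermined speed V0, the threshold size SS, and the interpolation limits are placeholder values, not values given in the patent.

```python
V0 = 40.0                 # predetermined vehicle speed in km/h (hypothetical)
SS = (50.0, 34.0)         # threshold size SS: (horizontal, vertical) angle of view in degrees (hypothetical)
FOV_MIN = (20.0, 14.0)    # smallest angle of view used at very high speed (hypothetical)
FOV_MAX = (70.0, 47.0)    # largest angle of view used at very low speed (hypothetical)

def detection_fov(speed_kmh: float) -> tuple[float, float]:
    """First condition: shrink the angle of view above V0 (toward SS or less),
    enlarge it below V0 (toward SS or more)."""
    if speed_kmh >= V0:
        t = min((speed_kmh - V0) / V0, 1.0)
        return tuple(s + t * (lo - s) for s, lo in zip(SS, FOV_MIN))
    t = min((V0 - speed_kmh) / V0, 1.0)
    return tuple(s + t * (hi - s) for s, hi in zip(SS, FOV_MAX))
```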
<Second condition>
The detection range determination data DR is formed as data for correcting the left/right tilt of the detection range R according to the left/right tilt angle.
Since the camera 21 tilts as the vehicle 1 leans to the left or right, the tilt of the camera 21 can be corrected by tilting the detection range R in the opposite direction by the same amount. The detection range R can thus be set so that the image does not tilt between an image captured while the vehicle 1 is leaning and the images captured before and after the lean. This makes it easier to improve the object detection accuracy and to shorten the detection time when the object to be detected is recognized from the image of the detection range R.
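A sketch of the second condition: rotating the detection range R by the opposite of the left/right tilt angle θK so that the cropped region stays level. The geometry below is a simple image-plane approximation and is illustrative only.

```python
import math

def rotated_detection_corners(cx: float, cy: float, width: float, height: float,
                              lean_deg: float) -> list[tuple[float, float]]:
    """Return the four corners of the detection range R, rotated about its
    center (cx, cy) by -lean_deg so that the body lean θK is cancelled."""
    theta = math.radians(-lean_deg)          # rotate opposite to the lean
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    corners = []
    for dx, dy in ((-width / 2, -height / 2), (width / 2, -height / 2),
                   (width / 2, height / 2), (-width / 2, height / 2)):
        corners.append((cx + dx * cos_t - dy * sin_t,
                        cy + dx * sin_t + dy * cos_t))
    return corners
```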
<Third condition>
Based on the vehicle speed, the steering angle, and the left/right tilt angle of the vehicle 1, the course of the vehicle 1 can be estimated by a known method, and it is also possible to estimate whether the vehicle 1 is turning.
The detection range determination data DR is formed to set the center position PA of the detection range R so as to include the course corresponding to the turning destination.
In this case, the center position PA is set on the road G1 on which the vehicle 1 travels. When the vehicle 1 is turning, that is, when the road G1 curves, the road G1 is not located directly in front of the camera 21, and an object to be detected ahead may not be detected. Therefore, when the vehicle 1 is turning, the detection range determination data DR corrects the center position PA of the detection range R to a position that includes the course corresponding to the turning destination rather than the position directly in front of the camera 21, as shown in FIG. 5 described later. Furthermore, the detection range determination data DR rotates the detection range R in the opposite direction by the left/right tilt angle. As a result, part of the detection range R can be made to overlap the course, which makes it easier to detect an object at the turning destination when the object to be detected is recognized from the detection range R.
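A sketch of the third condition: shifting the center position PA of the detection range R toward the turning destination, with the shift driven by the steering angle and the left/right tilt angle. The gains are hypothetical tuning constants, not values from the patent.

```python
def center_position(image_w: int, image_h: int, steering_deg: float, lean_deg: float,
                    gain_steer: float = 0.010, gain_lean: float = 0.005) -> tuple[float, float]:
    """Shift PA sideways from the image center toward the turning direction so that
    the course corresponding to the turning destination falls inside the range R."""
    offset = (gain_steer * steering_deg + gain_lean * lean_deg) * image_w
    cx = min(max(image_w / 2 + offset, 0.0), image_w - 1.0)
    cy = image_h * 0.55    # a point on the road surface ahead; placeholder value
    return cx, cy
```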
The output unit 15 is a device that receives the recognition result of the image recognition unit 43 and outputs information based on the recognition result. The output unit 15 of this embodiment includes a first notification unit 51 that notifies the occupant of the vehicle 1 of the recognition result, and a second notification unit 52 that notifies other vehicles and the like of the recognition result.
The first notification unit 51 causes a display device or an audio output device provided in the vehicle 1 to report, as the recognition result, for example the presence of the object to be detected by display or by audio.
The second notification unit 52 includes a communication device that realizes wireless communication with other vehicles and the like existing around the vehicle 1. The second notification unit 52 transmits information indicating the recognition result of the image recognition unit 43 to the other vehicles and the like via the communication device. The destination vehicle is, for example, a vehicle traveling behind the vehicle 1, but is not limited to a vehicle behind the vehicle 1.
Note that either the second notification unit 52 or the first notification unit 51 may be omitted from the output unit 15. Further, the configuration of the output unit 15 and the output destination are not limited to the above and may be changed as appropriate.
FIG. 3 is a flowchart showing the basic operation of the driving support device 10. The flow shown in FIG. 3 is a process that is repeatedly executed while the vehicle 1 is running, for example.
First, the driving support device 10 starts acquiring the vehicle information DM detected by the vehicle sensor 12 and the image GC captured by the camera 21 (step S1). Next, in the driving support device 10, the detection range determination unit 41 determines the detection range R based on the vehicle information DM using the detection range determination data DR (step S2: detection range determination process).
For ease of explanation, FIG. 3 illustrates a case in which the detection range determination unit 41 has determined a rectangular detection range R, smaller than the photographing range of the photographed image GC, at the center position of the photographed image GC corresponding to the front center of the vehicle 1. This detection range R corresponds to an area in which another vehicle G2 (see FIG. 2) traveling in front of the vehicle 1 exists.
Next, the driving support device 10 uses the image generation unit 42 to generate a recognition image GT consisting of an image of the detection range R from the photographed image GC (step S3: image generation processing). The image recognition unit 43 then performs image recognition processing on the recognition image GT to recognize the object to be detected (step S4).
A wide range of known image recognition techniques can be applied to this image recognition; for example, the object to be detected is identified by recognizing its color and shape. In the example shown in FIG. 3, the recognition image GT includes another vehicle G2 traveling in front of the vehicle 1, so the vehicle G2 is detected as the object to be detected by the process of step S4.
If the object to be detected is detected (step S5: YES), the driving support device 10 performs a process of notifying the recognition result through the output unit 15 (step S6). On the other hand, if the object to be detected is not detected (step S5: NO), the driving support device 10 ends the flow shown in FIG. 3 for the time being. The above is the basic operation of the driving support device 10.
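The flow of steps S1 to S6 could be orchestrated as in the sketch below; the collaborating objects are passed in as callables, and all names here are illustrative rather than the patent's actual interfaces.

```python
def driving_support_cycle(vehicle_sensor, camera, determine_detection_range,
                          generate_recognition_image, recognize_objects, output) -> None:
    """One iteration of the flow in FIG. 3 (repeated while the vehicle is running)."""
    dm = vehicle_sensor.read()                  # S1: acquire vehicle information DM
    gc = camera.capture()                       # S1: acquire captured image GC
    r = determine_detection_range(dm)           # S2: detection range determination process
    gt = generate_recognition_image(gc, r)      # S3: image generation processing
    detected = recognize_objects(gt)            # S4: image recognition on GT
    if detected:                                # S5: object to be detected found?
        output.notify(detected)                 # S6: notify via the output unit 15
```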
The detection range determination process of step S2 will now be described in more detail.
FIG. 4 is a flowchart showing the detection range determination process. FIG. 5 is a diagram schematically showing the road G1, the photographed image GC, and the detection range R when the vehicle 1 is turning to the right. As shown in FIG. 5, when the vehicle 1 is turning to the right, the vehicle 1 leans to the right, so the photographed image GC is tilted by an angle θK (corresponding to the left/right tilt angle) with respect to the horizontal plane.
As shown in FIG. 4, the detection range determination unit 41 estimates the course of the vehicle 1 based on the vehicle speed, the steering angle, and the left/right tilt angle detected by the vehicle sensor 12 (step S11), and calculates the center position PA of the detection range R based on the estimated course (step S12). In the case of the driving situation shown in FIG. 5, a position on the road G1 on which the vehicle 1 is traveling, shifted toward the right side (the turning direction) within the photographing range of the photographed image GC, is calculated as the center position PA. As a result, a range including the turning destination location on the road G1 is set as the detection range R.
Here, FIG. 6 schematically shows the road G1, the detection range R, and the center position PA when the vehicle 1 is traveling straight. In FIG. 6, the detection range R and the center position PA during high-speed driving are shown as the detection range RH and the center position PAH, and the detection range R and the center position PA during low-speed driving are shown as the detection range RL and the center position PAL. As shown in FIG. 6, the center position PAL during low-speed driving is a forward position on the road G1 close to the vehicle 1, while the center position PAH during high-speed driving is a forward position on the road G1 far from the vehicle 1.
Next, the detection range determination unit 41 calculates the angle θA of the detection range R based on the left/right tilt angle (step S13). The angle θA is an angle that corrects the tilt of the detection range R caused by the left/right lean of the vehicle 1. In the case of the driving situation shown in FIG. 5, the angle θA is the angle that cancels the left/right tilt angle θK, that is, θA = -θK. Note that in the case of the driving situation shown in FIG. 6, the left/right tilt angle is zero, so the angle θA is zero.
Next, the detection range determination unit 41 calculates the angle of view of the detection range R based on the vehicle speed (step S14).
FIG. 7 is a diagram showing an example of the angle of view of the captured image GC and the angle of view of the detection range R.
As shown in FIG. 7, two angles of view (XR, YR) and (XR1, YR1) are illustrated as the angle of view (horizontal and vertical) of the detection range R. The angle of view of the detection range R is smaller than the angle of view (XS, YS) of the captured image GC. Of these, the angle of view (XR, YR) is an example of the angle of view when the vehicle speed is smaller than the predetermined value, and the angle of view (XR1, YR1) is an example of the angle of view when the vehicle speed is larger than the predetermined value.
The angle of view for a vehicle speed lower than the predetermined value (for example, the angle of view (XR, YR)) is greater than or equal to a predetermined threshold size SS, and the angle of view for a vehicle speed higher than the predetermined value (for example, the angle of view (XR1, YR1)) is less than or equal to the threshold size SS.
In this embodiment, the angle of view of the detection range R is set smaller as the vehicle speed becomes higher than the predetermined value, and is always kept smaller than the angle of view of the captured image GC. Therefore, the recognition image GT generated by the image generation unit 42 is image data with a smaller data amount than the captured image GC. Although FIG. 5 illustrates a case where the angle of view of the detection range R is geometrically similar to the angle of view of the captured image GC, it is not limited to a similar shape.
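A small sketch of the speed-dependent angle of view of step S14. The linear interpolation between a wide low-speed angle and a narrow high-speed angle, and all of the numeric defaults, are assumptions added for illustration; the patent only requires that the angle of view shrinks as the speed rises above the predetermined value and never exceeds the angle of view of the captured image GC.

def detection_view_angle(speed_kmh,
                         speed_ref_kmh=60.0,        # assumed "predetermined value"
                         wide_deg=(50.0, 30.0),     # assumed (XR, YR) for low speed
                         narrow_deg=(20.0, 12.0),   # assumed (XR1, YR1) for high speed
                         camera_deg=(60.0, 38.0)):  # assumed (XS, YS) of the captured image GC
    """Hypothetical step S14: shrink the angle of view of the detection range R
    as the vehicle speed rises, while never exceeding the camera's own angle of view."""
    t = min(max(speed_kmh / speed_ref_kmh, 0.0), 2.0) / 2.0   # 0 when stopped .. 1 at twice the reference speed
    x = wide_deg[0] + (narrow_deg[0] - wide_deg[0]) * t
    y = wide_deg[1] + (narrow_deg[1] - wide_deg[1]) * t
    return min(x, camera_deg[0]), min(y, camera_deg[1])

# Example: the angle of view narrows from 50x30 degrees toward 20x12 degrees as speed rises.
for v in (10, 60, 120):
    print(v, detection_view_angle(v))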
If the angle of view of the detection range R were simply made larger as the vehicle speed falls below the predetermined value, then at very low speeds the angle of view of the detection range R would approach the angle of view of the captured image GC, and the data amounts of the recognition image GT and the captured image GC could end up almost the same.
Therefore, the detection range determination unit 41 determines whether the angle of view of the detection range R calculated from the vehicle speed is larger than the predetermined threshold size SS (step S15). When the angle of view of the detection range R is larger than the threshold size SS (step S15; YES), a thinning coefficient KG is calculated so that the angle of view used when the vehicle speed is lower than the predetermined value is effectively fixed at the threshold size SS.
The thinning coefficient KG is a value that defines the amount of pixel thinning in the image, and is set to a value such as 1/3, 1/4, ..., 1/n (where n is an arbitrary integer). The thinning coefficient is calculated such that the larger the angle of view of the detection range R, the larger the amount of pixel thinning.
For example, when the thinning coefficient KG is 1/2, the recognition image GT can be converted into a recognition image GT with approximately half the number of pixels by performing arithmetic processing that averages adjacent pixel data into a single pixel value, which reduces the data amount of the recognition image GT. By thinning out the number of pixels in the detection range R in this way, the resolution of the recognition image GT corresponding to the detection range R can be converted to a lower resolution.
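A minimal sketch of the pixel thinning described here, assuming NumPy is available. Interpreting KG = 1/n as "average every n horizontally adjacent pixels into one" is only one possible reading of "averaging adjacent pixel data"; block averaging or other schemes would also fit the description.

import numpy as np

def thin_pixels(region, n):
    """Hypothetical thinning with KG = 1/n: average every n horizontally adjacent
    pixels into one pixel, so the pixel count of the recognition image GT becomes
    roughly 1/n of the cropped detection range."""
    h, w = region.shape[:2]
    w = w - w % n                              # drop columns that do not fill a group of n
    groups = region[:, :w].reshape(h, w // n, n, -1)
    return groups.mean(axis=2).astype(region.dtype)

# Example: with KG = 1/2, a 720x1280 crop keeps roughly half of its pixels.
crop = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
print(thin_pixels(crop, 2).shape)   # (720, 640, 3)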
In the subsequent step S16, the thinning coefficient KG is calculated. In this case, in step S17, the detection range determination unit 41 outputs detection range specification information DT, which specifies the detection range R and the thinning coefficient KG, to the image generation unit 42. Based on the captured image GC and the detection range specification information DT, the image generation unit 42 then generates, as the recognition image GT, an image of the detection range R in the captured image GC whose number of pixels has been reduced by the thinning coefficient KG.
When the angle of view of the detection range R is smaller than the threshold size SS, the thinning coefficient KG is not calculated. In this case, in step S17, the detection range determination unit 41 outputs detection range specification information DT that specifies only the detection range R to the image generation unit 42, and the image generation unit 42 generates a recognition image GT whose number of pixels is not reduced. This concludes the detection range determination process.
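Putting the steps together, the following sketch shows one way the detection range determination (steps S11 to S17) and the image generation could be wired up. It reuses the hypothetical helpers from the preceding sketches (estimate_center_position, detection_view_angle, thin_pixels); the DT data structure, the KG selection rule, and the axis-aligned crop are likewise assumptions for illustration only.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DetectionRangeInfo:              # hypothetical "detection range specification information DT"
    center_uv: Tuple[int, int]         # center position PA in image coordinates
    size_px: Tuple[int, int]           # width and height of the detection range R in pixels
    tilt_rad: float                    # correction angle theta_A
    kg_denominator: Optional[int]      # thinning coefficient KG = 1/kg_denominator, or None

def decide_detection_range(speed_kmh, steering_rad, roll_rad,
                           px_per_deg=20.0, threshold_px=(800, 480)):
    """Hypothetical steps S11-S17 combined; threshold_px plays the role of the threshold size SS."""
    center = estimate_center_position(speed_kmh / 3.6, steering_rad)   # S11, S12
    theta_a = -roll_rad                                                # S13
    x_deg, y_deg = detection_view_angle(speed_kmh)                     # S14
    size = (int(x_deg * px_per_deg), int(y_deg * px_per_deg))
    kg = None
    if size[0] > threshold_px[0] or size[1] > threshold_px[1]:         # S15
        kg = max(2, round(size[0] / threshold_px[0]))                  # S16 (assumed rule)
    return DetectionRangeInfo(center, size, theta_a, kg)               # S17

def generate_recognition_image(gc, dt):
    """Hypothetical image generation unit 42: crop the captured image GC to the
    detection range R, then thin the pixels when a thinning coefficient is set."""
    (cu, cv), (w, h) = dt.center_uv, dt.size_px
    top, left = max(0, int(cv - h / 2)), max(0, int(cu - w / 2))
    region = gc[top:top + h, left:left + w]      # tilt correction is omitted in this sketch
    return thin_pixels(region, dt.kg_denominator) if dt.kg_denominator else region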
As described above, the driving support device 10 includes the vehicle sensor 12 that detects the vehicle information DM of the vehicle 1, the surroundings detection sensor 11 that detects the surrounding situation of the vehicle 1, the detection range determination unit 41 that determines the detection range R of the surrounding situation based on the detection results of the vehicle sensor 12, the image generation unit 42 that generates the recognition image GT, consisting of an image of the detection range R, based on the detection information of the surroundings detection sensor 11, and the image recognition unit 43 that performs image recognition of an object to be detected on the recognition image GT. The recognition image GT is an image with a smaller data amount than the captured image GC obtained from the detection information of the surroundings detection sensor 11.
According to this configuration, by making the detection range R of the surrounding situation smaller than the detection range of the surroundings detection sensor 11, a recognition image GT with a reduced data amount can be obtained. This reduces the amount of computation required to recognize the object to be detected in the recognition image GT. The reduced computational load makes it possible to improve the object detection speed and to adopt a configuration that does not rely on hardware resources such as high-performance arithmetic processing devices, which is advantageous for cost reduction.
The vehicle sensor 12 corresponds to the "first sensor" of the present invention, and the surroundings detection sensor 11 corresponds to the "second sensor" of the present invention. The vehicle sensor 12 and the surroundings detection sensor 11 may be changed as appropriate; for example, the surroundings detection sensor 11 may include a radar, which is one of the devices that detect the surrounding situation.
The vehicle sensor 12 also includes the vehicle speed sensor 31, the steering angle sensor 32, and the inclination angle sensor 33, which detect the vehicle speed, steering angle, and left/right inclination angle of the vehicle 1, respectively, and the detection range determination unit 41 varies the detection range R based on the detection results of the vehicle sensor 12. According to this configuration, because the detection range R is varied based on the vehicle speed, steering angle, and left/right inclination angle of the vehicle 1, the detection range R can be kept small to reduce the amount of computation during image recognition, while the detection range required for object recognition is easily determined according to the driving state of the vehicle 1, making it easier to recognize the appropriate objects.
Although an embodiment has been described in which the vehicle sensor 12 includes all of the vehicle speed sensor 31, the steering angle sensor 32, and the inclination angle sensor 33, the configuration is not limited to this, and the vehicle sensor 12 may include at least one of the vehicle speed sensor 31, the steering angle sensor 32, and the inclination angle sensor 33. For example, the process of determining the detection range R may be performed based on only one of the vehicle speed, steering angle, and left/right inclination angle of the vehicle 1. The vehicle sensor 12 may also include other sensors, such as a pitch angle sensor that detects the pitch angle of the vehicle 1, and the detection results of those other sensors may be used to determine or correct the detection range R.
The detection range determination unit 41 reduces the angle of view of the detection range R when the vehicle speed is higher than the predetermined value and increases the angle of view of the detection range R when the vehicle speed is lower than the predetermined value. According to this configuration, when the vehicle speed is higher than the predetermined value, the angle of view of the detection range R is reduced, so that the detection range R can cover distant objects at high speed; when the vehicle speed is lower than the predetermined value, the angle of view of the detection range R is increased, making it easier to recognize objects over a wider area. As described above, the predetermined value can be changed as appropriate, and the size of the angle of view of the detection range R is not limited to the above embodiment and may be determined as appropriate.
In addition, the detection range determination unit 41 thins out the number of pixels in the detection range R according to the vehicle speed, so when the detection range R becomes larger depending on the vehicle speed, thinning out the pixels in the detection range R reduces the data amount of the recognition image GT and thus the amount of computation. In this configuration, the detection range R becomes smaller at high speed, so the pixels within the detection range R are not thinned out, whereas at low speed the detection range R is widened and the pixels within it are thinned out. Therefore, the resolution of the recognition image GT can be maintained even at high speed, when blurring and the like are likely to occur in the captured image GC, making it easier to recognize objects appropriately.
The detection range determination unit 41 also estimates the course of the vehicle 1 based on the detection results of the vehicle sensor 12 and overlaps part of the detection range R on the estimated course. According to this configuration, a recognition image GT covering the course of the vehicle 1 can be obtained while the detection range R is kept small to reduce the data amount of the recognition image GT, making it easier to appropriately recognize objects that affect the travel of the vehicle 1. In this case, the detection range R can be shifted according to the steering angle detected by the steering angle sensor 32 included in the vehicle sensor 12 so that it covers the course, making it easier to detect objects at the turning destination.
In this embodiment, the course of the vehicle 1 is specified based on the vehicle speed, steering angle, and left/right inclination angle of the vehicle 1; however, as long as the course of the vehicle 1 can be specified, any of the vehicle speed, steering angle, and left/right inclination angle may be omitted, or the course of the vehicle 1 may be estimated using the detection results of other sensors.
Furthermore, since the detection range determination unit 41 corrects the tilt of the detection range R according to the left/right inclination angle, the recognition image GT of the detection range R can be kept from tilting to the left or right even when the vehicle 1 leans to the left or right. This makes it easier to improve object detection accuracy and shorten detection time while keeping the amount of computation small.
In addition, when the vehicle 1 is turning, the detection range determination unit 41 sets the detection range R to a range that includes the position on the road G1 corresponding to the turning destination. Because the road G1 curves in the turning direction during a turn, simply correcting the detection range according to the left/right inclination angle may not be enough to include the position of the road G1 at the turning destination in the detection range. By setting the detection range R during a turn to a range that includes the position on the road G1 corresponding to the turning destination, objects that affect the travel of the vehicle 1 can be recognized appropriately while the amount of computation is reduced.
The position on the road G1 corresponding to the turning destination corresponds to the "course corresponding to the turning destination" of the present invention.
The above-described embodiment is merely an example of one aspect of the present invention, and may be modified and applied as desired without departing from the spirit of the present invention.
[Configurations supported by the above embodiment]
The above embodiment supports the following configurations.
(Configuration 1) An arithmetic processing device comprising a first sensor that detects vehicle information of a vehicle and a second sensor that detects a surrounding situation of the vehicle, the arithmetic processing device recognizing surrounding objects based on detection information of the first sensor and detection information of the second sensor, the arithmetic processing device including: a detection range determination unit that determines a detection range of the surrounding situation based on a detection result of the first sensor; an image generation unit that generates a recognition image consisting of an image of the detection range based on the detection information of the second sensor; and an image recognition unit that performs image recognition of an object to be detected on the recognition image, wherein the recognition image has a smaller data amount than an image of the surrounding situation obtained from the detection information of the second sensor.
According to this configuration, by making the detection range of the surrounding situation smaller than the detection range of the second sensor, a recognition image with a reduced data amount can be obtained. This reduces the amount of computation required to recognize the object to be detected in the recognition image. The reduced computational load makes it possible to improve the object detection speed, eliminates the need for hardware resources such as high-performance arithmetic processing devices, and reduces the cost required for object detection.
(Configuration 2) The arithmetic processing device according to Configuration 1, wherein the first sensor includes at least one of a vehicle speed sensor, a steering angle sensor, and an inclination angle sensor that respectively detect a vehicle speed, a steering angle, and a left/right inclination angle of the vehicle, and the detection range determination unit varies the detection range based on the detection result of the first sensor.
According to this configuration, the detection range is varied based on at least one of the vehicle speed, steering angle, and left/right inclination angle of the vehicle, so the detection range can be kept small to reduce the amount of computation during image recognition, while the detection range required for object recognition is easily determined according to the driving state of the vehicle, making it easier to recognize the appropriate objects.
(Configuration 3) The arithmetic processing device according to Configuration 2, wherein the detection range determination unit reduces the angle of view of the detection range when the vehicle speed is higher than a predetermined value and increases the angle of view of the detection range when the vehicle speed is lower than the predetermined value.
According to this configuration, when the vehicle speed is higher than the predetermined value, the angle of view of the detection range is reduced, so that distant objects can be detected at high speed; when the vehicle speed is lower than the predetermined value, the angle of view of the detection range is increased, making it easier to recognize objects over a wider area.
(Configuration 4) The arithmetic processing device according to Configuration 3, wherein the detection range determination unit thins out the number of pixels in the detection range according to the vehicle speed.
According to this configuration, when the detection range becomes larger depending on the vehicle speed, thinning out the number of pixels in the detection range reduces the data amount of the recognition image and thus the amount of computation.
(Configuration 5) The arithmetic processing device according to any one of Configurations 2 to 4, wherein the detection range determination unit corrects the tilt of the detection range according to the left/right inclination angle.
According to this configuration, the tilt of the detection range is corrected according to the left/right inclination angle, so the recognition image of the detection range can be kept from tilting to the left or right even when the vehicle leans to the left or right. This makes it easier to improve object detection accuracy and shorten detection time while keeping the amount of computation small.
(Configuration 6) The arithmetic processing device according to any one of Configurations 2 to 5, wherein the detection range determination unit estimates the course of the vehicle based on the detection result of the first sensor and overlaps part of the detection range on the estimated course.
According to this configuration, a recognition image covering the course of the vehicle can be obtained while the detection range is kept small to reduce the data amount of the recognition image, making it easier to appropriately recognize objects that affect the travel of the vehicle.
(Configuration 7) The arithmetic processing device according to Configuration 6, wherein, when the vehicle is turning, the detection range determination unit sets the detection range to a range that includes a course corresponding to the turning destination.
According to this configuration, the detection range is set to a range that includes the course corresponding to the turning destination, so objects that affect the travel of the vehicle can be recognized appropriately while the amount of computation is reduced.
1 Vehicle
10 Driving support device (arithmetic processing device)
11 Surroundings detection sensor (second sensor)
12 Vehicle sensor (first sensor)
13 Arithmetic unit
14 Storage unit
15 Output unit
21 Camera
31 Vehicle speed sensor
32 Steering angle sensor
33 Inclination angle sensor
41 Detection range determination unit
42 Image generation unit
43 Image recognition unit
R Detection range
GT Recognition image
GC Captured image (image of the surrounding situation)

Claims (7)

  1. An arithmetic processing device comprising a first sensor (12) that detects vehicle information of a vehicle and a second sensor (11) that detects a surrounding situation of the vehicle (1), the arithmetic processing device recognizing surrounding objects based on detection information of the first sensor (12) and detection information of the second sensor (11), the arithmetic processing device comprising:
     a detection range determination unit (41) that determines a detection range of the surrounding situation based on a detection result of the first sensor (12);
     an image generation unit (42) that generates a recognition image (GT) consisting of an image of the detection range (R) based on the detection information of the second sensor (11); and
     an image recognition unit (43) that performs image recognition of an object to be detected on the recognition image (GT),
     wherein the recognition image (GT) has a smaller data amount than an image (GC) of the surrounding situation obtained from the detection information of the second sensor (11).
  2. The arithmetic processing device according to claim 1, wherein the first sensor (12) includes at least one of a vehicle speed sensor (31), a steering angle sensor (32), and an inclination angle sensor (33) that respectively detect a vehicle speed, a steering angle, and a left/right inclination angle of the vehicle (1), and
     the detection range determination unit (41) varies the detection range (R) based on the detection result of the first sensor (12).
  3. The arithmetic processing device according to claim 2, wherein the detection range determination unit (41) reduces the angle of view of the detection range (R) when the vehicle speed is higher than a predetermined value and increases the angle of view of the detection range (R) when the vehicle speed is lower than the predetermined value.
  4. The arithmetic processing device according to claim 3, wherein the detection range determination unit (41) thins out the number of pixels in the detection range (R) according to the vehicle speed.
  5. The arithmetic processing device according to any one of claims 2 to 4, wherein the detection range determination unit (41) corrects the tilt of the detection range (R) according to the left/right inclination angle.
  6. The arithmetic processing device according to any one of claims 2 to 5, wherein the detection range determination unit (41) estimates the course of the vehicle (1) based on the detection result of the first sensor (12) and overlaps part of the detection range (R) on the estimated course.
  7. The arithmetic processing device according to claim 6, wherein, when the vehicle (1) is turning, the detection range determination unit (41) sets the detection range (R) to a range that includes a course corresponding to the turning destination.
PCT/JP2023/000607 2022-03-28 2023-01-12 Computation processing device WO2023188685A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-052644 2022-03-28
JP2022052644 2022-03-28

Publications (1)

Publication Number Publication Date
WO2023188685A1 true WO2023188685A1 (en) 2023-10-05

Family

ID=88200117

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/000607 WO2023188685A1 (en) 2022-03-28 2023-01-12 Computation processing device

Country Status (1)

Country Link
WO (1) WO2023188685A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62155140A (en) * 1985-12-27 1987-07-10 Aisin Warner Ltd Road image input system for controlling vehicle
JP2013017024A (en) * 2011-07-04 2013-01-24 Denso Corp Detector of object which approaches vehicle
WO2015174208A1 (en) * 2014-05-12 2015-11-19 ボッシュ株式会社 Image-recognition device and method for controlling same
JP2020025265A (en) * 2018-07-31 2020-02-13 ソニーセミコンダクタソリューションズ株式会社 Solid-state imaging device and imaging device


Similar Documents

Publication Publication Date Title
US9902323B2 (en) Periphery surveillance apparatus and program
JP4291741B2 (en) Lane departure warning device
US8306700B2 (en) Vehicle travel support device, vehicle, and vehicle travel support program
EP2881927B1 (en) Warning device for vehicle
JP5949861B2 (en) Vehicle approaching object detection device and vehicle approaching object detection method
JP2016119570A (en) Vehicle periphery monitoring device
CN111824129B (en) Image processing apparatus and image processing method
CN114066929A (en) Method of predicting a trajectory of a target vehicle relative to an autonomous vehicle
JPWO2015174208A1 (en) Image recognition apparatus and control method thereof
JP2011065219A (en) Device for estimation of road curvature
JP2008285083A (en) Parking support device
CN109017983B (en) Driving assistance system
US10917584B2 (en) Image display device
US11760275B2 (en) Image pickup system and image pickup device
JP5150958B2 (en) Vehicle travel support device, vehicle, vehicle travel support program
US10625678B2 (en) Image display device
JP4193740B2 (en) Nose view monitor device
US10540807B2 (en) Image processing device
JP3393427B2 (en) Curve radius estimation device and automatic steering control system with curve radius estimation device
US20210402987A1 (en) Image processor and image processing method
JP2000211543A (en) Vehicular driving supporting device
WO2023188685A1 (en) Computation processing device
JP2005041360A (en) Driving operation assisting device for vehicle, and vehicle having it
JP6941949B2 (en) Vehicle image display device
US12083959B2 (en) Image control system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23776835

Country of ref document: EP

Kind code of ref document: A1