WO2022254795A1 - Image processing device and image processing method - Google Patents
Image processing device and image processing method
- Publication number
- WO2022254795A1 (PCT/JP2022/004704)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- dimensional object
- object information
- information
- image processing
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/10—Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument
- G01C3/14—Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument with binocular observation at a single point, e.g. stereoscopic type
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/145—Illumination specially adapted for pattern recognition, e.g. using gratings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/72—Combination of two or more compensation controls
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/741—Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
- G01S13/931—Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
- G01S2013/9323—Alternative operation using light waves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10152—Varying illumination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- the present invention relates to an image processing device and an image processing method for recognizing the environment outside the vehicle based on an image of the outside of the vehicle.
- Flare (halation) is a phenomenon in which the surroundings of a high-intensity light source (for example, a brake lamp during braking) appear bright and blurry in the captured image.
- A known image processing apparatus (Patent Document 1) aims to image a high-brightness target with a single camera without causing halation. It extracts a vehicle from the images acquired by the camera, calculates a region whose brightness is to be corrected in and around the extracted vehicle, and, within that region, identifies the portions bright enough to cause halation, such as the headlight portions, the road-surface reflections of the headlights, and the flare portions of the headlights. The identified halation-causing portions are corrected to a brightness that does not cause halation by multiplying them by a predetermined reduction rate, and the corrected halation region is superimposed on the image acquired by the camera and output.
- An object of the present invention is to provide an image processing apparatus that, even if flare occurs when capturing an image of a three-dimensional object having a high-brightness light source, can accurately calculate the parallax information around the high-brightness light source and can accurately estimate the position, speed, shape, type, and the like of the three-dimensional object.
- To solve the above problem, the image processing device of the present invention includes: a camera that captures a first image with a first exposure amount and captures a second image with a second exposure amount smaller than the first exposure amount; a three-dimensional object extraction unit that extracts a first region in which a three-dimensional object exists from the first image and extracts a second region in which the three-dimensional object exists from the second image; a three-dimensional object information detection unit that detects first three-dimensional object information from the first region and detects second three-dimensional object information from the second region; and a three-dimensional object information integration unit that integrates the first three-dimensional object information and the second three-dimensional object information.
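For readers who find a sketch easier to follow than claim language, the claimed data flow can be pictured roughly as below. This is only an illustrative Python rendering; the names camera, extractor, detector, and integrator are assumptions, not identifiers from the patent.

```python
# Hypothetical rendering of the claimed data flow (names are illustrative, not from
# the patent): two exposures -> region extraction -> object-information detection
# -> integration of the two sets of three-dimensional object information.
def process_frame(camera, extractor, detector, integrator):
    p1 = camera.capture(exposure="normal")     # first image, first exposure amount
    p2 = camera.capture(exposure="low")        # second image, smaller second exposure amount
    region1 = extractor.extract(p1)            # first region containing a three-dimensional object
    region2 = extractor.extract(p2)            # second region containing the same object
    info1 = detector.detect(region1)           # first three-dimensional object information
    info2 = detector.detect(region2)           # second three-dimensional object information
    return integrator.integrate(info1, info2)  # integrated three-dimensional object information
```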
- According to the image processing apparatus of the present invention, even if flare occurs when imaging a three-dimensional object with a high-intensity light source, parallax information around the high-intensity light source can be accurately calculated, and the position, speed, shape, type, and the like of the three-dimensional object can be estimated with high accuracy.
- FIG. 1 is a hardware configuration diagram of a stereo camera device according to an embodiment.
- FIG. 2 is a functional block diagram of the stereo camera device of the embodiment.
- FIG. 3 is a processing flowchart of the stereo camera device of the embodiment.
- FIG. 4 is a flowchart of three-dimensional object recognition processing of a comparative example.
- FIG. 5A is a flowchart of three-dimensional object recognition processing according to the present invention.
- FIG. 5B shows an example of the procedure for calculating the amount of flare light.
- A functional block diagram shows details of the object recognition unit.
- FIG. 6B is a functional block diagram showing details of the normal shutter three-dimensional object processing unit.
- FIG. 6C is a functional block diagram showing details of the low-exposure shutter three-dimensional object processing unit.
- A stereo camera device, which is an embodiment of the image processing device of the present invention, is described below with reference to the drawings.
- FIG. 1 is a hardware configuration diagram showing an outline of a stereo camera device 10 of this embodiment.
- The stereo camera device 10 is an in-vehicle device that recognizes the environment outside the vehicle based on captured images of the outside of the vehicle; for example, it recognizes white lines on the road, pedestrians, other vehicles, other three-dimensional objects, traffic lights, signs, and lit lamps. The stereo camera device 10 then determines control policies such as acceleration/deceleration assistance and steering assistance of the own vehicle according to the recognized environment outside the vehicle.
- The stereo camera device 10 includes cameras 11 (a left camera 11L and a right camera 11R), an image input interface 12, an image processing unit 13, an arithmetic processing unit 14, a storage unit 15, a CAN interface 16, and an abnormality monitoring unit 17. The components from the image input interface 12 to the abnormality monitoring unit 17 are one or more computer units interconnected via an internal bus, and the various functions such as the image processing unit 13 and the arithmetic processing unit 14 are realized by a processor (CPU or the like) of the computer units executing predetermined programs; the present invention is described below while omitting such well-known techniques.
- The camera 11 is a stereo camera composed of a left camera 11L and a right camera 11R, installed on the vehicle so as to capture images in front of the vehicle. The imaging element of the left camera 11L alternately captures a first left image P1L with a normal exposure amount (hereinafter "normal shutter") and a second left image P2L with an exposure amount smaller than normal (hereinafter "low-exposure shutter"). In parallel with the imaging by the left camera 11L, the imaging element of the right camera 11R alternately captures a first right image P1R with the normal shutter and a second right image P2R with the low-exposure shutter.
- The "exposure amount" in this embodiment is a physical quantity obtained by multiplying the exposure time of the imaging element by the gain used to amplify the output of the imaging element. The exposure amount of the "normal shutter" is, for example, an exposure time of 20 msec multiplied by a gain 16 times the low-exposure shutter gain G described later, and the exposure amount of the "low-exposure shutter" is, for example, an exposure time of 0.1 msec multiplied by the predetermined low-exposure shutter gain G; however, the exposure amounts are not limited to these examples.
- Although it was explained above that each camera alternately captures the first image P1 (P1L, P1R) with the normal shutter and the second image P2 (P2L, P2R) with the low-exposure shutter, the capture of the second image P2 with the low-exposure shutter may be omitted under certain conditions. For example, the second image P2 may be captured only when it is determined from the brightness of the first image P1 or from an illuminance sensor that the outside light is dark, or when the lamp size W1 (see FIG. 5B) in the first image P1 is equal to or larger than a predetermined threshold Wth; a sketch of such a decision follows.
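As a rough, hedged illustration of the exposure amounts and of the optional skipping of the low-exposure frame described above, the following sketch uses the example values from the text (20 msec × 16G versus 0.1 msec × G) and a made-up lamp-size threshold; the function name and the threshold value are assumptions, not taken from the patent.

```python
# Exposure amount = exposure time x gain (example values from the text).
G = 1.0                                # low-exposure shutter gain (arbitrary unit)
normal_exposure = 20e-3 * (16 * G)     # "normal shutter": 20 msec exposure, 16x gain
low_exposure = 0.1e-3 * G              # "low-exposure shutter": 0.1 msec exposure, gain G
print(round(normal_exposure / low_exposure))  # -> 3200, i.e. the second exposure amount is far smaller

# Optional skipping of the low-exposure frame: capture the second image P2 only when
# the outside light is judged dark or the lamp size W1 in the normal-shutter image P1
# is at or above a threshold Wth (the threshold value here is purely illustrative).
W_TH = 40.0  # hypothetical lamp-size threshold [pixels]

def need_low_exposure_frame(lamp_size_w1_px: float, ambient_is_dark: bool) -> bool:
    """Return True when the second, low-exposure image P2 should be captured."""
    return ambient_is_dark or lamp_size_w1_px >= W_TH
```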
- the image input interface 12 is an interface that controls imaging by the camera 11 and captures the captured images P (P1 L , P2 L , P1 R , P2 R ). An image P captured through this interface is transmitted to the image processing unit 13 or the like through an internal bus.
- The image processing unit 13 compares the left images PL (P1L, P2L) captured by the left camera 11L with the right images PR (P1R, P2R) captured by the right camera 11R, corrects each image for device-specific deviations caused by the imaging elements, performs corrections such as noise interpolation, and then stores the corrected images P in the storage unit 15. The image processing unit 13 also finds mutually corresponding portions between the left and right images having the same exposure amount, calculates parallax information I (distance information for each point on the image), and stores it in the storage unit 15. Specifically, first parallax information I1 is calculated by comparing the first left image P1L and the first right image P1R captured at the same timing, and second parallax information I2 is calculated by comparing the second left image P2L and the second right image P2R captured at the same timing.
- the arithmetic processing unit 14 uses the image P and the parallax information I stored in the storage unit 15 to recognize various objects necessary for perceiving the environment around the vehicle.
- Various objects recognized here include people, other vehicles, other obstacles, traffic lights, signs, tail lamps and headlights of other vehicles, and the like. Some of these recognition results and intermediate calculation results are recorded in the storage unit 15 . Furthermore, the arithmetic processing unit 14 uses the object recognition result to determine the control policy of the host vehicle.
- The storage unit 15 is a storage device such as a semiconductor memory, and stores the corrected images P and the parallax information I output from the image processing unit 13, as well as the object recognition results and the control policy of the own vehicle output from the arithmetic processing unit 14.
- the CAN interface 16 is an interface for transmitting the object recognition result obtained by the arithmetic processing unit 14 and the control policy of the own vehicle to an in-vehicle network CAN (Controller Area Network).
- A control system (an ECU or the like) for controlling the driving system, braking system, steering system, and so on of the own vehicle is connected to the in-vehicle network CAN, and through this control system the stereo camera device 10 can execute driving support such as automatic braking and steering avoidance according to the environment outside the vehicle.
- the abnormality monitoring unit 17 monitors whether each unit in the stereo camera device 10 is operating abnormally or whether an error has occurred during data transfer, and is designed to prevent abnormal operations.
- As described above, the left camera 11L alternately captures the first left image P1L with the normal shutter and the second left image P2L with the low-exposure shutter, and the right camera 11R likewise alternately captures the first right image P1R with the normal shutter and the second right image P2R with the low-exposure shutter.
- the image correction unit 13a of the image processing unit 13 performs image correction processing to absorb the unique peculiarities of the image sensor for each image P (P1 L , P2 L , P1 R , P2 R ).
- the corrected image P is stored in the image buffer 15 a in the storage unit 15 .
- The parallax calculation unit 13b of the image processing unit 13 then collates the left and right images captured at the same timing (the first left image P1L with the first right image P1R, or the second left image P2L with the second right image P2R) and calculates parallax information I for each exposure amount. This calculation makes clear which location on the left image PL corresponds to which location on the right image PR, so the distance to an object at each image point can be obtained by the principle of triangulation, as sketched below. The parallax information I (I1, I2) obtained for each exposure amount is stored in the parallax buffer 15b of the storage unit 15.
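The triangulation step itself is standard stereo geometry. The sketch below is a generic illustration of the principle; the focal length, baseline, and pixel pitch are invented example values, not parameters disclosed in the patent.

```python
def distance_from_disparity(disparity_px: float,
                            focal_length_mm: float = 6.0,
                            baseline_m: float = 0.35,
                            pixel_pitch_mm: float = 0.00375) -> float:
    """Distance Z to a matched point from its stereo disparity: Z = f * B / d."""
    disparity_mm = disparity_px * pixel_pitch_mm
    if disparity_mm <= 0:
        return float("inf")  # no measurable disparity: effectively at infinity
    return focal_length_mm * baseline_m / disparity_mm

# Example: with the assumed optics, a 20-pixel disparity corresponds to about 28 m.
print(round(distance_from_disparity(20.0), 1))
```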
- the object recognition unit 14a of the arithmetic processing unit 14 uses the image P and the parallax information I stored in the storage unit 15 to perform object recognition processing for extracting an area in which a three-dimensional object exists.
- Three-dimensional objects to be recognized include, for example, people, cars, other three-dimensional objects, signs, traffic lights, and tail lamps; during recognition, the recognition dictionary 15c is referred to as necessary.
- After object recognition, the vehicle control unit 14b of the arithmetic processing unit 14 determines the control policy of the own vehicle in consideration of the object recognition result output from the object recognition unit 14a and the state of the own vehicle (speed, steering angle, and the like). For example, if there is a possibility of a collision with the preceding vehicle, it generates a control signal for warning the occupants to prompt collision-avoidance operations, or for avoiding the preceding vehicle through braking control and steering-angle control of the own vehicle, and outputs the signal to the control system (ECU or the like) via the CAN interface 16 and the in-vehicle network CAN.
- First, in step S1, the image correction unit 13a of the image processing unit 13 processes the right image PR. Specifically, the image correction unit 13a performs device-specific deviation correction, noise correction, and the like on the right image PR (P1R, P2R) captured by the right camera 11R, and stores the result in the image buffer 15a of the storage unit 15.
- Next, in step S2, the image correction unit 13a of the image processing unit 13 processes the left image PL. Specifically, the image correction unit 13a applies device-specific deviation correction, noise correction, and the like to the left image PL (P1L, P2L) captured by the left camera 11L, and stores the result in the image buffer 15a of the storage unit 15. Note that the order of steps S1 and S2 may be reversed.
- Then, in step S3, the parallax calculation unit 13b of the image processing unit 13 compares the corrected left image PL and right image PR stored in the image buffer 15a to calculate the parallax, and stores the obtained parallax information I in the parallax buffer 15b of the storage unit 15.
- Next, in step S4, the object recognition unit 14a of the arithmetic processing unit 14 performs object recognition using either the left image PL or the right image PR together with the parallax information I. Although three-dimensional objects such as a preceding vehicle are also recognized in this step, the details of the three-dimensional object recognition method of this embodiment are described later.
- Finally, in step S5, the vehicle control unit 14b of the arithmetic processing unit 14 determines a vehicle control policy based on the object recognition result and outputs the result to the in-vehicle network CAN.
- First, in step S41, a three-dimensional object in the first image P1 is extracted based on the parallax distribution in the first parallax information I1 calculated in step S3 of FIG. 3, and the shape of the three-dimensional object and the distance to it are estimated. The history of the distance to the three-dimensional object is also recorded, and the speed of the three-dimensional object is estimated from the history information.
- Next, in step S42, the recognition dictionary 15c is referenced to determine the type of the three-dimensional object detected in step S41.
- Finally, in step S43, the three-dimensional object information (shape, distance, speed) estimated in step S41 and the three-dimensional object type determined in step S42 are output to the vehicle control unit 14b in the subsequent stage.
- The image processing apparatus of the comparative example obtains three-dimensional object information (shape, distance, speed) and the three-dimensional object type by the above method. However, when flare occurs while imaging a high-brightness light source, the parallax information I calculated in step S3 becomes inaccurate around the high-brightness light source, and the three-dimensional object information around the high-brightness light source detected in step S41 also becomes inaccurate. As a result, in the image processing apparatus of the comparative example, the reliability of the three-dimensional object type around the high-brightness light source determined in step S42 is also low, and driving support control based on such low-reliability information may not be appropriate.
- In the present embodiment, the object recognition unit 14a detects three-dimensional object information based on each of the parallax distributions in the two types of parallax information I calculated in step S3 of FIG. 3. Specifically, based on the parallax distribution in the first parallax information I1, the region of the three-dimensional object is extracted from the first image P1, and the shape of the three-dimensional object and the distance to it are estimated; the history of the distance to the three-dimensional object in the first image P1 is recorded, and the speed of the three-dimensional object is estimated from the history information. Similarly, based on the parallax distribution in the second parallax information I2, the region of the three-dimensional object in the second image P2 is extracted, and the shape of the three-dimensional object and the distance to it are estimated; the history of the distance to the three-dimensional object in the second image P2 is recorded, and the speed of the three-dimensional object is estimated from the history information.
- In step S41, the object recognition unit 14a also detects light spots in the images P. FIG. 5B(a) shows an example of the light spot group detected from the first image P1 with the normal shutter, and FIG. 5B(b) shows an example of the light spot group detected from the second image P2 with the low-exposure shutter.
- In step S4a, when a plurality of light spot groups have been extracted in step S41, the object recognition unit 14a determines from the distance and position of each light spot group whether they form a lamp pair of the same vehicle. When they are determined to be a lamp pair of the same vehicle, the coordinate information of the lamp pair is output and the process proceeds to step S4b. On the other hand, if they are determined not to belong to the same vehicle, the process proceeds to step S42.
- In step S4b, the object recognition unit 14a uses the position information of the lamp pairs between the two shutters to link three-dimensional object information that is presumed to belong to the same object. For example, the first three-dimensional object information of the lamp pair L1 illustrated in FIG. 5B(a) and the second three-dimensional object information of the lamp pair L2 illustrated in FIG. 5B(b) are linked; one possible way to realize this pairing and linking is sketched below.
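The following hypothetical sketch shows one way the pairing of step S4a and the linking of step S4b could be realized; the matching criteria and tolerance values are assumptions for illustration only.

```python
from typing import List, Optional, Tuple

Spot = Tuple[float, float, float]  # (x_px, y_px, distance_m) of one detected light spot

def find_lamp_pair(spots: List[Spot],
                   max_dist_diff_m: float = 2.0,
                   max_row_diff_px: float = 10.0) -> Optional[Tuple[Spot, Spot]]:
    """Step S4a sketch: pick two light spots at a similar distance and image row
    as the left/right lamp pair of one vehicle."""
    for i in range(len(spots)):
        for j in range(i + 1, len(spots)):
            a, b = spots[i], spots[j]
            if abs(a[2] - b[2]) <= max_dist_diff_m and abs(a[1] - b[1]) <= max_row_diff_px:
                return a, b
    return None

def pairs_belong_to_same_vehicle(pair_p1: Tuple[Spot, Spot],
                                 pair_p2: Tuple[Spot, Spot],
                                 max_center_shift_px: float = 20.0) -> bool:
    """Step S4b sketch: link a lamp pair from the normal-shutter image P1 with one from
    the low-exposure image P2 when their pair centers lie close together in the image."""
    cx1 = (pair_p1[0][0] + pair_p1[1][0]) / 2.0
    cx2 = (pair_p2[0][0] + pair_p2[1][0]) / 2.0
    return abs(cx1 - cx2) <= max_center_shift_px
```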
- In step S4c, the object recognition unit 14a determines whether the amount of flare light from the lamp pair is greater than or equal to a threshold. If the amount of flare light is equal to or greater than the threshold, the process proceeds to step S4d; if it is less than the threshold, the process proceeds to step S42.
- The determination of the amount of flare light in step S4c is performed, for example, as follows. First, a luminance threshold for extracting a lamp is determined in advance for each exposure amount, and the region of pixels exceeding that luminance threshold is extracted as a lamp (see FIGS. 5B(a) and (b)). Next, the lamp sizes W1 and W2 are obtained from the width of each lamp. Then, the difference between the lamp size W1 of the normal shutter (first image P1) and the lamp size W2 of the low-exposure shutter (second image P2) is calculated as the amount of flare light, and it is determined whether this amount of flare light is greater than or equal to the threshold. As a method of calculating the difference, as illustrated in FIG. 5B(c), the area obtained by subtracting the circular area of lamp size W2 from the circular area of lamp size W1 can be defined as the amount of flare light; a short sketch of this calculation follows.
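Under the circular-area interpretation of FIG. 5B(c), the flare-light amount and the step S4c decision reduce to a few lines. The threshold value below is a placeholder; the patent does not give a concrete number.

```python
import math

def flare_light_amount(w1_px: float, w2_px: float) -> float:
    """Flare-light amount: circular area for lamp size W1 (normal shutter) minus
    circular area for lamp size W2 (low-exposure shutter), in square pixels."""
    return math.pi * (w1_px / 2.0) ** 2 - math.pi * (w2_px / 2.0) ** 2

FLARE_TH = 500.0  # hypothetical threshold [px^2]

def flare_exceeds_threshold(w1_px: float, w2_px: float) -> bool:
    """Step S4c decision: proceed to the integration of step S4d only when the
    flare-light amount is at or above the threshold."""
    return flare_light_amount(w1_px, w2_px) >= FLARE_TH

# Example: a lamp that blooms from 18 px to 40 px wide gives a flare amount of about 1000 px^2.
print(round(flare_light_amount(40.0, 18.0)))
```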
- The description so far has assumed that either the left image PL or the right image PR is selected and the processing in FIG. 5A is performed on it; alternatively, the processing in FIG. 5A may be performed on both images, so that the amount of flare light is calculated for each of the left and right images. In that case, even if one of the left and right cameras fails, the desired control can be continued using the captured images of the camera that is operating normally.
- In step S4d, the object recognition unit 14a integrates the first three-dimensional object information of the normal shutter (first image P1) and the second three-dimensional object information of the low-exposure shutter (second image P2) linked in step S4b to calculate integrated three-dimensional object information. Specifically, a weighted average of the first three-dimensional object information and the second three-dimensional object information is taken, using an integration coefficient set individually for each piece of three-dimensional object information (position, speed, shape, and so on).
- For example, when the integration coefficient for distance is 0.3, the distance D1 to the three-dimensional object indicated by the first three-dimensional object information and the distance D2 indicated by the second three-dimensional object information are integrated by Formula 1 below.
- Distance D after integration = Distance D1 × (1 − 0.3) + Distance D2 × 0.3 … (Formula 1)
- Since the first image P1 with the normal shutter and the second image P2 with the low-exposure shutter are captured alternately, there is a slight time difference between the two images. It is therefore more desirable to perform the integration after correcting each piece of information in consideration of the time difference between the imaging timings of the first image P1 and the second image P2 and the relative relationship between the target object (the preceding vehicle) and the host vehicle.
- The integration coefficient need not be a fixed value; it may, for example, be increased or decreased in proportion to the distance to the three-dimensional object. The integration coefficient may also be set to a value corresponding to the amount of flare light by examining in advance the correlation between the amount of flare light and the accuracy of each piece of object information. Furthermore, when the amount of flare light is calculated using both the left and right cameras, the variation in the amount of flare light between them may also be reflected in the integration coefficient. When the amount of flare light is large, an improvement in accuracy after integration can be expected by giving a larger weight to whichever of the normal-shutter information and the low-exposure-shutter information is the more accurate.
- The integration coefficient may also be set individually for each further subdivided piece of information (position, speed, shape, and so on); for a shape, for example, the height and the width may each be given their own coefficient. Depending on whether the amount of flare light is large or small, the integration coefficient may even be set so that the result of one of the shutters is not used at all during integration. A minimal sketch of this per-attribute weighted integration follows.
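The sketch below illustrates the per-attribute weighted integration of step S4d. The 0.3 coefficient for distance follows Formula 1; the other coefficients, the attribute set, and the names are illustrative assumptions. Making a coefficient depend on distance or on the flare-light amount, as suggested above, would simply replace the fixed numbers with functions.

```python
from dataclasses import dataclass

@dataclass
class ObjectInfo:
    distance: float  # distance to the three-dimensional object [m]
    speed: float     # relative speed [m/s]
    height: float    # shape: height [m]
    width: float     # shape: width [m]

# Integration coefficient k per attribute (k = 0 uses only the normal-shutter value,
# k = 1 only the low-exposure value). 0.3 for distance follows Formula 1; the rest
# are placeholders.
COEFF = {"distance": 0.3, "speed": 0.3, "height": 0.2, "width": 0.5}

def integrate(info1: ObjectInfo, info2: ObjectInfo) -> ObjectInfo:
    """Weighted average of the first (normal shutter) and second (low-exposure shutter)
    three-dimensional object information, attribute by attribute."""
    merged = {}
    for name, k in COEFF.items():
        merged[name] = getattr(info1, name) * (1.0 - k) + getattr(info2, name) * k
    return ObjectInfo(**merged)

# Example: distances of 30.0 m and 28.0 m integrate to 30 x 0.7 + 28 x 0.3 = 29.4 m.
print(round(integrate(ObjectInfo(30.0, -1.2, 1.5, 1.8), ObjectInfo(28.0, -1.0, 1.4, 1.7)).distance, 2))
```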
- In step S42, the object recognition unit 14a determines the type of the three-dimensional object using the integrated three-dimensional object information when the three-dimensional object information was integrated in step S4d; otherwise, the first three-dimensional object information estimated in step S41 is used to determine the type of the three-dimensional object.
- In step S43, the object recognition unit 14a passes the first three-dimensional object information estimated in step S41, or the integrated three-dimensional object information obtained in step S4d, together with the three-dimensional object type determined in step S42, to the vehicle control unit 14b in the subsequent stage.
- In this way, the first three-dimensional object information based on the first image P1, which is susceptible to flare, is supplemented or replaced by the second three-dimensional object information based on the second image P2, which is less susceptible to flare.
- The storage unit 15 outputs the first image P1 and the first parallax information I1 of the normal shutter, and the second image P2 and the second parallax information I2 of the low-exposure shutter, to the object recognition unit 14a.
- the first image P1 of the normal shutter and the first parallax information I1 are input to the normal shutter three-dimensional object processing unit 21 in the object recognition unit 14a.
- FIG. 6B is an example of the configuration of the normal shutter three-dimensional object processing unit 21.
- The normal shutter three-dimensional object processing unit 21 performs the three-dimensional object detection of step S41 using the first image P1 and the first parallax information I1, and outputs the first three-dimensional object information.
- the lamp detection section 21e performs lamp detection in step S41 on the first image P1, and outputs first lamp information.
- the low-exposure shutter solid object processing unit 22 in the object recognition unit 14a receives the second image P2 of the low-exposure shutter and the second parallax information I2.
- FIG. 6C is an example of the configuration of the low-exposure shutter three-dimensional object processing unit 22.
- The low-exposure shutter three-dimensional object processing unit 22 performs the three-dimensional object detection of step S41 using the second image P2 and the second parallax information I2, and outputs the second three-dimensional object information.
- the lamp detection unit 22e performs the lamp detection in step S41 on the second image P2, and outputs the second lamp information.
- the lamp pair detection unit 23 performs lamp pair detection in step S4a on the first image P1 based on the first lamp information from the normal shutter three-dimensional object processing unit 21 (see FIG. 5B(a)). Further, the lamp pair detection unit 24 performs lamp pair detection in step S4a on the second image P2 based on the second lamp information from the low-exposure shutter three-dimensional object processing unit 22 (see FIG. 5B(b)).
- the lamp detection result linking unit 25 performs linking processing in step S4b for the three-dimensional object information of each lamp pair detected by the lamp pair detecting unit 23 and the lamp pair detecting unit 24.
- the flare determination unit 26 calculates the amount of flare light by the method illustrated in FIG. 5B(c).
- the three-dimensional object information integration unit 27 performs the processing of steps S4c and S4d.
- When the amount of flare light calculated by the flare determination unit 26 is equal to or greater than the predetermined threshold, the first three-dimensional object information from the normal shutter three-dimensional object processing unit 21 and the second three-dimensional object information from the low-exposure shutter three-dimensional object processing unit 22 are integrated under a predetermined rule. When the amount of flare light calculated by the flare determination unit 26 is less than the predetermined threshold, the first three-dimensional object information from the normal shutter three-dimensional object processing unit 21 is output as it is.
- the type determination unit 28 performs the process of step S42, and determines the type of the three-dimensional object based on the object information output by the three-dimensional object information integration unit 27 and the recognition dictionary 15c.
- The detection result output unit 29 executes step S43, and outputs the three-dimensional object information (position, speed, shape, and so on) from the three-dimensional object information integration unit 27 and the three-dimensional object type from the type determination unit 28 to the vehicle control unit 14b.
- According to the stereo camera device 10 of the present embodiment described above, even if flare occurs when capturing an image of a three-dimensional object with a high-intensity light source, the parallax information around the high-intensity light source can be accurately calculated, and the position, speed, shape, type, and the like of the three-dimensional object can be estimated with high accuracy.
- In the embodiment described above, the image processing apparatus of the present invention is a stereo camera apparatus; however, the image processing apparatus of the present invention may instead be a monocular camera apparatus that alternately captures images with a normal shutter and a low-exposure shutter. In that case as well, the three-dimensional object information integration method of the present invention described above can be used.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- Artificial Intelligence (AREA)
- Image Processing (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides an image processing device which, even in the case of flare occurring at photographing of a stereoscopic object provided with a high luminance light source, can accurately calculate disparity information on the surroundings of the high luminance light source and can estimate, with accuracy, the position, the speed, the type, and the like of the stereoscopic object. The image processing device comprises: a camera that captures a first image with a first exposure value and captures a second image with a second exposure value smaller than the first exposure value; a stereoscopic object extraction unit that extracts, from the first image, a first region where a stereoscopic object is present and extracts, from the second image, a second region where the stereoscopic object is present; a stereoscopic object information detection unit that detects first stereoscopic object information from the first region and detects second stereoscopic object information from the second region; and a stereoscopic object information integration unit that integrates the first stereoscopic object information and the second stereoscopic object information.
Description
The present invention relates to an image processing device and an image processing method for recognizing the environment outside a vehicle based on images captured of the outside of the vehicle.
In recent years, vehicles equipped with driving support functions that assist acceleration/deceleration and steering operations according to the environment outside the vehicle have become widespread, and the need for driving support during nighttime driving is also increasing. To provide appropriate driving assistance at night, the distance to the preceding vehicle and its speed must be estimated accurately regardless of the brightness of the preceding vehicle's brake lamps. Recent image processing devices are therefore required to detect light sources accurately from low to high brightness, and technologies such as HDR (High Dynamic Range), which widens the dynamic range during imaging, are being used.
However, when the dynamic range during imaging is widened, a flare (halation) phenomenon, in which the surroundings of the light source become bright and blurry, tends to occur when imaging a high-intensity light source (for example, a brake lamp during braking). When flare occurs, there is the problem that a three-dimensional object cannot be detected correctly in the flare portion.
The image processing apparatus in the abstract of Patent Document 1 therefore aims to "provide an image processing apparatus that, even when there is a high-brightness target at night or the like, can image the target with a single camera without causing halation." It is "equipped with a camera mounted on a vehicle for acquiring images of the surroundings of the vehicle and a control unit; the control unit extracts a vehicle from the images acquired by the camera and, based on the brightness in the acquired images of the vehicle surroundings, calculates a region whose brightness is to be corrected in the extracted vehicle and its vicinity. Within the calculated region, it identifies the portions whose brightness causes halation, such as the headlight portions, the road-surface reflections of the headlights, and the flare portions of the headlights, corrects the calculated halation-causing portions to a brightness that does not cause halation by multiplying them by a predetermined reduction rate, and superimposes the corrected halation region on the image acquired by the camera for output."
However, because the image processing apparatus of Patent Document 1 corrects brightness only in the regions that cause halation, the luminance changes discontinuously at the boundaries of the corrected regions. As a result, the parallax information near the boundaries (the distance information for each point on the image) cannot be calculated accurately, and the distance and speed of an object near the boundaries (such as a preceding vehicle) cannot be estimated correctly.
An object of the present invention is therefore to provide an image processing apparatus that, even if flare occurs when capturing an image of a three-dimensional object having a high-brightness light source, can accurately calculate the parallax information around the high-brightness light source and can accurately estimate the position, speed, shape, type, and the like of the three-dimensional object.
To solve the above problem, the image processing device of the present invention includes: a camera that captures a first image with a first exposure amount and captures a second image with a second exposure amount smaller than the first exposure amount; a three-dimensional object extraction unit that extracts a first region in which a three-dimensional object exists from the first image and extracts a second region in which the three-dimensional object exists from the second image; a three-dimensional object information detection unit that detects first three-dimensional object information from the first region and detects second three-dimensional object information from the second region; and a three-dimensional object information integration unit that integrates the first three-dimensional object information and the second three-dimensional object information.
According to the image processing apparatus of the present invention, even if flare occurs when imaging a three-dimensional object having a high-intensity light source, the parallax information around the high-intensity light source can be accurately calculated, and the position, speed, shape, type, and the like of the three-dimensional object can be estimated with high accuracy.
A stereo camera device, which is an embodiment of the image processing device of the present invention, is described below with reference to the drawings.
<Hardware Configuration Diagram of Stereo Camera Device 10>
FIG. 1 is a hardware configuration diagram showing an outline of the stereo camera device 10 of this embodiment.
The stereo camera device 10 is an in-vehicle device that recognizes the environment outside the vehicle based on captured images of the outside of the vehicle; for example, it recognizes white lines on the road, pedestrians, other vehicles, other three-dimensional objects, traffic lights, signs, and lit lamps. The stereo camera device 10 then determines control policies such as acceleration/deceleration assistance and steering assistance of the own vehicle according to the recognized environment outside the vehicle.
As shown in FIG. 1, the stereo camera device 10 includes cameras 11 (a left camera 11L and a right camera 11R), an image input interface 12, an image processing unit 13, an arithmetic processing unit 14, a storage unit 15, a CAN interface 16, and an abnormality monitoring unit 17. The components from the image input interface 12 to the abnormality monitoring unit 17 are one or more computer units interconnected via an internal bus, and the various functions such as the image processing unit 13 and the arithmetic processing unit 14 are realized by a processor (CPU or the like) of the computer units executing predetermined programs; the present invention is described below while omitting such well-known techniques.
The camera 11 is a stereo camera composed of a left camera 11L and a right camera 11R, installed on the vehicle so as to capture images in front of the vehicle. The imaging element of the left camera 11L alternately captures a first left image P1L with a normal exposure amount (hereinafter "normal shutter") and a second left image P2L with an exposure amount smaller than normal (hereinafter "low-exposure shutter"). In parallel with the imaging by the left camera 11L, the imaging element of the right camera 11R alternately captures a first right image P1R with the normal shutter and a second right image P2R with the low-exposure shutter.
Here, the "exposure amount" in this embodiment is a physical quantity obtained by multiplying the exposure time of the imaging element by the gain used to amplify the output of the imaging element. The exposure amount of the "normal shutter" is, for example, an exposure time of 20 msec multiplied by a gain 16 times the low-exposure shutter gain G described later, and the exposure amount of the "low-exposure shutter" is, for example, an exposure time of 0.1 msec multiplied by the predetermined low-exposure shutter gain G; however, the exposure amounts are not limited to these examples.
Although it was explained above that each camera alternately captures the first image P1 (P1L, P1R) with the normal shutter and the second image P2 (P2L, P2R) with the low-exposure shutter, the capture of the second image P2 with the low-exposure shutter may be omitted under certain conditions. For example, the second image P2 may be captured only when it is determined from the brightness of the first image P1 or from an illuminance sensor that the outside light is dark, or when the lamp size W1 (see FIG. 5B) in the first image P1 is equal to or larger than a predetermined threshold Wth.
The image input interface 12 is an interface that controls the imaging by the camera 11 and takes in the captured images P (P1L, P2L, P1R, P2R). The images P taken in through this interface are transmitted to the image processing unit 13 and the like through the internal bus.
The image processing unit 13 compares the left images PL (P1L, P2L) captured by the left camera 11L with the right images PR (P1R, P2R) captured by the right camera 11R, corrects each image for device-specific deviations caused by the imaging elements, performs corrections such as noise interpolation, and then stores the corrected images P in the storage unit 15.
The image processing unit 13 also finds mutually corresponding portions between the left and right images having the same exposure amount, calculates parallax information I (distance information for each point on the image), and stores it in the storage unit 15. Specifically, first parallax information I1 is calculated by comparing the first left image P1L and the first right image P1R captured at the same timing, and second parallax information I2 is calculated by comparing the second left image P2L and the second right image P2R captured at the same timing.
The arithmetic processing unit 14 uses the images P and the parallax information I stored in the storage unit 15 to recognize the various objects necessary for perceiving the environment around the vehicle. The objects recognized here include people, other vehicles, other obstacles, traffic lights, signs, the tail lamps and headlights of other vehicles, and the like. Some of these recognition results and intermediate calculation results are recorded in the storage unit 15. The arithmetic processing unit 14 furthermore uses the object recognition results to determine the control policy of the own vehicle.
The storage unit 15 is a storage device such as a semiconductor memory, and stores the corrected images P and the parallax information I output from the image processing unit 13, as well as the object recognition results and the control policy of the own vehicle output from the arithmetic processing unit 14.
The CAN interface 16 is an interface for transmitting the object recognition results obtained by the arithmetic processing unit 14 and the control policy of the own vehicle to the in-vehicle network CAN (Controller Area Network).
A control system (an ECU or the like) for controlling the driving system, braking system, steering system, and so on of the own vehicle is connected to the in-vehicle network CAN, and through this control system the stereo camera device 10 can execute driving support such as automatic braking and steering avoidance according to the environment outside the vehicle.
The abnormality monitoring unit 17 monitors whether each unit in the stereo camera device 10 is operating abnormally and whether errors have occurred during data transfer, and serves as a mechanism for preventing abnormal operation.
<Functional Block Diagram of Stereo Camera Device 10>
Next, the basic processing flow of the stereo camera device 10 is described with reference to the functional block diagram of FIG. 2.
As described above, the left camera 11L alternately captures the first left image P1L with the normal shutter and the second left image P2L with the low-exposure shutter, and the right camera 11R likewise alternately captures the first right image P1R with the normal shutter and the second right image P2R with the low-exposure shutter.
The image correction unit 13a of the image processing unit 13 performs image correction processing on each image P (P1L, P2L, P1R, P2R) to absorb the peculiarities inherent to each imaging element. The corrected images P are stored in the image buffer 15a in the storage unit 15.
After that, the parallax calculation unit 13b of the image processing unit 13 collates the left and right images captured at the same timing (the first left image P1L with the first right image P1R, or the second left image P2L with the second right image P2R) and calculates parallax information I for each exposure amount. This calculation makes clear which location on the left image PL corresponds to which location on the right image PR, so the distance to an object at each image point can be obtained by the principle of triangulation. The parallax information I (I1, I2) obtained here for each exposure amount is stored in the parallax buffer 15b of the storage unit 15.
The object recognition unit 14a of the arithmetic processing unit 14 then performs object recognition processing that extracts regions in which three-dimensional objects exist, using the images P and the parallax information I stored in the storage unit 15. The three-dimensional objects to be recognized include, for example, people, cars, other three-dimensional objects, signs, traffic lights, and tail lamps; during recognition, the recognition dictionary 15c is referred to as necessary.
After object recognition, the vehicle control unit 14b of the arithmetic processing unit 14 determines the control policy of the own vehicle in consideration of the object recognition result output from the object recognition unit 14a and the state of the own vehicle (speed, steering angle, and the like). For example, if there is a possibility of a collision with the preceding vehicle, it generates a control signal for warning the occupants to prompt collision-avoidance operations, or for avoiding the preceding vehicle through braking control and steering-angle control of the own vehicle, and outputs the signal to the control system (ECU or the like) via the CAN interface 16 and the in-vehicle network CAN.
<Processing Flowchart of the Image Processing Unit and the Arithmetic Processing Unit>
Next, the processing by the image processing unit 13 and the arithmetic processing unit 14 is described using the flowchart of FIG. 3.
First, in step S1, the image correction unit 13a of the image processing unit 13 processes the right image PR. Specifically, the image correction unit 13a performs device-specific deviation correction, noise correction, and the like on the right image PR (P1R, P2R) captured by the right camera 11R, and stores the result in the image buffer 15a of the storage unit 15.
Next, in step S2, the image correction unit 13a of the image processing unit 13 processes the left image PL. Specifically, the image correction unit 13a applies device-specific deviation correction, noise correction, and the like to the left image PL (P1L, P2L) captured by the left camera 11L, and stores the result in the image buffer 15a of the storage unit 15. Note that the order of steps S1 and S2 may be reversed.
Then, in step S3, the parallax calculation unit 13b of the image processing unit 13 compares the corrected left image PL and right image PR stored in the image buffer 15a to calculate the parallax, and stores the obtained parallax information I in the parallax buffer 15b of the storage unit 15.
Next, in step S4, the object recognition unit 14a of the arithmetic processing unit 14 performs object recognition using either the left image PL or the right image PR together with the parallax information I. Although three-dimensional objects such as a preceding vehicle are also recognized in this step, the details of the three-dimensional object recognition method of this embodiment are described later.
Finally, in step S5, the vehicle control unit 14b of the arithmetic processing unit 14 determines a vehicle control policy based on the object recognition result and outputs the result to the in-vehicle network CAN.
<Three-Dimensional Object Recognition Method of the Comparative Example>
Here, the details of the three-dimensional object recognition method performed at the timing of step S4 in FIG. 3 by the image processing apparatus of a comparative example are described with reference to the flowchart of FIG. 4. Since the image processing apparatus of the comparative example does not capture the second images P2 (the second left image P2L and the second right image P2R) with the low-exposure shutter, it is assumed that only the first parallax information I1 based on the first images P1 is calculated in step S3 of FIG. 3.
ここで、図4のフローチャートを用いて、比較例の画像処理装置により、図3のステップS4のタイミングで実施される立体物認識方法の詳細を説明する。なお、比較例の画像処理装置では、低露光シャッタの第2画像P2(第2左画像P2L、第2右画像P2R)を撮像しないため、図3のステップS3では、通常シャッタの第1画像P1に基づく第1視差情報I1のみが演算されているものとする。 <Three-dimensional object recognition method of comparative example>
Here, the details of the three-dimensional object recognition method performed at the timing of step S4 in FIG. 3 by the image processing apparatus of the comparative example will be described with reference to the flowchart in FIG. Note that the image processing apparatus of the comparative example does not capture the second image P2 (the second left image P2 L , the second right image P2 R ) with the low-exposure shutter. Assume that only the first parallax information I1 based on the image P1 is calculated.
First, in step S41, a three-dimensional object in the first image P1 is extracted based on the parallax distribution in the first parallax information I1 calculated in step S3 of FIG. 3, and the shape of the three-dimensional object and the distance to it are estimated. In addition, the history of the distance to the three-dimensional object is recorded, and the speed of the three-dimensional object is estimated from that history.
Next, in step S42, the recognition dictionary 15c is referenced to determine the type of the three-dimensional object detected in step S41.
Finally, in step S43, the three-dimensional object information (shape, distance, speed) estimated in step S41 and the three-dimensional object type determined in step S42 are output to the subsequent vehicle control unit 14b.
The image processing apparatus of the comparative example obtains the three-dimensional object information (shape, distance, speed) and the three-dimensional object type by the above method. However, if flare occurs when a high-brightness light source is imaged, the parallax information I calculated in step S3 becomes inaccurate around the high-brightness light source, and the three-dimensional object information detected around the high-brightness light source in step S41 also becomes inaccurate. As a result, in the image processing apparatus of the comparative example, the three-dimensional object type determined around the high-brightness light source in step S42 also has low reliability, and driving support control based on such low-reliability information may not be appropriate.
<Three-dimensional object recognition method of the present embodiment>
Next, the details of the three-dimensional object recognition method performed at the timing of step S4 in FIG. 3 by the stereo camera device 10 of the present embodiment will be described with reference to the flowchart in FIG. 5A. In this embodiment, in addition to the first images P1 (P1L, P1R) of the normal shutter, the second images P2 (P2L, P2R) of the low-exposure shutter are also captured, so in step S3 of FIG. 3 both the first parallax information I1 based on the first image P1 and the second parallax information I2 based on the second image P2 are calculated.
First, in step S41, the object recognition unit 14a detects three-dimensional object information based on each of the parallax distributions in the two types of parallax information I calculated in step S3 of FIG. 3. Specifically, the region of a three-dimensional object in the first image P1 is extracted based on the parallax distribution in the first parallax information I1, and the shape of the three-dimensional object and the distance to it are estimated. The history of the distance to the three-dimensional object in the first image P1 is also recorded, and the speed of the three-dimensional object is estimated from that history. Similarly, the region of the three-dimensional object in the second image P2 is extracted based on the parallax distribution in the second parallax information I2, and the shape of the three-dimensional object and the distance to it are estimated. The history of the distance to the three-dimensional object in the second image P2 is also recorded, and the speed of the three-dimensional object is estimated from that history.
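The text does not spell out how distance and speed are derived from the parallax. The sketch below assumes the standard pinhole-stereo relation Z = f × B / d and a simple finite-difference speed estimate over the recorded distance history; the focal length, baseline, frame interval, and disparity values are illustrative assumptions.

```python
# Sketch of step-S41-style distance and speed estimation from disparity.
def distance_from_disparity(disparity_px, focal_px=1400.0, baseline_m=0.35):
    """Distance to the object region from its representative disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def speed_from_history(distance_history_m, frame_interval_s=1 / 30):
    """Relative speed from consecutive distance measurements (+ = receding)."""
    if len(distance_history_m) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(distance_history_m, distance_history_m[1:])]
    return (sum(deltas) / len(deltas)) / frame_interval_s  # m/s

# Example: a preceding vehicle whose disparity shrinks over three frames.
history = [distance_from_disparity(d) for d in (28.0, 27.5, 27.0)]
print(history, speed_from_history(history))
```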
In addition, in step S41, the object recognition unit 14a also detects light spots in the image P. Here, FIG. 5B(a) is an example of the light spot group detected from the first image P1 of the normal shutter, and FIG. 5B(b) is an example of the light spot group detected from the second image P2 of the low-exposure shutter.
Next, in step S4a, when a plurality of light spot groups have been extracted in step S41, the object recognition unit 14a determines from the distance and position of each light spot group whether they form a lamp pair of the same vehicle. If they are determined to be a lamp pair of the same vehicle, coordinate information for the lamp pair is output and the process proceeds to step S4b. Otherwise, the process proceeds to step S42.
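The patent only states that the lamp-pair judgment uses the distance and position of each light spot group. The sketch below is one possible criterion under that description (similar range, similar image height, plausible lateral separation); the criterion itself and all tolerance values are assumptions for illustration.

```python
# Sketch of a step-S4a-style lamp-pair test.
from dataclasses import dataclass

@dataclass
class LightSpot:
    u: float           # horizontal image position [px]
    v: float           # vertical image position [px]
    distance_m: float  # distance estimated from parallax

def is_lamp_pair(a: LightSpot, b: LightSpot,
                 max_dist_diff_m=2.0, max_row_diff_px=10.0,
                 min_sep_px=20.0, max_sep_px=400.0) -> bool:
    same_range = abs(a.distance_m - b.distance_m) <= max_dist_diff_m
    same_height = abs(a.v - b.v) <= max_row_diff_px
    lateral = abs(a.u - b.u)
    plausible_width = min_sep_px <= lateral <= max_sep_px
    return same_range and same_height and plausible_width

left_lamp = LightSpot(u=310.0, v=200.0, distance_m=27.0)
right_lamp = LightSpot(u=410.0, v=202.0, distance_m=27.4)
print(is_lamp_pair(left_lamp, right_lamp))  # True for this example
```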
In step S4b, the object recognition unit 14a uses the position information of the lamp pairs across the two shutters to link the pieces of three-dimensional object information presumed to belong to the same object. Here, it is assumed that the first three-dimensional object information of the lamp pair L1 illustrated in FIG. 5B(a) and the second three-dimensional object information of the lamp pair L2 illustrated in FIG. 5B(b) are linked.
In step S4c, the object recognition unit 14a determines whether the amount of flare light from the lamp pair is greater than or equal to a threshold. If the amount of flare light from the lamp pair is equal to or greater than the threshold, the process proceeds to step S4d, and if the amount of flare light is less than the threshold, the process proceeds to step S42.
Here, the determination of the amount of flare light in step S4c is performed, for example, as follows. First, a luminance threshold for extracting lamps is determined in advance for each exposure amount, and the region of pixels exceeding that luminance threshold is extracted as a lamp (see FIGS. 5B(a) and (b)). Next, the lamp sizes W1 and W2 are obtained from the width of each lamp. Then, the difference between the lamp size W1 of the normal shutter (first image P1) and the lamp size W2 of the low-exposure shutter (second image P2) is calculated as the amount of flare light, and it is determined whether this amount of flare light is equal to or greater than the threshold. As a method of calculating the difference, for example, the area obtained by subtracting the area of a circular region of lamp size W2 from the area of a circular region of lamp size W1 can be defined as the amount of flare light, as illustrated in FIG. 5B(c).
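The following is a minimal sketch of this step-S4c calculation, following the circular-area example in the text; the threshold value and the sample lamp widths are illustrative assumptions.

```python
# Sketch of the flare-light-amount test: area of the W1 circle minus area of
# the W2 circle, compared against a threshold.
import math

def flare_amount(lamp_width_normal_px: float, lamp_width_low_px: float) -> float:
    """Flare amount in square pixels, per the circular-area definition."""
    area_normal = math.pi * (lamp_width_normal_px / 2) ** 2
    area_low = math.pi * (lamp_width_low_px / 2) ** 2
    return area_normal - area_low

FLARE_THRESHOLD_PX2 = 500.0  # assumed value; would be tuned per camera

w1, w2 = 60.0, 40.0  # lamp widths measured from the normal and low-exposure shutters
flare = flare_amount(w1, w2)
use_integration = flare >= FLARE_THRESHOLD_PX2  # True -> proceed to step S4d
print(flare, use_integration)
```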
The description so far has assumed that either the left image PL or the right image PR is selected and the processing of FIG. 5A is performed on it. When the processing of FIG. 5A is performed on both the left image PL and the right image PR, the amount of flare light may be calculated for each of the left and right images. In this way, even if one of the left and right cameras has failed, the desired control can be continued using the images captured by the healthy camera.
In step S4d, the object recognition unit 14a calculates integrated three-dimensional object information by integrating the first three-dimensional object information of the normal shutter (first image P1) and the second three-dimensional object information of the low-exposure shutter (second image P2) that were linked in step S4b. Specifically, a weighted average of the first three-dimensional object information and the second three-dimensional object information is taken using an integration coefficient set individually for each piece of three-dimensional object information (position, speed, shape, etc.). For example, when integrating the distance to the three-dimensional object, which is one kind of three-dimensional object information, if the integration coefficient set for distance is 0.3, the distance D1 to the three-dimensional object indicated by the first three-dimensional object information and the distance D2 indicated by the second three-dimensional object information are integrated by the following Formula 1.
Integrated distance D = distance D1 × (1 − 0.3) + distance D2 × 0.3 ... (Formula 1)
Since the first image P1 of the normal shutter and the second image P2 of the low-exposure shutter are captured alternately, there is a slight time difference between the two images. It is therefore more desirable to correct each piece of information for the time difference between the imaging timings of the first image P1 and the second image P2, and for the relative relationship between the target object (preceding vehicle) and the own vehicle, before performing the integration.
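As a worked example of Formula 1, the sketch below integrates two distance estimates with the coefficient 0.3 given in the text; the sample distances D1 and D2 are illustrative values.

```python
# Sketch of the step-S4d weighted integration (Formula 1).
def integrate(value_normal: float, value_low: float, coeff: float) -> float:
    """First info x (1 - coefficient) + second info x coefficient."""
    return value_normal * (1.0 - coeff) + value_low * coeff

D1, D2 = 30.0, 28.0            # distances [m] from the normal and low-exposure shutters
print(integrate(D1, D2, 0.3))  # 30 x 0.7 + 28 x 0.3 = 29.4 m
```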
Note that the integration coefficient does not have to be a fixed value; for example, it may be increased or decreased in proportion to the distance to the three-dimensional object. The integration coefficient may also be set to a value corresponding to the amount of flare light, based on a correlation between the amount of flare light and the accuracy of each piece of object information examined in advance. Furthermore, when the amount of flare light is calculated for both the left and right cameras, the variation in the amount of flare light between them may also be reflected in the integration coefficient. When the amount of flare light is large, giving a larger weight to whichever of the normal-exposure and low-exposure results is more accurate can be expected to improve the accuracy after integration. The integration coefficient may also be set individually for further subdivided pieces of information (position, speed, shape, etc.); for example, the height and width of the shape may be set separately. When the amount of flare light is very large or very small, the integration coefficient may be set so that the result of one of the shutters is not used at all during integration.
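As one possible realization of a flare-dependent coefficient, the sketch below ramps the coefficient linearly between two flare levels and clamps it to [0, 1], so that one shutter is ignored entirely at the extremes; the ramp shape and breakpoints are assumptions, since the text only requires that the coefficient vary with the amount of flare light.

```python
# Sketch of a flare-dependent integration coefficient.
def integration_coeff(flare_px2: float,
                      low_px2: float = 500.0, high_px2: float = 5000.0) -> float:
    if flare_px2 <= low_px2:
        return 0.0   # little flare: rely entirely on the normal shutter
    if flare_px2 >= high_px2:
        return 1.0   # heavy flare: rely entirely on the low-exposure shutter
    return (flare_px2 - low_px2) / (high_px2 - low_px2)

for flare in (200.0, 2000.0, 8000.0):
    print(flare, integration_coeff(flare))
```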
In step S42, the object recognition unit 14a determines the type of the three-dimensional object using the integrated three-dimensional object information when the three-dimensional object information has been integrated in step S4d; otherwise, it determines the type using the first three-dimensional object information obtained from the normal shutter (first image P1).
Finally, in step S43, the object recognition unit 14a outputs the first three-dimensional object information estimated in step S41 or the integrated three-dimensional object information obtained in step S4d, together with the three-dimensional object type determined in step S42, to the subsequent vehicle control unit 14b.
In the stereo camera device 10 of the present embodiment described above, the first three-dimensional object information based on the first image P1, which is susceptible to flare, is integrated with the second three-dimensional object information based on the second image P2, which is less susceptible to flare. This improves the reliability of the three-dimensional object information and, in turn, the reliability of the three-dimensional object type determination and of the driving support.
<Specific Configuration of the Object Recognition Unit 14a>
Next, a more specific configuration of the object recognition unit 14a in FIG. 2 will be described using the functional block diagrams of FIGS. 6A to 6C.
As shown in FIG. 6A, the storage unit 15 outputs the first image P1 and the first parallax information I1 of the normal shutter, and the second image P2 and the second parallax information I2 of the low-exposure shutter, to the object recognition unit 14a.
The normal-shutter three-dimensional object processing unit 21 in the object recognition unit 14a receives the first image P1 and the first parallax information I1 of the normal shutter. FIG. 6B shows an example of the configuration of the normal-shutter three-dimensional object processing unit 21; the parallax reference unit 21a, distance estimation unit 21b, speed estimation unit 21c, and shape estimation unit 21d shown there carry out the three-dimensional object detection of step S41 using the first image P1 and the first parallax information I1, and output the first three-dimensional object information. In addition, the lamp detection unit 21e carries out the lamp detection of step S41 on the first image P1 and outputs the first lamp information.
Similarly, the low-exposure-shutter three-dimensional object processing unit 22 in the object recognition unit 14a receives the second image P2 and the second parallax information I2 of the low-exposure shutter. FIG. 6C shows an example of the configuration of the low-exposure-shutter three-dimensional object processing unit 22; the parallax reference unit 22a, distance estimation unit 22b, speed estimation unit 22c, and shape estimation unit 22d shown there carry out the three-dimensional object detection of step S41 using the second image P2 and the second parallax information I2, and output the second three-dimensional object information. In addition, the lamp detection unit 22e carries out the lamp detection of step S41 on the second image P2 and outputs the second lamp information.
The lamp pair detection unit 23 performs the lamp pair detection of step S4a on the first image P1 based on the first lamp information from the normal-shutter three-dimensional object processing unit 21 (see FIG. 5B(a)). Likewise, the lamp pair detection unit 24 performs the lamp pair detection of step S4a on the second image P2 based on the second lamp information from the low-exposure-shutter three-dimensional object processing unit 22 (see FIG. 5B(b)).
The lamp detection result linking unit 25 performs the linking process of step S4b on the three-dimensional object information of each lamp pair detected by the lamp pair detection unit 23 and the lamp pair detection unit 24.
The flare determination unit 26 calculates the amount of flare light by the method illustrated in FIG. 5B(c).
The three-dimensional object information integration unit 27 carries out the processing of steps S4c and S4d. If the amount of flare light calculated by the flare determination unit 26 is equal to or greater than a predetermined threshold, it integrates the first three-dimensional object information from the normal-shutter three-dimensional object processing unit 21 and the second three-dimensional object information from the low-exposure-shutter three-dimensional object processing unit 22 under a predetermined rule. If the amount of flare light calculated by the flare determination unit 26 is less than the predetermined threshold, it outputs the first three-dimensional object information from the normal-shutter three-dimensional object processing unit 21 as it is.
The type determination unit 28 performs the process of step S42 and determines the type of the three-dimensional object based on the object information output by the three-dimensional object information integration unit 27 and the recognition dictionary 15c.
The detection result output unit 29 carries out step S43 and outputs the three-dimensional object information (position, speed, shape, etc.) from the three-dimensional object information integration unit 27 and the three-dimensional object type from the type determination unit 28 to the vehicle control unit 14b.
According to the stereo camera device 10 of the present embodiment described above, even if flare occurs when a three-dimensional object having a high-brightness light source is imaged, the parallax information around the high-brightness light source can be calculated accurately, and the position, speed, shape, type, and the like of the three-dimensional object can be estimated with high accuracy.
Although the above description assumes that the image processing device of the present invention is a stereo camera device, the image processing device of the present invention may also be a monocular camera device that alternately captures images with the normal shutter and the low-exposure shutter. In this case, by adopting, for example, a method of determining the distance from the size of the object in the image under an assumption about its actual size, three-dimensional object information similar to that of a stereo camera device can be obtained from the output of the monocular camera alone, so the above-described method of integrating three-dimensional object information can be used.
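As a rough illustration of this monocular alternative, the sketch below recovers distance from apparent width via the pinhole model, under an assumed real width for the target; the focal length, assumed vehicle width, and sample pixel width are illustrative values, not from the text.

```python
# Sketch of monocular distance estimation from an assumed real object size.
def monocular_distance(width_in_image_px: float,
                       assumed_real_width_m: float = 1.8,
                       focal_px: float = 1400.0) -> float:
    """Pinhole model: distance = focal length * real width / apparent width."""
    if width_in_image_px <= 0:
        raise ValueError("apparent width must be positive")
    return focal_px * assumed_real_width_m / width_in_image_px

print(monocular_distance(90.0))  # a 1.8 m wide car spanning 90 px -> 28 m
```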
10…stereo camera device, 11…camera, 11L…left camera, 11R…right camera, 12…image input interface, 13…image processing unit, 13a…image correction unit, 13b…parallax calculation unit, 14…arithmetic processing unit, 14a…object recognition unit, 21…normal-shutter three-dimensional object processing unit, 22…low-exposure-shutter three-dimensional object processing unit, 23, 24…lamp pair detection units, 25…lamp detection result linking unit, 26…flare determination unit, 27…three-dimensional object information integration unit, 28…type determination unit, 29…detection result output unit, 14b…vehicle control unit, 15…storage unit, 15a…image buffer, 15b…parallax buffer, 15c…recognition dictionary, 16…CAN interface, 17…abnormality monitoring unit, CAN…in-vehicle network
Claims (9)
- 1. An image processing device comprising: a camera that captures a first image with a first exposure amount and captures a second image with a second exposure amount smaller than the first exposure amount; a three-dimensional object extraction unit that extracts a first region in which a three-dimensional object exists from the first image and extracts a second region in which the three-dimensional object exists from the second image; a three-dimensional object information detection unit that detects first three-dimensional object information from the first region and detects second three-dimensional object information from the second region; and a three-dimensional object information integration unit that calculates integrated three-dimensional object information by integrating the first three-dimensional object information and the second three-dimensional object information.
- 2. The image processing device according to claim 1, wherein the first three-dimensional object information and the second three-dimensional object information include at least one kind of information among a distance to the three-dimensional object, a shape of the three-dimensional object, and a speed of the three-dimensional object.
- 3. The image processing device according to claim 1, wherein the camera alternately captures the first image and the second image.
- 4. The image processing device according to claim 1, further comprising a flare determination unit that determines a difference between the light source captured in the first image and the light source captured in the second image as an amount of flare light captured in the first image.
- 5. The image processing device according to claim 4, wherein the three-dimensional object information integration unit calculates the integrated three-dimensional object information by the following equation: integrated three-dimensional object information = first three-dimensional object information × (1 − integration coefficient) + second three-dimensional object information × integration coefficient.
- 6. The image processing device according to claim 5, wherein the integration coefficient is a variable that increases or decreases according to the amount of flare light.
- 7. The image processing device according to claim 6, wherein the three-dimensional object information integration unit outputs the integrated three-dimensional object information when the amount of flare light is equal to or greater than a predetermined threshold, and outputs the first three-dimensional object information when the amount of flare light is less than the threshold.
- 8. The image processing device according to claim 1, wherein the camera includes a left camera that captures a first left image with the first exposure amount and a second left image with the second exposure amount, and a right camera that captures a first right image with the first exposure amount and a second right image with the second exposure amount, and the three-dimensional object extraction unit includes a parallax calculation unit that compares the first left image and the first right image to calculate first parallax information and compares the second left image and the second right image to calculate second parallax information, a first three-dimensional object processing unit that extracts the first region in which the three-dimensional object exists from the first image based on the first parallax information, and a second three-dimensional object processing unit that extracts the second region in which the three-dimensional object exists from the second image based on the second parallax information.
- 9. An image processing method comprising: capturing a first image with a first exposure amount; capturing a second image with a second exposure amount smaller than the first exposure amount; extracting a first region in which a three-dimensional object exists from the first image; extracting a second region in which the three-dimensional object exists from the second image; detecting first three-dimensional object information from the first region; detecting second three-dimensional object information from the second region; and calculating integrated three-dimensional object information by integrating the first three-dimensional object information and the second three-dimensional object information.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE112022001328.1T DE112022001328T5 (en) | 2021-05-31 | 2022-02-07 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD |
JP2023525380A JPWO2022254795A1 (en) | 2021-05-31 | 2022-02-07 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021090994 | 2021-05-31 | ||
JP2021-090994 | 2021-05-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022254795A1 (en) | 2022-12-08 |
Family
ID=84324114
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2022/004704 WO2022254795A1 (en) | 2021-05-31 | 2022-02-07 | Image processing device and image processing method |
Country Status (3)
Country | Link |
---|---|
JP (1) | JPWO2022254795A1 (en) |
DE (1) | DE112022001328T5 (en) |
WO (1) | WO2022254795A1 (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010073009A (en) | 2008-09-19 | 2010-04-02 | Denso Corp | Image processing apparatus |
- 2022-02-07 WO PCT/JP2022/004704 patent/WO2022254795A1/en active Application Filing
- 2022-02-07 JP JP2023525380A patent/JPWO2022254795A1/ja active Pending
- 2022-02-07 DE DE112022001328.1T patent/DE112022001328T5/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11201741A (en) * | 1998-01-07 | 1999-07-30 | Omron Corp | Image processing method and its device |
JP2007096684A (en) * | 2005-09-28 | 2007-04-12 | Fuji Heavy Ind Ltd | Outside environment recognizing device of vehicle |
JP2012026838A (en) * | 2010-07-22 | 2012-02-09 | Ricoh Co Ltd | Distance measuring equipment and image pickup device |
JP2021025833A (en) * | 2019-08-01 | 2021-02-22 | 株式会社ブルックマンテクノロジ | Distance image imaging device, and distance image imaging method |
Also Published As
Publication number | Publication date |
---|---|
DE112022001328T5 (en) | 2024-01-04 |
JPWO2022254795A1 (en) | 2022-12-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22815554; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2023525380; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 112022001328; Country of ref document: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22815554; Country of ref document: EP; Kind code of ref document: A1 |