
WO2020235458A1 - Image-processing device, method, and electronic apparatus - Google Patents


Info

Publication number
WO2020235458A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
signal
ranging
wavelength
depth
Prior art date
Application number
PCT/JP2020/019368
Other languages
French (fr)
Japanese (ja)
Inventor
友希 鴇崎
諭志 河田
神尾 和憲
Original Assignee
ソニー株式会社 (Sony Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニー株式会社 (Sony Corporation)
Publication of WO2020235458A1

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • This disclosure relates to image processing devices, methods and electronic devices.
  • The ToF sensor irradiates a measurement object with predetermined ranging light, and measures the distance to each measurement target position (each part of the measurement object) based on the difference between the phase of the irradiated light and the phase of the reflected light.
  • The present disclosure has been made in view of this situation, and aims to provide image processing devices, methods, and electronic apparatuses that can reduce the influence of disturbances caused by external light or by differences in reflectance within the measurement target, and can thereby perform distance measurement at the measurement target position more accurately.
  • The image processing apparatus of the present disclosure comprises: a difference calculation unit that, based on a received signal of the reflected light of first ranging light of a first wavelength and a received signal of the reflected light of second ranging light of a second wavelength, calculates the difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and an output unit that, based on the difference, outputs a depth signal derived from the in-phase component signal and the orthogonal component signal corresponding to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light.
  • FIG. 1 is an explanatory diagram of the principle of the image processing apparatus of the embodiment.
  • Here, the sunlight spectrum after passing through the atmosphere means, for example, the sunlight spectrum under the Air Mass 1.5 conditions defined by ASTM (American Society for Testing and Materials).
  • the image processing device 10 of the embodiment includes a difference calculation unit 11 and an output unit 12.
  • The difference calculation unit 11 calculates the difference d between the reflection intensity of the first ranging light L1 and the reflection intensity of the second ranging light L2, based on the received signal SL1 of the reflected light of the first ranging light L1 of the first wavelength λ1, whose spectral radiant intensity in the sunlight spectrum is relatively high, and the received signal SL2 of the reflected light of the second ranging light L2 of the second wavelength λ2 (> λ1), whose spectral radiant intensity is relatively low.
  • Based on the difference d calculated by the difference calculation unit 11, the output unit 12 outputs the depth signal SDP corresponding to either the in-phase component signal I1 and orthogonal component signal Q1 of the received signal SL1, or the in-phase component signal I2 and orthogonal component signal Q2 of the received signal SL2.
  • FIG. 2 is an explanatory diagram of an example of the sunlight spectrum after passing through the atmosphere.
  • The first ranging light L1 cannot be separated from sunlight. Therefore, distance measurement using the first ranging light L1 may lose accuracy under sunlight.
  • The second ranging light L2 has a relatively low spectral radiant intensity in the sunlight spectrum; that is, it corresponds to a component that sunlight contains little of after passing through the atmosphere. Therefore, the second ranging light L2 can be separated from sunlight even under sunlight and is not easily affected by it.
  • On the other hand, since the wavelength of the second ranging light L2 is relatively long, blurring or scattering in the lens may become a problem. Note that the wavelength difference itself has no influence on the distance measurement (that is, on the phase difference).
  • For pixels not affected by sunlight, the depth signal SDP corresponding to the in-phase component signal I and orthogonal component signal Q of the received signal SL1 is output. For the portion affected by sunlight (pixels affected by sunlight), which can be separated from sunlight, the depth signal SDP corresponding to the in-phase component signal I and orthogonal component signal Q of the received signal SL2, which corresponds to the second ranging light L2 and is not easily affected by sunlight, is output.
  • By adaptively selecting between the two in this way, the depth signal SDP can be obtained stably over the entire image, even for scenes, such as the shade of trees, in which sunlit areas and shaded areas are mixed within a single image.
  • Indoors, where there is no influence of sunlight, the signal intensity of the received signal SL1 of the reflected light of the first ranging light L1 is a correct value. Outdoors, where sunlight has an effect, an offset due to sunlight is included, and the signal intensity of the received signal SL1 becomes excessively large.
  • In contrast, the signal intensity of the received signal SL2 of the reflected light of the second ranging light L2 is a correct value both indoors and outdoors, as it is not affected by sunlight.
  • Let the signal intensity of the received signal SL1 be AL1 and the signal intensity of the received signal SL2 be AL2. When AL1 is sufficiently larger than AL2, the influence of sunlight is regarded as large, and the received signal SL2 of the reflected light of the second ranging light L2 is used to calculate the depth signal SDP. Otherwise, the influence of sunlight is considered small, and the received signal SL1 of the reflected light of the first ranging light L1 is used to calculate the depth signal SDP.
  • As a result, a correct depth signal can be calculated at any position, whether the influence of sunlight there is large or small. Consequently, for example, camera control becomes stable, the image quality of photo processing improves, and obstacles can be detected robustly.
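The per-pixel choice described above, between the sunlight-offset-prone signal SL1 and the sunlight-robust signal SL2, can be sketched as follows. This is an illustrative reading of the description, not the publication's implementation; the function name, the normalized-difference form, and the threshold value are assumptions.

```python
def select_signal(a_l1, a_l2, threshold=0.5):
    """Decide which received signal to trust for one pixel.

    a_l1: signal intensity AL1 of the reflected first ranging light
          (e.g. 850 nm), which may carry a sunlight offset outdoors.
    a_l2: signal intensity AL2 of the reflected second ranging light
          (e.g. 940 nm), largely unaffected by sunlight.
    Returns "L2" when the normalized difference suggests a strong
    sunlight offset in a_l1, otherwise "L1".  The threshold 0.5 is an
    assumed example value.
    """
    p = (a_l1 - a_l2) / (a_l1 + a_l2)  # normalized intensity difference
    return "L2" if p > threshold else "L1"
```

Indoors, AL1 and AL2 are comparable and the 850 nm channel is kept; under strong sunlight the offset inflates AL1 and the 940 nm channel is chosen instead.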
  • FIG. 3 is a schematic block diagram of the image processing apparatus of the first embodiment.
  • The image processing apparatus includes an irradiation unit 21 that emits the first ranging light L1 and the second ranging light L2, a light receiving unit 22 that outputs the received signals SL1 and SL2, and a signal processing unit 23 that controls the irradiation unit 21 and generates the depth signal SDP based on the input received signal SL1 or SL2.
  • FIG. 4 is an explanatory diagram of the first aspect of the light receiving unit.
  • The light receiving unit 22 includes a light receiving lens 22A, a beam splitter (half mirror) 22B, filters 22C-1 and 22C-2, a first TOF (Time of Flight) sensor 22D-1, and a second TOF sensor 22D-2.
  • the light receiving lens 22A collects the light received.
  • the beam splitter 22B divides the light received through the light receiving lens 22A into two systems.
  • the first TOF sensor 22D-1 receives light through the filter 22C-1 and outputs a light receiving signal SL1.
  • the second TOF sensor 22D-2 receives light through the filter 22C-2 and outputs a light receiving signal SL2.
  • The first TOF sensor 22D-1 and the second TOF sensor 22D-2 each include a two-dimensional imaging element in which light receiving elements (pixel cells) are arranged two-dimensionally, and output a plurality of received signals SL1 and SL2 in pixel (pixel cell) units.
  • FIG. 5 is an explanatory view of a second aspect of the light receiving unit.
  • In the second aspect, a single TOF sensor 25 is provided.
  • The TOF sensor 25 includes a lens array unit 25A in which a lens LS is arranged for each of the pixel cells C1 and C2, a filter unit 25B in which the filters FL1 corresponding to the pixel cells C1 and the filters FL2 corresponding to the pixel cells C2 are arranged alternately, and a light receiving unit 25C in which light receiving cells PC are arranged so as to correspond to the filters FL1 and FL2.
  • FIG. 6 is an explanatory diagram of pixel interpolation.
  • The TOF sensor 25 may further include a signal interpolation unit. At each position where a pixel cell C2 is located, the output of a virtual pixel cell C1 may be interpolated based on the outputs of the pixel cells C1 located around the light receiving cell corresponding to that pixel cell C2. The same applies to interpolation of the pixel cells C2.
  • Specifically, the output of the virtual pixel cell C1 corresponding to the arrangement position of a pixel cell C2 may be set equal to the average of the outputs of the four pixel cells C1 adjacent to that light receiving cell PC.
  • When the virtual pixel cell C1 is located at one of the four corners of the TOF sensor 25, the average of the outputs of the two adjacent pixel cells C1 may be used; when the virtual pixel cell C1 is located on the periphery of the TOF sensor 25 excluding the four corners, the average of the outputs of the three adjacent pixel cells C1 may be used.
  • the signal processing unit 23 may perform the same processing.
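The neighbour-averaging interpolation described above (four neighbours in the interior, three on a non-corner border, two at a corner) can be sketched as below. This is a hedged illustration assuming a checkerboard arrangement of the C1 and C2 cells; the function and variable names are hypothetical.

```python
import numpy as np

def interpolate_virtual_pixels(img, mask):
    """Fill masked positions with the mean of their valid 4-neighbours.

    img:  2-D array holding one channel's pixel-cell outputs.
    mask: boolean array, True where the other channel's cell sits and a
          virtual value must be interpolated.  Interior positions
          average 4 neighbours, non-corner border positions 3, and
          corner positions 2, matching the scheme described above.
    """
    h, w = img.shape
    out = img.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue  # a real cell of this channel; keep its output
            neighbours = [img[ny, nx]
                          for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                          if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]]
            out[y, x] = sum(neighbours) / len(neighbours)
    return out
```

On a checkerboard mask the in-bounds 4-neighbours of any masked position belong to the other channel, so the border and corner cases reduce naturally to three- and two-neighbour averages.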
  • FIG. 7 is an explanatory diagram of an example of a functional block of the signal processing unit of the first embodiment.
  • The signal processing unit 23 roughly comprises a first RAW image storage unit 30-1, a second RAW image storage unit 30-2, a first reflection intensity calculation unit 31-1, a second reflection intensity calculation unit 31-2, an intensity signal difference calculation unit 32, a selection signal generation unit 33, a first I_Q signal calculation unit 34-1, a second I_Q signal calculation unit 34-2, a selection unit 35, and a depth conversion unit 36.
  • The intensity signal difference calculation unit 32 calculates the difference P between the input reflection intensity signal C850 and the reflection intensity signal C940 by the following equation and outputs it to the selection signal generation unit 33.
  • P = (C850 - C940) / (C850 + C940)
  • The first I_Q signal calculation unit 34-1 calculates, for each pixel based on the RAW image data RAW850, the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850, and outputs them to the selection unit 35.
  • Similarly, the second I_Q signal calculation unit 34-2 calculates, for each pixel based on the RAW image data RAW940, the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940, and outputs them to the selection unit 35.
  • The selection signal generation unit 33 compares the difference P with a predetermined threshold value th. When P ≤ th, the influence of sunlight is regarded as small, and a selection signal sel for selecting the first in-phase component signal I850 and the first orthogonal component signal Q850 is output to the selection unit 35.
  • When P > th, the influence of sunlight is regarded as large, and a selection signal sel for selecting the second in-phase component signal I940 and the second orthogonal component signal Q940, which are less affected by sunlight, is output to the selection unit 35.
  • Here, the predetermined threshold value th is set to a value sufficiently larger than any difference P that can occur when the influence of sunlight is small, and sufficiently smaller than any difference P that can occur when the influence of sunlight is large.
  • Based on the selection signal sel, the selection unit 35 outputs either the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, or the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, to the depth conversion unit 36.
  • Based on the selected combination of in-phase and orthogonal component signals, the depth conversion unit 36 calculates and outputs, for each pixel, the depth signal SDP corresponding to the distance to the object OBJ.
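The conversion from an in-phase/orthogonal component pair to a depth value can be illustrated with the standard continuous-wave ToF relation: the phase shift is recovered with atan2 and scaled into a distance. The publication does not specify the internals of the depth conversion unit 36 here; the function name and the 20 MHz modulation frequency are assumptions for this sketch.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def iq_to_depth(i, q, f_mod=20e6):
    """Convert an in-phase (I) / orthogonal (Q) pair to depth in metres.

    phase = atan2(Q, I) is the phase shift between emitted and
    reflected ranging light; depth = c * phase / (4 * pi * f_mod),
    with an unambiguous range of c / (2 * f_mod).  f_mod = 20 MHz is
    an assumed example value, not taken from the publication.
    """
    phase = math.atan2(q, i) % (2 * math.pi)  # wrap into [0, 2*pi)
    return C * phase / (4 * math.pi * f_mod)
```

At 20 MHz the unambiguous range would be about 7.5 m; a quarter-cycle phase shift (I = 0, Q > 0) maps to roughly a quarter of that range.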
  • FIG. 8 is an overall processing flowchart of the embodiment.
  • When the process starts, the object OBJ is imaged, and the depth measurement process (measuring the distance to the object OBJ; step S11) and the visible image acquisition process (step S12) are performed in parallel.
  • Subsequently, an image processing process (step S13) that generates a depth image by applying the measured depth to the visible image is performed, and the process ends.
  • FIG. 9 is a processing flowchart of the depth measurement process of the first embodiment.
  • the light receiving unit 22 receives the reflected light of the first ranging light L1 and the reflected light of the second ranging light L2. Then, the light receiving unit 22 generates RAW image data RAW850 and RAW image data RAW940 and outputs them to the signal processing unit 23.
  • the signal processing unit 23 acquires the RAW image data RAW850 (step S21) and outputs it to the first reflection intensity calculation unit 31-1 and the first I_Q signal calculation unit 34-1. Similarly, the signal processing unit 23 acquires the RAW image data RAW940 (step S22) and outputs it to the second reflection intensity calculation unit 31-2 and the second I_Q signal calculation unit 34-2.
  • the signal processing unit 23 stores the acquired RAW image data RAW850 and RAW image data RAW940 in a work memory (not shown).
  • the first reflection intensity calculation unit 31-1 calculates the reflection intensity signal C850 for each pixel based on the acquired RAW image data RAW850 and outputs it to the intensity signal difference calculation unit 32 (step S23).
  • the second reflection intensity calculation unit 31-2 calculates the reflection intensity signal C940 for each pixel based on the acquired RAW image data RAW940 and outputs it to the intensity signal difference calculation unit 32 (step S24).
  • the intensity signal difference calculation unit 32 calculates the difference P between the input reflection intensity signal C850 and the reflection intensity signal C940 by the following equation and outputs it to the selection signal generation unit 33 (step S25).
  • P = (C850 - C940) / (C850 + C940)
  • The first I_Q signal calculation unit 34-1 calculates, for each pixel based on the RAW image data RAW850, the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850, and outputs them to the selection unit 35 (step S26).
  • The second I_Q signal calculation unit 34-2 calculates, for each pixel based on the RAW image data RAW940, the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940, and outputs them to the selection unit 35 (step S27).
  • The selection signal generation unit 33 compares the difference P with the predetermined threshold value th.
  • When P ≤ th, the influence of sunlight is regarded as small, and the selection signal sel for selecting the first in-phase component signal I850 and the first orthogonal component signal Q850 is output to the selection unit 35.
  • When P > th, the influence of sunlight is regarded as large, and the selection signal sel for selecting the second in-phase component signal I940 and the second orthogonal component signal Q940, which are less affected by sunlight, is output to the selection unit 35.
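The intensity-difference calculation and threshold comparison above can be sketched array-wise for a whole image at once. This is an illustrative sketch under assumed names and an assumed threshold, not the publication's code; only the formula for P comes from the text.

```python
import numpy as np

def select_iq(c850, c940, i850, q850, i940, q940, th=0.5):
    """Per-pixel choice between the 850 nm and 940 nm I/Q signals.

    P = (C850 - C940) / (C850 + C940); where P > th the 850 nm channel
    is taken to carry a sunlight offset and the 940 nm signals are
    selected, otherwise the 850 nm signals are kept.  th = 0.5 is an
    assumed example threshold.
    """
    p = (c850 - c940) / (c850 + c940)
    use_940 = p > th                     # selection signal sel, per pixel
    i = np.where(use_940, i940, i850)
    q = np.where(use_940, q940, q850)
    return i, q
```

Each output pixel then carries the I/Q pair of whichever wavelength the selection signal chose for it.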
  • Based on the selection signal sel, the selection unit 35 outputs either the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, or the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, to the depth conversion unit 36 (step S29).
  • Based on the selected combination of in-phase and orthogonal component signals, the depth conversion unit 36 calculates and outputs, for each pixel, the depth signal SDP corresponding to the distance to the object OBJ (step S30).
  • As described above, in the first embodiment it is determined, for each pixel, whether the pixel is affected by sunlight. When a pixel is determined to be little affected by sunlight, the depth signal SDP is calculated based on the first in-phase component signal I850 and the first orthogonal component signal Q850 corresponding to the first ranging light L1 of the first wavelength λ1, which is less subject to scattering. When a pixel is determined to be strongly affected by sunlight, the depth signal SDP is calculated based on the second in-phase component signal I940 and the second orthogonal component signal Q940 corresponding to the second ranging light L2 of the second wavelength λ2, which is less affected by sunlight. The obtained depth image is therefore highly accurate.
  • In the first embodiment, the depth signal SDP was generated using either the combination of in-phase and orthogonal component signals corresponding to the first ranging light L1 of the first wavelength λ1 or the second in-phase component signal I940 and the second orthogonal component signal Q940 corresponding to the second ranging light L2 of the second wavelength λ2, selected by the intensity difference.
  • In the second embodiment, depth signals are generated both from the first in-phase component signal I850 and the first orthogonal component signal Q850 corresponding to the first ranging light L1 of the first wavelength λ1, and from the corresponding signals of the second ranging light L2 of the second wavelength λ2, and one of the two depth signals is selected as the depth signal SDP based on a region determination feature amount.
  • FIG. 10 is an explanatory diagram of an example of a functional block of the signal processing unit of the second embodiment. Since the overall configuration is the same as that of the first embodiment, it is described with reference to FIG. 3.
  • The signal processing unit 23 in the second embodiment roughly comprises a first RAW image storage unit 40-1, a second RAW image storage unit 40-2, a first I_Q signal calculation unit 41-1, a second I_Q signal calculation unit 41-2, a first depth conversion unit 42-1, a second depth conversion unit 42-2, a first area determination feature amount calculation unit 43-1, a second area determination feature amount calculation unit 43-2, a comparison unit 44, and a selection unit 45.
  • Based on the RAW image data RAW850, the first I_Q signal calculation unit 41-1 calculates the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850, and outputs them to the first depth conversion unit 42-1.
  • Similarly, based on the RAW image data RAW940, the second I_Q signal calculation unit 41-2 calculates the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940, and outputs them to the second depth conversion unit 42-2.
  • The first depth conversion unit 42-1 calculates, for each pixel, the first depth signal SDP1 corresponding to the distance to the object OBJ based on the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, and outputs it to the selection unit 45.
  • The second depth conversion unit 42-2 calculates, for each pixel, the second depth signal SDP2 corresponding to the distance to the object OBJ based on the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, and outputs it to the selection unit 45.
  • Based on the first depth image corresponding to the first depth signal SDP1, the first region determination feature amount calculation unit 43-1 calculates the edge depiction degree ED1 (edge extraction degree: the degree to which the signal is buried in noise) in the first depth image and the SN ratio σ1 of the flat portion of the first depth image, and outputs them to the comparison unit 44.
  • Based on the second depth image corresponding to the second depth signal SDP2, the second region determination feature amount calculation unit 43-2 calculates the edge depiction degree ED2 (edge extraction degree: the degree to which the signal is buried in noise) in the second depth image and the SN ratio σ2 of the flat portion of the second depth image, and outputs them to the comparison unit 44.
  • The comparison unit 44 determines the reliability of each depth image from the edge degree in the first depth image and the SN ratio σ1 of its flat portion, and from the edge degree in the second depth image and the SN ratio σ2 of its flat portion, and outputs to the selection unit 45 a selection signal sel for selecting the more reliable of the first depth image and the second depth image.
  • Based on the selection signal sel, the selection unit 45 outputs, for each pixel, either the first depth signal SDP1 or the second depth signal SDP2 as the depth signal SDP.
  • FIG. 11 is a processing flowchart of the depth measurement process of the second embodiment.
  • The light receiving unit 22 receives the reflected light of the first ranging light L1 and the reflected light of the second ranging light L2, generates the RAW image data RAW850 and the RAW image data RAW940, outputs the RAW image data RAW850 to the first I_Q signal calculation unit 41-1, and outputs the RAW image data RAW940 to the second I_Q signal calculation unit 41-2.
  • the first RAW image storage unit 40-1 of the signal processing unit 23 acquires the RAW image data RAW850 (step S41).
  • The first I_Q signal calculation unit 41-1 calculates, for each pixel based on the RAW image data RAW850 read from the first RAW image storage unit 40-1, the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850, and outputs them to the first depth conversion unit 42-1 (step S42).
  • The first depth conversion unit 42-1 calculates, for each pixel, the first depth signal SDP1 corresponding to the distance to the object OBJ based on the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, and outputs it to the first area determination feature amount calculation unit 43-1 and the selection unit 45 (step S43).
  • the first region determination feature amount calculation unit 43-1 calculates the reliability (step S44).
  • FIG. 12 is a processing flowchart of the reliability calculation process.
  • Specifically, based on the first depth image corresponding to the first depth signal SDP1, the first region determination feature amount calculation unit 43-1 calculates the edge depiction degree (edge extraction degree: the degree to which the signal is buried in noise) in the first depth image (step S51), and calculates the reliability E with respect to the edge extraction degree (step S52).
  • Next, the first region determination feature amount calculation unit 43-1 calculates the variance σ of the SN ratio of the flat portion of the first depth image (step S53), and calculates the reliability F with respect to the variance σ (step S54).
  • The first region determination feature amount calculation unit 43-1 then integrates the reliabilities E and F and outputs the integrated reliability RL1 to the comparison unit 44 (step S55).
  • the second RAW image storage unit 40-2 acquires the RAW image data RAW940 (step S45).
  • The second I_Q signal calculation unit 41-2 calculates, for each pixel based on the RAW image data RAW940, the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940, and outputs them to the second depth conversion unit 42-2 (step S46).
  • The second depth conversion unit 42-2 calculates, for each pixel, the second depth signal SDP2 corresponding to the distance to the object OBJ based on the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, and outputs it to the second area determination feature amount calculation unit 43-2 and the selection unit 45 (step S47). The second region determination feature amount calculation unit 43-2 then calculates the reliability (step S48).
  • That is, based on the second depth image corresponding to the second depth signal SDP2, the second region determination feature amount calculation unit 43-2 calculates the edge depiction degree (edge extraction degree: the degree to which the signal is buried in noise) in the second depth image (step S51), and calculates the reliability E with respect to the edge extraction degree (step S52).
  • Next, the variance σ of the SN ratio of the flat portion of the second depth image is calculated (step S53), and the reliability F with respect to the variance σ is calculated (step S54). Subsequently, the reliabilities E and F are integrated and the integrated reliability RL2 is output to the comparison unit 44 (step S55).
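The reliability computation of steps S51 to S55 could look like the following sketch. The feature definitions (mean gradient magnitude as the edge score, inverse variance of a nominally flat region as the flatness score) and the equal weighting are assumptions; the publication does not give the actual formulas.

```python
import numpy as np

def integrated_reliability(depth, flat_region, w_edge=0.5):
    """Toy integrated reliability for one depth image.

    depth:       2-D depth image (e.g. from SDP1 or SDP2).
    flat_region: boolean mask of an area expected to be flat.
    Combines an edge score E (mean gradient magnitude, larger when
    edges stand out from noise) and a flatness score F (inverse
    variance of the flat region, larger when that area is quiet).
    """
    gy, gx = np.gradient(depth.astype(float))
    edge_score = float(np.hypot(gx, gy).mean())       # reliability E
    flat = depth[flat_region].astype(float)
    flat_score = 1.0 / (1.0 + float(flat.var()))      # reliability F
    return w_edge * edge_score + (1.0 - w_edge) * flat_score
```

Computing this score for both depth images and keeping the larger one mirrors the comparison unit 44's role.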
  • Based on the integrated reliability RL1 output by the first area determination feature amount calculation unit 43-1 and the integrated reliability RL2 output by the second area determination feature amount calculation unit 43-2, the comparison unit 44 determines which of the two integrated reliabilities is higher (step S49). As a result of the determination, the comparison unit 44 outputs to the selection unit 45 a selection signal sel for selecting the more reliable of the depth signal SDP1 and the depth signal SDP2.
  • the selection unit 45 outputs either the first depth signal SDP1 or the second depth signal SDP2 as the depth signal SDP for each pixel based on the selection signal sel (step S50).
  • As described above, in the second embodiment, whether each pixel is affected by sunlight is determined based on the reliability of the depth images. When a pixel is determined to be little affected by sunlight, the depth signal SDP is calculated based on the first in-phase component signal I850 and the first orthogonal component signal Q850 corresponding to the first ranging light L1 of the first wavelength λ1, which is less subject to scattering. When a pixel is determined to be strongly affected by sunlight, a more reliable depth signal SDP is calculated based on the second in-phase component signal I940 and the second orthogonal component signal Q940 corresponding to the second ranging light L2 of the second wavelength λ2, which is less affected by sunlight. Therefore, according to the second embodiment, the obtained depth image is even more accurate.
  • FIG. 13 is an explanatory diagram of an example of a functional block of the signal processing unit of the third embodiment.
  • The signal processing unit 23 in the third embodiment roughly comprises a first RAW image storage unit 40-1, a second RAW image storage unit 40-2, a first I_Q signal calculation unit 41-1, a second I_Q signal calculation unit 41-2, a first depth conversion unit 42-1, a second depth conversion unit 42-2, a first area determination feature amount calculation unit 43-1, a second area determination feature amount calculation unit 43-2, a first reflection intensity calculation unit 31-1, a second reflection intensity calculation unit 31-2, an intensity signal difference calculation unit 32, a selection signal generation unit 51, and a selection unit 45.
  • The light receiving unit 22 receives the reflected light of the first ranging light L1 and the reflected light of the second ranging light L2, generates the RAW image data RAW850 and the RAW image data RAW940, outputs the RAW image data RAW850 to the first I_Q signal calculation unit 41-1, and outputs the RAW image data RAW940 to the second I_Q signal calculation unit 41-2.
  • the first RAW image storage unit 40-1 of the signal processing unit 23 acquires the RAW image data RAW850.
  • The first I_Q signal calculation unit 41-1 calculates, for each pixel based on the RAW image data RAW850 read from the first RAW image storage unit 40-1, the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850, and outputs them to the first depth conversion unit 42-1.
  • The first depth conversion unit 42-1 calculates, for each pixel, the first depth signal SDP1 corresponding to the distance to the object OBJ based on the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, and outputs it to the first region determination feature amount calculation unit 43-1 and the selection unit 45.
  • the first region determination feature amount calculation unit 43-1 calculates the reliability and outputs the integrated reliability RL1 to the selection signal generation unit 51.
  • The second RAW image storage unit 40-2 acquires the RAW image data RAW940.
  • The second I_Q signal calculation unit 41-2 calculates, for each pixel based on the RAW image data RAW940, the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940, and outputs them to the second depth conversion unit 42-2.
  • The second depth conversion unit 42-2 calculates, for each pixel, the second depth signal SDP2 corresponding to the distance to the object OBJ based on the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, and outputs it to the second area determination feature amount calculation unit 43-2 and the selection unit 45. The second region determination feature amount calculation unit 43-2 then calculates the reliability and outputs the integrated reliability RL2 to the selection signal generation unit 51.
The intensity signal difference calculation unit 32 calculates the difference P between the input reflection intensity signal C850 and the reflection intensity signal C940 by the following equation and outputs it to the selection signal generation unit 51:

   P = (C850 - C940) / (C850 + C940)
The comparison unit 44 determines which of the depth signal SDP1 and the depth signal SDP2 should be selected, based on the integrated reliability RL1 output by the first region determination feature amount calculation unit 43-1, the integrated reliability RL2 output by the second region determination feature amount calculation unit 43-2, and the difference P, and outputs the selection signal sel to the selection unit 45.

When the selection based on the comparison of the integrated reliability RL1 with the integrated reliability RL2 agrees with the selection based on the difference P, that result is adopted as the selection result.

When they disagree, the selection result based on the difference P is adopted. Further, when the difference between the integrated reliability RL1 and the integrated reliability RL2 is small and the difference P is also small, the result hardly differs whichever is selected, so one is selected arbitrarily (for example, the depth signal SDP1 is set in advance to be selected as the depth signal SDP).
Then, the selection unit 45 outputs, for each pixel, either the first depth signal SDP1 or the second depth signal SDP2 as the depth signal SDP based on the selection signal sel.
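The per-pixel decision described above can be sketched as follows. The margin and threshold values are illustrative tuning assumptions, not values from the document, and the function names are hypothetical.

```python
def select_depth(sdp1, sdp2, rl1, rl2, p, rl_margin=0.1, p_thresh=0.1):
    """Per-pixel choice between the 850 nm depth SDP1 and 940 nm depth SDP2.

    Implements the rules described above: when the reliability comparison
    and the selection implied by the difference P agree, that result is
    used; when they disagree, the difference P wins; when both differences
    are small, SDP1 is chosen as the preset default.  rl_margin and
    p_thresh are illustrative values, not from the document.
    """
    pick_rel = 1 if rl1 >= rl2 else 2      # channel with higher reliability
    pick_p = 2 if p > p_thresh else 1      # large P: sunlight offset in 850 nm
    if abs(rl1 - rl2) < rl_margin and abs(p) < p_thresh:
        pick = 1                           # results barely differ; preset default
    elif pick_rel == pick_p:
        pick = pick_rel                    # both criteria agree
    else:
        pick = pick_p                      # on disagreement, the difference P wins
    return sdp1 if pick == 1 else sdp2
```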
In the embodiments described above, the case has been described where the first ranging light L1 (for example, wavelength 850 nm), which is a component contained in large amounts in sunlight, and the second ranging light L2 (for example, wavelength 940 nm), which is a component whose spectral radiant intensity in the sunlight spectrum is relatively low, that is, which is not contained much in sunlight, are used.
In contrast, the fourth embodiment uses the first ranging light L1 in a wavelength band with high reflectance and the second ranging light L2 in a wavelength band with low reflectance, the reflectances being specific to the type of object to be measured.

That is, in the fourth embodiment, the first ranging light L1 and the second ranging light L2 are set so that the reflectance of the first ranging light L1 (the reflectance of the ranging object at the first wavelength) is significantly higher than the reflectance of the second ranging light L2 (the reflectance of the ranging object at the second wavelength).

The intensity of reflection in each wavelength band differs depending on the type of substance to be measured (for example, plant, soil, water, etc.). Therefore, a region where the difference between the reflected signal in the high-reflectance band and the reflected signal in the low-reflectance band is large for the substance of the object to be measured can be regarded as a region where that object exists.
FIG. 14 is an explanatory diagram of the fourth embodiment.

FIG. 14 shows the relationship between wavelength and reflection intensity for plants, soil, water, the ground surface, and the water surface as measurement objects.
For example, if the difference is taken between a reflected signal near 800 nm (corresponding to the reflectance of the first ranging light L1), where the reflectance of plants is high, and a reflected signal near 500 nm (corresponding to the reflectance of the second ranging light L2), where the reflectance of plants is low, it becomes possible to determine that a plant exists in a region when the difference there is large.

In this case, by selecting the reflected signal near 800 nm, where the reflectance of plants is high, the distance (depth) to the plant can be obtained stably.
Similarly, if the difference is taken between a reflected signal near 400 nm (corresponding to the reflectance of the first ranging light L1), where the reflectance of water is high, and a reflected signal near 800 nm (corresponding to the reflectance of the second ranging light L2), where the reflectance of water is low, it becomes possible to determine that water exists in a region when the difference there is large.

In this case, by selecting the reflected signal near 400 nm, where the reflectance of water is high, the distance (depth) to the water surface can be obtained stably.
As described above, according to the fourth embodiment, it is possible to accurately determine the presence or absence of the measurement object and, when it exists, the distance (depth) to the measurement object.
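The region decision of the fourth embodiment (a large normalized difference between a high-reflectance band and a low-reflectance band indicates the presence of the substance) is the same normalized-difference idea used in vegetation indices such as NDVI. A minimal per-pixel sketch follows; the 0.3 threshold and the function names are illustrative assumptions, not values from the document.

```python
def normalized_difference(high_band, low_band):
    """Normalized difference between the reflected intensity in the band
    where the target substance reflects strongly (e.g. near 800 nm for
    plants) and the band where it reflects weakly (e.g. near 500 nm)."""
    total = high_band + low_band
    return 0.0 if total == 0 else (high_band - low_band) / total

def substance_present(high_band, low_band, threshold=0.3):
    # threshold is an arbitrary illustrative value, not from the document
    return normalized_difference(high_band, low_band) > threshold
```

For plants, high_band would be the 800 nm return and low_band the 500 nm return; for water, 400 nm and 800 nm respectively.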
In the description above, the case has been described where the ranging light L comprises the first ranging light L1 (for example, wavelength 850 nm), a component contained abundantly in sunlight, and the second ranging light L2 (for example, wavelength 940 nm), a component not contained much in sunlight. However, the wavelengths are not limited to these, and any combination of wavelengths that can eliminate the influence of sunlight can be applied as appropriate.
Note that the present technology can also adopt the following configurations.

(1) An image processing device comprising:
a difference calculation unit that calculates, based on a received signal of the reflected light of first ranging light of a first wavelength and a received signal of the reflected light of second ranging light of a second wavelength, the difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and
an output unit that outputs, based on the difference, a depth signal corresponding to the in-phase component signal and the orthogonal component signal corresponding to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light.

(2) The image processing device according to (1), wherein the output unit includes:
a selection unit that selects and outputs either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light based on the difference;
a calculation unit that calculates an in-phase component signal and an orthogonal component signal from the received signal selected by the selection unit; and
a conversion unit that, based on the calculation result of the calculation unit, performs depth conversion on the in-phase component signal and the orthogonal component signal corresponding to the selected received signal and outputs the depth signal.

(3) The image processing device according to (1), wherein the output unit includes:
a calculation unit that calculates an in-phase component signal and an orthogonal component signal from each of the received signal of the reflected light of the first ranging light and the received signal of the reflected light of the second ranging light;
a conversion unit that performs depth conversion on the in-phase component signal and the orthogonal component signal calculated from the received signal of the reflected light of the first ranging light to output a first depth signal as the depth signal, and performs depth conversion on the in-phase component signal and the orthogonal component signal calculated from the received signal of the reflected light of the second ranging light to output a second depth signal as the depth signal; and
a selection unit that selects and outputs either the first depth signal or the second depth signal based on the difference.

(4) The image processing device according to any one of (1) to (3), wherein the first ranging light has a relatively high spectral radiant intensity in the sunlight spectrum, the second ranging light has a relatively low spectral radiant intensity, and the second wavelength is longer than the first wavelength.

(5) The image processing device according to (4), wherein the first wavelength has a center wavelength of 850 nm and the second wavelength has a center wavelength of 940 nm.

(6) The image processing device according to any one of (1) to (4), wherein the first wavelength and the second wavelength are set so that the reflectance of the ranging object at the first wavelength is significantly higher than the reflectance at the second wavelength.

(7) A method performed by an image processing device, comprising:
a process of calculating, based on a received signal of the reflected light of first ranging light of a first wavelength and a received signal of the reflected light of second ranging light of a second wavelength, the difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and
a process of outputting, based on the difference, a depth signal corresponding to the in-phase component signal and the orthogonal component signal corresponding to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light.

(8) Electronic equipment comprising:
an irradiation unit that irradiates first ranging light of a first wavelength and second ranging light of a second wavelength;
an imaging unit that receives the reflected light of the first ranging light and the reflected light of the second ranging light and outputs a received signal of the reflected light of the first ranging light and a received signal of the reflected light of the second ranging light;
a difference calculation unit that calculates, based on the two received signals, the difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and
an output unit that outputs, based on the difference, a depth signal corresponding to the in-phase component signal and the orthogonal component signal corresponding to either received signal.
10 Image processing device
11 Difference calculation unit
12 Output unit
20 Image processing device
21 Irradiation unit
22 Light receiving unit
22A Light receiving lens
22B Beam splitter
22C-1, 22C-2 Filter
22D-1 First TOF sensor
22D-2 Second TOF sensor
23 Signal processing unit
25 TOF sensor
25A Lens array unit
25B Filter unit
25C Light receiving unit
30-1 First RAW image storage unit
30-2 Second RAW image storage unit
31-1 First reflection intensity calculation unit
31-2 Second reflection intensity calculation unit
32 Intensity signal difference calculation unit
33 Selection signal generation unit
34-1, 41-1 First I_Q signal calculation unit
34-2, 41-2 Second I_Q signal calculation unit
35 Selection unit
36 Depth conversion unit
40-1 First RAW image storage unit
40-2 Second RAW image storage unit
42-1 First depth conversion unit
42-2 Second depth conversion unit
43-1 First region determination feature amount calculation unit
43-2 Second region determination feature amount calculation unit
44 Comparison unit
45 Selection unit
51 Selection signal generation unit
C1 Pixel cell
C2 Pixel cell
C850 Reflection intensity signal
C940 Reflection intensity signal

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

An image-processing device (10) comprises: a difference calculation unit (11) that, on the basis of a light reception signal pertaining to reflection light of first distance-measurement light (L1) having a first wavelength (f1) and a light reception signal pertaining to reflection light of second distance-measurement light (L2) having a second wavelength (f2), calculates the difference (d) between the reflection intensity of the first distance-measurement light (L1) and the reflection intensity of the second distance-measurement light (L2) at the same distance-measurement position; and an output unit (12) that, on the basis of the difference (d), outputs a depth signal (SDP) based on same-phase component signals (I1, I2) and orthogonal component signals (Q1, Q2) that correspond to a light-reception signal (SL1) pertaining to the reflection light of the first distance-measurement light (L1) or to a light-reception signal (SL2) pertaining to the reflection light of the second distance-measurement light (L2) at each distance-measurement position.

Description

Image processing device, method, and electronic apparatus
This disclosure relates to image processing devices, methods, and electronic apparatuses.
In recent years, there has been an increasing movement to equip smartphones with ToF (Time of Flight) sensors for distance measurement.

Uses of the depth (distance) information acquired by the mounted ToF sensor have been proposed, such as autofocus control of the camera, photo processing that separates the subject from the background, and gesture recognition for the user interface.

In addition, obstacle detection using a ToF sensor has been proposed for automatic driving and driving support.
Japanese Patent No. 5951211
Japanese Unexamined Patent Publication No. 2015-141394
Japanese Unexamined Patent Publication No. 2019-049480
By the way, a ToF sensor irradiates a measurement object with predetermined ranging light and measures the distance to the measurement target position (each part of the measurement object) based on the difference between the phase of the irradiation light and the phase of the reflected light.
Therefore, when the ranging light is disturbed by external light such as sunlight, or when the reflectance of the ranging light differs, correct ranging processing may not be performed. As a result, correct obstacle detection and the like may not be possible.
The present disclosure has been made in view of such a situation, and aims to provide an image processing device, a method, and an electronic apparatus capable of reducing the influence of disturbance due to external light or differences in reflectance at the measurement object, and performing ranging processing at the measurement target position more correctly.
In order to achieve the above object, the image processing device of the present disclosure includes: a difference calculation unit that calculates, based on a received signal of the reflected light of first ranging light of a first wavelength and a received signal of the reflected light of second ranging light of a second wavelength, the difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and an output unit that outputs, based on the difference, a depth signal based on the in-phase component signal and the orthogonal component signal corresponding to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light for each ranging position.
FIG. 1 is an explanatory diagram of the principle of the image processing device of the embodiment.
FIG. 2 is an explanatory diagram of an example of the sunlight spectrum after passing through the atmosphere.
FIG. 3 is a schematic configuration block diagram of the image processing device of the first embodiment.
FIG. 4 is an explanatory diagram of a first aspect of the light receiving unit.
FIG. 5 is an explanatory diagram of a second aspect of the light receiving unit.
FIG. 6 is an explanatory diagram of pixel interpolation.
FIG. 7 is an explanatory diagram of an example of functional blocks of the signal processing unit of the first embodiment.
FIG. 8 is an overall processing flowchart of the embodiment.
FIG. 9 is a processing flowchart of the depth measurement processing of the first embodiment.
FIG. 10 is an explanatory diagram of an example of functional blocks of the signal processing unit of the second embodiment.
FIG. 11 is a processing flowchart of the depth measurement processing of the second embodiment.
FIG. 12 is a processing flowchart of the reliability calculation processing.
FIG. 13 is an explanatory diagram of an example of functional blocks of the signal processing unit of the third embodiment.
FIG. 14 is an explanatory diagram of the fourth embodiment.
Next, the embodiments of the present disclosure will be described in detail with reference to the drawings.

[1] Principle of the Embodiments

First, prior to the detailed description of the embodiments, the principle of the embodiments will be described.
FIG. 1 is an explanatory diagram of the principle of the image processing device of the embodiment.

In the following description, the sunlight spectrum after passing through the atmosphere means, for example, the sunlight spectrum under the Air Mass 1.5 conditions defined by ASTM [American Society for Testing and Materials].

The image processing device 10 of the embodiment includes a difference calculation unit 11 and an output unit 12.

The difference calculation unit 11 calculates the difference d between the reflection intensity of the first ranging light L1 and the reflection intensity of the second ranging light L2, based on the received signal SL1 of the reflected light of the first ranging light L1 of a first wavelength λ1 whose spectral radiant intensity in the sunlight spectrum after passing through the atmosphere is relatively high, and the received signal SL2 of the reflected light of the second ranging light L2 of a second wavelength λ2 (> λ1) whose spectral radiant intensity is relatively low.

The output unit 12 outputs the depth signal SDP corresponding to either the in-phase component signal I1 and the orthogonal component signal Q1 corresponding to the received signal SL1, or the in-phase component signal I2 and the orthogonal component signal Q2 corresponding to the received signal SL2, based on the difference d calculated by the difference calculation unit 11.
FIG. 2 is an explanatory diagram of an example of the sunlight spectrum after passing through the atmosphere.

In the above configuration, as shown in FIG. 2, the first ranging light L1 (for example, wavelength λ1 = 850 nm) has a relatively high spectral radiant intensity in the sunlight spectrum; that is, it is a component abundant in sunlight.

By the way, the first ranging light L1 has a shorter wavelength than the second ranging light L2 (for example, wavelength λ2 = 940 nm). Therefore, the first ranging light L1 scatters less and is advantageous in terms of image quality.
However, the first ranging light L1 cannot be separated from sunlight. For this reason, ranging using the first ranging light L1 may suffer reduced accuracy under sunlight.

On the other hand, the second ranging light L2 has a relatively low spectral radiant intensity in the sunlight spectrum; that is, it is a component contained little in sunlight after passing through the atmosphere.

Therefore, the second ranging light L2 can be separated from sunlight even under sunlight and is not easily affected by it.
However, since the wavelength of the second ranging light L2 is relatively long, blurring at the lens and scattering may become a problem.

Note that even when the wavelengths differ, there is no influence on the ranging itself (no influence on the phase difference).
Therefore, for portions where the influence of sunlight is small (pixels little affected by sunlight), the depth signal SDP corresponding to the in-phase component signal I and the orthogonal component signal Q of the received signal SL1 corresponding to the first ranging light L1, which scatters less and is advantageous in image quality, is output.
For portions affected by sunlight (pixels affected by sunlight), the depth signal SDP corresponding to the in-phase component signal I and the orthogonal component signal Q of the received signal SL2 corresponding to the second ranging light L2, which can be separated from sunlight and is not easily affected by it, is output.
As a result, according to the present embodiment, even in situations where portions little affected by sunlight coexist with portions affected by sunlight, for example, indoors near a window or outdoors in the shade of trees, and even in a scene where sunlit and shaded regions are mixed within a single image, the depth signal SDP can be obtained stably over the entire image by adaptively selecting for each region.
A more specific description follows.

The signal intensity of the received signal SL1 of the reflected light of the first ranging light L1 takes a correct value indoors and in other places not affected by sunlight.

In places affected by sunlight, such as outdoors, an offset due to sunlight is included, and the signal intensity of the received signal SL1 of the reflected light of the first ranging light L1 becomes a large value.
On the other hand, the signal intensity of the received signal SL2 of the reflected light of the second ranging light L2 is not affected by sunlight either indoors or outdoors, and takes a correct value.

Here, let the signal intensity of the received signal SL1 be AL1 and the signal intensity of the received signal SL2 be AL2, and obtain the signal intensity difference ΔA expressed by the following equation:

   ΔA = (AL1 - AL2) / (AL1 + AL2)
Then, for pixels (light receiving positions: light receiving element positions) where the signal intensity difference ΔA is larger than a predetermined threshold value, the influence of sunlight is regarded as large, and the depth signal SDP is calculated using the received signal SL2 of the reflected light of the second ranging light L2.
For pixels (light receiving positions: light receiving element positions) where the signal intensity difference ΔA is smaller than the predetermined threshold value, the influence of sunlight is regarded as small, and the depth signal SDP is calculated using the received signal SL1 of the reflected light of the first ranging light L1.
As a result, a correct depth signal can be calculated at any position, whether strongly or weakly affected by sunlight, so that, for example, camera control becomes stable, the image quality of photo processing improves, and obstacles can be detected robustly.
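The per-pixel rule above can be sketched as follows. The threshold value 0.2 and the function names are illustrative assumptions; the text only refers to "a predetermined threshold".

```python
def delta_a(al1, al2):
    """Signal intensity difference from the text: dA = (AL1-AL2)/(AL1+AL2)."""
    total = al1 + al2
    return 0.0 if total == 0 else (al1 - al2) / total

def depth_source(al1, al2, threshold=0.2):
    """Pick which received signal a pixel's depth is computed from.

    A large dA means the 850 nm channel carries a sunlight offset, so the
    940 nm signal SL2 is used; otherwise the sharper SL1 is used.  The
    threshold value is an illustrative assumption.
    """
    return "SL2" if delta_a(al1, al2) > threshold else "SL1"
```

Normalizing by (AL1 + AL2) makes the measure independent of overall scene brightness, so one threshold can serve both bright and dark regions.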
[2] First Embodiment

First, the image processing device of the first embodiment will be described.

FIG. 3 is a schematic configuration block diagram of the image processing device of the first embodiment.

The image processing device 20 includes: an irradiation unit 21 that irradiates an object OBJ to be measured with ranging light L including light of wavelength λ1 = 850 nm as the first ranging light L1 and light of wavelength λ2 = 940 nm as the second ranging light L2; a light receiving unit 22 that has a plurality of light receiving sensors, receives the reflected light of the first ranging light L1 and the reflected light of the second ranging light L2, and outputs the received signal SL1 or the received signal SL2 for each pixel; and a signal processing unit 23 that controls the irradiation unit 21 and generates the depth signal SDP based on the input received signal SL1 or received signal SL2.
As the irradiation unit 21, for example, an infrared LED capable of pulsed emission with a peak emission wavelength near λ1 = 850 nm and an infrared LED capable of pulsed emission with a peak emission wavelength near λ2 = 940 nm may be provided. Alternatively, an infrared light source capable of pulsed emission including both wavelength λ1 = 850 nm and wavelength λ2 = 940 nm can be used.
FIG. 4 is an explanatory diagram of the first aspect of the light receiving unit.

The light receiving unit 22 includes a light receiving lens 22A, a beam splitter (half mirror) 22B, a first TOF (Time of Flight) sensor 22D-1, and a second TOF sensor 22D-2.

The light receiving lens 22A collects the received light.

The beam splitter 22B divides the light transmitted through the light receiving lens 22A into two systems.

The first TOF sensor 22D-1 is provided on its light receiving surface with a filter 22C-1 band-limited so as to transmit light near wavelength λ1 = 850 nm out of one of the beams divided by the beam splitter 22B. The first TOF sensor 22D-1 receives light through this filter 22C-1 and outputs the received signal SL1.

The second TOF sensor 22D-2 is provided on its light receiving surface with a filter 22C-2 band-limited so as to transmit light near wavelength λ2 = 940 nm out of the other beam divided by the beam splitter 22B. The second TOF sensor 22D-2 receives light through this filter 22C-2 and outputs the received signal SL2.

Here, the first TOF sensor 22D-1 and the second TOF sensor 22D-2 each include a two-dimensional imaging element in which light receiving elements (pixel cells) are arranged two-dimensionally.

Note that a plurality of received signals SL1 and received signals SL2 are output in pixel (pixel cell) units.
FIG. 5 is an explanatory diagram of the second aspect of the light receiving unit.

In FIG. 5, the case of 4 × 4 pixels is shown for ease of understanding and simplicity of illustration.

The light receiving unit 22 includes a TOF sensor 25 having a plurality of pixel cells C1 band-limited so as to receive light near wavelength λ1 = 850 nm and a plurality of pixel cells C2 band-limited so as to receive light near wavelength λ2 = 940 nm.
The TOF sensor 25 includes: a lens array unit 25A in which lenses LS are arranged corresponding to the pixel cells C1 and C2; a filter unit 25B in which filters FL1 corresponding to the pixel cells C1 and filters FL2 corresponding to the pixel cells C2 are arranged alternately; and a light receiving unit 25C in which light receiving cells PC are arranged corresponding to the filters FL1 and FL2.
FIG. 6 is an explanatory diagram of pixel interpolation.

In this configuration, the TOF sensor 25 may further include a signal interpolation unit configured so that, assuming a pixel cell C1 (a virtual pixel cell C1) were located at the position of a pixel cell C2, the output of the light receiving cell of that virtual pixel cell C1 is interpolated based on the outputs of the light receiving cells PC of the pixel cells C1 located around the light receiving cell corresponding to that pixel cell C2. The same applies to the interpolation of the pixel cells C2.
Specifically, for example, when interpolation is performed for the pixel cells C1, four pixel cells C1 are located adjacent to the light receiving cell PC corresponding to the cross-hatched pixel cell C2 in FIG. 6, so the output of the virtual pixel cell C1 corresponding to the position of that pixel cell C2 may be taken as the average of the outputs of the four adjacent pixel cells C1.
When the virtual pixel cell C1 is located at one of the four corners of the TOF sensor 25, the average of the outputs of the two adjacent pixel cells C1 may be used; when the virtual pixel cell C1 is located on the periphery excluding the four corners, the average of the outputs of the three adjacent pixel cells C1 may be used.
The same applies to interpolation for the pixel cells C2, where two to four pixel cells C2 are located adjacent to the light receiving cell at the position being interpolated.
When the TOF sensor 25 does not include a signal interpolation unit, the same processing may be performed by the signal processing unit 23.
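As a concrete illustration, the neighbor-averaging rule described above can be sketched as follows. This is a minimal sketch rather than code from the patent: the function name, the NumPy representation of the checkerboard, and the single-pass processing of the whole plane are assumptions.

```python
import numpy as np

def interpolate_plane(raw, mask):
    """Build a full-resolution image for one wavelength plane.

    raw  : 2-D array of sensor outputs (checkerboard of C1/C2 cells)
    mask : boolean array, True where a cell belongs to this plane

    Cells of the other plane are filled with the average of the
    adjacent same-plane cells: 4 in the interior, 3 on an edge of
    the sensor, 2 at a corner, as described for FIG. 6.
    """
    h, w = raw.shape
    out = np.where(mask, raw.astype(float), 0.0)
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                continue  # a real sample of this plane is already present
            # collect the up/down/left/right neighbors that exist
            # and belong to this plane
            neigh = [raw[ny, nx]
                     for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]]
            out[y, x] = sum(neigh) / len(neigh)
    return out
```

On the 4 × 4 checkerboard of FIG. 5, every C2 position has two to four adjacent C1 cells, so `len(neigh)` is never zero; calling the same function with the inverted mask interpolates the C2 plane.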
FIG. 7 is an explanatory diagram of an example of the functional blocks of the signal processing unit of the first embodiment.
The signal processing unit 23 is broadly composed of a first RAW image storage unit 30-1, a second RAW image storage unit 30-2, a first reflection intensity calculation unit 31-1, a second reflection intensity calculation unit 31-2, an intensity signal difference calculation unit 32, a selection signal generation unit 33, a first I_Q signal calculation unit 34-1, a second I_Q signal calculation unit 34-2, a selection unit 35, and a depth conversion unit 36.
The first reflection intensity calculation unit 31-1 receives, from the light receiving unit 22, RAW image data RAW850 corresponding to the light of wavelength λ1 = 850 nm serving as the first ranging light L1. The first reflection intensity calculation unit 31-1 then calculates a reflection intensity signal C850 for each pixel and outputs it to the intensity signal difference calculation unit 32.
The second reflection intensity calculation unit 31-2 receives, from the light receiving unit 22, RAW image data RAW940 corresponding to the light of wavelength λ2 = 940 nm serving as the second ranging light L2. The second reflection intensity calculation unit 31-2 then calculates a reflection intensity signal C940 for each pixel and outputs it to the intensity signal difference calculation unit 32.
The intensity signal difference calculation unit 32 calculates the difference P between the input reflection intensity signal C850 and reflection intensity signal C940 by the following equation and outputs it to the selection signal generation unit 33:
    P = (C850 - C940) / (C850 + C940)
The first I_Q signal calculation unit 34-1 calculates, for each pixel based on the RAW image data RAW850, a first in-phase component signal I850 and a first quadrature component signal Q850 corresponding to the first in-phase component signal I850, and outputs them to the selection unit 35.
The second I_Q signal calculation unit 34-2 calculates, for each pixel based on the RAW image data RAW940, a second in-phase component signal I940 and a second quadrature component signal Q940 corresponding to the second in-phase component signal I940, and outputs them to the selection unit 35.
The selection signal generation unit 33 compares the difference P with a predetermined threshold value th, and when
    P ≤ th
holds, judges that the influence of sunlight is small and outputs to the selection unit 35 a selection signal sel for selecting the first in-phase component signal I850 and the first quadrature component signal Q850.
When
    P > th
holds, the selection signal generation unit 33 judges that the influence of sunlight is large and outputs to the selection unit 35 a selection signal sel for selecting the second in-phase component signal I940 and the second quadrature component signal Q940, which are less susceptible to sunlight.
Here, the predetermined threshold value th is set to a value sufficiently larger than the difference P that can occur when the influence of sunlight is small, and sufficiently smaller than the difference P that can occur when the influence of sunlight is large.
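Per pixel, the decision logic of the selection signal generation unit 33 then reduces to the following sketch (the function name and scalar interface are illustrative; the value of th is application-dependent, as just described):

```python
def select_wavelength(c850, c940, th):
    """Decide which wavelength's I/Q pair to use for one pixel.

    Computes P = (C850 - C940) / (C850 + C940); P <= th means
    little sunlight influence, so the 850 nm signals (I850, Q850)
    are chosen, otherwise the 940 nm signals (I940, Q940).
    """
    p = (c850 - c940) / (c850 + c940)
    return "850" if p <= th else "940"
```

A large P indicates that the 850 nm channel carries a disproportionate contribution (for example, from sunlight), so the 940 nm pair is selected.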
The selection unit 35 outputs to the depth conversion unit 36, based on the selection signal sel, either the combination of the first in-phase component signal I850 and the first quadrature component signal Q850 or the combination of the second in-phase component signal I940 and the second quadrature component signal Q940.
As a result, the depth conversion unit 36 calculates and outputs, for each pixel, a depth signal SDP corresponding to the distance to the object OBJ based on either the combination of the first in-phase component signal I850 and the first quadrature component signal Q850 or the combination of the second in-phase component signal I940 and the second quadrature component signal Q940.
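The text does not spell out the conversion performed by the depth conversion unit 36; a conventional indirect-ToF relation is sketched below under the assumption that the ranging light is amplitude-modulated at a frequency f_mod (the constant names and the modulation frequency are assumptions, not values from the patent):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def iq_to_depth(i, q, f_mod):
    """Convert one pixel's in-phase/quadrature pair to depth.

    Assumed conventional relation: the phase delay of the reflected
    modulated light is atan2(Q, I), and the round trip maps that
    phase to distance as phase * c / (4 * pi * f_mod).
    """
    phase = math.atan2(q, i) % (2.0 * math.pi)  # phase delay in [0, 2*pi)
    return phase * C / (4.0 * math.pi * f_mod)
```

With f_mod = 20 MHz the unambiguous range is c / (2 * f_mod), roughly 7.5 m. The same function serves both the I850/Q850 and I940/Q940 pairs, which is why the selection unit 35 can feed either pair to a single converter.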
Next, the processing operation of the first embodiment will be described.
FIG. 8 is an overall processing flowchart of the embodiment.
When processing starts, the object OBJ is imaged, and the process of measuring the depth (the distance to the object OBJ) (step S11) and the visible image acquisition process (step S12) are performed in parallel.
Then, an image processing step (step S13) of generating a depth image by applying the measured depth to the visible image is performed, and the processing ends.
Next, the depth measurement process will be described in detail.
FIG. 9 is a processing flowchart of the depth measurement process of the first embodiment.
When the irradiation unit 21 of the image processing apparatus 20 irradiates the object OBJ to be depth-measured (the ranging target) with the ranging light L, which includes the light of wavelength λ1 = 850 nm as the first ranging light L1 and the light of wavelength λ2 = 940 nm as the second ranging light L2, the light receiving unit 22 receives the reflected light of the first ranging light L1 and the reflected light of the second ranging light L2. The light receiving unit 22 then generates RAW image data RAW850 and RAW image data RAW940 and outputs them to the signal processing unit 23.
The signal processing unit 23 thereby acquires the RAW image data RAW850 (step S21) and outputs it to the first reflection intensity calculation unit 31-1 and the first I_Q signal calculation unit 34-1.
Similarly, the signal processing unit 23 acquires the RAW image data RAW940 (step S22) and outputs it to the second reflection intensity calculation unit 31-2 and the second I_Q signal calculation unit 34-2.
In these cases, the signal processing unit 23 stores the acquired RAW image data RAW850 and RAW image data RAW940 in a work memory (not shown).
The first reflection intensity calculation unit 31-1 then calculates the reflection intensity signal C850 for each pixel based on the acquired RAW image data RAW850 and outputs it to the intensity signal difference calculation unit 32 (step S23).
Similarly, the second reflection intensity calculation unit 31-2 calculates the reflection intensity signal C940 for each pixel based on the acquired RAW image data RAW940 and outputs it to the intensity signal difference calculation unit 32 (step S24).
As a result, the intensity signal difference calculation unit 32 calculates the difference P between the input reflection intensity signal C850 and reflection intensity signal C940 by the following equation and outputs it to the selection signal generation unit 33 (step S25):
    P = (C850 - C940) / (C850 + C940)
Meanwhile, the first I_Q signal calculation unit 34-1 calculates, for each pixel based on the RAW image data RAW850, the first in-phase component signal I850 and the first quadrature component signal Q850 corresponding to the first in-phase component signal I850, and outputs them to the selection unit 35 (step S26).
Similarly, the second I_Q signal calculation unit 34-2 calculates, for each pixel based on the RAW image data RAW940, the second in-phase component signal I940 and the second quadrature component signal Q940 corresponding to the second in-phase component signal I940, and outputs them to the selection unit 35 (step S27).
The selection signal generation unit 33 compares the difference P with the predetermined threshold value th. When
    P ≤ th
holds, the influence of sunlight is judged to be small, and a selection signal sel for selecting the first in-phase component signal I850 and the first quadrature component signal Q850 is output to the selection unit 35; when
    P > th
holds, the influence of sunlight is judged to be large, and a selection signal sel for selecting the second in-phase component signal I940 and the second quadrature component signal Q940, which are less susceptible to sunlight, is output to the selection unit 35 (step S28).
The selection unit 35 outputs to the depth conversion unit 36, based on the selection signal sel, either the combination of the first in-phase component signal I850 and the first quadrature component signal Q850 or the combination of the second in-phase component signal I940 and the second quadrature component signal Q940 (step S29).
As a result, the depth conversion unit 36 calculates and outputs, for each pixel, the depth signal SDP corresponding to the distance to the object OBJ based on either the combination of the first in-phase component signal I850 and the first quadrature component signal Q850 or the combination of the second in-phase component signal I940 and the second quadrature component signal Q940 (step S30).
As described above, according to the first embodiment, whether or not each pixel is affected by sunlight is determined. For a pixel judged to be little affected by sunlight, the depth signal SDP is calculated based on the first in-phase component signal I850 and the first quadrature component signal Q850 corresponding to the first ranging light L1 of the first wavelength λ1, which is scattered less. For a pixel judged to be strongly affected by sunlight, the depth signal SDP is calculated based on the second in-phase component signal I940 and the second quadrature component signal Q940 corresponding to the second ranging light L2 of the second wavelength λ2, which is less susceptible to sunlight. The resulting depth image is therefore highly accurate.
[3] Second Embodiment
In the first embodiment described above, the depth signal SDP is generated using, based on the intensity signal difference, either the first in-phase component signal I850 and the first quadrature component signal Q850 corresponding to the first ranging light L1 of the first wavelength λ1, or the second in-phase component signal I940 and the second quadrature component signal Q940 corresponding to the second ranging light L2 of the second wavelength λ2. In contrast, in the second embodiment, a depth signal is calculated for each of the first in-phase component signal I850 and first quadrature component signal Q850 corresponding to the first ranging light L1 of the first wavelength λ1 and the second in-phase component signal I940 and second quadrature component signal Q940 corresponding to the second ranging light L2 of the second wavelength λ2, and one of the depth signals is then selected as the depth signal SDP based on region determination feature amounts.
FIG. 10 is an explanatory diagram of an example of the functional blocks of the signal processing unit of the second embodiment.
Since the overall configuration is the same as that of the first embodiment, it will be described with reference to FIG. 3.
The signal processing unit 23 in the second embodiment is broadly composed of a first RAW image storage unit 40-1, a second RAW image storage unit 40-2, a first I_Q signal calculation unit 41-1, a second I_Q signal calculation unit 41-2, a first depth conversion unit 42-1, a second depth conversion unit 42-2, a first region determination feature amount calculation unit 43-1, a second region determination feature amount calculation unit 43-2, a comparison unit 44, and a selection unit 45.
The first I_Q signal calculation unit 41-1 receives, from the light receiving unit 22, the RAW image data RAW850 corresponding to the light of wavelength λ1 = 850 nm serving as the first ranging light L1, calculates, for each pixel based on the RAW image data RAW850, the first in-phase component signal I850 and the first quadrature component signal Q850 corresponding to the first in-phase component signal I850, and outputs them to the first depth conversion unit 42-1.
The second I_Q signal calculation unit 41-2 receives, from the light receiving unit 22, the RAW image data RAW940 corresponding to the light of wavelength λ2 = 940 nm serving as the second ranging light L2, calculates, for each pixel based on the RAW image data RAW940, the second in-phase component signal I940 and the second quadrature component signal Q940 corresponding to the second in-phase component signal I940, and outputs them to the second depth conversion unit 42-2.
The first depth conversion unit 42-1 calculates, for each pixel, a first depth signal DP1 corresponding to the distance to the object OBJ based on the combination of the first in-phase component signal I850 and the first quadrature component signal Q850, and outputs it to the selection unit 45.
The second depth conversion unit 42-2 calculates, for each pixel, a second depth signal DP2 corresponding to the distance to the object OBJ based on the combination of the second in-phase component signal I940 and the second quadrature component signal Q940, and outputs it to the selection unit 45.
Meanwhile, the first region determination feature amount calculation unit 43-1 calculates, based on the first depth image corresponding to the first depth signal DP1, the edge depiction degree ED1 in the first depth image (the edge degree: the degree to which the signal is buried in noise) and the S/N ratio σ1 of the flat portion of the first depth image, and outputs them to the comparison unit 44.
Similarly, the second region determination feature amount calculation unit 43-2 calculates, based on the second depth image corresponding to the second depth signal DP2, the edge depiction degree ED2 in the second depth image (the edge degree: the degree to which the signal is buried in noise) and the S/N ratio σ2 of the flat portion of the second depth image, and outputs them to the comparison unit 44.
Using the edge degree in the first depth image GDP1, the S/N ratio σ1 of its flat portion, the edge degree in the second depth image GDP2, and the S/N ratio σ2 of its flat portion as reliabilities, the comparison unit 44 outputs to the selection unit 45 a selection signal sel for selecting the more reliable of the first depth image GDP1 and the second depth image GDP2.
As a result, the selection unit 45 outputs, for each pixel, either the first depth signal DP1 or the second depth signal DP2 as the depth signal DP based on the selection signal sel.
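The per-pixel merge performed by the comparison unit 44 and the selection unit 45 can be sketched as below. This is a sketch only: the tie-breaking rule in favor of the first depth map is an assumption, since the text says only that the more reliable image is chosen.

```python
import numpy as np

def select_depth(dp1, dp2, rl1, rl2):
    """Merge two depth maps, keeping the more reliable value per pixel.

    dp1, dp2 : depth maps from the 850 nm and 940 nm channels
    rl1, rl2 : the corresponding reliability maps
    Ties are resolved in favor of dp1 (an assumed convention).
    """
    return np.where(rl1 >= rl2, dp1, dp2)
```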
Next, the processing operation of the second embodiment will be described.
In the second embodiment as well, the overall flow of processing is the same as in the first embodiment, so the detailed description given there applies.
Next, the depth measurement process of the second embodiment will be described in detail.
FIG. 11 is a processing flowchart of the depth measurement process of the second embodiment.
When the irradiation unit 21 of the image processing apparatus 20 irradiates the object OBJ to be depth-measured (the ranging target) with the ranging light L, which includes the light of wavelength λ1 = 850 nm as the first ranging light L1 and the light of wavelength λ2 = 940 nm as the second ranging light L2, the light receiving unit 22 receives the reflected light of the first ranging light L1 and the reflected light of the second ranging light L2, generates the RAW image data RAW850 and the RAW image data RAW940, outputs the RAW image data RAW850 to the first I_Q signal calculation unit 41-1, and outputs the RAW image data RAW940 to the second I_Q signal calculation unit 41-2.
The first RAW image storage unit 40-1 of the signal processing unit 23 thereby acquires the RAW image data RAW850 (step S41).
The first I_Q signal calculation unit 41-1 calculates, for each pixel based on the RAW image data RAW850 read from the first RAW image storage unit 40-1, the first in-phase component signal I850 and the first quadrature component signal Q850 corresponding to the first in-phase component signal I850, and outputs them to the first depth conversion unit 42-1 (step S42).
The first depth conversion unit 42-1 calculates, for each pixel, the first depth signal SDP1 corresponding to the distance to the object OBJ based on the combination of the first in-phase component signal I850 and the first quadrature component signal Q850, and outputs it to the first region determination feature amount calculation unit 43-1 and the selection unit 45 (step S43).
Subsequently, the first region determination feature amount calculation unit 43-1 calculates the reliability (step S44).
FIG. 12 is a processing flowchart of the reliability calculation process.
The first region determination feature amount calculation unit 43-1 calculates, based on the first depth image corresponding to the first depth signal DP1, the edge depiction degree in the first depth image (the edge extraction degree: the degree to which the signal is buried in noise) (step S51), and calculates a reliability E for the edge extraction degree (step S52).
In parallel with this, the first region determination feature amount calculation unit 43-1 calculates the variance σ of the S/N ratio of the flat portion of the first depth image (step S53) and calculates a reliability F for the variance σ (step S54).
Subsequently, the first region determination feature amount calculation unit 43-1 integrates the two reliabilities E and F and outputs the integrated reliability RL1 to the comparison unit 44 (step S55).
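Steps S51 to S55 can be sketched as follows. The patent names the two features but not the concrete functions, so everything beyond the feature names is an assumption here: the edge degree is taken as the mean gradient magnitude, the flat-area figure as mean over standard deviation, and the mappings to E and F as well as the multiplicative integration are placeholder choices.

```python
import numpy as np

def integrated_reliability(depth, flat_region):
    """Sketch of the reliability calculation (steps S51-S55).

    depth       : 2-D depth image (DP1 or DP2)
    flat_region : boolean mask of pixels treated as flat
    """
    gy, gx = np.gradient(depth.astype(float))
    edge_degree = float(np.hypot(gx, gy).mean())   # S51: edge feature
    e = 1.0 / (1.0 + edge_degree)                  # S52: reliability E (assumed mapping)
    flat = depth[flat_region].astype(float)
    snr = flat.mean() / (flat.std() + 1e-9)        # S53: flat-area S/N figure
    f = snr / (1.0 + snr)                          # S54: reliability F (assumed mapping)
    return e * f                                   # S55: integrated reliability RL
```

The same function applied to the second depth image yields RL2, so the comparison unit 44 only needs to compare two scalars (or two per-region maps, if the features are computed locally).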
Similarly, the second RAW image storage unit 40-2 acquires the RAW image data RAW940 (step S45).
The second I_Q signal calculation unit 41-2 thereby calculates, for each pixel based on the RAW image data RAW940, the second in-phase component signal I940 and the second quadrature component signal Q940 corresponding to the second in-phase component signal I940, and outputs them to the second depth conversion unit 42-2 (step S46).
The second depth conversion unit 42-2 calculates, for each pixel, the second depth signal SDP2 corresponding to the distance to the object OBJ based on the combination of the second in-phase component signal I940 and the second quadrature component signal Q940, and outputs it to the second region determination feature amount calculation unit 43-2 and the selection unit 45 (step S47).
The second region determination feature amount calculation unit 43-2 thereby calculates the reliability (step S48).
Specifically, the second region determination feature amount calculation unit 43-2 calculates, based on the second depth image corresponding to the second depth signal SDP2, the edge depiction degree in the second depth image (the edge extraction degree: the degree to which the signal is buried in noise) (step S51), and calculates the reliability E for the edge extraction degree (step S52).
In parallel with this, the variance σ of the S/N ratio of the flat portion of the second depth image is calculated (step S53), and the reliability F for the variance σ is calculated (step S54).
Subsequently, the two reliabilities E and F are integrated and the integrated reliability RL2 is output to the comparison unit 44 (step S55).
As a result, based on the integrated reliability RL1 output by the first region determination feature amount calculation unit 43-1 and the integrated reliability RL2 output by the second region determination feature amount calculation unit 43-2, the comparison unit 44 determines which of the integrated reliability RL1 and the integrated reliability RL2 is higher (step S49).
Based on the result of the determination, the comparison unit 44 outputs to the selection unit 45 a selection signal sel for selecting the more reliable of the depth signal SDP1 and the depth signal SDP2.
As a result, the selection unit 45 outputs, for each pixel, either the first depth signal SDP1 or the second depth signal SDP2 as the depth signal SDP based on the selection signal sel (step S50).
As described above, according to the second embodiment, whether or not each pixel is affected by sunlight is determined based on the reliabilities of the depth images. For a pixel judged to be little affected by sunlight, the depth signal SDP is calculated based on the first in-phase component signal I850 and the first quadrature component signal Q850 corresponding to the first ranging light L1 of the first wavelength λ1, which is scattered less. For a pixel judged to be strongly affected by sunlight, the more reliable depth signal SDP is calculated based on the second in-phase component signal I940 and the second quadrature component signal Q940 corresponding to the second ranging light L2 of the second wavelength λ2, which is less susceptible to sunlight.
Therefore, according to the second embodiment, the resulting depth image has even higher accuracy.
[4] Third Embodiment
The third embodiment differs from the embodiments described above in that the depth signal SDP is selected based on both the intensity signal difference and the region determination feature amounts.
FIG. 13 is an explanatory diagram of an example of the functional blocks of the signal processing unit of the third embodiment.
In FIG. 13, the same parts as in FIG. 7 or FIG. 10 are given the same reference numerals.
The signal processing unit 23 in the third embodiment is broadly composed of a first RAW image storage unit 40-1, a second RAW image storage unit 40-2, a first I_Q signal calculation unit 41-1, a second I_Q signal calculation unit 41-2, a first depth conversion unit 42-1, a second depth conversion unit 42-2, a first region determination feature amount calculation unit 43-1, a second region determination feature amount calculation unit 43-2, a first reflection intensity calculation unit 31-1, a second reflection intensity calculation unit 31-2, an intensity signal difference calculation unit 32, a selection signal generation unit 51, and a selection unit 45.
Here, the operation of the third embodiment will be described, referring again to FIG. 9 and FIG. 11.
When the irradiation unit 21 of the image processing apparatus 20 irradiates the object OBJ to be depth-measured (the ranging target) with the ranging light L, which includes the light of wavelength λ1 = 850 nm as the first ranging light L1 and the light of wavelength λ2 = 940 nm as the second ranging light L2, the light receiving unit 22 receives the reflected light of the first ranging light L1 and the reflected light of the second ranging light L2, generates the RAW image data RAW850 and the RAW image data RAW940, outputs the RAW image data RAW850 to the first I_Q signal calculation unit 41-1, and outputs the RAW image data RAW940 to the second I_Q signal calculation unit 41-2.
As a result, the first RAW image storage unit 40-1 of the signal processing unit 23 acquires the RAW image data RAW850.
The first I_Q signal calculation unit 41-1 then calculates, for each pixel, the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850 based on the RAW image data RAW850 read from the first RAW image storage unit 40-1, and outputs them to the first depth conversion unit 42-1.
The first depth conversion unit 42-1 then calculates, for each pixel, the first depth signal SDP1 corresponding to the distance to the object OBJ based on the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, and outputs it to the first region determination feature amount calculation unit 43-1 and the selection unit 45.
Subsequently, the first region determination feature amount calculation unit 43-1 calculates the reliability and outputs the integrated reliability RL1 to the selection signal generation unit 51.
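The depth conversion performed by the depth conversion units can be sketched as follows. This is a minimal illustration assuming a standard continuous-wave ToF phase calculation (phase delay from the arctangent of the orthogonal and in-phase components, scaled by the modulation frequency); the modulation frequency value and the function name are not taken from this publication and are chosen only for illustration.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def depth_from_iq(i_sig, q_sig, f_mod=20e6):
    """Convert one pixel's in-phase/orthogonal components to a depth [m].

    Assumes the usual CW-ToF model: the phase delay of the reflected
    light is atan2(Q, I), and one full phase cycle corresponds to a
    round-trip distance of c / f_mod (hypothetical 20 MHz modulation).
    """
    phase = math.atan2(q_sig, i_sig) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod)

# Example: I = 0, Q = 1 gives a phase delay of pi/2
d = depth_from_iq(0.0, 1.0)
```

Such a conversion is applied per pixel to (I850, Q850) and (I940, Q940) to produce the depth signals SDP1 and SDP2.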
Similarly, the second RAW image storage unit 40-2 acquires the RAW image data RAW940 (step S45).
The second I_Q signal calculation unit 41-2 then calculates, for each pixel, the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940 based on the RAW image data RAW940, and outputs them to the second depth conversion unit 42-2 (step S46).
The second depth conversion unit 42-2 calculates, for each pixel, the second depth signal SDP2 corresponding to the distance to the object OBJ based on the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, and outputs it to the second region determination feature amount calculation unit 43-2 and the selection unit 45 (step S47).
As a result, the second region determination feature amount calculation unit 43-2 calculates the reliability and outputs the integrated reliability RL2 to the selection signal generation unit 51.
Meanwhile, the first reflection intensity calculation unit 31-1 receives from the light receiving unit 22 the RAW image data RAW850 corresponding to the light of wavelength λ1 = 850 nm serving as the first ranging light L1, calculates the reflection intensity signal C850 for each pixel, and outputs it to the intensity signal difference calculation unit 32.
Likewise, the second reflection intensity calculation unit 31-2 receives from the light receiving unit 22 the RAW image data RAW940 corresponding to the light of wavelength λ2 = 940 nm serving as the second ranging light L2, calculates the reflection intensity signal C940 for each pixel, and outputs it to the intensity signal difference calculation unit 32.
The intensity signal difference calculation unit 32 calculates the difference P between the input reflection intensity signal C850 and the reflection intensity signal C940 by the following equation and outputs it to the selection signal generation unit 51.

    P = (C850 - C940) / (C850 + C940)
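The per-pixel calculation of P can be sketched as follows; NumPy is assumed here, and the small epsilon guarding the denominator against division by zero is an addition for robustness, not part of the published equation.

```python
import numpy as np

def intensity_difference(c850, c940, eps=1e-12):
    """Normalized reflection-intensity difference
    P = (C850 - C940) / (C850 + C940), computed per pixel
    over whole reflection-intensity images."""
    c850 = np.asarray(c850, dtype=np.float64)
    c940 = np.asarray(c940, dtype=np.float64)
    return (c850 - c940) / (c850 + c940 + eps)

p = intensity_difference([[4.0, 1.0]], [[2.0, 1.0]])
# p[0, 0] = (4 - 2) / (4 + 2), p[0, 1] = 0
```

Normalizing by the sum keeps P in the range [-1, 1], so the same threshold can be applied regardless of the absolute reflected-light level.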
From these results, the comparison unit 44 determines which of the depth signal SDP1 and the depth signal SDP2 should be selected, based on the integrated reliability RL1 output by the first region determination feature amount calculation unit 43-1, the integrated reliability RL2 output by the second region determination feature amount calculation unit 43-2, and the difference P, and outputs the result to the selection unit 45 as the selection signal sel.
In this case, if the comparison result of the integrated reliability RL1 and the integrated reliability RL2 agrees with the selection result based on the difference P, that selection result is adopted.
When the difference between the integrated reliability RL1 and the integrated reliability RL2 is large and the difference P is small, the comparison result of the integrated reliability RL1 and the integrated reliability RL2 is adopted as the selection result.
When the difference between the integrated reliability RL1 and the integrated reliability RL2 is small and the difference P is large, the selection result based on the difference P is adopted.
When the difference between the integrated reliability RL1 and the integrated reliability RL2 is small and the difference P is also small, the result is considered to differ little whichever signal is chosen, so either may be selected arbitrarily (for example, the apparatus may be configured in advance to select the depth signal SDP1 as the depth signal SDP).
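The four-case selection rule can be summarized in a short sketch. The thresholds deciding whether a difference counts as "large", and the mapping from the sign of P to a per-criterion choice, are not specified in the publication and are hypothetical here.

```python
def select_depth(rl1, rl2, p, th_rl=0.1, th_p=0.1):
    """Return 1 to select SDP1 or 2 to select SDP2 for one pixel.

    rl1, rl2: integrated reliabilities of the two depth signals.
    p:        normalized intensity difference (C850 - C940)/(C850 + C940).
    th_rl, th_p: hypothetical thresholds for a "large" difference.
    """
    by_rl = 1 if rl1 >= rl2 else 2   # reliability-based choice
    by_p = 1 if p >= 0.0 else 2      # intensity-difference-based choice (assumed mapping)
    rl_large = abs(rl1 - rl2) >= th_rl
    p_large = abs(p) >= th_p
    if by_rl == by_p:
        return by_rl                 # both criteria agree: adopt that result
    if rl_large and not p_large:
        return by_rl                 # trust the reliability comparison
    if p_large and not rl_large:
        return by_p                  # trust the intensity difference
    return 1                         # both differences small: default to SDP1
```

Applied per pixel, the returned value plays the role of the selection signal sel fed to the selection unit 45.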
As a result, the selection unit 45 outputs, for each pixel, either the first depth signal SDP1 or the second depth signal SDP2 as the depth signal SDP based on the selection signal sel.
As described above, according to the third embodiment, whether each pixel is affected by sunlight is judged from the reliability of the depth image and/or the intensity signal difference, and the more reliable depth signal SDP is adopted for each pixel; the resulting depth image is therefore more accurate.
[5] Fourth Embodiment
In each of the above embodiments, the ranging light L comprises the first ranging light L1 (for example, wavelength 850 nm), a component abundant in sunlight, and the second ranging light L2 (for example, wavelength 940 nm), a component whose spectral radiant intensity in the sunlight spectrum is relatively low, that is, which is scarcely contained in sunlight. In contrast, the fourth embodiment uses, as the first ranging light L1, light in a frequency band in which the type of object to be measured has a high reflectance and, as the second ranging light L2, light in a frequency band in which it has a low reflectance. That is, the first ranging light L1 and the second ranging light L2 are set such that the reflectance for the first ranging light L1 (the reflectance of the distance measuring object at the first wavelength) is significantly higher than the reflectance for the second ranging light L2 (the reflectance of the distance measuring object at the second wavelength).
First, the principle of the fourth embodiment will be described.
It is known that the strength of reflection in each wavelength band differs depending on the type of material of the object to be measured (for example, plants, soil, water, etc.).
That is, once the material of the object to be measured is determined, the frequency band with high reflectance and the frequency band with low reflectance are also determined.
Accordingly, the object to be measured can be regarded as present in a region where, for that material, the difference between the reflected signal in the high-reflectance frequency band and the reflected signal in the low-reflectance frequency band is large.
FIG. 14 is an explanatory diagram of the fourth embodiment.
FIG. 14 shows the relationship between wavelength and reflection intensity for plants, soil, water, the ground surface, and the water surface as measurement objects.
As a first example, when a plant is the object to be measured, the difference is taken between the reflected signal near 800 nm, where the reflectance of the plant is high (corresponding to the reflectance for the first ranging light L1), and the reflected signal near 500 nm, where the reflectance of the plant is low (corresponding to the reflectance for the second ranging light L2); if the difference is large, it can be determined that a plant is present in that region.
Therefore, when a plant is the object to be measured, for pixels with a large difference between the reflected signals, the distance (depth) to the plant can be obtained stably by selecting the reflected signal near 800 nm, where the reflectance of the plant is high.
As a second example, when the water surface is the object to be measured, the difference is taken between the reflected signal near 400 nm, where the reflectance of water is high (corresponding to the reflectance for the first ranging light L1), and the reflected signal near 800 nm, where the reflectance of water is low (corresponding to the reflectance for the second ranging light L2); if the difference is large, it can be determined that water is present in that region.
Therefore, when water is the object to be measured, for pixels with a large difference between the reflected signals, the distance (depth) to the water surface can be obtained stably by selecting the reflected signal near 400 nm, where the reflectance of water is high.
In this way, by using the first ranging light L1, for which the reflectance of the measurement object is high, and the second ranging light L2, for which the reflectance of the measurement object is low, it is possible to determine whether the measurement object is present and, if present, to accurately obtain the distance (depth) to it.
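The material-detection idea above can be sketched as follows, using the plant example (high reflectance near 800 nm, low near 500 nm). The threshold value, the epsilon guard, and the function name are hypothetical additions for illustration.

```python
import numpy as np

def material_mask(refl_high_band, refl_low_band, th=0.3, eps=1e-12):
    """Per-pixel mask that is True where the target material is presumed present.

    refl_high_band: reflection intensities at the wavelength where the
                    material reflects strongly (e.g. ~800 nm for plants).
    refl_low_band:  reflection intensities at the wavelength where it
                    reflects weakly (e.g. ~500 nm for plants).
    A large normalized difference marks the material's presence.
    """
    hi = np.asarray(refl_high_band, dtype=np.float64)
    lo = np.asarray(refl_low_band, dtype=np.float64)
    diff = (hi - lo) / (hi + lo + eps)
    return diff >= th

mask = material_mask([0.8, 0.2], [0.1, 0.2])
# first pixel: large normalized difference -> material presumed present
```

For pixels where the mask is True, the depth would then be taken from the high-reflectance band, following the selection scheme of the earlier embodiments.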
[6] Modifications of the Embodiments
In the first to third embodiments above, the ranging light L comprises the first ranging light L1 (for example, wavelength 850 nm), a component abundant in sunlight, and the second ranging light L2 (for example, wavelength 940 nm), a component scarcely contained in sunlight. However, the wavelengths are not limited to these; any combination of wavelengths that can remove the influence of sunlight may be applied as appropriate.
The effects described in the present specification are merely examples and are not limitative; other effects may also be obtained.
The present technology can also take the following configurations.
(1)
An image processing apparatus comprising:
a difference calculation unit that calculates, based on a received signal of reflected light of first ranging light of a first wavelength and a received signal of reflected light of second ranging light of a second wavelength, a difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and
an output unit that outputs, based on the difference, a depth signal corresponding to the in-phase component signal and the orthogonal component signal that correspond to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light for each ranging position.
(2)
The image processing apparatus according to (1), wherein the output unit comprises:
a selection unit that selects and outputs, based on the difference, either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light;
a calculation unit that calculates an in-phase component signal and an orthogonal component signal from the received signal selected by the selection unit; and
a conversion unit that performs, based on the calculation result of the calculation unit, depth conversion on the in-phase component signal and the orthogonal component signal corresponding to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light, and outputs the depth signal.
(3)
The image processing apparatus according to (1), wherein the output unit comprises:
a calculation unit that calculates an in-phase component signal and an orthogonal component signal from each of the received signal of the reflected light of the first ranging light and the received signal of the reflected light of the second ranging light;
a conversion unit that performs depth conversion on the in-phase component signal and the orthogonal component signal calculated from the received signal of the reflected light of the first ranging light to output a first depth signal as the depth signal, and performs depth conversion on the in-phase component signal and the orthogonal component signal calculated from the received signal of the reflected light of the second ranging light to output a second depth signal as the depth signal; and
a selection unit that selects and outputs, based on the difference, either the first depth signal or the second depth signal.
(4)
The image processing apparatus according to any one of (1) to (3), wherein the first ranging light has a relatively high spectral radiant intensity in the sunlight spectrum, the second ranging light has a relatively low spectral radiant intensity, and the second wavelength is longer than the first wavelength.
(5)
The image processing apparatus according to (4), wherein the first wavelength has a center wavelength of 850 nm and the second wavelength has a center wavelength of 940 nm.
(6)
The image processing apparatus according to any one of (1) to (4), wherein the first wavelength and the second wavelength are set such that the reflectance of the distance measuring object at the first wavelength is significantly higher than the reflectance at the second wavelength.
(7)
A method executed by an image processing apparatus, comprising:
calculating, based on a received signal of reflected light of first ranging light of a first wavelength and a received signal of reflected light of second ranging light of a second wavelength, a difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and
outputting, based on the difference, a depth signal corresponding to the in-phase component signal and the orthogonal component signal that correspond to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light for each ranging position.
(8)
An electronic apparatus comprising:
an irradiation unit that irradiates first ranging light of a first wavelength and second ranging light of a second wavelength;
an imaging unit that receives the reflected light of the first ranging light and the reflected light of the second ranging light and outputs a received signal of the reflected light of the first ranging light and a received signal of the reflected light of the second ranging light;
a difference calculation unit that calculates, based on the received signal of the reflected light of the first ranging light and the received signal of the reflected light of the second ranging light, a difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and
an output unit that outputs, based on the difference, a depth signal corresponding to the in-phase component signal and the orthogonal component signal that correspond to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light for each ranging position.
10 Image processing apparatus
11 Difference calculation unit
12 Output unit
20 Image processing apparatus
21 Irradiation unit
22 Light receiving unit
22A Light receiving lens
22B Beam splitter
22C-1, 22C-2 Filter
22D-1 First TOF sensor
22D-2 Second TOF sensor
23 Signal processing unit
25 TOF sensor
25A Lens array unit
25B Filter unit
25C Light receiving unit
30-1 First RAW image storage unit
30-2 Second RAW image storage unit
31-1 First reflection intensity calculation unit
31-2 Second reflection intensity calculation unit
32 Intensity signal difference calculation unit
33 Selection signal generation unit
34-1, 41-1 First I_Q signal calculation unit
34-2, 41-2 Second I_Q signal calculation unit
35 Selection unit
36 Depth conversion unit
40-1 First RAW image storage unit
40-2 Second RAW image storage unit
42-1 First depth conversion unit
42-2 Second depth conversion unit
43-1 First region determination feature amount calculation unit
43-2 Second region determination feature amount calculation unit
44 Comparison unit
45 Selection unit
51 Selection signal generation unit
C1, C2 Pixel cell
C850, C940 Reflection intensity signal
d Difference
DP1 First depth signal
DP2 Second depth signal
E, F Reliability
GDP1 First depth image
GDP2 Second depth image
I1, I2 In-phase component signal
I850 First in-phase component signal
I940 Second in-phase component signal
L Ranging light
L1 First ranging light
L2 Second ranging light
LS Lens
OBJ Object
P Difference
PC Light receiving cell
Q, Q1, Q2 Orthogonal component signal
Q850 First orthogonal component signal
Q940 Second orthogonal component signal
RAW850, RAW940 RAW image data
RL1, RL2 Integrated reliability
SDP Depth signal
SDP1 First depth signal
SDP2 Second depth signal
sel Selection signal
SL1, SL2 Received light signal
th Threshold
λ1 First wavelength
λ2 Second wavelength
σ1, σ2 SN ratio

Claims (8)

1. An image processing apparatus comprising:
   a difference calculation unit that calculates, based on a received signal of reflected light of first ranging light of a first wavelength and a received signal of reflected light of second ranging light of a second wavelength, a difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and
   an output unit that outputs, based on the difference, a depth signal based on an in-phase component signal and an orthogonal component signal corresponding to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light for each ranging position.
2. The image processing apparatus according to claim 1, wherein the output unit comprises:
   a selection unit that selects and outputs, based on the difference, either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light;
   a calculation unit that calculates an in-phase component signal and an orthogonal component signal from the received signal selected by the selection unit; and
   a conversion unit that performs, based on the calculation result of the calculation unit, depth conversion on the in-phase component signal and the orthogonal component signal corresponding to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light, and outputs the depth signal.
3. The image processing apparatus according to claim 1, wherein the output unit comprises:
   a calculation unit that calculates an in-phase component signal and an orthogonal component signal from each of the received signal of the reflected light of the first ranging light and the received signal of the reflected light of the second ranging light;
   a conversion unit that performs depth conversion on the in-phase component signal and the orthogonal component signal calculated from the received signal of the reflected light of the first ranging light to output a first depth signal as the depth signal, and performs depth conversion on the in-phase component signal and the orthogonal component signal calculated from the received signal of the reflected light of the second ranging light to output a second depth signal as the depth signal; and
   a selection unit that selects and outputs, based on the difference, either the first depth signal or the second depth signal.
4. The image processing apparatus according to claim 1, wherein the first ranging light has a relatively high spectral radiant intensity in the sunlight spectrum, the second ranging light has a relatively low spectral radiant intensity, and the second wavelength is longer than the first wavelength.
5. The image processing apparatus according to claim 4, wherein the first wavelength has a center wavelength of 850 nm and the second wavelength has a center wavelength of 940 nm.
6. The image processing apparatus according to claim 1, wherein the first wavelength and the second wavelength are set such that the reflectance of the distance measuring object at the first wavelength is significantly higher than the reflectance at the second wavelength.
7. A method executed by an image processing apparatus, comprising:
   calculating, based on a received signal of reflected light of first ranging light of a first wavelength and a received signal of reflected light of second ranging light of a second wavelength, a difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and
   outputting, based on the difference, a depth signal based on an in-phase component signal and an orthogonal component signal corresponding to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light for each ranging position.
8. An electronic apparatus comprising:
   an irradiation unit that irradiates first ranging light of a first wavelength and second ranging light of a second wavelength;
   an imaging unit that receives the reflected light of the first ranging light and the reflected light of the second ranging light and outputs a received signal of the reflected light of the first ranging light and a received signal of the reflected light of the second ranging light;
   a difference calculation unit that calculates, based on the received signal of the reflected light of the first ranging light and the received signal of the reflected light of the second ranging light, a difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and
   an output unit that outputs, based on the difference, a depth signal based on an in-phase component signal and an orthogonal component signal corresponding to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light for each ranging position.
PCT/JP2020/019368 2019-05-22 2020-05-14 Image-processing device, method, and electronic apparatus WO2020235458A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019096216 2019-05-22
JP2019-096216 2019-05-22

Publications (1)

Publication Number Publication Date
WO2020235458A1 true WO2020235458A1 (en) 2020-11-26

Family

ID=73458897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/019368 WO2020235458A1 (en) 2019-05-22 2020-05-14 Image-processing device, method, and electronic apparatus

Country Status (1)

Country Link
WO (1) WO2020235458A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023119797A1 (en) * 2021-12-23 2023-06-29 株式会社Jvcケンウッド Imaging device and imaging method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003014430A (en) * 2001-07-03 2003-01-15 Minolta Co Ltd Three-dimensional measuring method and three- dimensional measuring apparatus
JP2008032427A (en) * 2006-07-26 2008-02-14 Fujifilm Corp Distance image creation method, distance image sensor, and photographing apparatus
JP2008175538A (en) * 2007-01-16 2008-07-31 Fujifilm Corp Imaging apparatus, method, and program
US20110032508A1 (en) * 2009-08-06 2011-02-10 Irvine Sensors Corporation Phase sensing and scanning time of flight LADAR using atmospheric absorption bands
US20110158481A1 (en) * 2009-12-30 2011-06-30 Hon Hai Precision Industry Co., Ltd. Distance measuring system
US20170234977A1 (en) * 2016-02-17 2017-08-17 Electronics And Telecommunications Research Institute Lidar system and multiple detection signal processing method thereof
WO2018104464A1 (en) * 2016-12-07 2018-06-14 Sony Semiconductor Solutions Corporation Apparatus and method
WO2019078074A1 (en) * 2017-10-20 2019-04-25 Sony Semiconductor Solutions Corporation Depth image acquiring apparatus, control method, and depth image acquiring system


Similar Documents

Publication Publication Date Title
JP5448617B2 (en) Distance estimation device, distance estimation method, program, integrated circuit, and camera
JP7086001B2 (en) 2022-06-16 Adaptive optical lidar receiver
JP6863342B2 (en) Optical ranging device
KR102561099B1 (en) ToF(time of flight) capturing apparatus and method for reducing of depth distortion caused by multiple reflection thereof
US7511801B1 (en) Method and system for automatic gain control of sensors in time-of-flight systems
US10670719B2 (en) Light detection system having multiple lens-receiver units
US9258548B2 (en) Apparatus and method for generating depth image
KR102112298B1 (en) Method and apparatus for generating color image and depth image
US12270950B2 (en) LIDAR system with fog detection and adaptive response
KR102787166B1 (en) LiDAR device and operating method of the same
KR101145132B1 (en) The three-dimensional imaging pulsed laser radar system using geiger-mode avalanche photo-diode focal plane array and auto-focusing method for the same
JP2020020612A (en) Distance measuring device, method for measuring distance, program, and mobile body
Lee et al. Highly precise AMCW time-of-flight scanning sensor based on parallel-phase demodulation
WO2020235458A1 (en) Image-processing device, method, and electronic apparatus
US8805075B2 (en) Method and apparatus for identifying a vibrometry spectrum in imaging applications
US20230243974A1 (en) Method And Device For The Dynamic Extension of a Time-of-Flight Camera System
JP6135871B2 (en) Light source position detection device, light source tracking device, control method, and program
JP2024153778A (en) Signal Processing Device
JP7262064B2 (en) Ranging Imaging System, Ranging Imaging Method, and Program
KR102211483B1 (en) Information estimation apparatus and mothod of the object based on the laser pattern analysis
KR20220048196A (en) Apparatus for LIDAR
KR20210072671A (en) Lidar module
KR20150133086A (en) Method for generating depth image and image generating apparatus using thereof
US20190212421A1 (en) Laser Scanning Devices and Methods for Extended Range Depth Mapping
CN111445507A (en) Data processing method for non-visual field imaging

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20809749

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20809749

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP