WO2013190899A1 - Imaging device and automatic focus adjustment method - Google Patents
Imaging device and automatic focus adjustment method
- Publication number
- WO2013190899A1 (PCT/JP2013/061840)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- phase difference
- pixel
- pixels
- output signal
- color
- Prior art date
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
- G02B7/34—Systems for automatic generation of focusing signals using different areas in a pupil plane
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/12—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/32—Means for focusing
- G03B13/34—Power focusing
- G03B13/36—Autofocus systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/672—Focus control based on electronic image sensor signals based on the phase difference signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/703—SSIS architectures incorporating pixels for producing signals other than image signals
- H04N25/704—Pixels specially adapted for focusing, e.g. phase difference pixel sets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2209/00—Details of colour television systems
- H04N2209/04—Picture signal generators
- H04N2209/041—Picture signal generators using solid-state devices
- H04N2209/042—Picture signal generators using solid-state devices having a single pick-up sensor
- H04N2209/045—Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
Definitions
- The present invention relates to an imaging apparatus and an automatic focus adjustment method, and more particularly to an imaging apparatus and an automatic focus adjustment method that automatically adjust the focus by detecting the phase difference of picked-up signals.
- Automatic focus adjustment (phase difference AF) is known in which first and second phase difference pixels are provided in a part of an image sensor, the phase difference between the output signals of the first and second phase difference pixels is detected, and the focal position of the focus lens is adjusted based on the detected phase difference.
- In such an arrangement, the output signals of the first and second phase difference pixels are affected by crosstalk from adjacent pixels of other colors, and there is a problem that the focus detection accuracy is lowered.
- Therefore, the focus detection apparatus described in Patent Document 1 corrects the output signals of the first and second phase difference pixels according to the output signals of the surrounding pixels, and detects the phase difference based on the corrected signals.
- The imaging apparatus described in Patent Document 2 determines whether or not the pixels around a phase difference pixel are saturated, and switches the crosstalk correction according to the determination result.
- However, the phase difference pixel described in Patent Document 1 is a white pixel whose spectral sensitivity characteristic is the sum of the spectral sensitivity characteristics of the RGB pixels, and the phase difference pixel described in Patent Document 2 is a G pixel. Even if crosstalk correction is performed on the output signal of such a phase difference pixel, when the subject is extremely red or the color temperature is low, the signal output necessary to detect the phase difference accurately cannot be obtained, and there is a problem that phase difference AF cannot be performed accurately.
- In a so-called Bayer array color filter, the G pixels among the red (R), green (G), and blue (B) pixels are arranged in a checkered pattern, and the R pixels and B pixels are arranged line-sequentially in the remaining positions. Consequently, an R pixel and a B pixel are never adjacent to each other at the minimum pitch; a G pixel always lies between the R pixel and the B pixel.
- The present invention has been made in view of such circumstances, and an object thereof is to provide an imaging apparatus and an automatic focus adjustment method capable of accurately performing phase difference AF even when the subject is extremely red or the color temperature is low.
- In order to achieve the above object, an imaging apparatus according to one aspect of the present invention includes: a photographing lens; an image sensor having at least red (R), green (G), and blue (B) pixels and first and second phase difference pixels on which subject images that have passed through different first and second regions of the photographing lens are respectively formed by pupil division, the image sensor also having a first B pixel adjacent, at the minimum pitch, to a first R pixel among the R pixels in a first direction, and a second B pixel adjacent to a second R pixel among the R pixels in a second direction opposite to the first direction; a determination unit that determines whether or not the subject color in a set focus detection region is red based on the output signals of the focus detection region; a phase difference detection unit that detects the phase difference based on the output signals of the first and second phase difference pixels in the focus detection region when the determination unit determines that the subject color in the focus detection region is not red, and detects the phase difference based on the output signals of the first and second B pixels in the focus detection region when the determination unit determines that the subject color in the focus detection region is red; and a focus adjustment unit that adjusts the focal position of the photographing lens based on the phase difference detected by the phase difference detection unit.
- That is, the image sensor includes R pixels, G pixels, B pixels, and first and second phase difference pixels, and the first and second B pixels are each adjacent to an R pixel at the minimum pitch in the first or second direction. It is determined whether or not the subject color in the preset focus detection region of the image sensor is red. If it is determined that the subject color is not red, the phase difference is detected based on the output signals of the first and second phase difference pixels in the focus detection region. On the other hand, if it is determined that the subject color in the focus detection region is red, the phase difference is detected based on the output signals of the first and second B pixels in the focus detection region.
- When the subject color is red, color mixture having angle dependency leaks from the adjacent R pixels into the first and second phase difference pixels and combines with the angle dependency of the phase difference pixels themselves; therefore, phase difference AF cannot be performed with high accuracy based on the output signals of the first and second phase difference pixels.
- On the other hand, the first and second B pixels also receive color mixture having angle dependency from the adjacent R pixels; in particular, when the subject color is red, the color mixture component increases and may even exceed the original output of the B pixel.
- The present invention exploits this color mixture having angle dependency from the adjacent R pixels: when the subject color is red, the first and second B pixels are used as phase difference pixels, and phase difference AF can be performed accurately based on their output signals. Note that no dedicated phase difference pixels for a red subject need to be provided; since the phase difference is detected using the normal B pixels of the image sensor, resolution is not sacrificed.
- In the imaging apparatus, it is preferable that the determination unit calculates the ratio between the integrated value of the output signals of the R pixels and the integrated value of the output signals of the G pixels in the preset focus detection region of the image sensor, and compares the calculated ratio with a preset threshold value to determine whether the subject color in the focus detection region is red.
- When the subject color is red, the output of the G pixels is smaller than the output of the R pixels, so the ratio between the integrated value of the R pixel output signals and the integrated value of the G pixel output signals changes greatly. Therefore, whether or not the subject color in the focus detection region is red can be determined by comparing this ratio with a preset threshold value.
- The ratio at which the accuracy of phase difference AF based on the output signals of the first and second B pixels first exceeds that of phase difference AF based on the output signals of the first and second phase difference pixels can be obtained by testing or the like, and that ratio can be used as the threshold value.
- In the imaging apparatus, it is preferable that the first and second phase difference pixels are pixels each having a G filter or a colorless filter. This is because the output signal of a pixel having a G filter or a colorless filter contributes more to the luminance signal than the output signals of pixels of the other colors and is therefore suitable for phase difference AF.
- In the imaging apparatus, it is preferable that the first B pixels and the second B pixels are alternately arranged on one line in the first direction, and that the phase difference detection unit detects a first phase difference based on the output signals of the first B pixels arranged in a first line in the first direction and the output signals of the second B pixels arranged in a second line adjacent to the first line, detects a second phase difference based on the output signals of the second B pixels in the first line and the first B pixels in the second line, and averages the detected first and second phase differences to detect the phase difference.
- In the imaging apparatus, it is preferable that the first phase difference pixels and the second phase difference pixels are alternately arranged on one line in the first direction, and that the phase difference detection unit detects a third phase difference based on the output signals of the first phase difference pixels arranged in a third line in the first direction and the output signals of the second phase difference pixels arranged in a fourth line adjacent to the third line, detects a fourth phase difference based on the output signals of the second phase difference pixels arranged in the third line and the first phase difference pixels arranged in the fourth line, and averages the detected third and fourth phase differences to detect the phase difference.
- It is preferable that the imaging apparatus includes a rolling readout unit that sequentially reads out a signal for each line of the image sensor and a mechanical shutter that blocks light incident on the image sensor, and that the phase difference detection unit continuously detects the phase difference based on the signals continuously read by the rolling readout unit with the mechanical shutter opened.
- This makes it possible to detect the phase difference without operating the mechanical shutter and without being affected by the rolling readout; in particular, phase difference AF can be performed during movie recording, and phase difference AF can be made faster even in the case of still images.
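- Schematically, continuous phase difference AF with the mechanical shutter held open reduces to a loop of the following shape. This is a sketch only; the sensor, AF, and lens-driver objects and their method names are assumptions for illustration, not an API defined in this description.

```python
def continuous_phase_difference_af(sensor, af, lens_driver):
    """Continuous phase difference AF during rolling readout.

    With the mechanical shutter open, the rolling readout unit reads
    signals line by line, and the phase difference is re-detected on
    every frame, e.g. during movie recording.
    """
    while sensor.is_running():
        frame = sensor.read_frame_rolling()        # sequential line-by-line readout
        phase = af.detect_phase_difference(frame)  # from the focus detection region
        defocus = af.defocus_from_phase(phase)
        lens_driver.move_focus_lens(-defocus)      # drive toward zero defocus
```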
- It is preferable that the imaging apparatus includes a defocus amount calculation unit that obtains the defocus amount of the photographing lens, and that the focus adjustment unit moves the photographing lens to a position where the defocus amount obtained by the defocus amount calculation unit becomes zero.
- It is preferable that the defocus amount calculation unit corrects, according to the color mixture rate, the relational expression indicating the relationship between the phase difference and the defocus amount when there is no color mixture, and obtains the defocus amount based on the detected phase difference and the corrected relational expression.
- Alternatively, a table showing the relationship between the phase difference and the defocus amount for each color mixture rate may be provided, and the corresponding defocus amount may be read from the table based on the detected phase difference and the color mixture rate.
- In the imaging apparatus, it is preferable that the phase difference detection unit corrects the output signal of at least one of the first and second phase difference pixels based on the color mixture rate from the peripheral pixels to that phase difference pixel and the output signals of the peripheral pixels, and detects the phase difference based on the corrected output signals of the first and second phase difference pixels.
- When the subject color is not red, the phase difference is detected based on the output signals of the first and second phase difference pixels. Even in this case, at least one of the first and second phase difference pixels is affected by color mixture from the surrounding pixels. Therefore, a correction that removes the color mixture component is applied to the output signal of at least one of the first and second phase difference pixels, and the phase difference is detected based on the corrected output signals of the first and second phase difference pixels.
- In the imaging apparatus, an R pixel is disposed adjacent to one of the first and second phase difference pixels in the first direction, and it is preferable that the color mixture rate is the color mixture rate from that R pixel to the phase difference pixel adjacent to it.
- In the imaging apparatus, it is preferable that the phase difference detection unit obtains the color mixture rate based on the ratio between the output signal of the first B pixel and the output signal of the second B pixel.
- In the imaging apparatus, it is preferable that the image sensor has the first and second B pixels in the first and second directions and also in third and fourth directions perpendicular to the first and second directions, and that the phase difference detection unit detects the phase difference based on the output signals of the first and second B pixels arranged in the first and second directions or in the third and fourth directions.
- In the imaging apparatus, it is preferable that the first and second directions are the left-right directions when the imaging apparatus main body is held horizontally, that a vertical/horizontal detection unit detects horizontal shooting or vertical shooting, and that the phase difference detection unit detects the phase difference based on the output signals of the first and second B pixels in the first and second directions when horizontal shooting is detected by the vertical/horizontal detection unit, and based on the output signals of the first and second B pixels in the third and fourth directions when vertical shooting is detected.
- The first and second B pixels are arranged over the entire area of the image sensor and exist both in the first and second directions and in the third and fourth directions perpendicular thereto.
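- Since the pixel pairs exist in both directions, the direction used for phase difference detection can simply follow the output of the vertical/horizontal detection unit. A minimal sketch, assuming a hypothetical orientation-sensor object:

```python
def select_pair_direction(orientation_sensor):
    """Choose the B-pixel pair direction from the detected shooting
    orientation (horizontal or vertical shooting)."""
    if orientation_sensor.is_horizontal_shooting():
        return "first/second"   # left-right directions of the sensor
    return "third/fourth"       # up-down directions of the sensor
```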
- In the imaging apparatus, it is preferable that the image sensor has a basic array pattern of color filters corresponding to 6 × 6 pixels in the first and second directions and in the third and fourth directions perpendicular thereto, the basic array pattern being repeated in the first, second, third, and fourth directions; that the basic array pattern is composed of first and second arrays each corresponding to 3 × 3 pixels and arranged at mutually diagonal positions, with G filters at the four corners of each array; and that, in the focus detection region of the image sensor, the pixels at the four-corner G filter positions of the first or second array are configured as the first and second phase difference pixels, respectively.
- The first and second phase difference pixels may be arranged over the entire area of the image sensor or only in a specific area. Even when the first and second phase difference pixels are arranged over the entire area, the focus detection region can be set at an arbitrary position within that area, and the output signals of the first and second phase difference pixels in the set focus detection region are used for phase difference detection.
- An imaging apparatus according to another aspect of the present invention includes: a photographing lens; an image sensor having at least red (R), green (G), and blue (B) pixels and first and second phase difference pixels on which subject images that have passed through different first and second regions of the photographing lens are respectively formed by pupil division, the image sensor having a first B pixel adjacent, at the minimum pitch, to a first R pixel among the R pixels in a first direction, and a second B pixel adjacent to a second R pixel in a second direction opposite to the first direction, with an R pixel disposed adjacent to at least one of the phase difference pixels in the first or second direction; a phase difference detection unit that detects the phase difference between the output signals of the first and second phase difference pixels in a focus detection region set in the image sensor; and a focus adjustment unit that adjusts the focal position of the photographing lens based on the phase difference detected by the phase difference detection unit. The phase difference detection unit obtains the color mixture rate from the surrounding pixels to at least one of the first and second phase difference pixels based on at least one of the output signal of the first B pixel and the output signal of the second B pixel, and corrects the output signal of at least one of the phase difference pixels based on the obtained color mixture rate and the output signals of the surrounding pixels.
- It is preferable that the imaging apparatus includes a determination unit that determines whether the subject color in the focus detection region is red based on the output signals of the focus detection region, and that, when it is determined that the subject color in the focus detection region is red, the phase difference detection unit obtains the color mixture rate from the surrounding pixels from at least one of the output signal of the first B pixel and the output signal of the second B pixel, corrects the output signal of at least one of the first and second phase difference pixels based on the color mixture rate and the output signals of the surrounding pixels, and detects the phase difference based on the corrected output signals of the first and second phase difference pixels.
- An imaging apparatus according to still another aspect of the present invention includes: a photographing lens; an image sensor having at least red (R), green (G), and blue (B) pixels and first and second phase difference pixels on which subject images that have passed through different first and second regions of the photographing lens are respectively formed by pupil division, the image sensor having a first B pixel adjacent, at the minimum pitch, to a first R pixel among the R pixels in a first direction, and a second B pixel adjacent to a second R pixel in a second direction opposite to the first direction, with an R pixel disposed adjacent to at least one of the phase difference pixels in the first or second direction; a phase difference detection unit that detects the phase difference between the output signals of the first and second phase difference pixels in a focus detection region set in the image sensor; a defocus amount calculation unit that obtains the defocus amount of the photographing lens based on the phase difference detected by the phase difference detection unit and the color mixture rate from the surrounding pixels to at least one of the first and second phase difference pixels; and a focus adjustment unit that moves the photographing lens to a position where the calculated defocus amount becomes zero. The phase difference detection unit obtains the color mixture rate from the surrounding pixels based on at least one of the output signal of the first B pixel and the output signal of the second B pixel.
- It is preferable that the imaging apparatus includes a determination unit that determines whether the subject color in the focus detection region is red based on the output signals of the focus detection region; that, when it is determined that the subject color in the focus detection region is red, the phase difference detection unit obtains the color mixture rate from the surrounding pixels from at least one of the output signal of the first B pixel and the output signal of the second B pixel; and that the defocus amount calculation unit obtains the defocus amount of the photographing lens based on the phase difference detected by the phase difference detection unit and the color mixture rate from the surrounding pixels.
- An automatic focus adjustment method according to still another aspect of the present invention uses an image sensor having at least red (R), green (G), and blue (B) pixels, first and second phase difference pixels on which subject images that have passed through different first and second regions of a photographing lens are respectively formed by pupil division, and first and second B pixels each adjacent to an R pixel at the minimum pitch. The method includes a phase difference detection step of detecting the phase difference between the output signals based on the output signals of the first and second phase difference pixels or of the first and second B pixels, and a focus adjustment step of adjusting the focal position of the photographing lens based on the phase difference detected in the phase difference detection step.
- According to the present invention, when the subject color is red, phase difference AF can be accurately performed using the output signals of normal B pixels.
- FIG. 1 is a perspective view showing an embodiment of an imaging apparatus according to the present invention.
- FIG. 2 is a rear view of the imaging device shown in FIG.
- FIG. 3 is a block diagram illustrating an embodiment of the internal configuration of the imaging apparatus illustrated in FIG. 1.
- FIG. 4 is a diagram showing a new color filter array arranged in the image sensor.
- FIG. 5 is a diagram showing the basic array pattern shown in FIG. 4 divided into four 3 × 3 pixel groups.
- FIG. 6A is a diagram illustrating an arrangement of first and second phase difference pixels in an image sensor and an arrangement of B pixels that can be used as phase difference pixels.
- FIG. 6B is another diagram showing the arrangement of the first and second phase difference pixels in the image sensor and the arrangement of the B pixels that can be used as the phase difference pixels.
- FIG. 7A is a diagram illustrating a configuration example of a phase difference pixel.
- FIG. 7B is another diagram illustrating a configuration example of the phase difference pixel.
- FIG. 8A is a diagram used to describe B pixels that can be used as phase difference pixels.
- FIG. 8B is another diagram used to describe B pixels that can be used as phase difference pixels.
- FIG. 9 is a flowchart showing an automatic focusing method according to the present invention.
- FIG. 10 is a diagram illustrating an imaging area and an AF area of the imaging element.
- FIG. 11 is a diagram used for explaining a method of discriminating the subject color from the signal of the pixel in the AF area.
- FIG. 12 is a diagram illustrating an arrangement of B pixels used as phase difference pixels.
- FIG. 13 is a diagram illustrating an arrangement of the first and second phase difference pixels.
- FIG. 14 is a graph illustrating an example of a relational expression between the phase difference and the defocus amount.
- FIG. 15 is a diagram illustrating another embodiment of the color filter array arranged in the image sensor.
- FIG. 16 is a diagram showing the basic array pattern shown in FIG. 15 divided into four 3 × 3 pixel groups.
- FIG. 17 is an external view of a smartphone that is another embodiment of the imaging apparatus.
- FIG. 18 is a block diagram illustrating a main configuration of the smartphone.
- [Imaging device] FIGS. 1 and 2 are a perspective view and a rear view, respectively, showing an embodiment of an imaging apparatus according to the present invention.
- the imaging device 10 is a digital camera that receives light passing through a lens with an imaging device, converts the light into a digital signal, and records the signal on a recording medium.
- The imaging device 10 has a photographing lens (photographing optical system) 12, a strobe 1, and the like on its front, and a shutter button 2, a power/mode switch 3, a mode dial 4, and the like on its top.
- A liquid crystal monitor 30 for 3D display, a zoom button 5, a cross button 6, a MENU/OK button 7, a playback button 8, a BACK button 9, and the like are disposed on the back of the camera.
- The photographing lens 12 is a retractable zoom lens that extends from the camera body when the camera mode is set to the shooting mode by the power/mode switch 3.
- the strobe 1 irradiates strobe light toward a main subject.
- The shutter button 2 is a two-stroke switch with a so-called "half press" and "full press". When the imaging apparatus 10 is driven in the shooting mode, AE/AF operates when the shutter button 2 is half-pressed, and shooting is executed when it is fully pressed.
- The power/mode switch 3 functions both as a power switch for turning the imaging apparatus 10 on and off and as a mode switch for setting the mode of the imaging apparatus 10, and is slidable among an "OFF position", a "playback position", and a "shooting position". The imaging apparatus 10 is turned on by sliding the power/mode switch 3 to the "playback position" or "shooting position", and turned off by sliding it to the "OFF position". Sliding the switch to the "playback position" sets the "playback mode", and sliding it to the "shooting position" sets the "shooting mode".
- the mode dial 4 functions as a shooting mode setting means for setting the shooting mode of the imaging device 10, and the shooting mode of the imaging device 10 is set to various modes depending on the setting position of the mode dial. For example, there are “still image shooting mode” in which still image shooting is performed, “moving image shooting mode” in which moving image shooting is performed, and the like.
- The liquid crystal monitor 30 displays a live view image (through image) in the shooting mode, displays still images and moving images in the playback mode, and displays menu screens and the like, thereby functioning as part of a graphical user interface (GUI).
- the zoom button 5 functions as zoom instruction means for instructing zooming, and includes a tele button 5T for instructing zooming to the telephoto side and a wide button 5W for instructing zooming to the wide angle side.
- the focal length of the photographic lens 12 is changed by operating the tele button 5T and the wide button 5W in the photographing mode. Further, when the tele button 5T and the wide button 5W are operated in the reproduction mode, the image being reproduced is enlarged or reduced.
- the cross button 6 is an operation unit for inputting instructions in four directions, up, down, left, and right, and is a button (cursor moving operation means) for selecting an item from a menu screen or instructing selection of various setting items from each menu.
- the left / right key functions as a frame advance (forward / reverse feed) button in the playback mode.
- The MENU/OK button 7 is an operation key having both a function as a menu button for instructing display of a menu on the screen of the liquid crystal monitor 30 and a function as an OK button for instructing confirmation and execution of a selection.
- the playback button 8 is a button for switching to a playback mode in which a captured still image or moving image is displayed on the liquid crystal monitor 30.
- the BACK button 9 functions as a button for instructing to cancel the input operation or return to the previous operation state.
- FIG. 3 is a block diagram showing an embodiment of the internal configuration of the imaging apparatus 10.
- the imaging apparatus 10 records captured images on a memory card 54, and the overall operation of the apparatus is controlled by a central processing unit (CPU) 40.
- The imaging device 10 is provided with an operation unit 38 including the shutter button, the mode dial, the playback button, the MENU/OK key, the cross key, the zoom button, the BACK key, and the like.
- A signal from the operation unit 38 is input to the CPU 40, and the CPU 40 controls each circuit of the imaging device 10 based on the input signal, performing, for example, image sensor drive control, lens drive control, aperture drive control, shooting operation control, image processing control, image data recording/reproduction control, and display control of the liquid crystal monitor 30.
- The luminous flux that has passed through the photographing lens 12, the diaphragm 14, the mechanical shutter 15, and the like forms an image on the image sensor 16, which is a CMOS (Complementary Metal-Oxide Semiconductor) type color image sensor.
- the image sensor 16 is not limited to the CMOS type, but may be an XY address type or a CCD (Charge-Coupled Device) type color image sensor.
- The image sensor 16 has a large number of light receiving elements (photodiodes) arranged in a two-dimensional array; the subject image formed on the light receiving surface is converted by each photodiode into a signal voltage (or charge) of an amount corresponding to the amount of incident light, which is then converted into a digital signal via an A/D converter in the image sensor 16 and output.
- FIG. 4 is a diagram showing an embodiment of the image pickup device 16, and particularly shows a new color filter array arranged on the light receiving surface of the image pickup device 16.
- The color filter array of the image sensor 16 includes a basic array pattern P (the pattern indicated by the thick frame) corresponding to M × N (6 × 6) pixels, and the basic array pattern P is repeated in the horizontal and vertical directions. That is, filters of the colors red (R), green (G), and blue (B) (R filter, G filter, B filter) are arrayed with a predetermined periodicity. Since the R, G, and B filters are arranged with periodicity in this way, image processing of the RGB RAW data (mosaic image) read from the image sensor 16 can be performed according to a repeating pattern.
- In the color filter array shown in FIG. 4, one or more G filters, corresponding to the color that contributes most to obtaining the luminance signal, are arranged in each horizontal, vertical, diagonal upper-right, and diagonal upper-left line of the color filter array.
- The demosaic process is a process of calculating all of the RGB color information for each pixel (conversion to a simultaneous form) from the RGB mosaic image associated with the color filter array of a single-chip color image sensor; it is also referred to as demosaicing or synchronization processing (the same applies throughout this specification).
- The color filter array shown in FIG. 4 has one or more R filters and B filters, corresponding to the two or more colors other than G (the R and B colors in this embodiment), arranged in each horizontal and vertical line of the basic array pattern P.
- Since the R and B filters are arranged in each horizontal and vertical line of the color filter array, the occurrence of false colors (color moire) can be reduced. As a result, an optical low-pass filter for reducing (suppressing) the occurrence of false colors can be omitted. Even when an optical low-pass filter is used, one with a weak high-frequency-cutting action for preventing false colors can be applied, so resolution is not impaired.
- In the basic array pattern P of the color filter array shown in FIG. 4, the numbers of R, G, and B pixels corresponding to the R, G, and B filters are 8, 20, and 8 pixels, respectively. That is, the ratio of the numbers of RGB pixels is 2:5:2, and the proportion of G pixels, which contribute most to obtaining the luminance signal, is larger than the proportions of the R and B pixels of the other colors.
- Because the proportion of G pixels differs from the proportions of R and B pixels in this way, and in particular because the proportion of G pixels contributing most to the luminance signal is the larger one, aliasing during demosaic processing can be suppressed and high-frequency reproducibility can be improved.
- FIG. 5 shows the basic array pattern P shown in FIG. 4 divided into four 3 × 3 pixel groups.
- The basic array pattern P can also be understood as an array in which the 3 × 3 pixel A array surrounded by the solid frame and the 3 × 3 pixel B array surrounded by the broken frame are arranged alternately in the horizontal and vertical directions.
- In both the A array and the B array, G filters are arranged at the four corners and the center, i.e., on both diagonal lines. In the A array, R filters are arranged in the horizontal direction across the central G filter, and B filters are arranged in the vertical direction; in the B array, B filters are arranged in the horizontal direction across the central G filter, and R filters are arranged in the vertical direction. That is, the positional relationship between the R and B filters is reversed between the A and B arrays, but the arrangement is otherwise the same.
- Since the A and B arrays alternate in the horizontal and vertical directions, the G filters at the four corners of the A and B arrays form square-array G filters corresponding to 2 × 2 pixels.
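- For illustration, the basic array pattern P can be reconstructed by tiling the A and B arrays described above. The sketch below assumes exactly the layouts stated here (G at the four corners and center; in the A array, R flanking the center G horizontally and B vertically; the B array with R and B interchanged); it is an illustration, not a reproduction of the patent drawings.

```python
import numpy as np

# A array: G at the four corners and the center, R left/right of the
# center G, B above/below it.
A = np.array([["G", "B", "G"],
              ["R", "G", "R"],
              ["G", "B", "G"]])

# B array: the same layout with the R and B filters interchanged.
B = np.array([["G", "R", "G"],
              ["B", "G", "B"],
              ["G", "R", "G"]])

# Basic array pattern P (6 x 6): A and B arrays alternate in the
# horizontal and vertical directions.
P = np.block([[A, B],
              [B, A]])

# Pixel counts in P should be R:G:B = 8:20:8, i.e. a ratio of 2:5:2.
print({c: int(np.count_nonzero(P == c)) for c in "RGB"})

# The full color filter array repeats P in both directions.
cfa = np.tile(P, (4, 4))  # e.g. a 24 x 24 pixel patch
```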
- FIGS. 6A and 6B show another basic array pattern P′ corresponding to 6 × 6 pixels, cut out from a position different from that of the basic array pattern P shown in FIG. 4.
- As shown in FIG. 6A, among the G filters at the four corners of the A array, the pixels at the positions of the upper-left and lower-left G filters are configured as a first phase difference pixel p1 and a second phase difference pixel p2, respectively, while among the G filters at the four corners of the B array, the pixels at the positions of the upper-left and lower-left G filters are configured as a second phase difference pixel p2 and a first phase difference pixel p1, respectively.
- FIG. 7A and FIG. 7B are enlarged views of main parts showing configurations of the first phase difference pixel p1 and the second phase difference pixel p2, respectively.
- As shown in FIG. 7A, a light shielding member 16A is disposed on the front side (microlens L side) of the photodiode PD of the first phase difference pixel p1, while as shown in FIG. 7B, a light shielding member 16B is disposed on the front side of the photodiode PD of the second phase difference pixel p2.
- the microlens L and the light shielding members 16A and 16B function as pupil dividing means.
- The light shielding member 16A shields the left half of the light receiving surface of the photodiode PD. Therefore, the first phase difference pixel p1 receives only the light passing on the left side of the optical axis among the light beams passing through the exit pupil of the photographing lens 12.
- The light shielding member 16B shields the right half of the light receiving surface of the photodiode PD of the second phase difference pixel p2. Therefore, the second phase difference pixel p2 receives only the light passing on the right side of the optical axis among the light beams passing through the exit pupil of the photographing lens 12. In this way, the light beam passing through the exit pupil is divided into left and right by the microlens L and the light shielding members 16A and 16B serving as pupil dividing means, and enters the first phase difference pixel p1 and the second phase difference pixel p2, respectively.
- As described above, in the color filter array of the image sensor 16, R filters are arranged in the horizontal direction across the G filter at the center of the A array, and B filters are arranged in the horizontal direction across the G filter at the center of the B array; therefore, pixels with R filters (R pixels) and pixels with B filters (B pixels p3 to p6) are adjacent to each other at the minimum pitch in the horizontal direction. In FIG. 6B, the B pixels p3 and p5 are adjacent to the left of an R pixel, and the B pixels p4 and p6 are adjacent to the right of an R pixel. Here, if the B pixels p3 and p5 are taken as the first B pixels, the B pixels p4 and p6 are the second B pixels.
- FIGS. 8A and 8B are diagrams showing the internal configuration of pixels in which R, B, and G pixels are arranged in the horizontal (left-right) direction.
- As shown in FIG. 8A, when R light (subject light whose reflected color is biased toward long wavelengths, i.e., light from an extremely red subject or under a low color temperature) is incident from the left direction, signal charge is accumulated in the B pixel due to the color mixture with angle dependency from the adjacent R pixel. On the other hand, as shown in FIG. 8B, when R light is incident from the right direction, the B pixel is adjacent to a G pixel on its right side, so no signal charge due to color mixture is accumulated, or the amount is very small even if color mixture occurs.
- That is, in the B pixels p3 and p5 adjacent to the left side of an R pixel shown in FIG. 6B, signal charge is accumulated due to the color mixture with angle dependency from the R pixel when R light is incident from the right direction, while in the B pixels p4 and p6 adjacent to the right side of an R pixel, signal charge is accumulated due to the color mixture with angle dependency from the R pixel when R light is incident from the left direction. In other words, the B pixels p3 and p5 and the B pixels p4 and p6 function as first and second phase difference pixels for R light.
- the sensor driving unit (rolling reading unit) 32 is a part that controls reading of a digital signal (image signal) from the image sensor 16, and sequentially reads the image signal for each line from the image sensor 16.
- the reading method in the CMOS image sensor is a rolling shutter method in which resetting and reading are sequentially performed for each line from above.
- This rolling shutter method has a problem that the image of the subject is distorted in the case of a moving subject because there is a time difference in exposure timing for each line. Therefore, during still image shooting, the shutter drive unit 33 controls the mechanical shutter 15 to be opened and closed (controls the exposure time) so that distortion due to the rolling shutter does not occur.
- Image signals (R, G, B signals) read from the image sensor 16 are output to the image input controller 22.
- the digital signal processing unit 24 performs signal processing such as offset processing, white balance correction, gamma correction processing, demosaic processing, and YC processing on the digital image signal input via the image input controller 22.
- the image data processed by the digital signal processing unit 24 is input to the VRAM 50.
- the VRAM 50 includes an A area and a B area for recording image data each representing an image for one frame.
- image data representing an image for one frame is rewritten alternately in the A area and the B area.
- the written image data is read from an area other than the area where the image data is rewritten.
- The image data read from the VRAM 50 is encoded by the video encoder 28 and output to the liquid crystal monitor 30 provided on the back of the camera, whereby the subject image is continuously displayed on the display screen of the liquid crystal monitor 30.
- When the shutter button 2 is half-pressed, the CPU 40 starts the AF operation and the AE (automatic exposure) operation, and moves the focus lens in the photographing lens 12 in the optical axis direction via the lens driving unit 36 so that the focus lens comes to the in-focus position.
- The AF processing unit (phase difference detection unit) 42 is the part that performs the phase difference AF processing according to the present invention: it detects the phase difference using the output signals of the first phase difference pixel p1 and the second phase difference pixel p2 shown in FIG. 6A, or using the output signals of the B pixels p3 and p5 and the B pixels p4 and p6 shown in FIG. 6B. Details of the phase difference detection by the AF processing unit 42 will be described later.
- the CPU 40 moves the zoom lens forward and backward in the optical axis direction via the lens driving unit 36 in accordance with the zoom command from the zoom button 5 to change the focal length.
- the image data output from the image sensor 16 when the shutter button 2 is half-pressed is taken into the AE detection unit 44.
- the AE detection unit 44 integrates the G signals of the entire screen or integrates the G signals that are weighted differently in the central portion and the peripheral portion of the screen, and outputs the integrated value to the CPU 40.
- The CPU 40 calculates the brightness of the subject (shooting EV value) from the integrated value input from the AE detection unit 44, and determines the F value of the aperture 14 and the exposure time (shutter speed) of the mechanical shutter 15 according to a program diagram based on this shooting EV value.
- Further, when the mechanical shutter 15 is opened and image data is continuously read from the image sensor 16, the brightness of the subject is calculated in the same manner as described above, and the sensor driving unit 32 controls the shutter speed (the charge accumulation time under the rolling shutter).
- Reference numeral 47 denotes a ROM (EEPROM) that stores a camera control program, defect information of the image sensor 16, various parameters and tables used for image processing, and the like.
- When the shutter button 2 is half-pressed, the AE operation and the AF operation are completed. When the shutter button is pressed to the second stage (full press), the image data output from the image sensor 16 in response to the press is input from the image input controller 22 to the memory (SDRAM) 48 and temporarily stored.
- The image data temporarily stored in the memory 48 is read out as appropriate by the digital signal processing unit 24, where it undergoes signal processing including offset processing, gain control processing including white balance correction and sensitivity correction, gamma correction processing, demosaic processing, edge enhancement image processing, and YC processing (processing for generating luminance data and color difference data from the image data); the YC-processed image data (YC data) is stored in the memory 48 again.
- The YC data stored in the memory 48 is output to the compression/decompression processing unit 26, subjected to compression processing such as JPEG (Joint Photographic Experts Group), and then stored in the memory 48 again.
- An image file is generated from the YC data (compressed data) stored in the memory 48, and the image file is read by the media controller 52 and recorded on the memory card 54.
- FIG. 9 is a flowchart showing an automatic focusing method according to the present invention.
- If color mixture occurs in a phase difference pixel, the phase difference cannot be detected correctly and correct focusing is impossible. The longer the wavelength of the incident light, the more easily it leaks into adjacent pixels and the more easily color mixture occurs; therefore, it is first determined whether the subject is extremely red or the color temperature is low.
- First, the determination unit of the AF processing unit 42 acquires the output signals (R signal, G signal, B signal) of the RGB pixels adjacent to the phase difference pixels included in the AF area set in the imaging area as shown in FIG. 10 (step S10).
- FIG. 11A shows a part of the pixel group in the AF area shown in FIG. 10. The same number of RGB pixels are extracted from this AF area; here, eight pixels of each color are extracted.
- The output signals of the extracted RGB pixels are integrated by color, and the integrated values (ΣG), (ΣB), and (ΣR) are calculated.
- Next, the ratio (ΣR/ΣG) between the integrated value (ΣR) and the integrated value (ΣG) is calculated, and it is determined whether or not the ratio (ΣR/ΣG) is equal to or greater than a preset threshold value (step S12).
- As the subject becomes redder, the ratio (ΣR/ΣG) increases; when it becomes equal to or greater than the threshold value, it can be determined that the subject color in the AF area is red (or the color temperature is low).
- The threshold value can be set as follows. An achromatic subject is irradiated with illumination light whose saturation gradually changes from white to red, and the output signals of the pixels in the AF area of the image sensor 16 are acquired for each saturation. For each acquired saturation, the phase difference obtained from the output signals of the first phase difference pixel p1 and the second phase difference pixel p2 is compared with the phase difference obtained from the output signals of the B pixels p3 and p5 and the B pixels p4 and p6, which can be used as phase difference pixels under R light. The ratio (ΣR/ΣG) under the illumination (saturation) at which the accuracy of the latter first becomes higher than that of the phase difference obtained from the output signals of the first phase difference pixel p1 and the second phase difference pixel p2 is determined, and this ratio (ΣR/ΣG) is used as the threshold value. This threshold value is preferably obtained for each imaging apparatus during adjustment before product shipment and stored in the ROM 47 in advance.
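- A minimal sketch of steps S10 through S12, assuming the extracted AF-area pixel values are available as arrays and the calibrated threshold has been loaded from the ROM (function and variable names are illustrative):

```python
import numpy as np

def subject_color_is_red(r_signals, g_signals, threshold):
    """Steps S10-S12: integrate the extracted R and G output signals by
    color and compare the ratio (sigma R / sigma G) with the threshold."""
    sigma_r = float(np.sum(r_signals))  # integrated value of R signals
    sigma_g = float(np.sum(g_signals))  # integrated value of G signals
    if sigma_g == 0.0:
        return True                     # no G output at all: treat as red
    return (sigma_r / sigma_g) >= threshold

# Usage: r and g each hold the eight pixels extracted from the AF area;
# rom_threshold is the per-device value stored at shipment adjustment.
# use_b_pixel_af = subject_color_is_red(r, g, rom_threshold)
```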
- Next, when it is determined in step S12 that the ratio (ΣR/ΣG) is equal to or greater than the threshold value ("Yes"), the AF processing unit 42 detects the phase difference from the output signals of the B pixels p3 and p5 and the B pixels p4 and p6 in the AF area (step S14).
- The AF processing unit 42 detects the phase difference as follows. As shown in FIG. 12, B pixels p3 and B pixels p6 are alternately arranged along the horizontal line La (first line), and similarly B pixels p4 and B pixels p5 are alternately arranged along the horizontal line Lb (second line), which is three lines below the horizontal line La. A first phase difference is detected based on the output signals of the vertical pairs of B pixels p3 and p4 between the horizontal lines La and Lb.
- That is, the first phase difference is obtained from the left-right shift amount between the output signals at which the correlation between the output signals of the vertical pairs of B pixels p3 and p4 is maximized (at which the integrated value of the absolute differences between the output signals of the vertical pairs is minimized). Similarly, the second phase difference is obtained from the left-right shift amount between the output signals at which the correlation between the output signals of the vertical pairs of B pixels p5 and p6 is maximized.
- the phase difference is detected by arithmetically averaging the first and second phase differences thus obtained.
- The vertical pairs of B pixels p3 and p4 are arranged such that the B pixel p4 is shifted one pixel pitch to the right with respect to the B pixel p3, whereas the vertical pairs of B pixels p5 and p6 are arranged such that the B pixel p5 is shifted one pixel pitch to the left with respect to the B pixel p6. The first and second phase differences are obtained from these pairs, and by arithmetically averaging them, a highly accurate phase difference can be calculated in which the left-right offsets of the vertical B-pixel pairs cancel out.
- Since the horizontal lines La and Lb are separated by only three pixels in the vertical direction, phase difference detection errors due to the distance between the horizontal lines La and Lb can be prevented.
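- The correlation search and averaging described above can be sketched as follows: the shift that minimizes the sum of absolute differences (SAD) between the paired line signals, i.e. maximizes their correlation, is taken as the phase difference, and the two phase differences from the oppositely offset pairs are averaged. A minimal sketch under those assumptions (extraction of the line signals from the sensor is omitted; names are illustrative):

```python
import numpy as np

def phase_shift(upper, lower, max_shift):
    """Left-right shift of `lower` relative to `upper` that minimizes the
    integrated absolute difference (i.e. maximizes the correlation)."""
    upper = np.asarray(upper, dtype=float)
    lower = np.asarray(lower, dtype=float)
    best_shift, best_sad = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            sad = np.abs(upper[s:] - lower[:len(lower) - s]).sum()
        else:
            sad = np.abs(upper[:s] - lower[-s:]).sum()
        if sad < best_sad:
            best_sad, best_shift = sad, s
    return best_shift

def detect_phase_difference(p3_line, p4_line, p6_line, p5_line, max_shift=8):
    """p3/p4 and p6/p5 are the vertical B-pixel pairs between lines La
    and Lb. The two pairs are offset by one pixel pitch in opposite
    directions, so averaging the two results cancels that offset."""
    first = phase_shift(p3_line, p4_line, max_shift)   # first phase difference
    second = phase_shift(p6_line, p5_line, max_shift)  # second phase difference
    return (first + second) / 2.0
```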
- Next, the shift amount (defocus amount) between the focus position of the photographing lens 12 and the imaging surface of the image sensor 16 is calculated from the detected phase difference (step S16), and the calculated defocus amount is output from the AF processing unit (defocus amount calculation unit) 42 to the CPU (focus adjustment unit) 40. Since the relationship between the phase difference and the defocus amount can be expressed by a fixed relational expression, the defocus amount can be calculated from the relational expression once the phase difference is detected.
- Next, the CPU 40 moves the focus lens in the photographing lens 12 via the lens driving unit (focus adjustment unit) 36 so that the defocus amount becomes zero (step S18).
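- Steps S16 and S18 then amount to evaluating the relational expression and driving the focus lens until the defocus amount becomes zero. A schematic sketch, assuming a linear relational expression with a device-specific coefficient and a hypothetical lens-driver object:

```python
def defocus_from_phase(phase_difference, coefficient):
    """Step S16: the phase difference and the defocus amount are related
    by a fixed relational expression (modeled here as linear)."""
    return coefficient * phase_difference

def drive_to_focus(lens_driver, phase_difference, coefficient):
    """Step S18: move the focus lens so that the defocus amount is zero."""
    defocus = defocus_from_phase(phase_difference, coefficient)
    lens_driver.move_focus_lens(-defocus)
```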
- On the other hand, when it is determined in step S12 that the ratio (ΣR/ΣG) is less than the threshold value ("No"), the AF processing unit 42 detects the phase difference from the output signals of the first phase difference pixel p1 and the second phase difference pixel p2 in the AF area (step S20).
- In this embodiment, the first phase difference pixel p1 and the second phase difference pixel p2 are each assigned to a G pixel (see FIG. 6A), but the filters of the first phase difference pixel p1 and the second phase difference pixel p2 may instead be colorless (white or gray).
- The phase difference detection using the output signals of the first phase difference pixel p1 and the second phase difference pixel p2 is performed in the same manner as the phase difference detection in step S14. That is, as shown in FIG. 13, first phase difference pixels p1 and second phase difference pixels p2 are alternately arranged at regular intervals along the horizontal line Lc (third line), and second phase difference pixels p2 and first phase difference pixels p1 are alternately arranged at regular intervals along the horizontal line Ld (fourth line). A first phase difference is detected based on the output signals of the vertical pairs of first phase difference pixels p1 and second phase difference pixels p2 between the horizontal lines Lc and Ld, a second phase difference is detected based on the output signals of the vertical pairs of second phase difference pixels p2 and first phase difference pixels p1 in the opposite vertical relationship, and the first and second phase differences are arithmetically averaged to calculate the phase difference.
- the rolling shutter method since the rolling shutter method is employed in which the mechanical shutter 15 is opened during video shooting, and the resetting and reading are sequentially performed for each line, a time difference occurs in the exposure timing for each line, and the subject image is changed due to the movement of the subject. Distortion (rolling distortion) occurs. Therefore, in the case of the rolling shutter system, the line image on the horizontal line Lc and the line image on the horizontal line Ld may be relatively shifted in the left-right direction.
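One hedged way to see why the oppositely handed vertical pairs remain useful under a rolling shutter: if e denotes the left-right offset between lines Lc and Ld caused by the exposure-time difference and d the true phase difference, the two pair types would measure roughly d + e and d - e, so their average recovers d. A toy illustration of this cancellation (an interpretation, not a statement from the patent):

```python
def phase_without_rolling_offset(shift_p1_over_p2, shift_p2_over_p1):
    # shift_p1_over_p2 : shift measured by the p1-over-p2 pairs (d + e)
    # shift_p2_over_p1 : shift measured by the p2-over-p1 pairs (d - e)
    # Averaging cancels the rolling-shutter line offset e.
    return (shift_p1_over_p2 + shift_p2_over_p1) / 2.0
```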
- Next, a defocus amount corrected using the obtained color mixture rate and the phase difference detected in step S20 is calculated (step S22).
- The color mixture rate is obtained from the ratio of the output signal of the B pixel adjacent to the right side of the R pixel to the output signal of the B pixel adjacent to the left side of the R pixel; alternatively, the color mixture rate may be calculated based on the output signal of only one of these B pixels.
- Further, the color mixture rate from the R pixel adjacent to the second phase difference pixel p2 may be stored in advance, the color mixture amount may be calculated from this color mixture rate and the output signal of the R pixel, and
- the defocus amount may then be calculated from the color mixture amount and the phase difference.
- The color mixture amount is not limited to that from the R pixel; it may be calculated by adding the color mixture amounts from the other color pixels adjacent in the vertical and horizontal directions, and the color mixture amount for the first phase difference pixel p1 may be calculated in the same way.
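A minimal sketch of this bookkeeping, assuming a locally uniform subject so that any difference between the two flanking B pixels can be attributed to leakage from the R pixel (the leakage direction and the function names are assumptions for illustration):

```python
def mixture_rate(b_next_to_r, b_away_from_r):
    # With a locally uniform subject the two B outputs would match, so any
    # surplus on the B pixel flanking the R pixel is attributed to leakage.
    return max(0.0, b_next_to_r / b_away_from_r - 1.0)

def mixture_amount(rate, r_output):
    # Estimated leakage into the adjacent phase difference pixel, scaled by
    # the stored (or estimated) rate and the R pixel's own output.
    return rate * r_output
```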
- The relational expression a between the phase difference and the defocus amount changes due to color mixture.
- Specifically, the relational expression a changes to the relational expression b; the relational expression is therefore switched according to the color mixture rate, and the defocus amount is corrected accordingly.
- When there is no color mixture, the defocus amount y is calculated by the relational expression a; when color mixture occurs, the corrected defocus amount y′ is calculated by the relational expression b.
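A sketch of switching the relational expression by the color mixture rate; the linear forms of expressions a and b, and the assumption that the slope drifts linearly with the rate, are illustrative choices only:

```python
def corrected_defocus(phase_diff, rate, slope_a=1.0, sensitivity=0.5):
    # Expression a (no mixture): y = slope_a * x.
    # Expression b (with mixture): the slope is assumed to drift linearly
    # with the mixture rate, yielding the corrected defocus amount y'.
    slope_b = slope_a * (1.0 + sensitivity * rate)
    return slope_b * phase_diff
```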
- Processing efficiency can be improved by calculating the color mixture rate only from the pixels adjacent at the minimum pixel pitch in the horizontal and vertical directions, rather than from pixels in all directions.
- Focus lens drive control according to the corrected defocus amount is then performed in step S18.
- In the example described above, the relational expression is changed according to the color mixture rate; alternatively, a three-dimensional lookup table that gives the defocus amount using the color mixture rate and the phase difference as parameters may be prepared in advance, and the defocus amount may be read from this three-dimensional lookup table.
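A sketch of the lookup-table variant; the bin ranges and the nearest-bin indexing are placeholders, and defocus_table would be filled at calibration time rather than with zeros as here:

```python
import numpy as np

rate_bins = np.linspace(0.0, 0.2, 21)    # color mixture rate axis (assumed range)
phase_bins = np.linspace(-8.0, 8.0, 65)  # phase difference axis (assumed range)
defocus_table = np.zeros((rate_bins.size, phase_bins.size))  # filled at calibration

def defocus_lookup(rate, phase_diff):
    # Nearest-bin lookup of the pre-computed defocus amount.
    i = int(np.abs(rate_bins - rate).argmin())
    j = int(np.abs(phase_bins - phase_diff).argmin())
    return float(defocus_table[i, j])
```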
- In the above embodiment, the defocus amount is calculated based on the detected phase difference and the color mixture rate. Alternatively, the color mixture amount from the surrounding pixels into the second phase difference pixel p2, or into both the first phase difference pixel p1 and the second phase difference pixel p2, may be calculated and subtracted from the output signal of the second phase difference pixel p2 or from the output signals of the first phase difference pixel p1 and the second phase difference pixel p2, so as to obtain output signals free of color mixture.
- In this case, the defocus amount can be calculated from the relational expression a in FIG. 14.
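Correspondingly, a sketch of the subtraction variant, under the same assumptions about how the mixture amount is estimated:

```python
def mixture_free_output(pixel_output, r_neighbour_output, rate):
    # Remove the estimated leakage from the adjacent R pixel so that the
    # uncorrected relational expression a can be applied directly.
    return pixel_output - rate * r_neighbour_output
```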
- The AF area shown in FIG. 10 is set as a horizontally long area at the center of the imaging area; however, since B pixels are arranged over the entire imaging area, phase difference AF based on the B pixels can be performed even when the AF area is set at any other position in the imaging area. Also, the AF area shown in FIG. 10 is long in the horizontal direction (left-right direction: first and second directions) when the imaging apparatus main body is held horizontally, but an AF area that is long in the vertical direction (up-down direction: third and fourth directions) may be used instead.
- Preferably, a vertical/horizontal detection unit (such as a gravity sensor) that detects whether shooting is horizontal or vertical is provided; when the vertical/horizontal detection unit detects horizontal shooting, the phase difference is detected based on the output signals of the B pixels in the AF area that is long in the horizontal direction, and when it detects vertical shooting, the phase difference is detected based on the output signals of the B pixels in the AF area that is long in the vertical direction (the AF area that is long in the horizontal direction with respect to the imaging apparatus main body).
- The phase difference detection using the first and second phase difference pixels can likewise be performed with the AF area set at any position in the imaging region, in the same manner as the phase difference detection using the B pixels.
- The image sensor applicable to the present invention is not limited to one having the color filter array shown in FIG. 4; any image sensor may be used as long as R pixels and B pixels are adjacent at the minimum pitch, with a first B pixel adjacent in a first direction (left direction) to an R pixel (first R pixel) and a second B pixel adjacent in a second direction (right direction), opposite to the first direction, to a second R pixel different from the first R pixel.
- It is determined whether the subject color is red or the color temperature is low; if the subject color is determined to be red, the phase difference is detected using the output signals of the first B pixels (B pixels p3 and p5) and the second B pixels (B pixels p4 and p6).
- Further, the color mixture rate from the R pixel adjacent on the right side in the horizontal direction may be obtained from the output signals of the B pixels p3 and p5, and the color mixture rate of the phase difference pixel adjacent to an R pixel on its right side in the horizontal direction may then be obtained using this calculated rate.
- In this case, the color mixture rate can be calculated with high accuracy by using the output signals of B pixels arranged adjacent to the R pixels. This is a configuration that cannot be realized with a conventional general Bayer array, in which R pixels and B pixels are adjacent only to G pixels in the horizontal and vertical directions, and it takes advantage of the characteristics of this color filter array.
- The second phase difference pixel p2 is adjacent to the R pixel on its right side in the drawing, but is adjacent to a G pixel on its left side, a G pixel on its upper side, and a B pixel on its lower side.
- The color mixture rate of the phase difference pixel p2 can therefore also be obtained from the output signal of the B pixel p3 or p5; even in that case, the color mixture rate can be obtained more accurately than with a conventional general Bayer array.
- That is, in the Bayer array only G pixels are adjacent to an R pixel in the horizontal and vertical directions, so one might consider obtaining the color mixture rate from the output signal of a G pixel. However, each such G pixel is adjacent to two R pixels, and color mixture occurs both from the R pixel into the G pixel and from the G pixel into the R pixel, so the color mixture rate from the surrounding pixels cannot be calculated accurately at a G pixel.
- In contrast, when the color mixture rate is calculated from the periphery at a B pixel as in the present invention, the short wavelength of B light means that the color mixture (light leakage) from the B pixel into the surrounding pixels is very small, and the B pixel itself is hardly contaminated; the color mixture component from the peripheral pixels can therefore be calculated with high accuracy.
- Note that the above characteristics and effects are obtained even when it is not determined whether the subject color is red or the color temperature is low. That is, by taking advantage of the characteristics of the color filter array and of the fact that the influence of color mixture from adjacent pixels does not extend beyond the minimum pixel pitch, the color mixture rate into the phase difference pixels from the R pixels adjacent at the minimum pixel pitch can be calculated with high accuracy.
- The color filter array is not limited to the arrays shown in FIGS. 4, 5, 6, and 12, and may be, for example, the array shown in FIGS. 15 and 16.
- The color filter array shown in FIG. 15 includes a basic array pattern (the pattern indicated by the thick frame) composed of a square array pattern corresponding to 6 × 6 pixels, and this basic array pattern is repeatedly arranged in the horizontal and vertical directions. That is, in this color filter array, the R, G, and B color filters (R filter, G filter, and B filter) are arrayed with periodicity.
- FIG. 16 shows a state in which the basic array pattern shown in FIG. 15 is divided into four sets of 3 × 3 pixels.
- As shown in FIG. 16, the basic array pattern can also be understood as an arrangement in which a 3 × 3 pixel A array surrounded by a solid frame and a 3 × 3 pixel B array surrounded by a broken frame are arranged alternately in the horizontal and vertical directions.
- In the A array, an R filter is arranged at the center, B filters are arranged at the four corners, and G filters are arranged above, below, and to the left and right of the center R filter.
- In the B array, a B filter is arranged at the center, R filters are arranged at the four corners, and G filters are arranged above, below, and to the left and right of the center B filter.
- In this color filter array, the first phase difference pixel p1 is arranged at a pixel position corresponding to the G filter adjacent to the right side of the R filter of the A array, and
- the second phase difference pixel p2 is arranged at a pixel position corresponding to the G filter adjacent to the right side of the B filter of the B array.
- R and B pixels are arranged adjacent to each other (as RB and BR pairs) in the central four pixels of the basic array pattern, and G pixels are adjacent to the four sides of these four pixels.
- Even with this color filter array, phase difference AF can therefore be accurately performed using the output signals of the normal B pixels.
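For reference, the 6 × 6 basic pattern described above can be written out from the A and B arrays; this layout is a reconstruction from the description of FIGS. 15 and 16 and should be checked against the actual figures:

```python
# A array: R at the center, B at the four corners, G above/below/left/right.
A = [["B", "G", "B"],
     ["G", "R", "G"],
     ["B", "G", "B"]]
# B array: B at the center, R at the four corners, G above/below/left/right.
B = [["R", "G", "R"],
     ["G", "B", "G"],
     ["R", "G", "R"]]

def basic_pattern():
    # The A and B arrays alternate horizontally and vertically (A B / B A),
    # which places R and B pixels adjacent in the central four pixels.
    rows = []
    for left, right in ((A, B), (B, A)):
        for r in range(3):
            rows.append(left[r] + right[r])
    return rows  # six rows of six filter letters
```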
- The image sensor to which the present invention is applied is not limited to those of the above-described embodiments; any image sensor may be used as long as it has the first and second phase difference pixels,
- a first B pixel adjacent at the minimum pitch in a first direction (left direction) to an R pixel (first R pixel), and
- a second B pixel adjacent in a second direction (right direction), opposite to the first direction, to an R pixel (second R pixel) different from the first R pixel.
- Other examples of the imaging device 10 include a mobile phone or smartphone having a camera function, a PDA (Personal Digital Assistant), and a portable game machine.
- Hereinafter, a smartphone will be described in detail as an example with reference to the drawings.
- FIG. 17 shows the appearance of a smartphone 500 that is another embodiment of the imaging apparatus 10.
- The smartphone 500 illustrated in FIG. 17 has a flat housing 502 and includes a display input unit 520 in which a display panel 521 as a display unit and an operation panel 522 as an input unit are integrated on one surface of the housing 502.
- The housing 502 also includes a speaker 531, a microphone 532, an operation unit 540, and a camera unit 541.
- The configuration of the housing 502 is not limited to this; for example, a configuration in which the display unit and the input unit are independent, or a configuration having a folding structure or a slide mechanism, can also be adopted.
- FIG. 18 is a block diagram showing a configuration of the smartphone 500 shown in FIG.
- The main components of the smartphone 500 include a wireless communication unit 510, a display input unit 520, a call unit 530, an operation unit 540, a camera unit 541, a storage unit 550, an external input/output unit 560, a GPS (Global Positioning System) receiving unit 570, a motion sensor unit 580, a power supply unit 590, and a main control unit 501.
- The smartphone 500 also has, as its main function, a wireless communication function for performing mobile wireless communication via a base station apparatus BS and a mobile communication network NW.
- The wireless communication unit 510 performs wireless communication with the base station apparatus BS accommodated in the mobile communication network NW according to instructions from the main control unit 501. Using this wireless communication, various file data such as audio data and image data, e-mail data, and the like are transmitted and received, and Web data, streaming data, and the like are received.
- The display input unit 520, under the control of the main control unit 501, displays images (still images and moving images), character information, and the like to convey information to the user visually, and detects user operations on the displayed information.
- It is a so-called touch panel, and includes the display panel 521 and the operation panel 522.
- When a generated 3D image is to be viewed, the display panel 521 is preferably a 3D display panel.
- The display panel 521 uses an LCD (Liquid Crystal Display), an OELD (Organic Electro-Luminescence Display), or the like as its display device.
- The operation panel 522 is a device that is placed so that an image displayed on the display surface of the display panel 521 is visible, and that detects one or more coordinates operated by a user's finger or a stylus. When this device is operated by a user's finger or a stylus, a detection signal generated by the operation is output to the main control unit 501. The main control unit 501 then detects the operation position (coordinates) on the display panel 521 based on the received detection signal.
- The display panel 521 and the operation panel 522 of the smartphone 500 integrally constitute the display input unit 520, and the operation panel 522 is arranged so as to completely cover the display panel 521.
- When such an arrangement is adopted, the operation panel 522 may also have a function of detecting user operations in an area outside the display panel 521.
- In other words, the operation panel 522 may include a detection area for the overlapping portion that overlaps the display panel 521 (hereinafter referred to as the display area) and a detection area for the outer edge portion that does not overlap the display panel 521 (hereinafter referred to as the non-display area).
- The operation panel 522 may also have two sensitive regions, an outer edge portion and an inner portion, and the width of the outer edge portion is designed appropriately according to the size of the housing 502 and the like.
- Position detection methods that can be employed in the operation panel 522 include a matrix switch method, a resistive film method, a surface acoustic wave method, an infrared method, an electromagnetic induction method, and a capacitance method; any of these methods can be adopted.
- The call unit 530 includes the speaker 531 and the microphone 532; it converts the user's voice input through the microphone 532 into voice data that can be processed by the main control unit 501 and outputs the voice data to the main control unit 501, and it decodes audio data received by the wireless communication unit 510 or the external input/output unit 560 and outputs the audio from the speaker 531.
- For example, the speaker 531 can be mounted on the same surface as the display input unit 520, and the microphone 532 can be mounted on a side surface of the housing 502.
- The operation unit 540 is a hardware key using a key switch or the like, and receives instructions from the user.
- For example, the operation unit 540 is mounted on the lower part of the display unit of the housing 502 of the smartphone 500; it is a push-button switch that is turned on when pressed with a finger or the like and turned off by the restoring force of a spring or the like when the finger is released.
- The storage unit 550 stores the control programs and control data of the main control unit 501, address data in which the names, telephone numbers, and the like of communication partners are associated, data of transmitted and received e-mails, Web data downloaded by Web browsing, and downloaded content data, and also temporarily stores streaming data and the like.
- The storage unit 550 includes an internal storage unit 551 built into the smartphone and an external storage unit 552 having a removable external memory slot.
- Each of the internal storage unit 551 and the external storage unit 552 constituting the storage unit 550 is realized using a storage medium such as a flash memory type, hard disk type, multimedia card micro type, or card type memory (for example, Micro SD (registered trademark) memory), a RAM (Random Access Memory), or a ROM (Read Only Memory).
- The external input/output unit 560 serves as an interface to all external devices connected to the smartphone 500, and connects directly or indirectly to other external devices by communication (for example, universal serial bus (USB) or IEEE 1394) or via a network (for example, the Internet, wireless LAN, Bluetooth (registered trademark), RFID (Radio Frequency Identification), infrared communication (IrDA) (registered trademark), UWB (Ultra Wideband) (registered trademark), or ZigBee (registered trademark)).
- Examples of external devices connected to the smartphone 500 include a wired or wireless headset, a wired or wireless external charger, a wired or wireless data port, a memory card or a SIM (Subscriber Identity Module) card / UIM (User Identity Module) card connected via a card socket, external audio/video equipment connected via an audio/video I/O (Input/Output) terminal, and wirelessly connected external audio/video equipment.
- The external input/output unit can transmit data received from such external devices to the components inside the smartphone 500, and can transmit data inside the smartphone 500 to the external devices.
- The GPS receiving unit 570 receives GPS signals transmitted from GPS satellites ST1 to STn in accordance with instructions from the main control unit 501, executes positioning calculation processing based on the received GPS signals, and detects a position consisting of the latitude, longitude, and altitude of the smartphone 500.
- When the GPS receiving unit 570 can acquire position information from the wireless communication unit 510 or the external input/output unit 560 (for example, over a wireless LAN), it can also detect the position using that position information.
- The motion sensor unit 580 includes, for example, a triaxial acceleration sensor, and detects the physical movement of the smartphone 500 in accordance with instructions from the main control unit 501. By detecting the physical movement of the smartphone 500, the moving direction and acceleration of the smartphone 500 are detected; the detection result is output to the main control unit 501.
- The power supply unit 590 supplies power stored in a battery (not shown) to each unit of the smartphone 500 in accordance with instructions from the main control unit 501.
- The main control unit 501 includes a microprocessor, operates according to the control program and control data stored in the storage unit 550, and controls each unit of the smartphone 500 in an integrated manner. The main control unit 501 also has a mobile communication control function for controlling each unit of the communication system, and an application processing function, in order to perform voice communication and data communication through the wireless communication unit 510.
- The application processing function is realized by the main control unit 501 operating according to application software stored in the storage unit 550.
- Examples of the application processing function include an infrared communication function for controlling the external input/output unit 560 to perform data communication with a counterpart device, an e-mail function for sending and receiving e-mails, a Web browsing function for browsing Web pages, and a function according to the present invention for generating a 3D image from a 2D image.
- The main control unit 501 also has an image processing function, such as displaying video on the display input unit 520 based on image data (data of still images or moving images) such as received data or downloaded streaming data.
- The image processing function means a function by which the main control unit 501 decodes such image data, applies image processing to the decoded result, and displays the resulting image on the display input unit 520.
- Further, the main control unit 501 executes display control for the display panel 521 and operation detection control for detecting user operations through the operation unit 540 and the operation panel 522.
- By executing the display control, the main control unit 501 displays icons for starting application software, software keys such as a scroll bar, and windows for creating e-mails.
- The scroll bar is a software key for accepting an instruction to move the displayed portion of an image that is too large to fit in the display area of the display panel 521.
- By executing the operation detection control, the main control unit 501 detects user operations through the operation unit 540, accepts operations on the icons and input of character strings into the input fields of the windows through the operation panel 522, and accepts requests to scroll the displayed image through the scroll bar.
- Further, by executing the operation detection control, the main control unit 501 determines whether an operation position on the operation panel 522 is in the overlapping portion that overlaps the display panel 521 (display area) or in the outer edge portion that does not overlap the display panel 521 (non-display area), and
- it has a touch panel control function for controlling the sensitive area of the operation panel 522 and the display positions of the software keys.
- The main control unit 501 can also detect gesture operations on the operation panel 522 and execute preset functions according to the detected gesture operations.
- A gesture operation is not a conventional simple touch operation, but an operation that draws a trajectory with a finger or the like, designates a plurality of positions simultaneously, or, by combining these, draws a trajectory from at least one of a plurality of positions.
- The camera unit 541 is a digital camera that performs electronic photography using an imaging device such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) sensor, and has functions equivalent to those shown in the block diagram of FIG. 3. Under the control of the main control unit 501, the camera unit 541 can convert image data obtained by imaging into compressed image data such as JPEG (Joint Photographic Experts Group) data, record the data in the storage unit 550, and output the data through the external input/output unit 560 or the wireless communication unit 510.
- The camera unit 541 is mounted on the same surface as the display input unit 520, but the mounting position of the camera unit 541 is not limited to this; it may be mounted on the back surface of the display input unit 520, or a plurality of camera units 541 may be mounted. When a plurality of camera units 541 are mounted, the camera unit 541 used for shooting can be switched so that shooting is performed with a single unit, or shooting can be performed with the plurality of camera units 541 simultaneously.
- The camera unit 541 can also be used for various functions of the smartphone 500.
- For example, an image acquired by the camera unit 541 can be displayed on the display panel 521, and an image from the camera unit 541 can be used as one of the operation inputs of the operation panel 522.
- When the GPS receiving unit 570 detects the position, the position can also be detected with reference to an image from the camera unit 541.
- Furthermore, by referring to an image from the camera unit 541, the optical axis direction of the camera unit 541 of the smartphone 500 can be determined, and the current usage environment can also be determined, either without using the triaxial acceleration sensor or in combination with it.
- Of course, an image from the camera unit 541 can also be used within the application software.
- In addition, position information acquired by the GPS receiving unit 570, voice information acquired by the microphone 532 (which may be converted into text information by speech-to-text conversion performed by the main control unit or the like), posture information acquired by the motion sensor unit 580, and the like can be added to the image data of a still image or moving image and recorded in the storage unit 550, or output through the external input/output unit 560 or the wireless communication unit 510.
- Whether or not the subject color in the focus detection area is red is determined by whether or not the ratio (ΣR/ΣG) of the integrated value (ΣR) of the output signals of the R pixels in the focus detection area to the integrated value (ΣG) of the output signals of the G pixels is equal to or greater than a preset threshold value.
- The present invention is not limited to this; for example, the determination may instead be made based on whether or not the average color of the focus detection area, obtained from the integrated average values of the output signals of the R, G, and B pixels, belongs to the R region in a color space.
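A minimal sketch of the red-subject test described above; the threshold value is an assumed placeholder, since the patent only states that it is preset:

```python
import numpy as np

def subject_is_red(r_outputs, g_outputs, threshold=1.2):
    # Integrate the R and G output signals over the focus detection area and
    # compare the ratio sum(R)/sum(G) with the preset threshold.
    return float(np.sum(r_outputs)) / float(np.sum(g_outputs)) >= threshold
```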
- DESCRIPTION OF SYMBOLS: 10 ... imaging device, 12 ... photographing lens, 14 ... diaphragm, 15 ... mechanical shutter, 16 ... image sensor, 24 ... digital signal processing unit, 30 ... liquid crystal monitor, 32 ... sensor driving unit, 33 ... shutter driving unit, 34 ... diaphragm driving unit, 36 ... lens driving unit, 40 ... central processing unit (CPU), 42 ... AF processing unit, 47 ... ROM (EEPROM), 48 ... memory, 500 ... smartphone
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Studio Devices (AREA)
- Focusing (AREA)
- Color Television Image Signal Generators (AREA)
- Automatic Focus Adjustment (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
Abstract
Description
FIGS. 1 and 2 are a perspective view and a rear view, respectively, showing an embodiment of an imaging device according to the present invention. This imaging device 10 is a digital camera that receives light passing through a lens with an image sensor, converts it into a digital signal, and records it on a recording medium.
FIG. 3 is a block diagram showing an embodiment of the internal configuration of the imaging device 10. This imaging device 10 records captured images on a memory card 54, and the operation of the entire device is centrally controlled by a central processing unit (CPU) 40.
FIG. 4 is a diagram showing an embodiment of the image sensor 16, and in particular shows a novel color filter array arranged on the light receiving surface of the image sensor 16.
[…] is known to result in a rolling shutter readout. Since this rolling shutter method involves a time difference in exposure timing from line to line, the image of a moving subject is distorted. Therefore, at the time of still image shooting, the shutter driving unit 33 controls the opening and closing of the mechanical shutter 15 (that is, controls the exposure time) so that distortion due to the rolling shutter does not occur.
FIG. 9 is a flowchart showing the automatic focus adjustment method according to the present invention.
In the above embodiment, the defocus amount is calculated based on the detected phase difference and the color mixture rate; however, the color mixture amount from peripheral pixels into the second phase difference pixel p2, or into the first phase difference pixel p1 and the second phase difference pixel p2, may be calculated, and the calculated color mixture amount may be subtracted from the output signal of the second phase difference pixel p2 or from the output signals of the first phase difference pixel p1 and the second phase difference pixel p2 to obtain output signals free of color mixture. In this case, the defocus amount can be calculated from the relational expression a in FIG. 14.
FIG. 17 shows the appearance of a smartphone 500 which is another embodiment of the imaging device 10. The smartphone 500 shown in FIG. 17 has a flat housing 502, and includes a display input unit 520 in which a display panel 521 as a display unit and an operation panel 522 as an input unit are integrated on one surface of the housing 502. The housing 502 also includes a speaker 531, a microphone 532, an operation unit 540, and a camera unit 541. The configuration of the housing 502 is not limited to this; for example, a configuration in which the display unit and the input unit are independent, or a configuration having a folding structure or a slide mechanism, may be adopted.
[…] is provided.
[…] can also be used within the application software.
In this embodiment, whether or not the subject color in the focus detection area is red is determined by whether or not the ratio (ΣR/ΣG) of the integrated value (ΣR) of the output signals of the R pixels in the focus detection area to the integrated value (ΣG) of the output signals of the G pixels is equal to or greater than a preset threshold value; however, the determination is not limited to this, and it may be made, for example, based on whether or not the average color of the focus detection area, obtained from the integrated average values of the output signals of the R, G, and B pixels, belongs to the R region in a color space.
Claims (18)
- A photographing lens;
an image sensor including at least red (R), green (G), and blue (B) pixels and first and second phase difference pixels on which subject images having passed through different first and second regions of the photographing lens are pupil-divided and respectively formed, the image sensor having a first B pixel adjacent, at a minimum pitch in a first direction, to a first R pixel among the R pixels, and a second B pixel adjacent, in a second direction opposite to the first direction, to a second R pixel among the R pixels;
a determination unit that determines, based on an output signal of a focus detection area set on the image sensor, whether or not a subject color in the focus detection area is red;
a phase difference detection unit that, when the determination unit determines that the subject color in the focus detection area is not red, detects a phase difference between the output signals of the first and second phase difference pixels in the focus detection area based on those output signals, and, when the determination unit determines that the subject color in the focus detection area is red, detects a phase difference between the output signals of the first and second B pixels in the focus detection area based on those output signals; and
a focus adjustment unit that adjusts the focus position of the photographing lens based on the phase difference detected by the phase difference detection unit,
- an imaging device comprising the above elements. - The imaging device according to claim 1, wherein the determination unit calculates a ratio between an integrated value of the output signals of the R pixels and an integrated value of the output signals of the G pixels in a preset focus detection area of the image sensor, and compares the calculated ratio with a preset threshold value to determine that the subject color in the focus detection area is red.
- The imaging device according to claim 1 or 2, wherein the first and second phase difference pixels are pixels each provided with a G filter or a colorless filter.
- The first B pixels and the second B pixels are alternately arranged on one line extending in the first direction, and
the phase difference detection unit
detects a first phase difference based on the output signals of the first B pixels arranged on a first line in the first direction and the output signals of the second B pixels arranged on a second line close to the first line,
detects a second phase difference based on the output signals of the second B pixels arranged on the first line in the first direction and the output signals of the first B pixels arranged on the second line, and
detects the phase difference by averaging the detected first and second phase differences,
in the imaging device according to any one of claims 1 to 3. - The first phase difference pixels and the second phase difference pixels are alternately arranged on one line extending in the first direction, and
the phase difference detection unit
detects a third phase difference based on the output signals of the first phase difference pixels arranged on a third line in the first direction and the output signals of the second phase difference pixels arranged on a fourth line close to the third line,
detects a fourth phase difference based on the output signals of the second phase difference pixels arranged on the third line in the first direction and the output signals of the first phase difference pixels arranged on the fourth line, and
detects the phase difference by averaging the detected third and fourth phase differences,
in the imaging device according to any one of claims 1 to 4. - A rolling readout unit that sequentially reads out signals line by line from the image sensor; and
a mechanical shutter that blocks light incident on the image sensor, wherein
the phase difference detection unit continuously detects the phase difference based on signals continuously read out by the rolling readout unit with the mechanical shutter open,
in the imaging device according to claim 4 or 5. - A defocus amount calculation unit that obtains a defocus amount of the photographing lens based on the phase difference detected by the phase difference detection unit and a color mixture rate from peripheral pixels into at least one of the first and second phase difference pixels, wherein
the focus adjustment unit moves the photographing lens to a position at which the defocus amount obtained by the defocus amount calculation unit becomes zero,
in the imaging device according to any one of claims 1 to 6. - The phase difference detection unit
corrects the output signal of at least one of the first and second phase difference pixels based on a color mixture rate from peripheral pixels into that phase difference pixel and on the output signals of the peripheral pixels, and
detects the phase difference based on the corrected output signals of the first and second phase difference pixels,
in the imaging device according to any one of claims 1 to 6. - An R pixel is arranged adjacent, in the first direction, to one of the first and second phase difference pixels, and
the color mixture rate from the peripheral pixels is the color mixture rate from that R pixel into the phase difference pixel to which it is adjacent in the first direction,
in the imaging device according to claim 7 or 8. - The imaging device according to claim 9, wherein the phase difference detection unit obtains the color mixture rate from the peripheral pixels based on a ratio between the output signal of the first B pixel and the output signal of the second B pixel.
- The image sensor has the first and second B pixels in the first and second directions, and also has the first and second B pixels in third and fourth directions perpendicular to the first and second directions, and
the phase difference detection unit detects the phase difference based on the output signals of the first and second B pixels in the first and second directions, or of the first and second B pixels in the third and fourth directions,
in the imaging device according to any one of claims 1 to 10. - The first and second directions are the left-right directions when the imaging device main body is held horizontally,
a vertical/horizontal detection unit that detects whether shooting is horizontal or vertical is provided, and
the phase difference detection unit
detects the phase difference based on the output signals of the first and second B pixels in the first and second directions when the vertical/horizontal detection unit detects horizontal shooting, and
detects the phase difference based on the output signals of the first and second B pixels in the third and fourth directions when the vertical/horizontal detection unit detects vertical shooting,
in the imaging device according to claim 11. - The image sensor has a basic array pattern of color filters corresponding to 6 × 6 pixels in the first and second directions and in third and fourth directions perpendicular thereto, the basic array pattern being repeatedly arranged in the first and second directions and in the third and fourth directions,
the basic array pattern being composed of
a first array corresponding to 3 × 3 pixels in which G filters are arranged at the center and the four corners, B filters are arranged above and below the center G filter, and R filters are arranged to its left and right, and
a second array corresponding to 3 × 3 pixels in which G filters are arranged at the center and the four corners, R filters are arranged above and below the center G filter, and B filters are arranged to its left and right, the first and second arrays being arranged at mutually diagonal positions, and
pixels having one of the G filters at the four corners of the first and second arrays in the focus detection area of the image sensor being configured as the first and second phase difference pixels, respectively,
in the imaging device according to any one of claims 1 to 12. - A photographing lens;
an image sensor including at least red (R), green (G), and blue (B) pixels and first and second phase difference pixels on which subject images having passed through different first and second regions of the photographing lens are pupil-divided and respectively formed, the image sensor having a first B pixel adjacent, at a minimum pitch in a first direction, to a first R pixel among the R pixels, and a second B pixel adjacent, in a second direction opposite to the first direction, to a second R pixel among the R pixels, wherein at least one of the first and second phase difference pixels has an R pixel arranged adjacent to it in the first direction or the second direction;
a phase difference detection unit that detects a phase difference between the output signals of the first and second phase difference pixels in a focus detection area set on the image sensor, based on those output signals; and
a focus adjustment unit that adjusts the focus position of the photographing lens based on the phase difference output by the phase difference detection unit, wherein
the phase difference detection unit obtains a color mixture rate from peripheral pixels into at least one of the first and second phase difference pixels based on at least one of the output signal of the first B pixel and the output signal of the second B pixel, and corrects the output signal of at least one of the first and second phase difference pixels based on the obtained color mixture rate and the output signals of the peripheral pixels,
in an imaging device. - A determination unit that determines, based on an output signal of the focus detection area, whether or not the subject color in the focus detection area is red is provided, wherein
when the determination unit determines that the subject color in the focus detection area is red, the phase difference detection unit obtains the color mixture rate from the peripheral pixels from at least one of the output signal of the first B pixel and the output signal of the second B pixel,
corrects the output signal of at least one of the first and second phase difference pixels based on the color mixture rate and the output signals of the peripheral pixels, and detects the phase difference based on the corrected output signals of the first and second phase difference pixels,
in the imaging device according to claim 14. - A photographing lens;
an image sensor including at least red (R), green (G), and blue (B) pixels and first and second phase difference pixels on which subject images having passed through different first and second regions of the photographing lens are pupil-divided and respectively formed, the image sensor having a first B pixel adjacent, at a minimum pitch in a first direction, to a first R pixel among the R pixels, and a second B pixel adjacent, in a second direction opposite to the first direction, to a second R pixel among the R pixels, wherein at least one of the first and second phase difference pixels has an R pixel arranged adjacent to it in the first direction or the second direction;
a phase difference detection unit that detects a phase difference between the output signals of the first and second phase difference pixels in a focus detection area set on the image sensor, based on those output signals;
a defocus amount calculation unit that obtains a defocus amount of the photographing lens based on the phase difference detected by the phase difference detection unit and a color mixture rate from peripheral pixels into at least one of the first and second phase difference pixels; and
a focus adjustment unit that moves the photographing lens to a position at which the defocus amount obtained by the defocus amount calculation unit becomes zero, wherein
the phase difference detection unit obtains the color mixture rate from the peripheral pixels based on at least one of the output signal of the first B pixel and the output signal of the second B pixel,
in an imaging device. - A determination unit that determines, based on an output signal of the focus detection area, whether or not the subject color in the focus detection area is red is provided, wherein
when the determination unit determines that the subject color in the focus detection area is red, the phase difference detection unit obtains the color mixture rate from the peripheral pixels from at least one of the output signal of the first B pixel and the output signal of the second B pixel, and
the defocus amount calculation unit obtains the defocus amount of the photographing lens based on the phase difference detected by the phase difference detection unit and the color mixture rate from the peripheral pixels,
in the imaging device according to claim 16. - A signal acquisition step of acquiring output signals from an image sensor including at least red (R), green (G), and blue (B) pixels and first and second phase difference pixels on which subject images having passed through different first and second regions of a photographing lens are pupil-divided and respectively formed, the image sensor having a first B pixel adjacent, at a minimum pitch in a first direction, to a first R pixel among the R pixels, and a second B pixel adjacent, in a second direction opposite to the first direction, to a second R pixel among the R pixels;
a determination step of determining, based on an output signal of a focus detection area set on the image sensor among the output signals acquired in the signal acquisition step, whether or not a subject color in the focus detection area is red;
a phase difference detection step of detecting, when the determination step determines that the subject color in the focus detection area is not red, a phase difference between the output signals of the first and second phase difference pixels in the focus detection area based on those output signals, and detecting, when the determination step determines that the subject color in the focus detection area is red, a phase difference between the output signals of the first and second B pixels in the focus detection area based on those output signals; and
a focus adjustment step of adjusting the focus position of the photographing lens based on the phase difference detected in the phase difference detection step,
an automatic focus adjustment method including the above steps.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201380032618.XA CN104380168B (zh) | 2012-06-19 | 2013-04-23 | 摄像装置及自动调焦方法 |
JP2014520997A JP5697801B2 (ja) | 2012-06-19 | 2013-04-23 | 撮像装置及び自動焦点調節方法 |
US14/560,373 US9237319B2 (en) | 2012-06-19 | 2014-12-04 | Imaging device and automatic focus adjustment method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-137774 | 2012-06-19 | ||
JP2012137774 | 2012-06-19 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/560,373 Continuation US9237319B2 (en) | 2012-06-19 | 2014-12-04 | Imaging device and automatic focus adjustment method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013190899A1 true WO2013190899A1 (ja) | 2013-12-27 |
Family
ID=49768506
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/061840 WO2013190899A1 (ja) | 2012-06-19 | 2013-04-23 | 撮像装置及び自動焦点調節方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9237319B2 (ja) |
JP (1) | JP5697801B2 (ja) |
CN (1) | CN104380168B (ja) |
WO (1) | WO2013190899A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015184433A (ja) * | 2014-03-24 | 2015-10-22 | キヤノン株式会社 | 撮像素子、撮像装置、画像処理方法、並びにプログラム |
JP2020092431A (ja) * | 2017-03-30 | 2020-06-11 | 富士フイルム株式会社 | 撮像装置及び画像処理方法 |
JP2021110795A (ja) * | 2020-01-08 | 2021-08-02 | エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co., Ltd | 制御装置、撮像装置、制御方法、及びプログラム |
JP2021110794A (ja) * | 2020-01-08 | 2021-08-02 | エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co., Ltd | 制御装置、撮像装置、制御方法、及びプログラム |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE112013005594T5 (de) * | 2012-11-22 | 2015-10-22 | Fujifilm Corporation | Abbildungsvorrichtung, Unschärfebetrag-Berechnungsverfahren und Objektivvorrichtung |
WO2014087808A1 (ja) * | 2012-12-07 | 2014-06-12 | 富士フイルム株式会社 | 画像処理装置、画像処理方法及びプログラム、並びに記録媒体 |
JP6104049B2 (ja) * | 2013-05-21 | 2017-03-29 | オリンパス株式会社 | 画像処理装置、画像処理方法、および画像処理用プログラム |
JP5775918B2 (ja) * | 2013-09-27 | 2015-09-09 | オリンパス株式会社 | 撮像装置、画像処理方法及び画像処理プログラム |
JP2015129846A (ja) * | 2014-01-07 | 2015-07-16 | キヤノン株式会社 | 撮像装置およびその制御方法 |
US9432568B2 (en) * | 2014-06-30 | 2016-08-30 | Semiconductor Components Industries, Llc | Pixel arrangements for image sensors with phase detection pixels |
JP6584059B2 (ja) * | 2014-09-26 | 2019-10-02 | キヤノン株式会社 | 撮像装置及びその制御方法、プログラム、記憶媒体 |
US10044959B2 (en) * | 2015-09-24 | 2018-08-07 | Qualcomm Incorporated | Mask-less phase detection autofocus |
WO2017057071A1 (ja) * | 2015-09-30 | 2017-04-06 | 富士フイルム株式会社 | 合焦制御装置、合焦制御方法、合焦制御プログラム、レンズ装置、撮像装置 |
US10264174B2 (en) * | 2015-12-08 | 2019-04-16 | Samsung Electronics Co., Ltd. | Photographing apparatus and focus detection method using the same |
CN106937107B (zh) * | 2015-12-29 | 2019-03-12 | 宁波舜宇光电信息有限公司 | 基于色差的摄像模组调焦方法 |
DE112020003782T5 (de) * | 2019-08-09 | 2022-07-07 | Semiconductor Energy Laboratory Co., Ltd. | Abbildungsvorrichtung oder Abbildungssystem |
CN112235494B (zh) * | 2020-10-15 | 2022-05-20 | Oppo广东移动通信有限公司 | 图像传感器、控制方法、成像装置、终端及可读存储介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012004729A (ja) * | 2010-06-15 | 2012-01-05 | Fujifilm Corp | 撮像装置及び画像処理方法 |
JP2012022147A (ja) * | 2010-07-14 | 2012-02-02 | Olympus Corp | 位相差検出用情報取得装置、位相差検出装置、撮像装置 |
JP2012114797A (ja) * | 2010-11-26 | 2012-06-14 | Nikon Corp | 撮像装置 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH02210995A (ja) | 1989-02-10 | 1990-08-22 | Fuji Photo Film Co Ltd | カラー固体撮像デバイス |
DE19616440A1 (de) | 1996-04-25 | 1997-10-30 | Eastman Kodak Co | Verfahren und Vorrichtung zur Gewinnung eines vollen Farbbildes oder Multispektralbildes aus Bilddaten eines CCD-Bildsensors mit Mosaik-Farbfilter |
JP5151075B2 (ja) | 2005-06-21 | 2013-02-27 | ソニー株式会社 | 画像処理装置及び画像処理方法、撮像装置、並びにコンピュータ・プログラム |
EP2179581B1 (en) * | 2007-08-10 | 2015-11-11 | Canon Kabushiki Kaisha | Image-pickup apparatus and control method therof |
JP5374862B2 (ja) * | 2007-11-16 | 2013-12-25 | 株式会社ニコン | 焦点検出装置および撮像装置 |
JP5109641B2 (ja) * | 2007-12-18 | 2012-12-26 | ソニー株式会社 | 撮像素子および撮像装置 |
US7745779B2 (en) | 2008-02-08 | 2010-06-29 | Aptina Imaging Corporation | Color pixel arrays having common color filters for multiple adjacent pixels for use in CMOS imagers |
JP5251323B2 (ja) * | 2008-07-15 | 2013-07-31 | 株式会社ニコン | 撮像装置 |
JP5434761B2 (ja) * | 2010-04-08 | 2014-03-05 | 株式会社ニコン | 撮像デバイスおよび撮像装置 |
JP2011257565A (ja) | 2010-06-08 | 2011-12-22 | Canon Inc | 撮像装置 |
CN103081457B (zh) * | 2010-08-24 | 2016-04-13 | 富士胶片株式会社 | 固态成像装置 |
JP5597078B2 (ja) * | 2010-09-17 | 2014-10-01 | キヤノン株式会社 | 撮像装置及びその制御方法 |
CN103988490B (zh) * | 2011-12-13 | 2018-05-22 | 索尼公司 | 图像处理装置、图像处理方法和记录介质 |
-
2013
- 2013-04-23 WO PCT/JP2013/061840 patent/WO2013190899A1/ja active Application Filing
- 2013-04-23 CN CN201380032618.XA patent/CN104380168B/zh not_active Expired - Fee Related
- 2013-04-23 JP JP2014520997A patent/JP5697801B2/ja not_active Expired - Fee Related
-
2014
- 2014-12-04 US US14/560,373 patent/US9237319B2/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012004729A (ja) * | 2010-06-15 | 2012-01-05 | Fujifilm Corp | 撮像装置及び画像処理方法 |
JP2012022147A (ja) * | 2010-07-14 | 2012-02-02 | Olympus Corp | 位相差検出用情報取得装置、位相差検出装置、撮像装置 |
JP2012114797A (ja) * | 2010-11-26 | 2012-06-14 | Nikon Corp | 撮像装置 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015184433A (ja) * | 2014-03-24 | 2015-10-22 | キヤノン株式会社 | 撮像素子、撮像装置、画像処理方法、並びにプログラム |
JP2020092431A (ja) * | 2017-03-30 | 2020-06-11 | 富士フイルム株式会社 | 撮像装置及び画像処理方法 |
JP7060634B2 (ja) | 2017-03-30 | 2022-04-26 | 富士フイルム株式会社 | 撮像装置及び画像処理方法 |
JP2021110795A (ja) * | 2020-01-08 | 2021-08-02 | エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co., Ltd | 制御装置、撮像装置、制御方法、及びプログラム |
JP2021110794A (ja) * | 2020-01-08 | 2021-08-02 | エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co., Ltd | 制御装置、撮像装置、制御方法、及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
JP5697801B2 (ja) | 2015-04-08 |
CN104380168A (zh) | 2015-02-25 |
US9237319B2 (en) | 2016-01-12 |
US20150146052A1 (en) | 2015-05-28 |
CN104380168B (zh) | 2016-04-20 |
JPWO2013190899A1 (ja) | 2016-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5697801B2 (ja) | 撮像装置及び自動焦点調節方法 | |
JP5740054B2 (ja) | 撮像装置及び画像処理方法 | |
JP5749403B2 (ja) | 撮像装置及び画像処理方法 | |
CN104365089B (zh) | 摄像装置及图像显示方法 | |
WO2014034486A1 (ja) | 画像処理装置、方法、プログラム及び記録媒体並びに撮像装置 | |
WO2014046037A1 (ja) | 撮像装置及びその制御方法 | |
WO2014091854A1 (ja) | 画像処理装置、撮像装置、画像処理方法、及び画像処理プログラム | |
US11496666B2 (en) | Imaging apparatus with phase difference detecting element | |
JP5753323B2 (ja) | 撮像装置及び画像表示方法 | |
WO2014106917A1 (ja) | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム | |
WO2014006783A1 (ja) | 撮像装置及び画像処理方法 | |
WO2014077065A1 (ja) | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム | |
JP2020092431A (ja) | 撮像装置及び画像処理方法 | |
JPWO2014045738A1 (ja) | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム | |
WO2014045741A1 (ja) | 画像処理装置、撮像装置、画像処理方法及び画像処理プログラム | |
JP5740053B6 (ja) | 撮像装置及び画像処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201380032618.X Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13806377 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014520997 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13806377 Country of ref document: EP Kind code of ref document: A1 |