WO2017170718A1 - Imaging device, subject detection device, and electronic apparatus
Imaging device, subject detection device, and electronic apparatus
- Publication number: WO2017170718A1 (application PCT/JP2017/012959, JP2017012959W)
- Authority: WIPO (PCT)
- Prior art keywords: imaging, image data, unit, region, subject
- Prior art date
Classifications
- G02B7/28 — Mountings, adjusting means, or light-tight connections, for optical elements; Systems for automatic generation of focusing signals
- G03B13/36 — Viewfinders; Focusing aids for cameras; Autofocus systems
- G03B7/091 — Control of exposure effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device; Digital circuits
- H01L27/14 — Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate, including components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation
- H04N23/675 — Control of cameras or camera modules; Focus control based on electronic image sensor signals comprising setting of focusing regions
- H04N25/44 — Extracting pixel data from image sensors by controlling scanning circuits, e.g. by partially reading an SSIS array
- H04N25/76 — SSIS architectures; Addressed sensors, e.g. MOS or CMOS sensors
Definitions
- the present invention relates to an imaging device, a subject detection device, and an electronic apparatus.
- An imaging device equipped with a technique for detecting a subject using a signal from an imaging element is known (see Patent Document 1). It has conventionally been required to improve the accuracy of subject detection.
- the imaging apparatus includes an imaging unit having a first imaging area that is imaged under a first imaging condition and a second imaging area that is imaged under a second imaging condition different from the first imaging condition, and a detection unit that detects a subject imaged in the first imaging area by light imaged under the second imaging condition.
- the imaging device includes an imaging unit having a first imaging area that is imaged under a first imaging condition and a second imaging area that is imaged under a second imaging condition different from the first imaging condition, and a generation unit that generates a signal for detecting a subject imaged in the first imaging area by light imaged under the second imaging condition.
- the imaging apparatus has a first imaging region in which light from the lens is imaged under the first imaging condition and a second imaging region in which light from the lens is imaged under a second imaging condition different from the first imaging condition.
- the imaging apparatus has a first imaging region for imaging light from the lens under the first imaging condition and a second imaging region for imaging light from the lens under the second imaging condition.
- the subject detection device includes an input unit that inputs image data from an imaging unit having a first imaging region that images light from the lens under the first imaging condition and a second imaging region that images light from the lens under a second imaging condition different from the first imaging condition, and a detection unit that detects a subject imaged in the first imaging region by light imaged under the second imaging condition.
- the subject detection device includes an input unit that inputs image data from an imaging unit having a first imaging region for imaging light from the lens under the first imaging condition and a second imaging region for imaging light from the lens under a second imaging condition different from the first imaging condition, and a generation unit that generates a signal for detecting a subject imaged in the first imaging region by light imaged under the second imaging condition.
- the subject detection device includes an input unit that inputs image data from a first imaging area that is imaged under the first imaging condition and a second imaging area that is imaged under a second imaging condition different from the first imaging condition, and a detection unit that detects a subject imaged in the first imaging region by light imaged under an imaging condition different from the first imaging condition.
- the subject detection device has a first imaging region for imaging light from the lens under the first imaging condition and a second imaging region for imaging light from the lens under a second imaging condition different from the first imaging condition.
- the electronic device includes a first imaging element having a plurality of imaging areas for imaging a subject, a second imaging element for imaging the subject, and a detection unit that detects a subject imaged in the first imaging region among the plurality of imaging areas of the first imaging element by a signal corrected, using a signal of the subject imaged by the second imaging element, so as to be imaged according to a second imaging condition.
- the electronic device includes: a first imaging element in which a plurality of pixels are arranged and which has a first imaging region and a second imaging region for imaging the subject; a second imaging element in which a plurality of pixels are arranged and which images the subject; a setting unit that sets the first imaging region to an imaging condition different from the imaging condition of the second imaging region; and a detection unit that detects a subject imaged in the first imaging region by a signal from a pixel selected, from among the pixels arranged in the first imaging region of the first imaging element and the pixels arranged in the second imaging element, using a signal from the pixels arranged in the first imaging region.
- the electronic apparatus includes: an imaging element having a plurality of imaging areas for imaging a subject; a first setting unit that sets, for a first imaging area different from a second imaging area among the plurality of imaging areas of the imaging element, a first imaging condition; and a detection unit that detects a subject imaged in the first imaging area using a signal obtained by correcting part of the image signal of the subject in the first imaging area, captured under the first imaging condition, so as to be imaged according to the second imaging condition, based on an image signal of the subject in the imaging area.
- the electronic device includes: an imaging element in which a plurality of pixels are arranged and which has a first imaging region and a second imaging region for imaging the subject; a setting unit that can set the first imaging region to an imaging condition different from the imaging condition of the second imaging region; and a detection unit that detects a subject imaged in the first imaging region by a signal from a pixel selected, from among the pixels arranged in the first imaging region set to the first imaging condition by the setting unit and the pixels arranged in the second imaging region set to the second imaging condition by the setting unit, using a signal from the pixels arranged in the first imaging region set to the first imaging condition by the setting unit.
- FIG. 7A is a diagram illustrating a predetermined range in the live view image
- FIG. 7B is an enlarged view of the predetermined range.
- FIG. 8A is a diagram illustrating the main image data corresponding to FIG. 7B.
- FIG. 8B is a diagram illustrating the processing image data corresponding to FIG. 7B.
- FIG. 9A is a diagram illustrating a region of interest in the live view image
- FIG. 9B is an enlarged view of the pixel of interest and the reference pixel Pr.
- FIG. 10A is a diagram illustrating the arrangement of photoelectric conversion signals output from the pixels
- FIG. 10B is a diagram illustrating interpolation of G color component image data
- FIG. 10C is a diagram illustrating the image data of the G color component after interpolation.
- FIG. 11A is a diagram obtained by extracting R color component image data from FIG. 10A
- FIG. 11B is a diagram illustrating interpolation of the color difference component Cr
- FIG. 11C is an image of the color difference component Cr.
- FIG. 12A is a diagram obtained by extracting B color component image data from FIG. 10A
- FIG. 12B is a diagram illustrating interpolation of the color difference component Cb
- FIG. 12C is an image of the color difference component Cb.
- FIG. 16A is a diagram illustrating a template image representing an object to be detected
- FIG. 16B is a diagram illustrating a live view image and a search range.
- FIG. 17B is a diagram illustrating a case where processing image data is captured at the start of display of the live view image.
- FIG. 17C is a diagram illustrating a case where processing image data is captured at the end of display of the live view image. A further figure is a flowchart explaining the flow of the process of setting imaging conditions for each region and performing imaging.
- FIG. 19A to FIG. 19C are diagrams illustrating the arrangement of the first imaging region and the second imaging region on the imaging surface of the imaging device.
- FIG. 20 is a block diagram illustrating the configuration of an imaging system according to Modification 11. Further figures illustrate supply of the program to a mobile device, the configuration of the camera according to the second embodiment, and, schematically, the correspondence between each block and a plurality of corrections in the second embodiment.
- a digital camera will be described as an example of an electronic device equipped with the image processing apparatus according to the first embodiment.
- the camera 1 (FIG. 1) is configured to be able to capture images under different conditions for each region of the imaging surface of the image sensor 32a.
- the image processing unit 33 performs appropriate processing in areas with different imaging conditions. Details of the camera 1 will be described with reference to the drawings.
- FIG. 1 is a block diagram illustrating the configuration of the camera 1 according to the first embodiment.
- the camera 1 includes an imaging optical system 31, an imaging unit 32, an image processing unit 33, a control unit 34, a display unit 35, an operation member 36, and a recording unit 37.
- the imaging optical system 31 guides the light flux from the object scene to the imaging unit 32.
- the imaging unit 32 includes an imaging element 32a and a driving unit 32b, and photoelectrically converts an object image formed by the imaging optical system 31.
- the imaging unit 32 can capture images under the same conditions over the entire imaging surface of the imaging device 32a, or can perform imaging under different conditions for each region of the imaging surface of the imaging device 32a. Details of the imaging unit 32 will be described later.
- the drive unit 32b generates a drive signal necessary for causing the image sensor 32a to perform accumulation control.
- An imaging instruction such as a charge accumulation time for the imaging unit 32 is transmitted from the control unit 34 to the driving unit 32b.
- the image processing unit 33 includes an input unit 33a, a correction unit 33b, and a generation unit 33c.
- Image data acquired by the imaging unit 32 is input to the input unit 33a.
- the correction unit 33b performs preprocessing for correcting the input image data. Details of the preprocessing will be described later.
- the generation unit 33c performs image processing on the input image data and the preprocessed image data to generate an image.
- Image processing includes, for example, color interpolation processing, pixel defect correction processing, edge enhancement processing, noise reduction processing, white balance adjustment processing, gamma correction processing, display luminance adjustment processing, saturation adjustment processing, and the like.
- the generation unit 33 c generates an image to be displayed by the display unit 35.
- the control unit 34 is constituted by a CPU, for example, and controls the overall operation of the camera 1. For example, the control unit 34 performs a predetermined exposure calculation based on the photoelectric conversion signals acquired by the imaging unit 32, determines exposure conditions necessary for proper exposure, such as the charge accumulation time (exposure time) of the imaging element 32a, the aperture value of the imaging optical system 31, and the ISO sensitivity, and instructs the drive unit 32b accordingly.
- image processing conditions for adjusting saturation, contrast, sharpness, and the like are determined and instructed to the image processing unit 33 according to the imaging scene mode set in the camera 1 and the type of the detected subject element. The detection of the subject element will be described later.
- the control unit 34 includes an object detection unit 34a, a setting unit 34b, an imaging control unit 34c, and a lens movement control unit 34d. These are realized as software by the control unit 34 executing a program stored in a nonvolatile memory (not shown). However, these may be configured by an ASIC or the like.
- the object detection unit 34a performs known object recognition processing and detects, from the image acquired by the imaging unit 32, subject elements such as a person (a person's face), an animal such as a dog or cat (an animal's face), a plant, a vehicle such as a bicycle, automobile, or train, a building, a stationary object, a landscape such as a mountain or cloud, or a predetermined specific object.
- the setting unit 34b divides the imaging screen by the imaging unit 32 into a plurality of regions including the subject element detected as described above.
- the setting unit 34b further sets imaging conditions for a plurality of areas.
- Imaging conditions include the exposure conditions (charge accumulation time, gain, ISO sensitivity, frame rate, etc.) and the image processing conditions (for example, white balance adjustment parameters, gamma correction curves, display brightness adjustment parameters, saturation adjustment parameters, etc.).
- the same imaging conditions can be set for all of the plurality of areas, or different imaging conditions can be set for the plurality of areas.
- the imaging control unit 34c controls the imaging unit 32 (imaging element 32a) and the image processing unit 33 by applying the imaging conditions set for each region by the setting unit 34b. This makes it possible to cause the imaging unit 32 to perform imaging under different exposure conditions for each of the plurality of regions, and to cause the image processing unit 33 to perform image processing under different image processing conditions for each of the plurality of regions. Any number of pixels may be included in a region, for example, 1000 pixels or 1 pixel. Further, the number of pixels may differ between regions.
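- The following is a minimal sketch, not part of the patent, of how imaging conditions set per region might be mapped onto the individually controllable blocks of the imaging element; the ImagingCondition record and apply_conditions helper are hypothetical names introduced only for illustration.

```python
from dataclasses import dataclass

@dataclass
class ImagingCondition:
    """Hypothetical per-region exposure settings (cf. setting unit 34b)."""
    shutter_s: float     # charge accumulation time (exposure time)
    iso: int             # gain / ISO sensitivity
    frame_rate: float    # frames per second

def apply_conditions(region_of_block, conditions):
    """Build a per-block condition table.

    region_of_block: dict mapping (block_row, block_col) -> region id,
                     i.e. the result of dividing the screen into regions.
    conditions:      dict mapping region id -> ImagingCondition.
    """
    return {blk: conditions[reg] for blk, reg in region_of_block.items()}

# Example: a 2x2 block grid where the left column belongs to region 1
# (person) and the right column to region 4 (mountain).
region_of_block = {(0, 0): 1, (1, 0): 1, (0, 1): 4, (1, 1): 4}
conditions = {
    1: ImagingCondition(shutter_s=1 / 60, iso=800, frame_rate=60),
    4: ImagingCondition(shutter_s=1 / 250, iso=100, frame_rate=60),
}
per_block = apply_conditions(region_of_block, conditions)
print(per_block[(0, 1)])   # the condition applied to a mountain block
```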
- the lens movement control unit 34d controls an automatic focus adjustment (autofocus: AF) operation for focusing on a corresponding subject at a predetermined position (called a focus point) on the imaging screen.
- the lens movement control unit 34d outputs, based on the calculation result, a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position, for example, a signal for focusing the subject image with the focus lens of the imaging optical system 31.
- the lens movement control unit 34d functions as a moving unit that moves the focus lens of the imaging optical system 31 in the optical axis direction based on the calculation result.
- the process performed by the lens movement control unit 34d for the AF operation is also referred to as a focus detection process. Details of the focus detection process will be described later.
- the display unit 35 reproduces and displays the image generated by the image processing unit 33, the image processed image, the image read by the recording unit 37, and the like.
- the display unit 35 also displays an operation menu screen, a setting screen for setting imaging conditions, and the like.
- the operation member 36 is composed of various operation members such as a release button and a menu button.
- the operation member 36 sends an operation signal corresponding to each operation to the control unit 34.
- the operation member 36 includes a touch operation member provided on the display surface of the display unit 35.
- the recording unit 37 records image data or the like on a recording medium including a memory card (not shown) in response to an instruction from the control unit 34.
- the recording unit 37 reads image data recorded on the recording medium in response to an instruction from the control unit 34.
- the photometric sensor 38 outputs an image signal for photometry according to the brightness of the subject image.
- the photometric sensor 38 is constituted by, for example, a CMOS image sensor.
- FIG. 2 is a cross-sectional view of the image sensor 100.
- the imaging element 100 includes an imaging chip 111, a signal processing chip 112, and a memory chip 113.
- the imaging chip 111 is stacked on the signal processing chip 112.
- the signal processing chip 112 is stacked on the memory chip 113.
- the imaging chip 111, the signal processing chip 112, and the memory chip 113 are electrically connected by connection units 109.
- the connection unit 109 is, for example, a bump or an electrode.
- the imaging chip 111 captures a light image from a subject and generates image data.
- the imaging chip 111 outputs image data from the imaging chip 111 to the signal processing chip 112.
- the signal processing chip 112 performs signal processing on the image data output from the imaging chip 111.
- the memory chip 113 has a plurality of memories and stores image data.
- the image sensor 100 may include an image pickup chip and a signal processing chip.
- a storage unit for storing image data may be provided in the signal processing chip, or may be provided separately from the imaging device 100.
- the incident light is incident mainly in the positive direction of the Z axis indicated by the white arrow.
- the left direction of the paper orthogonal to the Z axis is the X axis plus direction
- the front side of the paper orthogonal to the Z axis and the X axis is the Y axis plus direction.
- the coordinate axes are displayed in each subsequent figure so that its orientation can be understood with reference to the coordinate axes of FIG. 2.
- the imaging chip 111 is, for example, a CMOS image sensor. Specifically, the imaging chip 111 is a backside illumination type CMOS image sensor.
- the imaging chip 111 includes a microlens layer 101, a color filter layer 102, a passivation layer 103, a semiconductor layer 106, and a wiring layer 108.
- the imaging chip 111 is arranged in the order of the microlens layer 101, the color filter layer 102, the passivation layer 103, the semiconductor layer 106, and the wiring layer 108 in the positive Z-axis direction.
- the microlens layer 101 has a plurality of microlenses L.
- the microlens L condenses incident light on the photoelectric conversion unit 104 described later.
- One pixel or one filter corresponds to one microlens L.
- the color filter layer 102 includes a plurality of color filters F.
- the color filter layer 102 has a plurality of types of color filters F having different spectral characteristics.
- the color filter layer 102 includes a first filter (R) having a spectral characteristic that mainly transmits red component light and a second filter (Gb, Gr) that has a spectral characteristic that mainly transmits green component light. ) And a third filter (B) having a spectral characteristic that mainly transmits blue component light.
- the passivation layer 103 is made of a nitride film or an oxide film, and protects the semiconductor layer 106.
- the semiconductor layer 106 includes a photoelectric conversion unit 104 and a readout circuit 105.
- the semiconductor layer 106 includes a plurality of photoelectric conversion units 104 between a first surface 106a that is a light incident surface and a second surface 106b opposite to the first surface 106a.
- the semiconductor layer 106 includes a plurality of photoelectric conversion units 104 arranged in the X-axis direction and the Y-axis direction.
- the photoelectric conversion unit 104 has a photoelectric conversion function of converting light into electric charge, and accumulates the photoelectrically converted charge.
- the photoelectric conversion unit 104 is, for example, a photodiode.
- the semiconductor layer 106 includes a readout circuit 105 on the second surface 106b side of the photoelectric conversion unit 104.
- a plurality of readout circuits 105 are arranged in the X-axis direction and the Y-axis direction.
- the readout circuit 105 includes a plurality of transistors, reads out image data generated by the electric charges photoelectrically converted by the photoelectric conversion unit 104, and outputs the image data to the wiring layer 108.
- the wiring layer 108 has a plurality of metal layers.
- the metal layer is, for example, an Al wiring, a Cu wiring, or the like.
- the wiring layer 108 outputs the image data read by the reading circuit 105.
- the image data is output from the wiring layer 108 to the signal processing chip 112 via the connection unit 109.
- connection unit 109 may be provided for each photoelectric conversion unit 104. Further, the connection unit 109 may be provided for each of the plurality of photoelectric conversion units 104. When the connection unit 109 is provided for each of the plurality of photoelectric conversion units 104, the pitch of the connection units 109 may be larger than the pitch of the photoelectric conversion units 104. In addition, the connection unit 109 may be provided in a peripheral region of the region where the photoelectric conversion unit 104 is disposed.
- the signal processing chip 112 has a plurality of signal processing circuits.
- the signal processing circuit performs signal processing on the image data output from the imaging chip 111.
- the signal processing circuit includes, for example, an amplifier circuit that amplifies the signal value of the image data, a correlated double sampling circuit that performs noise reduction processing on the image data, an analog/digital (A/D) conversion circuit that converts an analog signal into a digital signal, and the like.
- a signal processing circuit may be provided for each photoelectric conversion unit 104.
- a signal processing circuit may be provided for each of the plurality of photoelectric conversion units 104.
- the signal processing chip 112 has a plurality of through electrodes 110.
- the through electrode 110 is, for example, a silicon through electrode.
- the through electrode 110 connects circuits provided in the signal processing chip 112 to each other.
- the through electrode 110 may also be provided in the peripheral region of the imaging chip 111 and the memory chip 113.
- some elements constituting the signal processing circuit may be provided in the imaging chip 111.
- a comparator that compares an input voltage with a reference voltage may be provided in the imaging chip 111, and circuits such as a counter circuit and a latch circuit may be provided in the signal processing chip 112.
- the memory chip 113 has a plurality of storage units.
- the storage unit stores image data that has been subjected to signal processing by the signal processing chip 112.
- the storage unit is a volatile memory such as a DRAM, for example.
- a storage unit may be provided for each photoelectric conversion unit 104.
- the storage unit may be provided for each of the plurality of photoelectric conversion units 104.
- the image data stored in the storage unit is output to the subsequent image processing unit.
- FIG. 3 is a diagram for explaining the pixel array and the unit area 131 of the imaging chip 111.
- a state where the imaging chip 111 is observed from the back surface (imaging surface) side is shown.
- 20 million or more pixels are arranged in a matrix in the pixel region.
- four adjacent pixels of 2 pixels ⁇ 2 pixels form one unit region 131.
- the grid lines in the figure indicate the concept that adjacent pixels are grouped to form a unit region 131.
- the number of pixels forming the unit region 131 is not limited to this; it may be, for example, about 1000 pixels (32 pixels × 32 pixels), more or fewer, or even a single pixel.
- the unit area 131 in FIG. 3 includes a so-called Bayer array composed of four pixels of green pixels Gb, Gr, blue pixels B, and red pixels R.
- the green pixels Gb and Gr are pixels having a green filter as the color filter F, and receive light in the green wavelength band of incident light.
- the blue pixel B is a pixel having a blue filter as the color filter F and receives light in the blue wavelength band
- the red pixel R is a pixel having a red filter as the color filter F and receives light in the red wavelength band.
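- As an illustration only, the 2 × 2 Bayer unit region described above can be sketched and tiled as follows; the exact placement of the Gb, Gr, B, and R pixels within the cell is assumed here and may differ from FIG. 3.

```python
import numpy as np

# One unit region 131: four pixels in a Bayer arrangement.
# The placement of Gb/Gr/R/B within the 2x2 cell is illustrative only.
unit_region = np.array([["Gb", "B"],
                        ["R",  "Gr"]])

# Tiling the unit region reproduces the color filter layout of the pixel area.
rows, cols = 4, 6                      # a tiny 4x6-pixel patch for illustration
bayer = np.tile(unit_region, (rows // 2, cols // 2))
print(bayer)
# Each tiled 2x2 cell corresponds to one unit region 131; imaging conditions
# are controlled in units of blocks containing one or more such regions.
```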
- a plurality of blocks are defined so as to include at least one unit region 131 per block. That is, the minimum unit of one block is one unit area 131. As described above, of the possible values for the number of pixels forming one unit region 131, the smallest number of pixels is one pixel. Therefore, when one block is defined in units of pixels, the minimum number of pixels among the number of pixels that can define one block is one pixel.
- Each block can control pixels included in each block with different control parameters. In each block, all the unit areas 131 in the block, that is, all the pixels in the block are controlled under the same imaging condition. That is, photoelectric conversion signals having different imaging conditions can be acquired between a pixel group included in a certain block and a pixel group included in another block.
- Examples of control parameters include a frame rate, a gain, a thinning rate, the number of addition rows or addition columns in which photoelectric conversion signals are added, a charge accumulation time or accumulation count, the number of digitization bits (word length), and the like.
- the imaging device 100 can freely perform not only thinning in the row direction (X-axis direction of the imaging chip 111) but also thinning in the column direction (Y-axis direction of the imaging chip 111).
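- A small sketch of thinning in both directions, assuming arbitrary example thinning rates rather than values from the description:

```python
import numpy as np

frame = np.arange(8 * 8).reshape(8, 8)   # stand-in for pixel data from one block

row_step = 2   # read out every 2nd row
col_step = 4   # read out every 4th column

# Subsampling along both directions of the imaging chip reduces the number of
# photoelectric conversion signals read out, per the thinning rates above.
thinned = frame[::row_step, ::col_step]
print(thinned.shape)   # (4, 2)
```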
- the control parameter may be a parameter in image processing.
- FIG. 4 is a diagram for explaining a circuit in the unit region 131.
- one unit region 131 is formed by four adjacent pixels of 2 pixels ⁇ 2 pixels.
- the number of pixels included in the unit region 131 is not limited to this, and may be 1000 pixels or more, or may be a minimum of 1 pixel.
- the two-dimensional position of the unit area 131 is indicated by reference signs A to D.
- the reset transistor (RST) of the pixel included in the unit region 131 is configured to be turned on and off individually for each pixel.
- a reset wiring 300 for turning on / off the reset transistor of the pixel A is provided, and a reset wiring 310 for turning on / off the reset transistor of the pixel B is provided separately from the reset wiring 300.
- a reset line 320 for turning on and off the reset transistor of the pixel C is provided separately from the reset lines 300 and 310.
- a dedicated reset wiring 330 for turning on and off the reset transistor is also provided for the other pixels D.
- the pixel transfer transistor (TX) included in the unit region 131 is also configured to be turned on and off individually for each pixel.
- a transfer wiring 302 for turning on / off the transfer transistor of the pixel A, a transfer wiring 312 for turning on / off the transfer transistor of the pixel B, and a transfer wiring 322 for turning on / off the transfer transistor of the pixel C are separately provided.
- a dedicated transfer wiring 332 for turning on / off the transfer transistor is provided for the other pixels D.
- the pixel selection transistor (SEL) included in the unit region 131 is also configured to be turned on and off individually for each pixel.
- a selection wiring 306 for turning on / off the selection transistor of the pixel A, a selection wiring 316 for turning on / off the selection transistor of the pixel B, and a selection wiring 326 for turning on / off the selection transistor of the pixel C are separately provided.
- a dedicated selection wiring 336 for turning on and off the selection transistor is provided for the other pixels D.
- the power supply wiring 304 is commonly connected from the pixel A to the pixel D included in the unit region 131.
- the output wiring 308 is commonly connected to the pixel D from the pixel A included in the unit region 131.
- the power supply wiring 304 is commonly connected between a plurality of unit regions, but the output wiring 308 is provided for each unit region 131 individually.
- the load current source 309 supplies current to the output wiring 308.
- the load current source 309 may be provided on the imaging chip 111 side or may be provided on the signal processing chip 112 side.
- In this way, charge accumulation, including the charge accumulation start time, accumulation end time, and transfer timing, can be controlled individually for the pixels A to D included in the unit region 131.
- the photoelectric conversion signals of the pixels A to D can be output via the common output wiring 308.
- When charge accumulation is controlled in a regular order with respect to rows and columns for the pixels A to D included in the unit region 131 by a so-called rolling shutter system, photoelectric conversion signals are output in the order "ABCD" in the illustrated example.
- the charge accumulation time can be controlled for each unit region 131.
- Furthermore, by causing the unit regions 131 included in some blocks (accumulation control target blocks) to perform charge accumulation (imaging) while the unit regions 131 included in other blocks are rested, imaging can be performed only in predetermined blocks of the imaging chip 111 and their photoelectric conversion signals can be output.
- the output wiring 308 is provided corresponding to each of the unit regions 131. Since the image sensor 100 is formed by stacking the imaging chip 111, the signal processing chip 112, and the memory chip 113, the electrical connections between the chips using the connection units 109 can be used for the output wiring 308, so the wiring can be routed without enlarging each chip in the planar direction.
- an imaging condition can be set for each of a plurality of blocks in the imaging device 32a.
- the imaging control unit 34c of the control unit 34 associates the plurality of regions described above with these blocks and causes imaging to be performed under the imaging conditions set for each region.
- FIG. 5 is a diagram schematically showing an image of a subject formed on the image sensor 32a of the camera 1.
- the camera 1 photoelectrically converts the subject image to obtain a live view image before an imaging instruction is given.
- the live view image refers to a monitor image that is repeatedly imaged at a predetermined frame rate (for example, 60 fps).
- the control unit 34 sets the same imaging condition over the entire area of the imaging chip 111 (that is, the entire imaging screen) before the setting unit 34b divides the area.
- the same imaging condition refers to setting a common imaging condition for the entire imaging screen; for example, conditions whose apex values vary by less than about 0.3 are regarded as the same.
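- As a simple arithmetic illustration of this tolerance (the 0.3 apex figure comes from the description; the helper name is introduced here only for illustration):

```python
def same_imaging_condition(apex_a: float, apex_b: float, tol: float = 0.3) -> bool:
    """Treat two settings as the same imaging condition if their apex values
    differ by less than the tolerance (about 0.3 in the description)."""
    return abs(apex_a - apex_b) < tol

print(same_imaging_condition(10.0, 10.2))   # True: within ~0.3 apex
print(same_imaging_condition(10.0, 10.5))   # False
```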
- the imaging conditions set to be the same throughout the imaging chip 111 are determined based on the exposure conditions corresponding to the photometric value of the subject luminance or the exposure conditions manually set by the user.
- an image including a person 61a, an automobile 62a, a bag 63a, a mountain 64a, and clouds 65a and 66a is formed on the imaging surface of the imaging chip 111.
- the person 61a holds the bag 63a with both hands.
- the automobile 62a stops at the right rear of the person 61a.
- the control unit 34 divides the screen of the live view image into a plurality of regions as follows. First, a subject element is detected from the live view image by the object detection unit 34a. The subject element is detected using a known subject recognition technique. In the example of FIG. 5, the object detection unit 34a detects a person 61a, a car 62a, a bag 63a, a mountain 64a, a cloud 65a, and a cloud 66a as subject elements.
- the setting unit 34b divides the live view image screen into regions including the subject elements.
- the region including the person 61a is defined as the first region 61, the region including the automobile 62a as the second region 62, the region including the bag 63a as the third region 63, the region including the mountain 64a as the fourth region 64, the region including the cloud 65a as the fifth region 65, and the region including the cloud 66a as the sixth region 66.
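- The division into regions can be sketched as follows; the bounding boxes standing in for the output of the object detection unit 34a are invented for illustration.

```python
# Hypothetical detection results (subject element, bounding box) for the
# live view image of FIG. 5; the coordinates are made-up example values.
detections = [
    ("person 61a",     (120, 200, 220, 420)),   # (x0, y0, x1, y1)
    ("automobile 62a", (260, 260, 420, 380)),
    ("bag 63a",        (150, 330, 200, 400)),
    ("mountain 64a",   (0, 0, 640, 250)),
    ("cloud 65a",      (60, 20, 160, 70)),
    ("cloud 66a",      (420, 30, 540, 90)),
]

# The setting unit assigns one region number per detected subject element.
regions = {idx + 1: {"subject": name, "bbox": bbox}
           for idx, (name, bbox) in enumerate(detections)}

for number, info in regions.items():
    print(f"region {number}: {info['subject']} at {info['bbox']}")
```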
- the control unit 34 causes the display unit 35 to display a setting screen as illustrated in FIG. 6. In FIG. 6, a live view image 60a is displayed, and an imaging condition setting screen 70 is displayed to the right of the live view image 60a.
- the setting screen 70 lists frame rate, shutter speed (TV), and gain (ISO) in order from the top as an example of setting items for imaging conditions.
- the frame rate is the number of frames of a live view image acquired per second or a moving image recorded by the camera 1.
- Gain is ISO sensitivity.
- the setting items for the imaging conditions may be added as appropriate in addition to those illustrated in FIG. When all the setting items do not fit in the setting screen 70, other setting items may be displayed by scrolling the setting items up and down.
- the control unit 34 sets the region selected by the user, among the regions divided by the setting unit 34b, as the target for setting (changing) the imaging conditions. For example, in the camera 1 capable of touch operation, the user taps the display position of the main subject whose imaging conditions are to be set (changed) on the display surface of the display unit 35 on which the live view image 60a is displayed. For example, when the display position of the person 61a is tapped, the control unit 34 sets the first region 61 including the person 61a in the live view image 60a as the imaging condition setting (changing) target region and highlights the outline of the first region 61.
- the first region 61 whose outline is displayed with emphasis indicates the region that is the target for setting (changing) the imaging conditions.
- the control unit 34 causes the current setting value of the shutter speed for the highlighted region (the first region 61) to be displayed on the screen (reference numeral 68).
- the camera 1 is described on the premise of a touch operation.
- the imaging condition may be set (changed) by operating a button or the like constituting the operation member 36.
- the setting unit 34b increases or decreases the shutter speed display 68 from the current setting value according to the tap operation, and sends an instruction to the imaging unit 32 (FIG. 1) to change the imaging conditions of the unit regions 131 (FIG. 3) of the imaging element 32a corresponding to the displayed region (the first region 61) in accordance with the tap operation.
- the decision icon 72 is an operation icon for confirming the set imaging condition.
- the setting unit 34b performs the setting (change) of the frame rate and gain (ISO) in the same manner as the setting (change) of the shutter speed (TV).
- Although the setting unit 34b has been described as setting the imaging conditions based on a user operation, the setting is not limited to this.
- the setting unit 34b may set the imaging conditions based on a determination by the control unit 34 rather than on a user operation. For the regions that are not highlighted (regions other than the first region 61), the set imaging conditions are maintained.
- the control unit 34 may display the entire target region brightly, increase the contrast of the entire target region, or display the entire target region blinking.
- the target area may be surrounded by a frame.
- the display of the frame surrounding the target area may be a double frame or a single frame, and the display mode such as the line type, color, and brightness of the surrounding frame may be appropriately changed.
- the control unit 34 may display an indication of an area for which an imaging condition is set, such as an arrow, in the vicinity of the target area.
- the control unit 34 may darkly display a region other than the target region for which the imaging condition is set (changed), or may display a low contrast other than the target region.
- When an imaging instruction is given by operating the operation member 36, the control unit 34 causes imaging (main imaging) to be performed.
- the image processing unit 33 performs image processing on the image data acquired by the imaging unit 32.
- This image data is image data recorded in the recording unit 37 and is hereinafter referred to as main image data.
- the imaging unit 32 acquires processing image data, which is different from the main image data, at a timing different from that of the main image data. The processing image data is image data used when performing correction processing on the main image data, when performing image processing on the main image data, and when performing various detection processing and setting processing for capturing the main image data.
- the recording unit 37, receiving an instruction from the control unit 34, records the main image data after image processing on a recording medium composed of a memory card (not shown) or the like. A series of imaging processes is thereby completed.
- the control unit 34 acquires the processing image data as follows. That is, at least for the boundary portions of the regions divided by the setting unit 34b (the first region to the sixth region in the above example), an imaging condition different from the imaging condition set when the main image data is captured is set as the imaging condition for the processing image data. For example, when the first imaging condition is set for the boundary portion between the first region 61 and the fourth region 64 at the time of capturing the main image data, the fourth imaging condition is set for that boundary portion when the processing image data is captured.
- Note that the setting unit 34b is not limited to the boundary between the first region 61 and the fourth region 64, and may set the entire imaging surface of the imaging element 32a as the processing imaging region. In this case, processing image data is acquired for each of the first to sixth imaging conditions. The timing for acquiring each set of processing image data will be described later.
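- A hedged sketch of this boundary handling, using a simulated capture function and made-up pixel values: the boundary blocks are captured under the first imaging condition for the main image data and under the fourth imaging condition for the processing image data.

```python
import numpy as np

def capture(condition, shape=(2, 2), rng=np.random.default_rng(0)):
    """Hypothetical stand-in for capturing one block under a given condition."""
    base = {1: 40, 4: 120}[condition]   # condition 1 underexposes the mountain here
    return np.clip(rng.normal(base, 10, shape), 0, 255)

# Blocks 82, 85, 87 contain the boundary B1; the main data uses condition 1 there.
boundary_blocks = [82, 85, 87]
main_condition = {b: 1 for b in boundary_blocks}
# For the processing image data, a different condition (here the fourth imaging
# condition, suited to the mountain) is set for the same boundary blocks.
processing_condition = {b: 4 for b in boundary_blocks}

main_data = {b: capture(main_condition[b]) for b in boundary_blocks}
processing_data = {b: capture(processing_condition[b]) for b in boundary_blocks}
print(main_data[85], processing_data[85], sep="\n")
```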
- the correction unit 33b of the image processing unit 33 performs, as necessary, the first correction process as one of the preprocessing steps carried out before the image processing, the focus detection processing, the subject detection (subject element detection) processing, and the processing for setting the imaging conditions.
- As described above, the imaging conditions are set (changed) for the region selected by the user or the region determined by the control unit 34.
- the divided regions are the first region 61 to the sixth region 66 (see FIG. 7A), and the first to sixth imaging conditions shall be set for the first region 61 to the sixth region 66, respectively.
- the block is a minimum unit in which imaging conditions can be individually set in the imaging device 32a.
- FIG. 7A is a diagram illustrating a predetermined range 80 including the boundary between the first region 61 and the fourth region 64 in the live view image 60a.
- FIG. 7B is an enlarged view of the predetermined range 80 in FIG. 7A.
- the predetermined range 80 includes a plurality of blocks 81 to 89.
- blocks 81 and 84 for capturing a person are included in the first area 61
- blocks 82, 85, and 87 for capturing a person and a mountain are also included in the first area 61.
- the first imaging condition is set for the blocks 81, 82, 84, 85, and 87.
- blocks 83, 86, 88, and 89 that image mountains are included in the fourth region 64.
- the fourth imaging condition is set for the blocks 83, 86, 88 and 89.
- the white background portion in FIG. 7B indicates the portion corresponding to the person.
- the block 82, the block 85, and the block 87 include a boundary B1 between the first area 61 and the fourth area 64.
- the hatched portion in FIG. 7B indicates the portion corresponding to the mountain.
- Since a block is the minimum unit for setting imaging conditions, the same imaging conditions are set within one block. Since the first imaging condition is set for the blocks 82, 85, and 87 including the boundary B1 between the first region 61 and the fourth region 64, the first imaging condition is also set for the hatched portions of these blocks, that is, the portions corresponding to the mountain. In other words, imaging conditions different from the fourth imaging condition set for the blocks 83, 86, 88, and 89 that image the mountain are set for the hatched portions of the blocks 82, 85, and 87.
- the shaded portions of the block 82, the block 85, and the block 87 and the shaded portions of the blocks 83, 86, 88, and 89 may differ in image brightness, contrast, hue, and the like.
- In some cases, whiteout or blackout may occur in the main image data corresponding to the shaded portions. For example, in the block 85, the first imaging condition suitable for the person is not suitable for the shaded portion (that is, the mountain portion) of the block 85, and whiteout or blackout may occur in the main image data corresponding to that shaded portion.
- Overexposure means that the gradation of data in a high-luminance portion of an image is lost due to overexposure.
- blackout means that the gradation of data in the low-luminance portion of the image is lost due to underexposure.
- FIG. 8 (a) is a diagram illustrating the main image data corresponding to FIG. 7 (b).
- FIG. 8B is a diagram illustrating processing image data corresponding to FIG. 7B.
- In the processing image data, all the blocks 81 to 89 including the boundary between the first region 61 and the fourth region 64 are acquired under the fourth imaging condition suitable for the mountain.
- the blocks 81 to 89 of the main image data are each composed of 4 pixels of 2 pixels × 2 pixels. Among these, it is assumed that blackout occurs in the pixel 85b and the pixel 85d in the block 85 located at the center of FIG. 8A. The blocks 81 to 89 of the processing image data in FIG. 8B are likewise each composed of 4 pixels of 2 pixels × 2 pixels, the same as the main image data. It is assumed that no blackout occurs in the processing image data of FIG. 8B.
- the correction unit 33b according to the present embodiment corrects the image by performing a replacement process of replacing the main image data in which whiteout or blackout has occurred in the block of the main image data with the processing image data. This correction is referred to as a first correction process.
- When a block includes a boundary between regions based on a plurality of subject elements, as in the block 85, and whiteout or blackout exists in the main image data of that block, the correction unit 33b performs the first correction process on every block in which whiteout or blackout exists. Note that the first correction process is not required when there is no whiteout or blackout in the main image data.
- the correction unit 33b takes a block including main image data in which whiteout or blackout has occurred as the target block (block of interest), and performs the first correction process on that target block of the main image data.
- the target block is an area that includes image data in which whiteout or blackout has occurred, but its pixels need not be completely white or completely black.
- For example, an area in which the pixel values are at or above a first threshold, or an area in which the pixel values are at or below a second threshold, may be set as the block of interest.
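- A sketch of flagging a block of interest with assumed 8-bit thresholds (the threshold values are illustrative, not taken from the description):

```python
import numpy as np

FIRST_THRESHOLD = 250    # at or above: treated as whiteout (example value)
SECOND_THRESHOLD = 5     # at or below: treated as blackout (example value)

def clipped_mask(block):
    """Per-pixel mask of whiteout or blackout pixels in a 2x2 block."""
    return (block >= FIRST_THRESHOLD) | (block <= SECOND_THRESHOLD)

def is_block_of_interest(block):
    """A block is a target block if any of its pixels are clipped."""
    return bool(clipped_mask(block).any())

block_85 = np.array([[90, 2],     # pixel 85a, pixel 85b (blackout)
                     [95, 1]])    # pixel 85c, pixel 85d (blackout)
print(is_block_of_interest(block_85))   # True
print(clipped_mask(block_85))
```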
- In FIG. 7B and FIG. 8A, the eight blocks around the target block 85 included in the predetermined range 80 (for example, 3 × 3 blocks) centered on the predetermined target block 85 of the main image data are taken as reference blocks. That is, the blocks 81 to 84 and the blocks 86 to 89 around the predetermined target block 85 are the reference blocks.
- For the processing image data in FIG. 8B, the correction unit 33b takes the block at the position corresponding to the target block of the main image data as the target block, and the blocks at the positions corresponding to the reference blocks of the main image data as the reference blocks. Note that the number of blocks constituting the predetermined range 80 is not limited to 3 × 3 blocks and may be changed as appropriate.
- (1-1) As the first correction process, the correction unit 33b corrects a partial area within the target block of the main image data using the processing image data acquired in one block (the target block or a reference block) of the processing image data. Specifically, the correction unit 33b replaces all of the main image data in which whiteout or blackout has occurred with the image data acquired in one block (the target block or a reference block) of the processing image data. At this time, the one block (the target block or the reference block) of the processing image data is at the same position as the target block of the main image data.
- As an embodiment of the processing (1-1), for example, any one of the following modes (i) to (iv) is used.
- the correction unit 33b replaces the main image data of the block of interest in which whiteout or blackout has occurred with the processing image data acquired in the one block of the processing image data (the target block or a reference block) that corresponds to the position closest to the whiteout or blackout area. Even when there are a plurality of whiteout or blackout pixels in the target block of the main image data, the main image data of those whiteout or blackout pixels are all replaced with the same processing image data acquired in the one block (the target block or the reference block) corresponding to the closest position.
- the correction unit 33b performs replacement as follows.
- That is, the correction unit 33b replaces the main image data of the target block in which whiteout or blackout has occurred with the processing image data acquired in the target block of the processing image data at the position corresponding to the target block of the main image data. For example, based on the processing image data corresponding to the pixels 85a to 85d included in the target block 85 of the processing image data, the main image data corresponding to the blackened pixel 85b and the main image data corresponding to the blackened pixel 85d in the target block 85 of the main image data are both replaced with the same processing image data (for example, the processing image data corresponding to the pixel 85d of the processing image data).
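- Continuing with invented pixel values, the replacement just described might look as follows; the clipped pixels of the main data's target block all receive the same value taken from the corresponding target block of the processing image data.

```python
import numpy as np

main_block_85 = np.array([[90.0, 2.0],     # 85a, 85b (blackout)
                          [95.0, 1.0]])    # 85c, 85d (blackout)
proc_block_85 = np.array([[88.0, 70.0],    # processing data, no clipping
                          [92.0, 74.0]])

# Example clipping thresholds (illustrative 8-bit values).
clipped = (main_block_85 >= 250) | (main_block_85 <= 5)

# Replace every clipped pixel in the target block with the same processing
# image data from the corresponding block (here, the value at the 85d position).
corrected = main_block_85.copy()
corrected[clipped] = proc_block_85[1, 1]
print(corrected)
```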
- Alternatively, the correction unit 33b performs replacement as follows. That is, the correction unit 33b replaces the main image data of the target block in which whiteout or blackout has occurred with the processing image data acquired in a reference block around the target block of the processing image data. For example, from among the reference blocks 81 to 84 and 86 to 89 around the target block 85 of the processing image data, the main image data corresponding to the blackened pixels (the pixels 85b and 85d of the main image data) are replaced with the same processing image data (for example, the processing image data corresponding to the pixel 86c of the processing image data).
- Alternatively, the correction unit 33b replaces the plurality of main image data of the target block in which whiteout or blackout has occurred with the same processing image data acquired in one reference block selected from the reference blocks in which the imaging condition set most widely for the same subject element (the mountain) as the whiteout or blackout subject element (for example, the mountain) is set (in this example, the fourth imaging condition). For example, based on the processing image data corresponding to the pixels 88a to 88d included in one reference block selected from the reference blocks in which the fourth imaging condition is set among the reference blocks 81 to 84 and 86 to 89 around the target block 85 of the processing image data, for example the reference block 88, the main image data corresponding to the blackened pixel 85b of the main image data and the main image data corresponding to the blackened pixel 85d of the main image data are replaced with the same processing image data (for example, the processing image data corresponding to the pixel 88b of the processing image data).
- the correcting unit 33b may replace the blackout pixel 85d with a part of pixels of the reference block of the processing image data.
- In addition, from among the processing image data corresponding to the four pixels acquired in the one reference block of the processing image data selected in (i-2) or (ii) above, the correction unit 33b may select the pixel whose interval to the pixel in the target block in which whiteout or blackout has occurred is short. Specifically, the correction unit 33b compares the interval between the blackout pixel 85b of the main image data and the pixel 86a of the processing image data with the interval between the blackout pixel 85b of the main image data and the pixel 86b of the processing image data, and replaces the blackout pixel 85b of the main image data with the processing image data of the pixel 86a, whose interval to the blackout pixel 85b is the shorter. Here, the interval is the distance between the center of the blackout pixel 85b of the main image data and the center of the pixel 86a of the processing image data. The interval may instead be the distance between the centroid of the blackout pixel 85b of the main image data and the centroid of the pixel 86a of the processing image data.
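- As a rough illustration of this selection by interval, the following Python sketch picks the reference pixel whose center is closest to the center of the whiteout or blackout pixel. The coordinates, values, and the function name nearest_reference_value are hypothetical and only stand in for the pixel geometry described above.

```python
import math

def nearest_reference_value(defect_center, ref_pixels):
    """Return the processing image data of the candidate pixel whose
    center-to-center interval to the defective main-image pixel is shortest."""
    bx, by = defect_center
    best_value, best_dist = None, math.inf
    for (cx, cy), value in ref_pixels:
        dist = math.hypot(cx - bx, cy - by)  # center-to-center distance
        if dist < best_dist:
            best_dist, best_value = dist, value
    return best_value

# e.g. blackout pixel 85b compared against pixels 86a and 86b of a reference block
value_for_85b = nearest_reference_value((4.5, 0.5), [((5.5, 0.5), 120), ((6.5, 0.5), 118)])
```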
- Alternatively, the correction unit 33b may replace the main image data in which whiteout or blackout has occurred using the processing image data corresponding to adjacent pixels. For example, when the reference block 86 is selected, the correction unit 33b replaces the main image data corresponding to the blackout pixel 85b of the main image data and the main image data corresponding to the blackout pixel 85d of the main image data with the same processing image data (the processing image data corresponding to the pixel 86a or the pixel 86c of the processing image data).
- Alternatively, the correction unit 33b may replace the main image data in which whiteout or blackout has occurred with image data generated based on the processing image data corresponding to the four pixels acquired in the one reference block of the processing image data selected in (i-2) or (ii) above. For example, when the reference block 88 is selected, the correction unit 33b replaces the main image data corresponding to the blackout pixel 85b of the main image data and the main image data corresponding to the blackout pixel 85d of the main image data with the same value (for example, the average value of the processing image data corresponding to the pixels 88a to 88d included in the reference block 88 of the processing image data).
- When calculating the average value of the processing image data, the replacement may use, instead of a simple average, a weighted average weighted according to the distance from the whiteout or blackout pixel. For example, since the pixel 88b is closer to the blackout pixel 85d than the pixel 88d is, the weighting is set so that the contribution ratio of the processing image data corresponding to the pixel 88b is higher than the contribution ratio of the processing image data corresponding to the pixel 88d.
- Instead of the average value, the median value of the processing image data corresponding to the pixels 88a to 88d may be calculated, and the main image data corresponding to the blackout pixels 85b and 85d may be replaced with that median value.
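- The averaging variants above can be summarized in a short sketch. This is only an illustration under the assumption that a reference block supplies four (position, value) pairs; the function name replacement_value and the inverse-distance weighting are hypothetical choices, not the only possible ones.

```python
import statistics

def replacement_value(defect_center, ref_pixels, mode="weighted"):
    """Compute one value used to replace a whiteout/blackout pixel.

    ref_pixels: list of ((x, y), value) for the pixels of a reference block
    mode: "simple" average, inverse-distance "weighted" average, or "median"
    """
    values = [v for _, v in ref_pixels]
    if mode == "simple":
        return sum(values) / len(values)
    if mode == "median":
        return statistics.median(values)
    # weighted: a closer pixel (e.g. 88b for blackout pixel 85d) contributes
    # more than a farther one (e.g. 88d)
    bx, by = defect_center
    weights = [1.0 / (abs(cx - bx) + abs(cy - by) + 1e-6)
               for (cx, cy), _ in ref_pixels]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```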
- Alternatively, as the first correction process, the correction unit 33b replaces all of the main image data of the target block in which whiteout or blackout has occurred in the main image data, using the processing image data acquired in a plurality of blocks of the processing image data. Here, a plurality of reference blocks of the processing image data are extracted as candidates for replacing the blackout pixels (85b, 85d) of the main image data, and pixels within those blocks are finally used for the replacement. As the mode of this process (1-2), for example, any one of the following modes (i) to (iv) is used.
- The correction unit 33b replaces the main image data of the target block in which whiteout or blackout has occurred in the main image data with the processing image data acquired in a plurality of reference blocks of the processing image data corresponding to positions around the whiteout or blackout area. Even when there are a plurality of whiteout or blackout pixels in the target block of the main image data, the main image data of those whiteout or blackout pixels is replaced with the same processing image data acquired in the plurality of reference blocks of the processing image data described above.
- For example, among the reference blocks around the target block 85 of the processing image data, reference blocks adjacent to the blackout pixels (the pixels 85b and 85d of the main image data) are used; based on the processing image data acquired in those reference blocks, the main image data corresponding to the blackout pixel 85b and the main image data corresponding to the blackout pixel 85d are replaced with the same processing image data (for example, the processing image data corresponding to the pixel 88b of the processing image data). Note that the area of the blackout pixels 85b and 85d of the main image data replaced with the pixel 88b of the processing image data is smaller than the area of the pixel 88b of the processing image data.
- Note that, when no whiteout or blackout has occurred in the target block 85 of the processing image data, the correction unit 33b may replace the main image data corresponding to the whiteout or blackout pixels of the target block 85 of the main image data using the processing image data acquired in a plurality of blocks including the target block 85 of the processing image data and the reference blocks 81 to 84 and 86 to 89 located around the target block 85.
- Alternatively, when whiteout or blackout has occurred in the target block of the processing image data, the correction unit 33b selects a plurality of reference blocks from among the reference blocks to which the imaging condition set most frequently for the same subject element (for example, the mountain) as the whiteout or blackout subject element is applied (in this example, the fourth imaging condition), and replaces the main image data in which whiteout or blackout has occurred with the same processing image data acquired in those reference blocks. For example, based on the processing image data acquired in the reference blocks 86 and 88, which are two reference blocks selected from the reference blocks in which the fourth imaging condition is set among the reference blocks 81 to 84 and 86 to 89 around the target block 85 of the processing image data, the main image data corresponding to the blackout pixel 85b of the main image data and the main image data corresponding to the blackout pixel 85d of the main image data are replaced with the same processing image data (for example, the processing image data corresponding to the pixel 86c).
- Note that, when no whiteout or blackout has occurred in the target block 85 of the processing image data, the correction unit 33b may replace the main image data corresponding to the whiteout or blackout pixels of the target block 85 of the main image data using the processing image data acquired in a plurality of blocks including the target block 85 of the processing image data and the reference blocks 81 to 84 and 86 to 89 located around the target block 85.
- In addition, from among the processing image data corresponding to the plurality of pixels acquired in the plurality of reference blocks of the processing image data selected in (i) or (ii) above, the correction unit 33b may replace the main image data in which whiteout or blackout has occurred using the processing image data corresponding to pixels adjacent to the pixels of the target block in which whiteout or blackout has occurred. For example, the correction unit 33b replaces the main image data corresponding to the blackout pixel 85b of the main image data and the main image data corresponding to the blackout pixel 85d of the main image data with the same processing image data (the processing image data corresponding to the pixel 86a or the pixel 86c of the reference block 86 of the processing image data, or to the pixel 88a of the reference block 88 of the processing image data).
- Alternatively, the correction unit 33b may replace the main image data in which whiteout or blackout has occurred in the target block with image data generated based on the processing image data corresponding to the plurality of pixels acquired in the plurality of reference blocks of the processing image data selected in (i) or (ii) above. For example, when the reference blocks 86 and 88 are selected, the correction unit 33b replaces the main image data corresponding to the blackout pixel 85b of the main image data and the main image data corresponding to the blackout pixel 85d of the main image data with, for example, the average value of the processing image data acquired in the reference blocks 86 and 88.
- When calculating the average value of the processing image data, the replacement may use, instead of a simple average, a weighted average weighted according to the distance from the whiteout or blackout pixel. For example, since the pixel 86a is closer to the blackout pixel 85b than the pixel 86b is, the weighting is set so that the contribution ratio of the processing image data corresponding to the pixel 86a is higher than the contribution ratio of the processing image data corresponding to the pixel 86b.
- Instead of the average value, the median value of the processing image data corresponding to the pixels 86a to 86d and the pixels 88a to 88d may be calculated, and the main image data corresponding to the blackout pixels 85b and 85d may be replaced with that median value.
- Alternatively, as the first correction process, the correction unit 33b replaces all of the main image data of the target block in which whiteout or blackout has occurred in the main image data, using the processing image data acquired in one block of the processing image data. As the mode of this process (2-1), for example, any one of the following modes (i) to (iii) is used.
- The correction unit 33b replaces the main image data of the target block in which whiteout or blackout has occurred in the main image data with the processing image data acquired in the one block of the processing image data (the target block or a reference block) corresponding to the position closest to the whiteout or blackout area. When there are a plurality of whiteout or blackout pixels in the target block of the main image data, the main image data of those whiteout or blackout pixels is replaced with different processing image data acquired in the above-described one block corresponding to the closest position.
- For example, the correction unit 33b performs the replacement as follows. That is, the correction unit 33b replaces the main image data of the target block in which whiteout or blackout has occurred in the main image data with the processing image data acquired in the target block of the processing image data located at the position corresponding to that target block. For example, based on the processing image data corresponding to the pixels 85a to 85d included in the target block 85 of the processing image data, the main image data corresponding to the blackout pixel 85b in the target block 85 of the main image data is replaced with the processing image data corresponding to the pixel 85b of the target block 85 of the processing image data, and the main image data corresponding to the blackout pixel 85d in the target block 85 of the main image data is replaced with the processing image data corresponding to the pixel 85d of the target block 85 of the processing image data.
- Alternatively, the correction unit 33b performs the replacement as follows. That is, the correction unit 33b replaces the main image data of the target block in which whiteout or blackout has occurred in the main image data with different processing image data acquired in a reference block around the corresponding target block of the processing image data. For example, based on the processing image data corresponding to the pixels 86a to 86d included in the reference block 86 of the processing image data located adjacent to the blackout pixels (the pixels 85b and 85d of the main image data) around the target block 85 of the processing image data, the following replacement is performed. That is, the main image data corresponding to the blackout pixel 85b of the main image data is replaced with the processing image data corresponding to the pixel 86a of the reference block 86 of the processing image data, and the main image data corresponding to the blackout pixel 85d of the main image data is replaced with the processing image data corresponding to the pixel 86c of the reference block 86 of the processing image data.
- Alternatively, the correction unit 33b selects one reference block from among the reference blocks to which the imaging condition set most frequently for the same subject element (the mountain) is applied (in this example, the fourth imaging condition), and replaces the plurality of main image data of the target block in which whiteout or blackout has occurred in the main image data with different processing image data acquired in that reference block. For example, based on the processing image data corresponding to the pixels 86a to 86d of the selected reference block 86, the replacement is performed as follows. That is, the main image data corresponding to the blackout pixel 85b of the main image data is replaced with the processing image data corresponding to the pixel 86b of the reference block 86 of the processing image data, and the main image data corresponding to the blackout pixel 85d of the main image data is replaced with the processing image data corresponding to the pixel 86d of the reference block 86 of the processing image data.
- Alternatively, the correction unit 33b may replace the whiteout or blackout main image data with image data generated based on the processing image data corresponding to the four pixels acquired in the one reference block of the processing image data selected in (i-2) or (ii) above. For example, the correction unit 33b replaces the main image data corresponding to the blackout pixel 85b of the main image data with the average value of the processing image data corresponding to the pixels 86a and 86b included in the reference block 86 of the processing image data, and replaces the main image data corresponding to the blackout pixel 85d of the main image data with the average value of the processing image data corresponding to the pixels 86c and 86d included in the reference block 86 of the processing image data.
- When calculating the average value of the processing image data, the replacement may use, instead of a simple average, a weighted average weighted according to the distance from the whiteout or blackout pixel. For example, since the pixel 86a is closer to the blackout pixel 85b than the pixel 86b is, the weighting is set so that the contribution ratio of the processing image data corresponding to the pixel 86a is higher than the contribution ratio of the processing image data corresponding to the pixel 86b.
- Alternatively, as the first correction process, the correction unit 33b replaces all of the main image data of the target block in which whiteout or blackout has occurred in the main image data, using the processing image data acquired in a plurality of blocks of the processing image data. As the mode of this process (2-2), for example, any one of the following modes (i) to (iii) is used.
- The correction unit 33b replaces the main image data of the target block in which whiteout or blackout has occurred in the main image data with the processing image data acquired in a plurality of reference blocks of the processing image data corresponding to positions around the whiteout or blackout area. When there are a plurality of whiteout or blackout pixels in the target block of the main image data, the main image data of those whiteout or blackout pixels is replaced with different processing image data acquired in the above-described plurality of blocks.
- For example, based on the processing image data acquired in the reference blocks 86 and 88 adjacent to the blackout pixels (the pixels 85b and 85d of the main image data) around the target block 85 of the processing image data, the main image data corresponding to the blackout pixel 85b is replaced with the processing image data corresponding to the pixel 86a of the reference block 86 of the processing image data, and the main image data corresponding to the blackout pixel 85d is replaced with the processing image data corresponding to the pixel 88b of the reference block 88 of the processing image data.
- Note that, when no whiteout or blackout has occurred in the target block 85 of the processing image data, the correction unit 33b may replace the main image data corresponding to the whiteout or blackout pixels of the target block 85 of the main image data using the processing image data acquired in a plurality of blocks including the target block 85 of the processing image data and the reference blocks 81 to 84 and 86 to 89 located around the target block 85.
- Alternatively, when whiteout or blackout has occurred in the target block of the processing image data, the correction unit 33b selects a plurality of reference blocks from among the reference blocks around the target block of the processing image data to which the imaging condition set most frequently for the same subject element (for example, the mountain) as the whiteout or blackout subject element is applied (in this example, the fourth imaging condition), and replaces the main image data of the target block in which whiteout or blackout has occurred with different processing image data acquired in those reference blocks. For example, based on the processing image data acquired in the reference blocks 86 and 88, which are two reference blocks selected from the reference blocks in which the fourth imaging condition is set among the reference blocks 81 to 84 and 86 to 89 around the target block 85 of the processing image data, the main image data corresponding to the blackout pixel 85b of the main image data is replaced with the processing image data corresponding to the pixel 86a of the reference block 86 of the processing image data, and the main image data corresponding to the blackout pixel 85d of the main image data is replaced with the processing image data corresponding to the pixel 88b of the reference block 88 of the processing image data.
- Note that, when no whiteout or blackout has occurred in the target block 85 of the processing image data, the correction unit 33b may replace the main image data corresponding to the whiteout or blackout pixels of the target block 85 of the main image data using the processing image data acquired in a plurality of blocks including the target block 85 of the processing image data and the reference blocks 81 to 84 and 86 to 89 located around the target block 85.
- Alternatively, the correction unit 33b may replace the main image data in which whiteout or blackout has occurred in the target block with image data generated based on the processing image data corresponding to the plurality of pixels acquired in the plurality of reference blocks of the processing image data selected in (i) or (ii) above. For example, when the reference blocks 86 and 88 are selected, the correction unit 33b replaces the main image data corresponding to the blackout pixels 85b and 85d of the main image data as follows. That is, the main image data corresponding to the blackout pixel 85b of the main image data is replaced with the average value of the processing image data corresponding to the pixels 86a to 86d included in the reference block 86 of the processing image data, and the main image data corresponding to the blackout pixel 85d of the main image data is replaced with the average value of the processing image data corresponding to the pixels 88a to 88d included in the reference block 88 of the processing image data.
- When calculating the average value of the processing image data, the replacement may use, instead of a simple average, a weighted average weighted according to the distance from the whiteout or blackout pixel. For example, since the pixel 86a is closer to the blackout pixel 85b than the pixel 86b is, the weighting is set so that the contribution ratio of the processing image data corresponding to the pixel 86a is higher than the contribution ratio of the processing image data corresponding to the pixel 86b.
- Instead of the average value, the median value of the processing image data corresponding to the pixels 86a to 86d and the pixels 88a to 88d may be calculated, and the main image data corresponding to the blackout pixels 85b and 85d may be replaced with that median value.
- Which mode of the first correction process is performed is determined by the control unit 34 based on, for example, the setting state (including operation menu settings) of the operation member 36. Note that the control unit 34 may determine which mode of the first correction process is performed depending on the imaging scene mode set in the camera 1 or the type of the detected subject element.
- In the above description, the processing image data is acquired by the imaging unit 32, but the processing image data may be acquired by an imaging unit other than the imaging unit 32. For example, an imaging unit other than the imaging unit 32 may be provided in the camera 1, and the processing image data may be acquired by that imaging unit. The processing image data may also be acquired by an imaging unit of a camera other than the camera 1, or by a sensor other than the image sensor, such as a photometric sensor. In these cases, it is preferable that the range of the scene imaged when acquiring the processing image data is the same as the range of the scene imaged when acquiring the main image data.
- However, the first correction process can be performed as long as the range of the scene imaged when acquiring the processing image data at least partially overlaps the range of the scene imaged when acquiring the main image data. Further, if the processing image data can be acquired almost simultaneously with the main image data, processing is possible even when a moving subject is captured.
- The processing image data may also be recorded in the recording unit 37, read out from the recording unit 37, and then used for the first correction process. The timing at which the processing image data is recorded in the recording unit 37 may be immediately before or after the acquisition of the main image data, or the processing image data may be recorded in advance before the acquisition of the main image data.
- The correction unit 33b of the image processing unit 33 further performs the following second correction process, as necessary, before the image processing on the main image data, before the focus detection process for capturing the main image data, before the subject detection process (detecting subject elements) for capturing the main image data, and before the process for setting the imaging conditions for capturing the main image data. Note that the correction unit 33b performs the second correction process after replacing the whiteout or blackout pixels as described above. The image data at the position of a blackout pixel 85b (or 85d) that has been replaced with another pixel is treated as having been captured under the imaging condition of the pixel used for the replacement (for example, the pixel 86a), and the following second correction process is then performed. Alternatively, it may be treated as having been captured under an imaging condition whose value (an average value or a median value) lies between the imaging conditions of the respective blocks; for example, data replaced between a block at ISO sensitivity 100 and a block at ISO sensitivity 1600 may be treated as having been captured at ISO sensitivity 800, which lies between ISO sensitivity 100 and ISO sensitivity 1600.
- When the image processing on the main image data obtained by applying different imaging conditions to the divided regions is predetermined image processing, the correction unit 33b of the image processing unit 33 performs the second correction process on the main image data located at the boundary portions of the regions, as a pre-process of the image processing. The predetermined image processing is processing that calculates the main image data at a target position to be processed in the image by referring to the main image data at a plurality of reference positions around the target position; for example, pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing correspond to this. The second correction process is performed to alleviate the discontinuity that appears in the image after the image processing due to the difference in imaging conditions between the divided regions.
- When the target position is located at a boundary portion of the divided regions, the main image data at the plurality of reference positions around the target position may include both main image data to which the same imaging condition as that of the main image data at the target position is applied and main image data to which a different imaging condition is applied. In this embodiment, the second correction process is performed as follows.
- FIG. 9A is an enlarged view of the region of interest 90 at the boundary between the first region 61 and the fourth region 64 in the live view image 60a. Image data from the pixels on the image sensor 32a corresponding to the first region 61, for which the first imaging condition is set, is shown in white, and image data from the pixels on the image sensor 32a corresponding to the fourth region 64, for which the fourth imaging condition is set, is shaded. In FIG. 9A, the target pixel P is located on the first region 61, in the vicinity of the boundary 91 between the first region 61 and the fourth region 64, that is, at the boundary portion.
- FIG. 9B is an enlarged view of the target pixel P and the reference pixels Pr1 to Pr8.
- the position of the target pixel P is the target position, and the positions of the reference pixels Pr1 to Pr8 surrounding the target pixel P are reference positions.
- The first imaging condition is set for the target pixel P and the reference pixels Pr1 to Pr6 corresponding to the first region 61, and the fourth imaging condition is set for the reference pixels Pr7 and Pr8 corresponding to the fourth region 64. Hereinafter, the reference pixels Pr1 to Pr8 are collectively referred to as the reference pixels Pr.
- The generation unit 33c of the image processing unit 33 normally performs image processing by referring to the main image data of the reference pixels Pr as they are, without performing the second correction process. However, when the imaging condition applied to the target pixel P (referred to as the first imaging condition) differs from the imaging condition applied to some of the reference pixels Pr around the target pixel P (referred to as the fourth imaging condition), the correction unit 33b performs the second correction process on the main image data of the fourth imaging condition among the main image data of the reference pixels Pr, as in the following (Example 1) to (Example 3). The generation unit 33c then performs the image processing that calculates the main image data of the target pixel P by referring to the main image data of the reference pixels Pr after the second correction process.
- (Example 1) For example, when only the ISO sensitivity differs between the first imaging condition and the fourth imaging condition, the ISO sensitivity of the first imaging condition being 100 and the ISO sensitivity of the fourth imaging condition being 800, the correction unit 33b of the image processing unit 33 multiplies the main image data of the reference pixels Pr7 and Pr8 of the fourth imaging condition among the main image data of the reference pixels Pr by 100/800 as the second correction process. This reduces the difference between the main image data due to the difference in imaging conditions. Note that when the amount of light incident on the target pixel P and the amount of light incident on the reference pixels Pr are the same, the difference in the main image data becomes small; however, when the amounts of incident light differ, the difference in the main image data may not become small. The same applies to the examples described later.
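- A minimal sketch of this exposure-ratio scaling, assuming a linear sensor response and hypothetical function and variable names:

```python
def scale_to_target_condition(reference_value, target_iso, reference_iso):
    """Multiply data captured under a different ISO sensitivity so that it is
    comparable with data captured under the target pixel's imaging condition."""
    return reference_value * (target_iso / reference_iso)

# reference pixels Pr7 and Pr8 captured at ISO 800, target pixel P at ISO 100
corrected_pr7 = scale_to_target_condition(4000, target_iso=100, reference_iso=800)  # 500.0
```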
- (Example 2) For example, when only the shutter speed differs between the first imaging condition and the fourth imaging condition, the shutter speed of the first imaging condition being 1/1000 second and the shutter speed of the fourth imaging condition being 1/100 second, the correction unit 33b of the image processing unit 33 multiplies the main image data of the fourth imaging condition among the main image data of the reference pixels Pr by (1/1000)/(1/100) = 1/10 as the second correction process. This reduces the difference between the main image data due to the difference in imaging conditions.
- (Example 3) For example, when only the frame rate differs between the first imaging condition and the fourth imaging condition (the charge accumulation time being the same), the frame rate of the first imaging condition being 30 fps and the frame rate of the fourth imaging condition being 60 fps, the correction unit 33b of the image processing unit 33 adopts, for the main image data of the fourth imaging condition (60 fps) among the main image data of the reference pixels Pr, the main image data of the frame image whose acquisition start timing is close to that of the frame image acquired under the first imaging condition (30 fps); this is the second correction process. This reduces the difference between the main image data due to the difference in imaging conditions. Alternatively, interpolation calculation may be performed on the main image data of a plurality of frame images acquired under the fourth imaging condition (60 fps), based on the frame image whose acquisition start timing is close to that of the frame image acquired under the first imaging condition (30 fps), and this may be used as the second correction process.
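- As a sketch of the frame-selection variant of (Example 3), the helper below simply picks, from the 60 fps frames, the one whose acquisition start time is nearest to that of the 30 fps frame; the function name and the time representation are hypothetical.

```python
def pick_nearest_frame(target_start_time, candidate_frames):
    """candidate_frames: list of (start_time, frame_data) captured at 60 fps.
    Return the frame whose acquisition start timing is closest to the
    given 30 fps frame's start time."""
    nearest = min(candidate_frames, key=lambda f: abs(f[0] - target_start_time))
    return nearest[1]
```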
- On the other hand, when the imaging condition applied to the target pixel P (the first imaging condition) and the imaging condition applied to all the reference pixels Pr around the target pixel P (the fourth imaging condition) are the same, the correction unit 33b of the image processing unit 33 does not perform the second correction process on the main image data of the reference pixels Pr. That is, the generation unit 33c performs the image processing that calculates the main image data of the target pixel P by referring to the main image data of the reference pixels Pr as they are. As described above, even if there are some differences in the imaging conditions, they are regarded as the same imaging condition.
- Pixel defect correction processing is one of the image processing operations performed at the time of imaging. In general, the image sensor 32a, which is a solid-state image sensor, may develop pixel defects during or after manufacturing and output image data of an abnormal level. Therefore, the generation unit 33c of the image processing unit 33 corrects the main image data output from the pixels in which pixel defects have occurred, so that the main image data at the pixel positions where the pixel defects have occurred becomes inconspicuous. For example, the generation unit 33c of the image processing unit 33 takes a pixel at the position of a pixel defect recorded in advance in a nonvolatile memory (not shown) as the target pixel P (processing target pixel) in an image of one frame, and takes the pixels (eight pixels in this example) around the target pixel P included in the region of interest 90 (for example, 3 × 3 pixels) centered on the target pixel P as the reference pixels Pr. The generation unit 33c of the image processing unit 33 then calculates the maximum value and the minimum value of the main image data of the reference pixels Pr, and when the main image data output from the target pixel P exceeds this maximum value or falls below this minimum value, performs Max, Min filter processing that replaces the main image data output from the target pixel P with the maximum value or the minimum value. Such processing is performed for all pixel defects whose position information is recorded in the nonvolatile memory (not shown).
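- A minimal sketch of such Max, Min filter processing, assuming the image is held as a 2-D list and the defect positions are a list of (row, column) pairs (both hypothetical representations):

```python
def max_min_filter(image, defect_positions):
    """Clamp each recorded defect pixel to the maximum/minimum of the
    surrounding pixels in its 3 x 3 region of interest."""
    h, w = len(image), len(image[0])
    for r, c in defect_positions:
        neighbours = [image[rr][cc]
                      for rr in range(max(0, r - 1), min(h, r + 2))
                      for cc in range(max(0, c - 1), min(w, c + 2))
                      if (rr, cc) != (r, c)]
        hi, lo = max(neighbours), min(neighbours)
        if image[r][c] > hi:
            image[r][c] = hi   # abnormally high output replaced by the maximum
        elif image[r][c] < lo:
            image[r][c] = lo   # abnormally low output replaced by the minimum
    return image
```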
- At this time, when a pixel to which the fourth imaging condition, different from the first imaging condition applied to the target pixel P, is applied is included among the reference pixels Pr, the correction unit 33b of the image processing unit 33 performs the second correction process on the main image data to which the fourth imaging condition is applied. Thereafter, the generation unit 33c of the image processing unit 33 performs the Max, Min filter processing described above.
- Color interpolation processing is one of the image processing operations performed at the time of imaging. As illustrated in FIG. 3, in the imaging chip 111 of the imaging device 100, green pixels Gb and Gr, blue pixels B, and red pixels R are arranged in a Bayer array. Since each pixel position lacks the main image data of the color components different from the color component of the color filter F arranged at that position, the generation unit 33c of the image processing unit 33 performs color interpolation processing that generates the main image data of the missing color components by referring to the main image data at the surrounding pixel positions.
- FIG. 10A is a diagram illustrating the arrangement of the main image data output from the image sensor 32a. Each pixel position has one of the R, G, and B color components in accordance with the Bayer array rule.
- <G color interpolation> First, general G color interpolation will be described.
- In the G color interpolation, the generation unit 33c of the image processing unit 33 takes the positions of the R color component and the B color component in turn as the target position, and generates the main image data of the G color component at the target position by referring to the main image data of the four G color components at the reference positions around the target position. For example, when generating the main image data of the G color component at the target position indicated by the thick frame (second row, second column) in FIG. 10B, the main image data G1 to G4 of the four G color components located in the vicinity of the target position are referred to. The generation unit 33c of the image processing unit 33 takes, for example, (aG1 + bG2 + cG3 + dG4) / 4 as the main image data of the G color component at the target position (second row, second column). Note that a to d are weighting coefficients provided according to the distance between the reference position and the target position and according to the image structure.
- Here, as shown in FIGS. 10A to 10C, the first imaging condition is applied to the region to the left of and above the thick line, and the fourth imaging condition is applied to the region to the right of and below the thick line. The main image data G1 to G4 of the G color component in FIG. 10B are located at the reference positions for the image processing of the pixel at the target position (second row, second column). The first imaging condition is applied to the target position (second row, second column). Among the reference positions, the first imaging condition is applied to the main image data G1 to G3, and the fourth imaging condition is applied to the main image data G4. Therefore, the correction unit 33b of the image processing unit 33 performs the second correction process on the main image data G4. Thereafter, the generation unit 33c of the image processing unit 33 calculates the main image data of the G color component at the target position (second row, second column).
- The generation unit 33c of the image processing unit 33 generates the main image data of the G color component at each position of the B color component and the R color component in FIG. 10A, whereby the main image data of the G color component is obtained at each pixel position as shown in FIG. 10C.
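- The weighted four-point interpolation above can be written compactly as follows; the function name is hypothetical, and any reference value captured under a different imaging condition is assumed to have already been put through the second correction process.

```python
def interpolate_g(g_refs, weights=(1, 1, 1, 1)):
    """g_refs = (G1, G2, G3, G4), weights = (a, b, c, d); returns
    (a*G1 + b*G2 + c*G3 + d*G4) / 4 as in the formula above."""
    a, b, c, d = weights
    g1, g2, g3, g4 = g_refs
    return (a * g1 + b * g2 + c * g3 + d * g4) / 4
```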
- FIG. 11A is a diagram obtained by extracting the main image data of the R color component from FIG. 10A. The generation unit 33c of the image processing unit 33 calculates the main image data of the color difference component Cr illustrated in FIG. 11B based on the main image data of the G color component illustrated in FIG. 10C and the main image data of the R color component illustrated in FIG. 11A.
- When generating the main image data of the color difference component Cr at the target position indicated by the thick frame (second row, second column) in FIG. 11B, the generation unit 33c of the image processing unit 33 refers to the main image data Cr1 to Cr4 of the four color difference components located in the vicinity of the target position (second row, second column). The generation unit 33c of the image processing unit 33 takes, for example, (eCr1 + fCr2 + gCr3 + hCr4) / 4 as the main image data of the color difference component Cr at the target position (second row, second column). Note that e to h are weighting coefficients provided according to the distance between the reference position and the target position and according to the image structure.
- Similarly, when generating the main image data of the color difference component Cr at the target position indicated by the thick frame (second row, third column) in FIG. 11C, the generation unit 33c of the image processing unit 33 refers to the main image data Cr2 and Cr4 to Cr6 of the four color difference components located in the vicinity of the target position (second row, third column). The generation unit 33c of the image processing unit 33 takes, for example, (qCr2 + rCr4 + sCr5 + tCr6) / 4 as the main image data of the color difference component Cr at the target position (second row, third column). Note that q to t are weighting coefficients provided according to the distance between the reference position and the target position and according to the image structure. In this way, the main image data of the color difference component Cr is generated for each pixel position.
- Here, as shown in FIGS. 11A to 11C, the first imaging condition is applied to the region to the left of and above the thick line, and the fourth imaging condition is applied to the region to the right of and below the thick line. Note that the first imaging condition and the fourth imaging condition are different.
- In FIG. 11B, the position indicated by the thick frame (second row, second column) is the target position of the color difference component Cr. The main image data Cr1 to Cr4 of the color difference components in FIG. 11B are located at the reference positions for the image processing of the pixel at the target position (second row, second column). The first imaging condition is applied to the target position (second row, second column). Among the reference positions, the first imaging condition is applied to the main image data Cr1, Cr3, and Cr4, and the fourth imaging condition is applied to the main image data Cr2. Therefore, the correction unit 33b of the image processing unit 33 performs the second correction process on the main image data Cr2. Thereafter, the generation unit 33c of the image processing unit 33 calculates the main image data of the color difference component Cr at the target position (second row, second column).
- In FIG. 11C, the position indicated by the thick frame (second row, third column) is the target position of the color difference component Cr. The main image data Cr2 and Cr4 to Cr6 of the color difference components in FIG. 11C are located at the reference positions for the image processing of the pixel at the target position (second row, third column). The fourth imaging condition is applied to the target position (second row, third column). Among the reference positions, the first imaging condition is applied to the main image data Cr4 and Cr5, and the fourth imaging condition is applied to the main image data Cr2 and Cr6. Therefore, the correction unit 33b of the image processing unit 33 performs the second correction process on each of the main image data Cr4 and Cr5. Thereafter, the generation unit 33c of the image processing unit 33 calculates the main image data of the color difference component Cr at the target position (second row, third column).
- After obtaining the main image data of the color difference component Cr at each pixel position, the generation unit 33c of the image processing unit 33 adds the main image data of the G color component shown in FIG. 10C corresponding to each pixel position, whereby the main image data of the R color component is obtained at each pixel position.
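- The same pattern, interpolating the color difference component and then adding back the G component, can be sketched as follows; the function name and the uniform default weights are hypothetical.

```python
def reconstruct_r(cr_refs, g_at_target, weights=(1, 1, 1, 1)):
    """cr_refs = (Cr1, Cr2, Cr3, Cr4) around the target position.
    Interpolate Cr as (e*Cr1 + f*Cr2 + g*Cr3 + h*Cr4) / 4, then add the
    G component already obtained at the target position to recover R."""
    e, f, g, h = weights
    cr1, cr2, cr3, cr4 = cr_refs
    cr = (e * cr1 + f * cr2 + g * cr3 + h * cr4) / 4
    return cr + g_at_target
```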
- FIG. 12A is a diagram obtained by extracting the main image data of the B color component from FIG. 10A. The generation unit 33c of the image processing unit 33 calculates the main image data of the color difference component Cb illustrated in FIG. 12B based on the main image data of the G color component illustrated in FIG. 10C and the main image data of the B color component illustrated in FIG. 12A.
- When generating the main image data of the color difference component Cb at the target position indicated by the thick frame (third row, third column) in FIG. 12B, the generation unit 33c of the image processing unit 33 refers to the main image data Cb1 to Cb4 of the four color difference components located in the vicinity of the target position, and takes, for example, (uCb1 + vCb2 + wCb3 + xCb4) / 4 as the main image data of the color difference component Cb at the target position (third row, third column). Note that u to x are weighting coefficients provided according to the distance between the reference position and the target position and according to the image structure.
- Similarly, when generating the main image data of the color difference component Cb at the target position indicated by the thick frame (third row, fourth column) in FIG. 12C, the generation unit 33c of the image processing unit 33 refers to the main image data Cb2 and Cb4 to Cb6 of the four color difference components located in the vicinity of the target position (third row, fourth column). The generation unit 33c of the image processing unit 33 takes, for example, (yCb2 + zCb4 + αCb5 + βCb6) / 4 as the main image data of the color difference component Cb at the target position (third row, fourth column). Note that y, z, α, and β are weighting coefficients provided according to the distance between the reference position and the target position and according to the image structure. In this way, the main image data of the color difference component Cb is generated for each pixel position.
- Here, as shown in FIGS. 12A to 12C, the first imaging condition is applied to the region to the left of and above the thick line, and the fourth imaging condition is applied to the region to the right of and below the thick line. Note that the first imaging condition and the fourth imaging condition are different.
- In FIG. 12B, the position indicated by the thick frame (third row, third column) is the target position of the color difference component Cb. The main image data Cb1 to Cb4 of the color difference components in FIG. 12B are located at the reference positions for the image processing of the pixel at the target position (third row, third column). The fourth imaging condition is applied to the target position (third row, third column). Among the reference positions, the first imaging condition is applied to the main image data Cb1 and Cb3, and the fourth imaging condition is applied to the main image data Cb2 and Cb4. Therefore, the correction unit 33b of the image processing unit 33 performs the second correction process on each of the main image data Cb1 and Cb3. Thereafter, the generation unit 33c of the image processing unit 33 calculates the main image data of the color difference component Cb at the target position (third row, third column). In FIG. 12C, the position indicated by the thick frame (third row, fourth column) is the target position of the color difference component Cb.
- The main image data Cb2 and Cb4 to Cb6 of the color difference components in FIG. 12C are located at the reference positions for the image processing of the pixel at the target position (third row, fourth column). The fourth imaging condition is applied to the target position (third row, fourth column), and the fourth imaging condition is also applied to the main image data Cb2 and Cb4 to Cb6 at all the reference positions. Therefore, the generation unit 33c of the image processing unit 33 calculates the main image data of the color difference component Cb at the target position (third row, fourth column) by referring to the main image data at the reference positions as they are, without the second correction process being performed by the correction unit 33b.
- After obtaining the main image data of the color difference component Cb at each pixel position, the generation unit 33c of the image processing unit 33 adds the main image data of the G color component shown in FIG. 10C corresponding to each pixel position, whereby the main image data of the B color component is obtained at each pixel position.
- Note that, in the G color interpolation described above, for example, when the main image data of the G color component is generated at the target position indicated by the thick frame (second row, second column) in FIG. 10B, the main image data G1 to G4 of the four G color components located around the target position are referred to; however, the number of main image data of the G color component that is referred to may be changed depending on the image structure.
- By the correction unit 33b performing the first correction process, the second correction process, and the interpolation process as described above, an image in which the blackout is corrected can be generated even when the blackout pixels 85b and 85d occur.
- In an image of one frame, the generation unit 33c of the image processing unit 33 performs, for example, a known linear filter operation using a kernel of a predetermined size centered on the target pixel P (processing target pixel). When the kernel size of the sharpening filter, which is an example of the linear filter, is N × N pixels, the position of the target pixel P is the target position, and the positions of the (N² - 1) reference pixels Pr surrounding the target pixel P are the reference positions. Note that the kernel size may be N × M pixels. The generation unit 33c of the image processing unit 33 performs the filter processing that replaces the main image data of the target pixel P with the result of the linear filter operation on each horizontal line, for example from the upper horizontal line toward the lower horizontal line of the frame image, while shifting the target pixel from left to right.
- At this time, when a pixel to which the fourth imaging condition, different from the first imaging condition applied to the target pixel P, is applied is included among the reference pixels Pr, the correction unit 33b of the image processing unit 33 performs the second correction process on the main image data to which the fourth imaging condition is applied. Thereafter, the generation unit 33c of the image processing unit 33 performs the linear filter processing described above.
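- A minimal sketch of such a raster-scan linear filter, assuming a grayscale image held as a 2-D list and a 3 × 3 sharpening kernel (both hypothetical; the description above only requires that the kernel be centered on the target pixel P and that corrected reference data be used):

```python
def apply_kernel(image, kernel):
    """Replace each interior target pixel with the kernel response computed
    from its N x N neighbourhood, scanning line by line from top-left."""
    h, w = len(image), len(image[0])
    n = len(kernel)        # kernel is n x n with n odd
    k = n // 2
    out = [row[:] for row in image]
    for r in range(k, h - k):          # upper horizontal line to lower
        for c in range(k, w - k):      # shift the target pixel left to right
            acc = 0.0
            for i in range(n):
                for j in range(n):
                    acc += kernel[i][j] * image[r - k + i][c - k + j]
            out[r][c] = acc
    return out

sharpen_3x3 = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]  # example sharpening kernel
```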
- Similarly, for noise reduction, the generation unit 33c of the image processing unit 33 performs, for example, a known linear filter operation using a kernel of a predetermined size centered on the target pixel P (processing target pixel) in an image of one frame. When the kernel size of the smoothing filter, which is an example of the linear filter, is N × N pixels, the position of the target pixel P is the target position, and the positions of the (N² - 1) reference pixels Pr surrounding the target pixel P are the reference positions. Note that the kernel size may be N × M pixels. The generation unit 33c of the image processing unit 33 performs the filter processing that replaces the main image data of the target pixel P with the result of the linear filter operation on each horizontal line, for example from the upper horizontal line toward the lower horizontal line of the frame image, while shifting the target pixel from left to right.
- At this time, when a pixel to which the fourth imaging condition, different from the first imaging condition applied to the target pixel P, is applied is included among the reference pixels Pr, the correction unit 33b of the image processing unit 33 performs the second correction process on the main image data to which the fourth imaging condition is applied. Thereafter, the generation unit 33c of the image processing unit 33 performs the linear filter processing described above.
- Note that the setting unit 34b may set the entire area of the imaging surface of the image sensor 32a as the processing imaging area, or may set a partial area of the imaging surface of the image sensor 32a as the processing imaging area. When a partial area of the imaging surface is set as the processing imaging area, the setting unit 34b sets, at least for an area of a predetermined number of blocks sandwiching the boundary between the first area 61 and the sixth area 66, an imaging condition different from the imaging condition set when capturing the main image data. Alternatively, the setting unit 34b may extract, from the processing image data captured by setting the entire area of the imaging surface of the image sensor 32a as the processing imaging area, image data relating to a predetermined number of blocks sandwiching at least the boundary between the first area 61 and the sixth area 66, and use it as the processing image data. For example, the setting unit 34b sets the fourth imaging condition, extracts, from the processing image data captured using the entire area of the imaging surface of the image sensor 32a, image data relating to a predetermined number of blocks sandwiching the boundary between the first area 61 and the fourth area 64 of the main image data, and generates it as the processing image data.
- The processing imaging area is not limited to one set in correspondence with the areas set in the main image data (the first to sixth areas in the above example). For example, a processing imaging area may be set in advance in a partial area of the imaging surface of the image sensor 32a. When a processing imaging area is set in advance near the center of the imaging surface of the image sensor 32a, processing image data can be generated in an area where the main subject is highly likely to be located when a person is imaged at the center of the screen, as in a portrait. The size of the processing imaging area may be changeable based on a user operation, or may be fixed at a preset size.
- In the first correction process, pixels in which whiteout or blackout has occurred are replaced using the processing image data. The method of replacing the focus detection signals of whiteout or blackout pixels with other focus detection signals is the same as the method of replacing the image data of whiteout or blackout pixels, and its details are therefore omitted. For the focus detection, the image data replaced by the first correction process may be used.
- The lens movement control unit 34d of the control unit 34 performs the focus detection process using the signal data (image data) corresponding to a predetermined position (focus point) on the imaging screen. When different imaging conditions are set for the divided regions and the focus point of the AF operation is located at a boundary portion of the divided regions, the lens movement control unit 34d of the control unit 34 performs the second correction process on the focus detection signal data of at least one region as a pre-process of the focus detection process. The second correction process is performed in order to suppress a decrease in the accuracy of the focus detection process due to the difference in imaging conditions between the regions of the imaging screen divided by the setting unit 34b.
- When the focus detection signal data for detecting the image shift amount (phase difference) in the image is located at a boundary portion of the divided regions, signal data to which different imaging conditions are applied may be mixed in the focus detection signal data. In this embodiment, rather than detecting the image shift amount (phase difference) using the signal data to which the different imaging conditions are applied as they are, the second correction process is performed as follows, based on the idea that it is preferable to detect the image shift amount (phase difference) using signal data on which the second correction process has been performed so as to suppress the difference between the signal data due to the difference in imaging conditions.
- The lens movement control unit 34d (generation unit) of the control unit 34 calculates the defocus amount of the imaging optical system 31 by detecting the image shift amounts (phase differences) of a plurality of subject images formed by light beams that have passed through different pupil regions of the imaging optical system 31. The lens movement control unit 34d of the control unit 34 then adjusts the focus of the imaging optical system 31 by moving the focus lens of the imaging optical system 31 to the position where the defocus amount becomes zero (equal to or less than an allowable value), that is, to the in-focus position.
- FIG. 13 is a diagram illustrating the position of the focus detection pixel on the imaging surface of the imaging device 32a.
- focus detection pixels are discretely arranged along the X-axis direction (horizontal direction) of the imaging chip 111.
- fifteen focus detection pixel lines 160 are provided at predetermined intervals.
- the focus detection pixels constituting the focus detection pixel line 160 output a photoelectric conversion signal for focus detection.
- Normal imaging pixels are provided at pixel positions other than those of the focus detection pixel lines 160. The imaging pixels output photoelectric conversion signals for the live view image or for recording.
- FIG. 14 is an enlarged view of a part of the focus detection pixel line 160 corresponding to the focus point 80A shown in FIG.
- In FIG. 14, red pixels R, green pixels G (Gb, Gr), blue pixels B, a focus detection pixel S1, and a focus detection pixel S2 are illustrated.
- the red pixel R, the green pixel G (Gb, Gr), and the blue pixel B are arranged according to the rules of the Bayer arrangement described above.
- the square area illustrated for the red pixel R, the green pixel G (Gb, Gr), and the blue pixel B indicates the light receiving area of the imaging pixel.
- Each imaging pixel receives a light beam that has passed through the exit pupil of the imaging optical system 31 (FIG. 1). That is, the red pixels R, the green pixels G (Gb, Gr), and the blue pixels B each have a square mask opening, and light that has passed through the mask opening reaches the light receiving portion of the imaging pixel. Note that the shape of the light receiving region (mask opening) of the red pixels R, the green pixels G (Gb, Gr), and the blue pixels B is not limited to a quadrangle, and may be, for example, circular.
- the semicircular region exemplified for the focus detection pixel S1 and the focus detection pixel S2 indicates a light receiving region of the focus detection pixel. That is, the focus detection pixel S1 has a semicircular mask opening on the left side of the pixel position in FIG. 14, and the light passing through the mask opening reaches the light receiving portion of the focus detection pixel S1. On the other hand, the focus detection pixel S2 has a semicircular mask opening on the right side of the pixel position in FIG. 14, and light passing through the mask opening reaches the light receiving portion of the focus detection pixel S2. As described above, the focus detection pixel S1 and the focus detection pixel S2 respectively receive a pair of light beams passing through different areas of the exit pupil of the imaging optical system 31 (FIG. 1).
- Note that the positions of the focus detection pixel lines 160 in the imaging chip 111 are not limited to the positions illustrated in FIG. 13, and the number of focus detection pixel lines 160 is not limited to the example of FIG. 13. The shape of the mask openings of the focus detection pixel S1 and the focus detection pixel S2 is also not limited to a semicircular shape; for example, the rectangular light receiving regions (mask openings) of the imaging pixels R, G, and B may be divided in the horizontal direction into rectangular shapes.
- the focus detection pixel line 160 in the imaging chip 111 may be a line in which focus detection pixels are arranged along the Y-axis direction (vertical direction) of the imaging chip 111.
- An imaging device in which imaging pixels and focus detection pixels are two-dimensionally arranged as shown in FIG. 14 is known, and detailed illustration and description of these pixels are omitted.
- In the example described above, the focus detection pixels S1 and S2 each receive one of a pair of focus detection light beams, that is, a so-called 1PD structure is used. Instead, the focus detection pixels may each receive both of the pair of focus detection light beams, that is, a so-called 2PD structure may be used. With the 2PD structure, the photoelectric conversion signals obtained by the focus detection pixels can also be used as photoelectric conversion signals for recording.
- The lens movement control unit 34d of the control unit 34 detects the image shift amount (phase difference) between a pair of images formed by a pair of light beams that have passed through different regions of the imaging optical system 31 (FIG. 1), based on the focus detection photoelectric conversion signals output from the focus detection pixel S1 and the focus detection pixel S2, and calculates the defocus amount based on the image shift amount (phase difference). Such defocus amount calculation by the pupil division phase difference method is well known in the camera field, and its detailed description is therefore omitted.
- It is assumed that the focus point 80A (FIG. 13) has been selected by the user at a position corresponding to the region of interest 90 at the boundary between the first region 61 and the fourth region 64 in the live view image 60a.
- FIG. 15 is an enlarged view of the focus point 80A.
- a white background pixel indicates that the first imaging condition is set, and a shaded pixel indicates that the fourth imaging condition is set.
- the position surrounded by the frame 170 corresponds to the focus detection pixel line 160 (FIG. 13).
- Normally, the lens movement control unit 34d of the control unit 34 performs the focus detection process using the signal data from the focus detection pixels indicated by the frame 170 as they are, without performing the second correction process. However, when signal data to which the first imaging condition is applied and signal data to which the fourth imaging condition is applied are mixed in the signal data surrounded by the frame 170, the lens movement control unit 34d of the control unit 34 performs the second correction process on the signal data of the fourth imaging condition among the signal data surrounded by the frame 170, as in the following (Example 1) to (Example 3). The lens movement control unit 34d of the control unit 34 then performs the focus detection process using the signal data after the second correction process.
- the lens movement control unit 34d of the control unit 34 differs only in ISO sensitivity between the first imaging condition and the fourth imaging condition, the ISO sensitivity of the first imaging condition is 100, and the ISO sensitivity of the fourth imaging condition. Is 800/100, the signal data of the fourth imaging condition is multiplied by 100/800 as the second correction process. Thereby, the difference between the signal data due to the difference in the imaging conditions is reduced.
- Note that the difference in the signal data becomes small when the amount of light incident on the pixel to which the first imaging condition is applied and the amount of light incident on the pixel to which the fourth imaging condition is applied are the same; when the amounts of incident light originally differ, the difference in the signal data may not become small. The same applies to the examples described later.
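- A minimal sketch of this ISO-based second correction follows; the function name and the example values are only illustrative, and the scaling simply applies the ratio of the two ISO sensitivities described above.

```python
import numpy as np

def second_correction_iso(signal, iso_applied, iso_reference):
    """Scale signal data captured at iso_applied so it becomes comparable to data
    captured at iso_reference (e.g. fourth-condition data referred to the first
    imaging condition)."""
    return np.asarray(signal, dtype=float) * (iso_reference / iso_applied)

# Fourth-condition data (ISO 800) is multiplied by 100/800 to reduce the
# difference from first-condition data (ISO 100).
fourth_condition_data = np.array([800.0, 1600.0, 2400.0])
print(second_correction_iso(fourth_condition_data, iso_applied=800, iso_reference=100))
```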
- (Example 2) When only the shutter speed differs between the first imaging condition and the fourth imaging condition, with the shutter speed of the first imaging condition being 1/1000 second and the shutter speed of the fourth imaging condition being 1/100 second, the lens movement control unit 34d of the control unit 34 multiplies the signal data of the fourth imaging condition by (1/1000)/(1/100) = 1/10 as the second correction process. Thereby, the difference between the signal data due to the difference in the imaging conditions is reduced.
- (Example 3) When only the frame rate differs (the charge accumulation time being the same) between the first imaging condition and the fourth imaging condition, with the frame rate of the first imaging condition being 30 fps and the frame rate of the fourth imaging condition being 60 fps, the second correction process adopts, for the signal data of the fourth imaging condition (60 fps), the signal data of the frame image whose acquisition start timing is close to that of the frame image acquired under the first imaging condition (30 fps). Thereby, the difference between the signal data due to the difference in the imaging conditions is reduced.
- Alternatively, interpolating the signal data of a frame image whose acquisition start timing is close to that of a frame image acquired under the first imaging condition (30 fps), based on frame images acquired under the fourth imaging condition (60 fps), may be used as the second correction process.
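- The sketch below illustrates both variants for (Example 3): adopting the 60 fps frame whose acquisition start time is closest to the 30 fps frame, or linearly interpolating between the two bracketing 60 fps frames. The function and variable names are hypothetical.

```python
import numpy as np

def second_correction_frame_rate(frames_60fps, start_times_60fps, t_target):
    """Return (adopted, interpolated) fourth-condition data aligned to the
    acquisition start time t_target of a first-condition (30 fps) frame."""
    t = np.asarray(start_times_60fps, dtype=float)
    i = int(np.argmin(np.abs(t - t_target)))        # nearest-start frame: simple adoption
    adopted = frames_60fps[i]
    # Variant: linear interpolation between the two frames around t_target.
    j = min(i + 1, len(frames_60fps) - 1) if t[i] <= t_target else max(i - 1, 0)
    if i == j:
        return adopted, adopted
    lo, hi = sorted((i, j))
    w = (t_target - t[lo]) / (t[hi] - t[lo])
    interpolated = (1 - w) * frames_60fps[lo] + w * frames_60fps[hi]
    return adopted, interpolated

frames = [np.full((2, 2), v, dtype=float) for v in (10, 20, 30, 40)]
adopted, interp = second_correction_frame_rate(frames, [0, 1/60, 2/60, 3/60], t_target=1/40)
print(adopted, interp, sep="\n")
```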
- the lens movement control unit 34d of the control unit 34 does not perform the second correction process when the imaging conditions applied in the signal data surrounded by the frame 170 are the same. That is, the lens movement control unit 34d of the control unit 34 performs focus detection processing using the signal data from the focus detection pixels indicated by the frame 170 as they are.
- Note that even if there is a slight difference in the imaging conditions, they are regarded as the same imaging conditions.
- In the above description, the example in which the second correction process is performed, according to the first imaging condition, on the signal data of the fourth imaging condition among the signal data has been described. However, the second correction process may instead be performed, according to the fourth imaging condition, on the signal data of the first imaging condition among the signal data.
- Whether to perform the second correction process on the signal data of the first imaging condition or on the signal data of the fourth imaging condition may be determined based on, for example, the ISO sensitivity. When the ISO sensitivity differs between the first imaging condition and the fourth imaging condition, it is desirable to perform the second correction process on the signal data obtained under the imaging condition with the lower ISO sensitivity, provided that the signal data obtained under the imaging condition with the higher ISO sensitivity is not saturated. That is, when the ISO sensitivity differs between the first imaging condition and the fourth imaging condition, it is desirable to perform the second correction process on the darker signal data so as to reduce the difference from the brighter signal data. Alternatively, the second correction process may be performed on both sets of signal data so as to make the difference between them smaller after the second correction process.
- In the above description, the focus detection process using the pupil division phase difference method is exemplified; however, the same can be done for the contrast detection method, in which the focus lens of the imaging optical system 31 is moved to the in-focus position based on the contrast of the subject image.
- In this case, the control unit 34 moves the focus lens of the imaging optical system 31 and, based on the signal data output from the imaging pixels of the imaging element 32a corresponding to the focus point at each position of the focus lens, performs a known focus evaluation value calculation. Then, the position of the focus lens that maximizes the focus evaluation value is obtained as the in-focus position.
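- As an illustration only, the following sketch uses a common contrast metric (sum of squared differences between adjacent pixels) as the focus evaluation value and scans a set of simulated lens positions; the toy capture model and all names are assumptions, not the device's actual evaluation formula.

```python
import numpy as np

def focus_evaluation_value(signal_patch):
    """A simple contrast metric: sum of squared differences between adjacent pixels."""
    p = np.asarray(signal_patch, dtype=float)
    return float((np.diff(p, axis=-1) ** 2).sum())

def contrast_af(capture_at, lens_positions):
    """Compute the evaluation value at each lens position and return the position
    that maximizes it (the in-focus position)."""
    scores = {pos: focus_evaluation_value(capture_at(pos)) for pos in lens_positions}
    return max(scores, key=scores.get), scores

# Toy capture model: the image is sharpest (highest contrast) at lens position 5.
rng = np.random.default_rng(0)
scene = rng.random((8, 8))

def capture_at(pos):
    blur = abs(pos - 5) + 1
    kernel = np.ones(blur) / blur
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, scene)

best_position, _ = contrast_af(capture_at, range(0, 11))
print(best_position)  # expected: 5
```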
- When using the contrast detection method, the control unit 34 normally performs the focus evaluation value calculation using the signal data output from the imaging pixels corresponding to the focus point as it is, without performing the second correction process. However, when signal data to which the first imaging condition is applied and signal data to which the fourth imaging condition is applied are mixed in the signal data corresponding to the focus point, the control unit 34 performs the second correction process as described above on the signal data of the fourth imaging condition among the signal data corresponding to the focus point. Then, the control unit 34 performs the focus evaluation value calculation using the signal data after the second correction process. As described above, since the correction unit 33b performs the first correction process, the second correction process, and the interpolation process, even when the black-crushed pixels 85b and 85d occur, the black crushing is corrected and the focus adjustment can be performed.
- the focus can be adjusted by moving the lens even if there is a blackened pixel 85b or 85d.
- In the above description, the focus adjustment process is performed after the second correction process; however, the second correction process may be omitted, and the focus adjustment may be performed using the image data obtained by the first correction process.
- The setting unit 34b may set the entire imaging surface of the imaging element 32a as the processing imaging area, or may set a partial area of the imaging surface of the imaging element 32a as the processing imaging area. When setting a partial region of the imaging surface as the processing imaging region, the setting unit 34b sets, as the processing imaging area, at least a region corresponding to a range including the frame 170, a region corresponding to a range including the focus point, or the vicinity of the center of the imaging surface of the imaging element 32a.
- Alternatively, the setting unit 34b may generate the processing image data by extracting, from processing image data captured with the entire imaging surface of the imaging element 32a set as the processing imaging region, a region corresponding to a range including the frame 170 or a range including the focus point.
- FIG. 16A illustrates a template image representing an object to be detected, and FIG. 16B illustrates a live view image 60a and a search range 190.
- the object detection unit 34a of the control unit 34 detects an object (for example, a bag 63a which is one of the subject elements in FIG. 5) from the live view image.
- The object detection unit 34a of the control unit 34 may set the range in which the object is detected to the entire range of the live view image 60a; however, in order to lighten the detection processing, a part of the live view image 60a may be used as the search range 190.
- When the search range 190 includes the boundary of the divided regions, the second correction process is performed on the main image data in the search range 190 as a pre-process of the subject detection process.
- the second correction process is performed in order to suppress a decrease in accuracy of the subject element detection process due to a difference in imaging conditions between areas of the imaging screen divided by the setting unit 34b.
- the image data of the search range 190 may include main image data to which different imaging conditions are applied.
- Based on the idea that it is preferable to detect the subject element using main image data in which the difference due to the difference in imaging conditions has been suppressed by the second correction processing, rather than detecting the subject element using the main image data to which different imaging conditions are applied as it is, the second correction processing is performed as follows.
- The object detection unit 34a of the control unit 34 sets the search range 190 in the vicinity of the region including the person 61a. Alternatively, the region including the person 61a may be set as the search range.
- When the imaging conditions applied within the search range 190 are the same, the object detection unit 34a of the control unit 34 performs the subject detection processing using the main image data constituting the search range 190 as it is, without performing the second correction process. However, if main image data to which the first imaging condition is applied and main image data to which the fourth imaging condition is applied are mixed in the image data of the search range 190, the object detection unit 34a of the control unit 34, as in the case where the focus detection process is performed, performs the second correction process, as in (Example 1) to (Example 3) described above, on the main image data of the fourth imaging condition among the main image data in the search range 190.
- the object detection unit 34a of the control unit 34 performs subject detection processing using the main image data after the second correction processing.
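- A minimal sketch of this idea follows: pixels in the search range captured under the fourth imaging condition are rescaled to the first imaging condition (here by the ISO ratio of (Example 1)), and a simple SAD template match is then run on the corrected data. All names and values are illustrative assumptions.

```python
import numpy as np

def detect_in_search_range(main_image, condition_map, template, search_slice,
                           iso_first=100, iso_fourth=800):
    """SAD template matching inside a search range, after applying the second
    correction to pixels captured under the fourth imaging condition."""
    region = main_image[search_slice].astype(float).copy()
    conds = condition_map[search_slice]
    region[conds == 4] *= iso_first / iso_fourth      # second correction (Example 1)
    th, tw = template.shape
    best, best_cost = None, np.inf
    for y in range(region.shape[0] - th + 1):
        for x in range(region.shape[1] - tw + 1):
            cost = np.abs(region[y:y + th, x:x + tw] - template).mean()
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best  # top-left corner of the best match, relative to the search range

# Toy data: a 3x3 bright patch at (4, 5); columns 6 onward use the fourth condition (ISO 800).
img = np.zeros((12, 12)); img[4:7, 5:8] = 50.0
cond = np.ones((12, 12), dtype=int); cond[:, 6:] = 4
img[:, 6:] *= 800 / 100                               # raw values are brighter under ISO 800
print(detect_in_search_range(img, cond, np.full((3, 3), 50.0), np.s_[2:10, 2:11]))  # (2, 3)
```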
- Note that even if there is a slight difference in the imaging conditions, they are regarded as the same imaging conditions.
- In the above description, the example in which the second correction processing is performed, according to the first imaging condition, on the main image data of the fourth imaging condition among the main image data has been described. However, the second correction process may instead be performed, according to the fourth imaging condition, on the main image data of the first imaging condition among the main image data.
- the second correction process for the main image data in the search range 190 described above may be applied to a search range used for detecting a specific subject such as a human face or an area used for determination of an imaging scene.
- The second correction processing for the main image data in the search range 190 described above is not limited to a search range used in the pattern matching method using a template image; the same applies to a search range used when detecting a feature amount based on the color or edge of the image.
- The same applies to the tracking process of a moving object: when main image data to which the first imaging condition is applied and main image data to which the fourth imaging condition is applied are mixed in a search range set in a frame image acquired later, the control unit 34 performs the second correction process, as in (Example 1) to (Example 3) described above, on the main image data of the fourth imaging condition among the main image data in the search range. Then, the control unit 34 performs the tracking process using the main image data after the second correction process.
- The same applies when the control unit 34 detects a motion vector: the control unit 34 performs the second correction process, as in (Example 1) to (Example 3) described above, on the main image data of the fourth imaging condition among the main image data of the detection region used for detecting the motion vector. Then, the control unit 34 detects the motion vector using the main image data after the second correction process.
- Since the correction unit 33b performs the first correction process, the second correction process, and the interpolation process, even when the black-crushed pixels 85b and 85d occur, the black crushing is corrected and the above-described subject detection and the like can be performed. Therefore, the subject can be detected even if there is a black-crushed pixel 85b or 85d.
- the subject detection processing is performed after the second correction processing is performed. However, subject detection may be performed based on the image data obtained by the first correction processing without performing the second correction processing.
- The setting unit 34b may set the entire imaging surface of the imaging element 32a as the processing imaging area, or may set a partial area of the imaging surface of the imaging element 32a as the processing imaging area. When setting a partial region of the imaging surface as the processing imaging region, the setting unit 34b sets, as the processing imaging area, at least a region corresponding to a range including the search range 190, a region corresponding to a range including the detection range used for detecting the motion vector, or the vicinity of the center of the imaging surface of the imaging element 32a.
- Alternatively, the setting unit 34b may generate each set of processing image data by extracting, from processing image data captured with the entire imaging surface of the image sensor 32a set as the processing imaging area, a range including the search range 190 and a range including the detection range used for detecting the motion vector.
- When the exposure condition is determined by newly performing photometry, the second correction process is performed on the main image data of at least one region as a pre-process for setting the exposure condition.
- the second correction process is performed in order to suppress a decrease in accuracy of the process for determining the exposure condition due to a difference in imaging conditions between areas of the imaging screen divided by the setting unit 34b.
- the main image data to which different imaging conditions are applied may be mixed in the main image data in the photometric range.
- Based on the idea that it is preferable to perform the exposure calculation processing using main image data in which the difference due to the difference in imaging conditions has been suppressed by the second correction processing, rather than performing the exposure calculation processing using the main image data to which different imaging conditions are applied as it is, the second correction processing is performed as follows.
- the setting unit 34b of the control unit 34 performs exposure calculation using the main image data constituting the photometry range as it is without performing the second correction process.
- However, when main image data to which the first imaging condition is applied and main image data to which the fourth imaging condition is applied are mixed in the main image data of the photometric range, the setting unit 34b of the control unit 34 performs, as in the case of the focus detection process and the subject detection process, the second correction process on the main image data of the fourth imaging condition among the main image data in the photometric range, as in the above (Example 1) to (Example 3).
- the setting unit 34b of the control unit 34 performs an exposure calculation process using the main image data after the second correction process.
- Note that even if there is a slight difference in the imaging conditions, they are regarded as the same imaging conditions.
- In the above description, the example in which the second correction processing is performed, according to the first imaging condition, on the main image data of the fourth imaging condition among the main image data has been described. However, the second correction process may instead be performed, according to the fourth imaging condition, on the main image data of the first imaging condition among the main image data.
- The same applies not only to the photometric range used when performing the exposure calculation process described above, but also to the photometric (colorimetric) range used when determining the white balance adjustment value, the photometric range used when determining whether emission of photographing auxiliary light by a light source that emits photographing auxiliary light is necessary, and further the photometric range used when determining the light emission amount of the photographing auxiliary light by that light source.
- Since the correction unit 33b performs the first correction process, the second correction process, and the interpolation process, even if the black-crushed pixels 85b and 85d occur, the black crushing is corrected and the shooting conditions can be set. For this reason, the shooting conditions can be set even if there is a black-crushed pixel 85b or 85d.
- In the above description, the shooting conditions are set after performing the second correction process; however, the shooting conditions may be set based on the image data obtained by the first correction process without performing the second correction process.
- The setting unit 34b may set the entire imaging surface of the imaging element 32a as the processing imaging area, or may set a partial area of the imaging surface of the imaging element 32a as the processing imaging area. When setting a partial region of the imaging surface as the processing imaging region, the setting unit 34b sets, as the processing imaging area, at least a region corresponding to a range including the photometric range or the vicinity of the center of the imaging surface of the imaging element 32a.
- Alternatively, the setting unit 34b may generate the processing image data by extracting the range including the photometric range from processing image data captured with the entire imaging surface of the imaging element 32a set as the processing imaging region.
- the imaging control unit 34c causes the imaging unit 32 to capture the processing image data at a timing different from the timing at which the main image data is captured.
- the imaging control unit 34c causes the imaging unit 32 to capture the processing image data when the live view image is displayed or when the operation member 36 is operated.
- When instructing the imaging unit 32 to capture the processing image data, the imaging control unit 34c outputs information about the imaging conditions set for the processing image data by the setting unit 34b.
- description will be made separately on imaging of processing image data when a live view image is displayed and imaging of processing image data when an operation member 36 is operated.
- the imaging control unit 34c causes the imaging unit 32 to capture image data for processing after an operation for instructing the display start of the live view image is performed by the user.
- the imaging control unit 34c causes the imaging unit 32 to capture the processing image data for each predetermined period while the live view image is displayed.
- For example, the imaging control unit 34c outputs to the imaging unit 32 a signal instructing imaging of the processing image data, instead of the live view image imaging instruction, at the timing of capturing an even-numbered frame or, for example, at the timing following the capture of every ten frames of the live view image.
- the imaging control unit 34c causes the imaging unit 32 to capture the processing image data under the imaging conditions set by the setting unit 34b.
- FIG. 17A shows a case where imaging of a live view image and imaging of processing image data are alternately performed every other frame. It is assumed that the first imaging condition to the third imaging condition are set for the first area 61 to the third area 63 (FIG. 7A), respectively, by the user's operation. At this time, the first processing image data D1 for which the first imaging condition is set, the second processing image data D2 for which the second imaging condition is set, and the third processing image data D3 for which the third imaging condition is set are captured for use in processing the first area 61 to the third area 63 of the main image data, respectively.
- the imaging control unit 34c instructs the imaging unit 32 to capture the Nth frame live view image LV1, and the control unit 34 causes the display unit 35 to display the live view image LV1 obtained by the imaging.
- the imaging control unit 34c instructs the imaging unit 32 to image the first processing image data D1 to which the first imaging condition is applied.
- the imaging control unit 34c records the captured first processing image data D1 on a predetermined recording medium (not shown) by the recording unit 37, for example.
- the control unit 34 causes the display unit 35 to display the live view image LV1 captured at the timing of capturing the Nth frame as the (N + 1) th frame live view image. That is, the display of the live view image LV1 of the previous frame is continued.
- the imaging control unit 34c instructs the imaging unit 32 to image the live view image LV2 of the (N + 2) frame.
- the control unit 34 switches from the display of the live view image LV1 on the display unit 35 to the display of the live view image LV2 obtained by imaging the (N + 2) th frame.
- the imaging control unit 34c causes the imaging unit 32 to capture the second processing image data D2 to which the second imaging condition is applied, and the captured second processing image data D2 is captured. Record. Also in this case, the control unit 34 continues to display the live view image LV2 captured at the imaging timing of the (N + 2) th frame as the live view image of the (N + 3) th frame on the display unit 35.
- The imaging control unit 34c then causes the imaging unit 32 to capture the live view image LV3, and the control unit 34 causes the display unit 35 to display the captured live view image LV3.
- the imaging control unit 34c causes the imaging unit 32 to perform imaging of the third processing image data D3 to which the third imaging condition is applied.
- the control unit 34 continues to display the live view image LV3 in the (N + 4) th frame on the display unit 35 in the same manner as described above.
- the control unit 34 repeatedly performs the processes in the Nth frame to the (N + 5) th frame.
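- The frame arrangement described above can be summarized by the small scheduling sketch below; the labels and the three-condition cycle are illustrative of the alternating pattern in FIG. 17A, not of the camera's internal timing control.

```python
def frame_schedule(num_frames, processing_conditions=("D1", "D2", "D3")):
    """Interleave live view frames (LV1, LV2, ...) with processing image data frames
    (D1, D2, D3, D1, ...) every other frame, as in the arrangement of FIG. 17A."""
    schedule, lv_index, proc_index = [], 1, 0
    for n in range(num_frames):
        if n % 2 == 0:
            schedule.append(f"LV{lv_index}")   # capture and display a live view frame
            lv_index += 1
        else:
            # Capture processing image data; the previous live view frame stays displayed.
            schedule.append(processing_conditions[proc_index % len(processing_conditions)])
            proc_index += 1
    return schedule

print(frame_schedule(8))  # ['LV1', 'D1', 'LV2', 'D2', 'LV3', 'D3', 'LV4', 'D1']
```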
- The imaging control unit 34c may cause the imaging unit 32 to capture the processing image data (the (N+1)th, (N+3)th, and (N+5)th frames in FIG. 17) by applying newly set imaging conditions.
- the imaging control unit 34c may cause the imaging unit 32 to capture the processing image data before starting to display the live view image. For example, when the user turns on the camera 1 or performs an operation for instructing to start displaying a live view image, a signal for instructing the imaging unit 32 to image the processing image data is output. The imaging control unit 34c instructs the imaging unit 32 to capture a live view image when imaging of the first processing image data to the third processing image data is completed.
- At the timing of imaging the first to third frames, the imaging control unit 34c causes the first processing image data D1 under the first imaging condition, the second processing image data D2 under the second imaging condition, and the third processing image data D3 under the third imaging condition to be captured, respectively.
- Thereafter, the imaging control unit 34c causes the live view images LV1, LV2, LV3, ... to be captured, and the control unit 34 causes the display unit 35 to sequentially display the live view images LV1, LV2, LV3, ....
- The imaging control unit 34c may cause the imaging unit 32 to capture the processing image data at the timing when the user performs an operation to end the display of the live view image while the live view image is displayed. That is, when an operation signal corresponding to an operation for instructing the end of display of the live view image is input from the operation member 36, the imaging control unit 34c outputs a signal instructing the imaging unit 32 to end the imaging of the live view image. When the imaging unit 32 finishes capturing the live view image, the imaging control unit 34c outputs a signal instructing imaging of the processing image data to the imaging unit 32.
- For example, when an operation for ending the display of the live view image is performed in the Nth frame, the imaging control unit 34c causes the first processing image data D1 under the first imaging condition, the second processing image data D2 under the second imaging condition, and the third processing image data D3 under the third imaging condition to be captured in the (N+1)th to (N+3)th frames, respectively.
- In this case, the control unit 34 may cause the display unit 35 to display the live view image LV1 captured in the Nth frame during the period of the (N+1)th to (N+3)th frames, or may not display the live view image.
- the imaging control unit 34c may cause the processing image data to be captured for all the frames of the live view image.
- the setting unit 34b sets different imaging conditions for each frame in the entire area of the imaging surface of the imaging element 32a.
- the control unit 34 displays the generated processing image data on the display unit 35 as a live view image.
- In addition, the imaging control unit 34c may cause the processing image data to be captured when a change occurs in the composition of the image being captured while the live view image is being displayed. For example, the imaging control unit 34c may instruct imaging of the processing image data when the position of a subject element detected by the setting unit 34b of the control unit 34 based on the live view image has shifted by a predetermined distance or more compared to the position of the subject element detected in the previous frame.
- Examples of the operation of the operation member 36 for capturing the processing image data include a half-press operation of the release button by the user, that is, an operation instructing preparation for imaging, and a full-press operation of the release button, that is, an operation instructing the main imaging.
- (2-1) Half-press operation of the release button: When the user performs a half-press operation of the release button, that is, an operation for instructing preparation for imaging, a half-press operation signal is output from the operation member 36. This half-press operation signal is output from the operation member 36 during the period in which the release button is half-pressed by the user.
- When the half-press operation signal corresponding to the start of the operation for instructing preparation for imaging is input from the operation member 36, the imaging control unit 34c of the control unit 34 outputs a signal instructing the imaging unit 32 to capture the processing image data. That is, the imaging unit 32 captures the processing image data in response to the start of the operation by the user instructing preparation for imaging.
- Alternatively, the imaging control unit 34c may cause the imaging unit 32 to capture the processing image data at the timing when the half-press operation of the release button by the user ends, for example, at the timing when the user transitions from the half-press operation to the full-press operation. That is, the imaging control unit 34c may output a signal instructing imaging to the imaging unit 32 at the timing when the operation signal corresponding to the operation for instructing preparation for imaging is no longer input from the operation member 36.
- the imaging control unit 34c may cause the imaging unit 32 to capture the processing image data while the release button is half-pressed by the user. In this case, the imaging control unit 34c can output a signal instructing imaging to the imaging unit 32 at every predetermined period. Thus, the processing image data can be captured while the release button is half-pressed by the user.
- Alternatively, the imaging control unit 34c may output a signal instructing imaging to the imaging unit 32 in accordance with the timing at which the live view image is captured. In this case, the imaging control unit 34c may output to the imaging unit 32 a signal instructing imaging of the processing image data at the timing of imaging an even-numbered frame of the live view image or, for example, at the timing following the capture of every ten frames of the live view image. Note that if the processing image data is captured while the live view image is being displayed, the processing image data may not be captured based on the half-press operation of the release button.
- (2-2) Full-press operation of the release button: When the user performs a full-press operation of the release button, that is, an operation for instructing the main imaging, a full-press operation signal is output from the operation member 36.
- the imaging control unit 34c of the control unit 34 outputs a signal instructing main imaging to the imaging unit 32 when a full-press operation signal corresponding to an operation instructing main imaging is input from the operation member 36.
- After the main image data is captured by the main imaging, the imaging control unit 34c outputs a signal instructing imaging of the processing image data. That is, after the user performs a full-press operation instructing imaging, the imaging unit 32 captures the main image data by the main imaging and then captures the processing image data.
- the imaging control unit 34c may cause the imaging unit 32 to capture the processing image data before capturing the main image data.
- the imaging control unit 34c records the processing image data acquired before imaging the main image data on a recording medium (not shown) by the recording unit 37, for example. Thereby, after the main image data is captured, the recorded processing image data can be used for generating the main image data. Further, when the processing image data is captured while the live view image is displayed, the processing image data may not be captured based on the half-press operation of the release button.
- Note that the operation of the operation member 36 for capturing the processing image data is not limited to the half-press operation or the full-press operation of the release button. Even when the user performs an operation related to imaging other than operating the release button, the imaging control unit 34c may instruct imaging of the processing image data.
- Examples of operations related to imaging include an operation for changing the imaging magnification, an operation for changing the aperture, and an operation related to focus adjustment (for example, selection of a focus point).
- When such an operation is performed, the imaging control unit 34c causes the imaging unit 32 to capture the processing image data. Thereby, when the main imaging is performed under the new settings, it is possible to generate processing image data imaged under the same conditions as the main imaging.
- the imaging control unit 34c may capture the processing image data when a menu operation is performed on the menu screen. This is because, when an operation related to imaging is performed from the menu screen, there is a high possibility that new settings will be made for the actual imaging. In this case, the imaging unit 32 captures the processing image data while the menu screen is open.
- the processing image data may be captured at predetermined intervals, or may be captured at a frame rate when capturing a live view image.
- When an operation not related to imaging is performed, for example, an operation for reproducing and displaying an image, an operation during reproduction display, or an operation for clock adjustment, the imaging control unit 34c does not output a signal instructing imaging of the processing image data. That is, the imaging control unit 34c does not cause the imaging unit 32 to capture the processing image data. Thereby, it is possible to prevent the processing image data from being captured when a new setting for the main imaging is unlikely to be made or when the main imaging is unlikely to be performed.
- When the operation member 36 includes a dedicated button for instructing imaging of the processing image data, the imaging control unit 34c instructs the imaging unit 32 to capture the processing image data when the dedicated button is operated by the user.
- In this case, the imaging control unit 34c may cause the imaging unit 32 to capture the processing image data at every predetermined cycle during the period in which the dedicated button is operated, or may cause the processing image data to be captured when the operation of the dedicated button ends. As a result, the processing image data can be captured at a timing desired by the user. Further, when the camera 1 is turned on, the imaging control unit 34c may instruct the imaging unit 32 to capture the processing image data.
- the processing image data may be captured by applying all of the exemplified methods, or the processing image data may be captured by applying at least one method.
- the image data for processing may be captured by applying a method selected by the user from among the methods. For example, the selection by the user may be performed by a menu operation on a menu screen displayed on the display unit 35.
- the imaging control unit 34c images the processing image data used for the detection / setting process at various timings similar to the timing for generating the processing image data used for the image processing described above. That is, the same processing image data can be used as processing image data used for detection / setting processing, and can be used as processing image data used for image processing.
- Next, a case where the processing image data used for the detection/setting process is generated at a timing different from the timing of generating the processing image data used for the image processing will be described.
- the imaging control unit 34c causes the imaging unit 32 to capture the processing image data before imaging the main image data.
- Thereby, the result detected using the latest processing image data captured immediately before the main imaging, or the result set using that latest processing image data, can be reflected in the main imaging.
- The imaging control unit 34c causes the processing image data to be captured. In this case, the imaging control unit 34c may generate one frame of processing image data, or may generate a plurality of frames of processing image data.
- In the above description, the processing image data is used for the image processing, the focus detection processing, the subject detection processing, and the imaging condition setting processing such as the exposure setting; however, the processing image data is not limited to being used for all of these processes. That is, the present embodiment includes a configuration in which the processing image data is used for at least one of the image processing, the focus detection processing, the subject detection processing, and the imaging condition setting processing. Which process the processing image data is used for may be configured so that the user can select and determine from the menu screen displayed on the display unit 35. Further, the above-described processing image data may be generated based on an output signal from a photometric sensor 38 provided separately from the image sensor 32a.
- FIG. 18 is a flowchart for explaining the flow of processing for setting an imaging condition for each area and imaging.
- The control unit 34 activates a program that executes the process shown in FIG. 18.
- step S10 the control unit 34 causes the display unit 35 to start live view display, and proceeds to step S20.
- the control unit 34 instructs the imaging unit 32 to start acquiring a live view image, and causes the display unit 35 to sequentially display the acquired live view image.
- the same imaging condition is set for the entire imaging chip 111, that is, the entire screen.
- When a setting for performing the AF operation during live view display is made, the lens movement control unit 34d of the control unit 34 controls the AF operation so as to focus on the subject element corresponding to the predetermined focus point by the focus detection processing.
- the lens movement control unit 34d performs the focus detection process after the first correction process and the second correction process, or the first correction process or the second correction process, as necessary. If the setting for performing the AF operation is not performed during live view display, the lens movement control unit 34d of the control unit 34 performs the AF operation when the AF operation is instructed later.
- step S20 the object detection unit 34a of the control unit 34 detects the subject element from the live view image and proceeds to step S30.
- the object detection unit 34a performs the subject detection process after the first correction process and the second correction process, or the first correction process or the second correction process, as necessary.
- step S30 the setting unit 34b of the control unit 34 divides the screen of the live view image into regions including subject elements, and proceeds to step S40.
- step S40 the control unit 34 displays the regions on the display unit 35. As illustrated in FIG. 6, the control unit 34 highlights the region that is the target for setting (changing) the imaging condition among the divided regions. In addition, the control unit 34 displays the imaging condition setting screen 70 on the display unit 35 and proceeds to step S50. Note that, when the display position of another main subject on the display screen is tapped with the user's finger, the control unit 34 changes the region including that main subject to the region for which the imaging condition is to be set (changed), and highlights it.
- step S50 the control unit 34 determines whether an AF operation is necessary.
- For example, when the focus adjustment state changes due to the movement of the subject, when the position of the focus point is changed by a user operation, or when execution of the AF operation is instructed by a user operation, the control unit 34 makes an affirmative determination in step S50 and proceeds to step S70. If the focus adjustment state does not change, the position of the focus point is not changed by a user operation, and execution of the AF operation is not instructed by a user operation, the control unit 34 makes a negative determination in step S50 and proceeds to step S60.
- step S70 the control unit 34 performs the AF operation and returns to step S40.
- the lens movement control unit 34d performs a focus detection process that is an AF operation after the first correction process and the second correction process, or the first correction process or the second correction process, as necessary. Do.
- the control unit 34 that has returned to step S40 repeats the same processing as described above based on the live view image acquired after the AF operation.
- step S60 the setting unit 34b of the control unit 34 sets an imaging condition for the highlighted area in accordance with a user operation, and proceeds to step S80. Note that the display transition of the display unit 35 and the setting of the imaging conditions according to the user operation in step S60 are as described above.
- the setting unit 34b of the control unit 34 performs the exposure calculation process after the first correction process and the second correction process, or the first correction process or the second correction process, as necessary.
- step S80 the control unit 34 determines whether or not to acquire processing image data during display of the live view image. If the setting is made to capture the processing image data while the live view image is displayed, the control unit 34 makes a positive determination in step S80 and proceeds to step S90. If the setting for capturing the processing image data is not performed during the display of the live view image, the control unit 34 makes a negative determination in step S80 and proceeds to step S100 described later.
- step S90 the imaging control unit 34c of the control unit 34 instructs the imaging unit 32 to perform imaging of the processing image data for each set imaging condition at a predetermined period during imaging of the live view image. Then, the process proceeds to step S100.
- the processing image data captured at this time is stored in a storage medium (not shown) by the recording unit 37, for example. Thereby, the recorded processing image data can be used later.
- step S100 the control unit 34 determines the presence / absence of a release operation. When the release button constituting the operation member 36 or the display icon for instructing imaging is operated, the control unit 34 makes a positive determination in step S100 and proceeds to step S110. When the release operation is not performed, the control unit 34 makes a negative determination in step S100 and returns to step S60.
- step S110 the control unit 34 performs imaging processing of the processing image data and the main image data. That is, the imaging control unit 34c captures the processing image data for each of the different imaging conditions set in step S60, performs the main imaging with the imaging conditions set for each of the areas in step S60 to obtain the main image data, and proceeds to step S120. Note that, when processing image data used for the detection/setting processing is captured, the processing image data is captured before capturing the main image data. If the processing image data has been captured in step S90, the processing image data need not be captured in step S110.
- step S120 the imaging control unit 34c of the control unit 34 sends an instruction to the image processing unit 33 to perform predetermined image processing on the main image data obtained by the main imaging, using the processing image data acquired in step S90 or step S110, and proceeds to step S130.
- Image processing includes the pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing.
- At this time, the correction unit 33b of the image processing unit 33 performs the image processing after performing, as necessary, the first correction process and the second correction process, or the first correction process or the second correction process, on the main image data located at the boundary of the regions.
- step S130 the control unit 34 sends an instruction to the recording unit 37, records the image data after the image processing on a recording medium (not shown), and proceeds to step S140.
- step S140 the control unit 34 determines whether an end operation has been performed. When the end operation is performed, the control unit 34 makes a positive determination in step S140 and ends the process illustrated in FIG. If the end operation is not performed, the control unit 34 makes a negative determination in step S140 and returns to step S20. When returning to step S20, the control unit 34 repeats the above-described processing.
- the main imaging is performed under the imaging conditions set in step S60, and the main image data is processed using the processing image data acquired in step S90 or S110.
- The focus detection process, the subject detection process, and the imaging condition setting process may be performed based on the processing image data acquired in step S90 while the live view image is displayed.
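- For orientation only, the control flow of FIG. 18 can be sketched as below; the camera object and its methods are hypothetical stand-ins for the units described above, not an actual API of the device.

```python
def imaging_loop(camera):
    """Simplified walk-through of the flow of FIG. 18 using hypothetical helpers."""
    camera.start_live_view()                             # S10: begin live view display
    while True:
        elements = camera.detect_subject_elements()      # S20: detect subject elements
        regions = camera.divide_into_regions(elements)   # S30: divide screen into regions
        camera.display_regions(regions)                  # S40: highlight region being set
        while camera.af_needed():                        # S50: AF required?
            camera.run_af()                              # S70: AF operation, back to S40
            camera.display_regions(regions)
        camera.set_imaging_conditions(regions)           # S60: per-region imaging conditions
        if camera.capture_processing_during_live_view(): # S80
            camera.capture_processing_image_data()       # S90: periodic processing data
        if camera.release_operated():                    # S100: release operation?
            camera.capture_processing_and_main()         # S110: processing data + main imaging
            camera.image_process_main()                  # S120: image processing on main data
            camera.record()                              # S130: record to the recording medium
        if camera.end_operated():                        # S140: end operation -> finish
            break
```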
- In the above description, the multilayer image sensor 100 is exemplified as the image sensor 32a; however, as long as the imaging conditions can be set for each of a plurality of blocks in the image sensor (imaging chip 111), the image sensor 32a does not necessarily have to be configured as a multilayer image sensor.
- The main image data captured in the first area 61 under its imaging condition can be appropriately generated using the processing image data imaged under the fourth imaging condition, which differs from that imaging condition. Specifically, it is possible to generate main image data in which discontinuity and discomfort in the image, such as differences in brightness, contrast, and hue, between the shaded portions of the blocks 82, 85, and 87 in FIG. 7B and the shaded portions of the blocks 83, 86, 88, and 89 are suppressed.
- The main image data can be appropriately generated using the processing image data imaged under the fourth imaging condition, which is different from the above. Specifically, it is possible to generate main image data in which discontinuity and discomfort in the image, such as differences in brightness, contrast, and hue, between the shaded portions of the blocks 82, 85, and 87 in FIG. 7B and the shaded portions of the blocks 83, 86, 88, and 89 are suppressed.
- the main image data can be appropriately generated.
- the main image data can be acquired without missing a photo opportunity. Then, the main image data can be appropriately generated based on the processing image data acquired after the main imaging.
- the processing image data can be acquired by the photometric sensor 38 while the main image data is acquired by the imaging unit 32.
- the processing image data acquired before imaging the main image data can be used to generate the main image data acquired later.
- Since the image data of the subject imaged in the first area 61 is generated based on the image data of the subject imaged in the fourth area 64, which is larger than the area of the first area 61, the image data can be generated appropriately.
- Processing can be appropriately performed in areas where imaging conditions are different. That is, it is possible to appropriately generate an image based on the image data generated in each region.
- the signal processing chip 112 is disposed between the imaging chip 111 and the memory chip 113. Thereby, since each chip is laminated
- Focusing can be appropriately performed for the first area 61 using the signal data of the processing image captured under the fourth imaging condition, which is different from the imaging condition described above. Specifically, it is possible to generate signal data of the main image in which discontinuity of the image, such as differences in brightness and contrast, between the shaded portions of the blocks 82, 85, and 87 in FIG. 7B and the shaded portions of the blocks 83, 86, 88, and 89 is suppressed. As a result, a decrease in focus detection accuracy due to the difference in imaging conditions for each block can be suppressed, and thus appropriate focusing can be performed.
- The signal data of the main image can be appropriately generated using the signal data of the processing image captured under the fourth imaging condition, which is different from the imaging condition. Specifically, it is possible to generate signal data of the main image in which discontinuity of the image, such as differences in brightness and contrast, between the shaded portions of the blocks 82, 85, and 87 in FIG. 7B and the shaded portions of the blocks 83, 86, 88, and 89 is suppressed. As a result, a decrease in focus detection accuracy due to the difference in imaging conditions for each block can be suppressed, and thus appropriate focusing can be performed.
- the processing image can be acquired by the photometric sensor 38 while acquiring the main image data by the imaging unit 32, so that the focus can be appropriately adjusted.
- the signal data of the processing image acquired before capturing the main image data can be used for focus adjustment when capturing the main image data.
- the signal data can be appropriately generated based on the signal data of the image captured in the fourth region 64 that is larger than the area of the first region 61. Therefore, appropriate focusing can be performed.
- The main image data can be appropriately generated using the processing image data imaged under different imaging conditions. Specifically, it is possible to generate main image data in which discontinuity of the image, such as differences in brightness, contrast, and hue, between the shaded portions of the blocks 82, 85, and 87 in FIG. 7B and the shaded portions of the blocks 83, 86, 88, and 89 is suppressed. As a result, a decrease in the detection accuracy of the subject element due to the difference in imaging conditions for each block can be suppressed.
- the main image data can be acquired without missing a photo opportunity.
- the subject element can be appropriately detected from the processing image data acquired after the main imaging.
- the processing image can be acquired by the photometric sensor 38 while the main image data is acquired by the imaging unit 32. Therefore, the subject element can be detected appropriately.
- the processing image data acquired before capturing the main image data can be used for detection of the subject element when the main image data is captured.
- The main image data can be appropriately generated based on the processing image data captured in the fourth area 64, which is larger than the area of the first area 61. As a result, a decrease in the detection accuracy of the subject element due to the difference in imaging conditions for each block can be suppressed.
- The imaging conditions of the first area 61 can be appropriately set using the processing image data imaged under different imaging conditions. Specifically, it is possible to generate main image data in which discontinuity of the image, such as differences in brightness, contrast, and hue, between the shaded portions of the blocks 82, 85, and 87 in FIG. 7B and the shaded portions of the blocks 83, 86, 88, and 89 is suppressed. As a result, a decrease in the setting accuracy of the exposure condition due to the difference in imaging conditions for each block can be suppressed.
- the processing image data acquired before capturing the main image data can be used for setting an exposure condition when capturing the main image data.
- In mode 1, the control unit 34 performs processing such as image processing after performing the above-described preprocessing.
- In mode 2, the control unit 34 performs processing such as image processing without performing the above-described preprocessing. For example, when a part of the face detected as the subject element is shaded, the imaging condition of the region including the shaded part of the face is set so that the brightness of the shaded part of the face becomes comparable to the brightness of the part of the face other than the shadow.
- If the second correction process were performed in such a case, unintentional color interpolation might be performed on the shadow portion due to the difference in imaging conditions. By providing mode 1 and mode 2 so that the color interpolation process can also be performed using the image data as it is, without performing the second correction process, such unintended color interpolation can be avoided.
- FIGS. 19A to 19C are diagrams illustrating the arrangement of the first imaging region and the second imaging region on the imaging surface of the imaging device 32a.
- In FIG. 19A, the first imaging area is configured by even-numbered columns, and the second imaging area is configured by odd-numbered columns. That is, the imaging surface is divided into even columns and odd columns.
- In FIG. 19B, the first imaging area is composed of odd-numbered rows, and the second imaging area is composed of even-numbered rows. That is, the imaging surface is divided into odd rows and even rows.
- In FIG. 19C, the first imaging region is composed of the blocks in the even rows of the odd columns and the blocks in the odd rows of the even columns, and the second imaging region is composed of the blocks in the even rows of the even columns and the blocks in the odd rows of the odd columns. That is, the imaging surface is divided into a checkered pattern.
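- A minimal sketch of these three divisions follows; it builds a boolean mask of the first imaging region (the second region being its complement), with parity taken on 0-based indices purely for illustration, so the exact odd/even correspondence to the figures is an assumption.

```python
import numpy as np

def first_region_mask(rows, cols, mode):
    """Mask of the first imaging region for the arrangements of FIGS. 19A to 19C."""
    r = np.arange(rows)[:, None]
    c = np.arange(cols)[None, :]
    if mode == "columns":                 # FIG. 19A: alternate columns
        return np.broadcast_to(c % 2 == 0, (rows, cols))
    if mode == "rows":                    # FIG. 19B: alternate rows
        return np.broadcast_to(r % 2 == 0, (rows, cols))
    return (r + c) % 2 == 0               # FIG. 19C: checkered pattern of blocks

first = first_region_mask(4, 6, "checkered")
second = ~first                           # blocks belonging to the second imaging region
print(first.astype(int))
print(second.astype(int))
```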
- A first image based on the photoelectric conversion signal read from the first imaging region and a second image based on the photoelectric conversion signal read from the second imaging region are generated.
- the first image and the second image are captured at the same angle of view and include a common subject image.
- the control unit 34 uses the first image for display and the second image as processing image data. Specifically, the control unit 34 causes the display unit 35 to display the first image as a live view image.
- the control unit 34 uses the second image as processing image data. That is, the processing unit 33b performs image processing using the second image, the object detection unit 34a performs subject detection processing using the second image, and the lens movement control unit 34d performs focus detection using the second image. The processing is performed, and the setting unit 34b performs exposure calculation processing using the second image.
- the area for acquiring the first image and the area for acquiring the second image may be changed for each frame.
- For example, in one frame, the first image from the first imaging region is captured as a live view image and the second image from the second imaging region is captured as processing image data; in the next frame, the first image may be captured as processing image data and the second image as a live view image, and this operation may be repeated in subsequent frames.
- the control unit 34 captures a live view image under the first imaging condition, and sets the first imaging condition to a condition suitable for display by the display unit 35.
- the first imaging condition is the same for the entire imaging screen.
- the control unit 34 captures the processing image data under the second imaging condition, and sets the second imaging condition to a condition suitable for the focus detection process, the subject detection process, and the exposure calculation process.
- the second imaging condition is also made the same for the entire imaging screen.
- the control unit 34 may change the second imaging condition set in the second imaging area for each frame.
- For example, the second imaging condition of the first frame may be a condition suitable for the focus detection process, the second imaging condition of the second frame a condition suitable for the subject detection process, and the second imaging condition of the third frame a condition suitable for the exposure calculation process. In these cases, the second imaging condition in each frame is the same for the entire imaging screen.
- control unit 34 may change the first imaging condition on the imaging screen.
- the setting unit 34b of the control unit 34 sets different first imaging conditions for each region including the subject element divided by the setting unit 34b.
- the control unit 34 makes the second imaging condition the same for the entire imaging screen.
- the control unit 34 sets the second imaging condition to a condition suitable for the focus detection process, the subject detection process, and the exposure calculation process. However, the conditions suitable for the focus detection process, the subject detection process, and the exposure calculation process are set. If they are different, the imaging conditions set in the second imaging area may be different for each frame.
- control unit 34 may change the second imaging condition on the imaging screen while making the first imaging condition the same on the entire imaging screen. For example, a different second imaging condition is set for each region including the subject element divided by the setting unit 34b. Even in this case, if the conditions suitable for the focus detection process, the subject detection process, and the exposure calculation process are different, the imaging conditions set in the second imaging area may be different for each frame.
- Furthermore, the control unit 34 may make the first imaging condition different on the imaging screen and also make the second imaging condition different on the imaging screen. For example, the setting unit 34b sets a different first imaging condition for each region including a divided subject element, and the setting unit 34b sets a different second imaging condition for each region including a divided subject element.
- the area ratio between the first imaging region and the second imaging region may be different.
- the control unit 34 sets the ratio of the first imaging region to be higher than that of the second imaging region based on the operation by the user or the determination of the control unit 34, or displays the ratio between the first imaging region and the second imaging region. As illustrated in FIGS. 19 (a) to 19 (c), they are set equally, or the ratio of the first imaging area is set lower than that of the second imaging area.
- By making the area ratios different, the first image can be made higher in definition than the second image, the resolutions of the first image and the second image can be made equal, or the second image can be made higher in definition than the first image.
- the first image is made higher in definition than the second image by averaging the image signals from the blocks in the second imaging area.
- In the second correction processing when performing image processing, when the imaging condition applied at the position of interest (referred to as the first imaging condition) differs from the imaging condition applied at a reference position around the position of interest (referred to as the fourth imaging condition), the correction unit 33b of the image processing unit 33 corrected the main image data of the fourth imaging condition (the image data of the fourth imaging condition among the image data of the reference positions) based on the first imaging condition. That is, by performing the second correction process on the main image data of the fourth imaging condition at the reference position, the discontinuity of the image based on the difference between the first imaging condition and the fourth imaging condition is alleviated.
- Instead of this, the correction unit 33b of the image processing unit 33 may correct the main image data of the first imaging condition (the main image data of the first imaging condition among the main image data of the target position and the main image data of the reference positions) based on the fourth imaging condition. Also in this case, the discontinuity of the image based on the difference between the first imaging condition and the fourth imaging condition can be alleviated.
- alternatively, the correction unit 33b of the image processing unit 33 may correct both the main image data under the first imaging condition and the main image data under the fourth imaging condition. That is, by performing the second correction process on each of the main image data at the target position under the first imaging condition, the main image data of the first imaging condition among the main image data at the reference positions, and the main image data of the fourth imaging condition among the main image data at the reference positions, the discontinuity of the image based on the difference between the first imaging condition and the fourth imaging condition may be alleviated.
- for example, 400/100 is applied as the second correction process to the main image data of the reference pixel Pr acquired under the first imaging condition (ISO sensitivity 100), and 400/800 is applied as the second correction process to the main image data of the reference pixel Pr acquired under the fourth imaging condition (ISO sensitivity 800).
- the main pixel data of the target pixel undergoes a second correction process that is multiplied by 100/400 after the color interpolation process.
- the main pixel data of the pixel of interest after the color interpolation process can be changed to the same value as when the image is captured under the first imaging condition.
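- As a rough illustration of this gain-ratio form of the second correction process, the following sketch normalizes reference pixel values captured at different ISO sensitivities to a common level before they are mixed, and then returns the interpolated result to the level of the first imaging condition. The array contents are invented, and normalize_to_iso is a hypothetical helper, not the implementation of the embodiment.

```python
import numpy as np

def normalize_to_iso(pixel_values, source_iso, target_iso):
    """Scale pixel values captured at source_iso so that they become
    comparable to values captured at target_iso (second correction sketch)."""
    return pixel_values * (target_iso / source_iso)

# Reference pixels Pr captured under the first imaging condition (ISO 100)
pr_first = np.array([120.0, 118.0, 121.0])
# Reference pixels Pr captured under the fourth imaging condition (ISO 800)
pr_fourth = np.array([960.0, 944.0, 968.0])

# Align both groups to an intermediate ISO 400 before color interpolation,
# i.e. apply 400/100 to the first group and 400/800 to the fourth group.
pr_first_aligned = normalize_to_iso(pr_first, 100, 400)
pr_fourth_aligned = normalize_to_iso(pr_fourth, 800, 400)

# After color interpolation, the pixel of interest is brought back to the
# level of the first imaging condition by multiplying by 100/400.
interpolated = np.mean(np.concatenate([pr_first_aligned, pr_fourth_aligned]))
target_value = interpolated * (100 / 400)
```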
- furthermore, the degree of the second correction process may be changed depending on the distance from the boundary between the first area and the fourth area. In that case, compared with the above (Example 1), the rate at which the main image data increases or decreases due to the second correction process can be reduced, and the noise generated by the second correction process can be reduced. Although the above (Example 1) has been described here, the same can be applied to the above (Example 2).
- in the above-described embodiment, the corrected main image data was obtained by performing a calculation based on the difference between the first imaging condition and the fourth imaging condition.
- the corrected main image data may be obtained by referring to the correction table.
- the main image data after correction is read by inputting the first imaging condition and the fourth imaging condition as arguments.
- the correction coefficient may be read out by inputting the first imaging condition and the fourth imaging condition as arguments.
- an upper limit and a lower limit of the corrected main image data may be determined.
- the upper limit value and the lower limit value may be determined in advance, or may be determined based on an output signal from the photometric sensor when a photometric sensor is provided separately from the image sensor 32a.
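- The table-based variant with upper and lower limits could look like the following sketch. The dictionary keyed by pairs of imaging conditions and the 12-bit clipping range are illustrative assumptions, not values taken from the embodiment.

```python
import numpy as np

# Hypothetical correction table: a correction coefficient is read out by
# inputting the first and fourth imaging conditions (ISO values here) as arguments.
correction_table = {
    (100, 800): 100 / 800,
    (100, 400): 100 / 400,
    (200, 800): 200 / 800,
}

def correct_with_table(main_image_data, first_condition, fourth_condition,
                       lower_limit=0.0, upper_limit=4095.0):
    """Look up a correction coefficient and clip the corrected main image data
    to predetermined upper and lower limits (a 12-bit range is assumed here)."""
    coefficient = correction_table[(first_condition, fourth_condition)]
    corrected = main_image_data * coefficient
    return np.clip(corrected, lower_limit, upper_limit)

corrected_block = correct_with_table(np.array([3200.0, 4000.0, 5200.0]), 100, 800)
```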
- Modification 5 In the above-described embodiment, the example in which the setting unit 34b of the control unit 34 detects the subject element based on the live view image and divides the screen of the live view image into regions including the subject element has been described.
- when a photometric sensor is provided in addition to the imaging device 32a, the control unit 34 may perform the region division based on an output signal from the photometric sensor.
- the control unit 34 divides the foreground and the background based on the output signal from the photometric sensor.
- the live view image acquired by the image sensor 32b is divided into a foreground area corresponding to the area determined to be the foreground from the output signal of the photometric sensor, and a background area corresponding to the area determined to be the background from the output signal of the photometric sensor.
- the control unit 34 further arranges the first imaging region and the second imaging region at positions on the imaging surface of the imaging device 32a corresponding to the foreground area, as illustrated in FIGS. 19(a) to 19(c). On the other hand, the control unit 34 arranges only the first imaging region on the imaging surface of the imaging device 32a at positions corresponding to the background area. The control unit 34 uses the first image for display and uses the second image as processing image data.
- the live view image acquired by the image sensor 32b can be divided by using the output signal from the photometric sensor.
- a first image for display and a second image as processing image data can be obtained for the foreground area, while only a first image for display is obtained for the background area.
- in addition, the foreground area and the background area can be newly set by dividing the area using the output from the photometric sensor.
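- One way to picture the division based on the photometric sensor is a simple luminance threshold on its output, with the resulting mask mapped onto the live view image. The threshold, the array shape, and the assumption that brighter cells are foreground are all made up for illustration.

```python
import numpy as np

def divide_foreground_background(photometric_output, threshold=0.5):
    """Label each photometric cell as foreground (True) or background (False).
    Brighter cells are assumed to be foreground, purely for illustration."""
    return photometric_output > threshold

# Low-resolution output of the photometric sensor (normalized luminance values)
photometric_output = np.array([[0.8, 0.7, 0.2],
                               [0.9, 0.6, 0.1],
                               [0.3, 0.2, 0.1]])
foreground_mask = divide_foreground_background(photometric_output)
# The live view image is split into a foreground area (mask True) and a
# background area (mask False); only the foreground positions receive both
# the first and the second imaging regions.
```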
- Modification 6 In Modification 6, the generation unit 33c of the image processing unit 33 performs contrast adjustment processing as an example of the second correction process. That is, the generation unit 33c alleviates image discontinuity based on the difference between the first imaging condition and the fourth imaging condition by changing the gradation curve (gamma curve).
- the generation unit 33c compresses the value of the main image data of the fourth imaging condition in the main image data at the reference position to 1/8 by laying down the gradation curve.
- alternatively, the generation unit 33c may expand the values of the main image data of the first imaging condition among the main image data at the target position and the main image data at the reference positions by a factor of eight by standing up the gradation curve.
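- A minimal sketch of this gradation-curve style adjustment is shown below: laying the curve down corresponds to compressing the fourth-condition data to 1/8, and standing it up corresponds to expanding the first-condition data eightfold. The linear gain is a simplification of an actual gamma curve, and the helper name and data are assumptions.

```python
import numpy as np

def apply_gradation_curve(image_data, gain):
    """Apply a simple linear gradation (tone) curve with the given gain.
    Laying the curve down corresponds to gain < 1, standing it up to gain > 1."""
    return image_data * gain

# Main image data at reference positions acquired under the fourth imaging condition
fourth_condition_data = np.array([800.0, 960.0, 1040.0])
# Main image data at the target/reference positions under the first imaging condition
first_condition_data = np.array([100.0, 120.0, 130.0])

# Either compress the fourth-condition values to 1/8 ...
compressed_fourth = apply_gradation_curve(fourth_condition_data, 1 / 8)
# ... or expand the first-condition values eightfold; both reduce the level gap.
expanded_first = apply_gradation_curve(first_condition_data, 8)
```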
- according to Modification 6, as in the above-described embodiment, it is possible to appropriately perform image processing on the main image data generated in each of the regions with different imaging conditions. For example, discontinuity and unnaturalness caused by a difference in imaging conditions at the boundary between regions can be suppressed in the image after image processing.
- Modification 7 In Modification 7, the image processing unit 33 takes care not to impair the contour of the subject element in the above-described image processing (for example, noise reduction processing).
- smoothing filter processing is employed when noise reduction is performed.
- when smoothing filter processing is employed, the boundary of the subject element may become blurred while the noise reduction effect is obtained.
- the generation unit 33c of the image processing unit 33 compensates for blurring of the boundary of the subject element by performing contrast adjustment processing in addition to or together with noise reduction processing, for example.
- the generation unit 33c of the image processing unit 33 sets a curve that draws an S shape as a density conversion (gradation conversion) curve (so-called S-shaped conversion).
- by performing contrast adjustment using the S-shaped conversion, the generation unit 33c of the image processing unit 33 stretches the gradations of the bright main image data and of the dark main image data to increase their number of gradations, while compressing the intermediate-gradation main image data to reduce its number of gradations. As a result, the amount of main image data of medium brightness decreases, the amount of main image data classified as either bright or dark increases, and blurring of the boundary of the subject element is compensated for.
- in this way, blurring of the boundary of the subject element can be compensated for by making the contrast of the image clearer.
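- The S-shaped density conversion described above can be illustrated with a logistic-style tone curve: inputs near mid-gray are mapped steeply, so pixel values are pushed toward either the bright or the dark side and the contrast at the boundary increases. The steepness parameter and the normalization below are assumptions made for the sketch.

```python
import numpy as np

def s_curve(image_data, steepness=8.0, max_value=255.0):
    """S-shaped gradation conversion: values near mid-gray are mapped steeply,
    pushing pixels toward either the bright or the dark side."""
    x = image_data / max_value                      # normalize to 0..1
    y = 1.0 / (1.0 + np.exp(-steepness * (x - 0.5)))
    # rescale so that 0 maps to 0 and max_value maps to max_value
    y0 = 1.0 / (1.0 + np.exp(steepness * 0.5))
    y1 = 1.0 / (1.0 + np.exp(-steepness * 0.5))
    return (y - y0) / (y1 - y0) * max_value

blurred_boundary = np.array([100.0, 120.0, 128.0, 136.0, 160.0])
sharpened = s_curve(blurred_boundary)   # mid-tones spread apart, contrast rises
```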
- Modification 8 In the modification 8, the generation unit 33c of the image processing unit 33 changes the white balance adjustment gain so as to alleviate the discontinuity of the image based on the difference between the first imaging condition and the fourth imaging condition.
- for example, the generation unit 33c of the image processing unit 33 changes the white balance adjustment gain so that the white balance of the main image data of the fourth imaging condition among the main image data at the reference positions approaches the white balance of the main image data acquired under the first imaging condition.
- alternatively, the generation unit 33c of the image processing unit 33 may change the white balance adjustment gain so that the white balance of the main image data of the first imaging condition among the main image data at the reference positions and of the main image data at the target position approaches the white balance of the main image data acquired under the fourth imaging condition.
- according to Modification 8, by aligning the white balance adjustment gain with the adjustment gain of one of the regions with different imaging conditions, the discontinuity of the image based on the difference between the first imaging condition and the fourth imaging condition can be alleviated.
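- A rough sketch of Modification 8 is shown below: the per-channel white balance gains estimated for the fourth-condition data are pulled toward (or fully aligned with) the gains of the first-condition data before the adjustment is applied. The gain values and the blending factor are invented for illustration.

```python
import numpy as np

# Assumed per-channel (R, G, B) white balance gains estimated for each region.
gains_first_condition = np.array([1.80, 1.00, 1.45])
gains_fourth_condition = np.array([2.10, 1.00, 1.20])

def align_wb_gains(own_gains, target_gains, blend=1.0):
    """Move the white balance adjustment gains of one region toward the gains
    of the other region; blend=1.0 fully aligns them."""
    return own_gains + blend * (target_gains - own_gains)

# Align the fourth-condition gains with the first-condition gains, then apply.
aligned_gains = align_wb_gains(gains_fourth_condition, gains_first_condition)
reference_rgb = np.array([[420.0, 500.0, 610.0]])   # fourth-condition pixels (R, G, B)
balanced = reference_rgb * aligned_gains
```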
- Modification 9 In Modification 9, a plurality of image processing units 33 may be provided and image processing may be performed in parallel. For example, image processing is performed on the main image data captured in the region B of the imaging unit 32 while image processing is performed on the main image data captured in the region A of the imaging unit 32.
- the plurality of image processing units 33 may perform the same image processing or may perform different image processing. That is, the same parameters or the like may be applied to the main image data of the region A and of the region B, or different parameters or the like may be applied to the main image data of the region A and of the region B.
- for example, image processing may be performed by one image processing unit on the main image data to which the first imaging condition is applied, and by another image processing unit on the main image data to which the fourth imaging condition is applied.
- the number of image processing units is not limited to two; for example, the same number of image processing units as the number of imaging conditions that can be set may be provided. That is, each image processing unit takes charge of the image processing for one of the regions to which different imaging conditions are applied. According to Modification 9, imaging under a different imaging condition for each region and image processing of the main image data obtained for each region can proceed in parallel.
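- The parallel arrangement of Modification 9 could be pictured as in the following sketch, in which one worker handles the main image data of the region A with one set of parameters and another worker handles the region B with different parameters. The thread pool, the region arrays, and the parameter dictionaries are illustrative assumptions rather than the embodiment's actual interfaces.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_region(image_data, params):
    """Stand-in for one image processing unit 33: applies a region-specific
    gain and noise reduction strength (a simple moving average here)."""
    data = image_data * params["gain"]
    kernel = np.ones(params["nr_taps"]) / params["nr_taps"]
    return np.convolve(data, kernel, mode="same")

region_a = np.random.rand(1024) * 255      # main image data captured in region A
region_b = np.random.rand(1024) * 255      # main image data captured in region B
params_a = {"gain": 1.0, "nr_taps": 3}     # parameters for region A
params_b = {"gain": 4.0, "nr_taps": 5}     # different parameters for region B

# Two image processing units run in parallel, one per region.
with ThreadPoolExecutor(max_workers=2) as pool:
    future_a = pool.submit(process_region, region_a, params_a)
    future_b = pool.submit(process_region, region_b, params_b)
    processed_a, processed_b = future_a.result(), future_b.result()
```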
- in the above description, the camera 1 has been described as an example. However, a high-function mobile phone 250 (FIG. 21) having a camera function, such as a smartphone, or a mobile device such as a tablet terminal may be used instead.
- Modification 11 In the above-described embodiment, the camera 1 in which the imaging unit 32 and the control unit 34 are configured as a single electronic device has been described as an example. Instead, the imaging unit 32 and the control unit 34 may be provided separately, and an imaging system 1B in which the control unit 34 controls the imaging unit 32 via communication may be configured.
- an example in which the imaging device 1001 including the imaging unit 32 is controlled from the display device 1002 including the control unit 34 will be described below with reference to FIG. 20.
- FIG. 20 is a block diagram illustrating the configuration of the imaging system 1B according to the modification 11.
- the imaging system 1B includes an imaging device 1001 and a display device 1002.
- the imaging device 1001 includes a first communication unit 1003 in addition to the imaging optical system 31 and the imaging unit 32 described in the above embodiment.
- the display device 1002 includes a second communication unit 1004 in addition to the image processing unit 33, the control unit 34, the display unit 35, the operation member 36, and the recording unit 37 described in the above embodiment.
- the first communication unit 1003 and the second communication unit 1004 can perform bidirectional image data communication using, for example, a well-known wireless communication technology or optical communication technology. Note that the imaging device 1001 and the display device 1002 may be connected by a wired cable, and the first communication unit 1003 and the second communication unit 1004 may perform bidirectional image data communication.
- the control unit 34 controls the imaging unit 32 by performing data communication via the second communication unit 1004 and the first communication unit 1003. For example, by transmitting and receiving predetermined control data between the imaging device 1001 and the display device 1002, the display device 1002 divides the screen into a plurality of regions based on the image as described above, sets a different imaging condition for each of the divided regions, and reads out the photoelectric conversion signal photoelectrically converted in each region.
- according to Modification 11, since the live view image acquired on the imaging device 1001 side and transmitted to the display device 1002 is displayed on the display unit 35 of the display device 1002, the user can perform remote control from the display device 1002 located away from the imaging device 1001.
- the display device 1002 can be configured by a high-function mobile phone 250 such as a smartphone, for example.
- the imaging device 1001 can be configured by an electronic device including the above-described stacked imaging element 100.
- although an example in which the object detection unit 34a, the setting unit 34b, the imaging control unit 34c, and the lens movement control unit 34d are provided in the control unit 34 of the display device 1002 has been described, some of the object detection unit 34a, the setting unit 34b, the imaging control unit 34c, and the lens movement control unit 34d may be provided in the imaging device 1001.
- the program can be supplied to a mobile device such as the camera 1, the high-function mobile phone 250, or the tablet terminal described above by, for example, sending it to the mobile device by infrared communication or short-range wireless communication from a personal computer 205 in which the program is stored, as illustrated in FIG. 21.
- the program may be loaded into the personal computer 205 by setting a recording medium 204 such as a CD-ROM storing the program in the personal computer 205, or may be loaded via a communication line 201 such as a network. When the program is supplied via the communication line 201, it is stored in a storage device 203 of a server 202 connected to the communication line.
- the program can be directly transmitted to the mobile device via a wireless LAN access point (not shown) connected to the communication line 201.
- a recording medium 204B such as a memory card storing the program may be set in the mobile device.
- the program can be supplied as various forms of computer program products, such as provision via a recording medium or a communication line.
- in the second embodiment, instead of the image processing unit 33 of the first embodiment being provided, the imaging unit 32A includes an image processing unit 32c having the same function as the image processing unit 33 of the first embodiment; this is the point of difference from the first embodiment.
- FIG. 22 is a block diagram illustrating the configuration of the camera 1C according to the second embodiment.
- the camera 1C includes an imaging optical system 31, an imaging unit 32A, a control unit 34, a display unit 35, an operation member 36, and a recording unit 37.
- the imaging unit 32A further includes an image processing unit 32c having the same function as the image processing unit 33 of the first embodiment.
- the image processing unit 32 c includes an input unit 321, a correction unit 322, and a generation unit 323.
- Image data from the image sensor 32a is input to the input unit 321.
- the correction unit 322 performs preprocessing for correcting the input image data.
- the preprocessing performed by the correction unit 322 is the same as the preprocessing performed by the correction unit 33b in the first embodiment.
- the generation unit 323 performs image processing on the input image data and the pre-processed image data to generate an image.
- the image processing performed by the generation unit 323 is the same as the image processing performed by the generation unit 33c in the first embodiment.
- FIG. 23 is a diagram schematically showing the correspondence between each block and a plurality of correction units 322 in the present embodiment.
- one square of the imaging chip 111 represented by a rectangle represents one block 111a.
- one square of an image processing chip 114 described later represented by a rectangle represents one correction unit 322.
- the correction unit 322 is provided for each block 111a.
- the correction unit 322 is provided for each block which is the minimum unit of the area where the imaging condition can be changed on the imaging surface.
- the hatched block 111a and the hatched correction unit 322 have a correspondence relationship.
- the hatched correction unit 322 performs preprocessing on the image data from the pixels included in the hatched block 111a.
- Each correction unit 322 performs preprocessing on image data from pixels included in the corresponding block 111a.
- since the preprocessing of the image data can be performed in parallel by the plurality of correction units 322, the processing burden on the correction units 322 can be reduced, and an appropriate image can be quickly generated from the image data generated in each of the regions with different imaging conditions.
- the block 111a to which a pixel belongs may be referred to as the block 111a of that pixel.
- the block 111a may be referred to as a unit section, and a plurality of blocks 111a, that is, a plurality of unit sections may be referred to as a composite section.
- FIG. 24 is a cross-sectional view of the multilayer imaging element 100A.
- the multilayer imaging element 100A further includes an image processing chip 114 that performs the above-described preprocessing and image processing in addition to the backside illumination imaging chip 111, the signal processing chip 112, and the memory chip 113. That is, the above-described image processing unit 32c is provided in the image processing chip 114.
- the imaging chip 111, the signal processing chip 112, the memory chip 113, and the image processing chip 114 are stacked, and are electrically connected to each other by a conductive bump 109 such as Cu.
- a plurality of bumps 109 are arranged on the mutually facing surfaces of the memory chip 113 and the image processing chip 114.
- the bumps 109 are aligned with each other, and the memory chip 113 and the image processing chip 114 are pressurized, so that the aligned bumps 109 are joined and electrically connected.
- in the camera 1C as well, the imaging conditions can be set (changed) for each block 111a, as in the first embodiment.
- the control unit 34 causes the correction unit 322 of the image processing unit 32c to perform the first correction process as necessary.
- when a block that is the minimum unit for setting imaging conditions includes the boundary of regions based on a plurality of subject elements and whiteout or blackout is present in the image data of this block, the control unit 34 causes the correction unit 322 to perform the following first correction process as one of the preprocesses performed before the image processing, the focus detection process, the subject detection process, and the process for setting the imaging conditions.
- as the first correction process, the correction unit 322 replaces all of the main image data in which whiteout or blackout occurs in the target block of the main image data, using the processing image data acquired in one block (the target block or a reference block) of the processing image data, according to any one of the following (i) to (iv).
- the correction unit 322 replaces the main image data of the target block in which whiteout or blackout has occurred with the processing image data acquired in the one block (the target block or a reference block) of the processing image data corresponding to the position closest to the whiteout or blackout area. Even when there are a plurality of whiteout or blackout pixels in the target block of the main image data, the main image data of those pixels is replaced with the same processing image data acquired in that one block (the target block or a reference block).
- when whiteout or blackout occurs in the target block of the main image data, the correction unit 322 replaces the main image data in which the whiteout or blackout has occurred with the same processing image data acquired in one reference block that corresponds to the same subject element (for example, a mountain) as the subject element in which the whiteout or blackout occurred and that is selected from the reference blocks of the most frequently set imaging condition (the fourth imaging condition in this example).
- of the processing image data corresponding to the plurality of pixels (four pixels in the example of FIG. 8(b)) acquired in the one block of the processing image data according to (i) or (ii) above, the correction unit 322 may use the processing image data corresponding to a pixel adjacent to the pixel in which whiteout or blackout has occurred in the target block to replace the main image data in which the whiteout or blackout has occurred.
- the correction unit 322 may replace the main image data in which whiteout or blackout has occurred with image data generated based on the processing image data corresponding to the plurality of pixels (four pixels in the example of FIG. 8) acquired in the one block of the processing image data according to (i) or (ii) above.
- when an average value of the processing image data is used, it may be a weighted average value weighted according to the distance from the whiteout or blackout pixel rather than a simple average, as in the first embodiment. Alternatively, the median value of the processing image data corresponding to the plurality of pixels may be calculated, and, as in the first embodiment, the main image data corresponding to the whiteout or blackout pixels may be replaced by this median value.
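- To make these replacement variants concrete, the sketch below replaces a clipped (whiteout or blackout) value of the main image data with a simple average, a distance-weighted average, or the median of the processing image data from the four pixels of one block. All array contents and distances are invented for illustration.

```python
import numpy as np

# Processing image data from the four pixels of one block (target or reference block)
block_pixels = np.array([210.0, 190.0, 205.0, 200.0])
# Distance of each of those pixels from the whiteout/blackout pixel (assumed values)
distances = np.array([1.0, 1.0, 2.0, 2.0])

# (a) simple average of the processing image data
replacement_mean = np.mean(block_pixels)

# (b) weighted average, weighting closer pixels more heavily
weights = 1.0 / distances
replacement_weighted = np.average(block_pixels, weights=weights)

# (c) median (intermediate value) of the processing image data
replacement_median = np.median(block_pixels)

main_image_block = np.array([255.0, 250.0, 80.0, 75.0])   # 255.0 = whiteout pixel
main_image_block[0] = replacement_weighted                 # replace the clipped value
```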
- as the first correction process, the correction unit 322 replaces all of the main image data in which whiteout or blackout occurs in the target block of the main image data, using the processing image data acquired in a plurality of blocks of the processing image data, according to any one of the following (i) to (iv).
- the correction unit 322 replaces the main image data of the target block in which whiteout or blackout has occurred with the processing image data acquired in a plurality of reference blocks of the processing image data corresponding to the positions around the whiteout or blackout area. Even when there are a plurality of whiteout or blackout pixels in the target block of the main image data, the main image data of those pixels is replaced with the same processing image data acquired in the plurality of reference blocks of the processing image data.
- when whiteout or blackout occurs in the target block of the main image data, the correction unit 322 replaces the main image data in which the whiteout or blackout has occurred with the same processing image data acquired in a plurality of reference blocks that correspond to the same subject element (for example, a mountain) as the subject element in which the whiteout or blackout occurred and that are selected from the reference blocks of the most frequently set imaging condition (the fourth imaging condition in this example).
- of the processing image data corresponding to the plurality of pixels acquired in the plurality of reference blocks according to (i) or (ii) above, the correction unit 322 may use the processing image data corresponding to a pixel adjacent to the pixel in which whiteout or blackout has occurred in the target block to replace the main image data in which the whiteout or blackout has occurred.
- the correction unit 322 may replace the main image data in which whiteout or blackout has occurred with image data generated based on the processing image data corresponding to the plurality of pixels acquired in the plurality of reference blocks according to (i) or (ii) above.
- when an average value of the processing image data is used, it may be a weighted average value weighted according to the distance from the whiteout or blackout pixel rather than a simple average, as in the first embodiment. Alternatively, the median value of the processing image data corresponding to the plurality of pixels may be calculated, and, as in the first embodiment, the main image data corresponding to the whiteout or blackout pixels may be replaced by this median value.
- as the first correction process, the correction unit 322 replaces all of the main image data in which whiteout or blackout occurs in the target block of the main image data, using the processing image data acquired in one block of the processing image data, according to any one of the following (i) to (iii).
- the correction unit 322 replaces the main image data of the target block in which whiteout or blackout has occurred with the processing image data acquired in one of the reference blocks of the processing image data corresponding to the positions around the whiteout or blackout area. When there are a plurality of whiteout or blackout pixels in the target block of the main image data, the main image data of those pixels is replaced with different processing image data acquired in that one reference block of the processing image data.
- when whiteout or blackout occurs in the target block of the main image data, the correction unit 322 replaces the main image data in which the whiteout or blackout has occurred with different processing image data acquired in one reference block that corresponds to the same subject element (for example, a mountain) as the subject element in which the whiteout or blackout occurred and that is selected from the reference blocks of the most frequently set imaging condition (the fourth imaging condition in this example).
- the correction unit 322 may replace the main image data in which whiteout or blackout has occurred with image data generated based on the processing image data corresponding to the plurality of pixels acquired in the one reference block according to (i) or (ii) above.
- when an average value of the processing image data is used, it may be a weighted average value weighted according to the distance from the whiteout or blackout pixel rather than a simple average, as in the first embodiment. Alternatively, the median value of the processing image data corresponding to the plurality of pixels may be calculated, and, as in the first embodiment, the main image data corresponding to the whiteout or blackout pixels may be replaced by this median value.
- as the first correction process, the correction unit 322 replaces all of the main image data in which whiteout or blackout occurs in the target block of the main image data, using the processing image data acquired in a plurality of blocks of the processing image data, according to any one of the following (i) to (iv).
- the correction unit 322 replaces the main image data of the target block in which whiteout or blackout has occurred with the processing image data acquired in a plurality of reference blocks of the processing image data corresponding to the positions around the whiteout or blackout area. When there are a plurality of whiteout or blackout pixels in the target block of the main image data, the main image data of those pixels is replaced with different processing image data acquired in the plurality of reference blocks of the processing image data.
- when whiteout or blackout occurs in the target block of the main image data, the correction unit 322 replaces the main image data in which the whiteout or blackout has occurred with different processing image data acquired in a plurality of reference blocks that correspond to the same subject element (for example, a mountain) as the subject element in which the whiteout or blackout occurred and that are selected from the reference blocks of the most frequently set imaging condition (the fourth imaging condition in this example). When there are a plurality of whiteout or blackout pixels in the target block of the main image data, the main image data of those pixels is replaced with different processing image data acquired in the plurality of reference blocks of the processing image data.
- the correction unit 322 may replace the main image data in which whiteout or blackout has occurred with image data generated based on the processing image data corresponding to the plurality of pixels acquired in the plurality of reference blocks according to (i) or (ii) above.
- when an average value of the processing image data is used, it may be a weighted average value weighted according to the distance from the whiteout or blackout pixel rather than a simple average, as in the first embodiment. Alternatively, the median value of the processing image data corresponding to the plurality of pixels may be calculated, and, as in the first embodiment, the main image data corresponding to the whiteout or blackout pixels may be replaced by this median value.
- the control unit 34 determines which of the above-described correction modes to use based on, for example, settings made via the operation member 36 (including operation menu settings). Note that the control unit 34 may also determine which correction mode to use according to the imaging scene mode set in the camera 1 and the type of the detected subject element.
- the control unit 34 further performs the following second correction processing on the correction unit 322 as necessary before image processing, focus detection processing, subject detection (subject element detection) processing, and processing for setting imaging conditions. Make it.
- the correction unit 322 does not perform the second correction process, and the generation unit 323 performs image processing using main image data of a plurality of reference pixels Pr that are not subjected to the second correction processing.
- the imaging condition applied in the target pixel P is set as the first imaging condition.
- the imaging conditions applied to some of the plurality of reference pixels Pr are the first imaging conditions, and the imaging conditions applied to the remaining reference pixels Pr are the second imaging conditions.
- in this case, the correction unit 322 corresponding to the block 111a to which the reference pixel Pr to which the second imaging condition is applied belongs performs the second correction process on the main image data of that reference pixel Pr as in the following (Example 1) to (Example 3).
- the generation unit 323 refers to the main image data of the reference pixel Pr to which the first imaging condition is applied and the main image data of the reference pixel Pr after the second correction process, and generates the main image data of the target pixel P. Perform image processing to calculate.
- (Example 1) For example, when only the ISO sensitivity differs between the first imaging condition and the second imaging condition, with the ISO sensitivity of the first imaging condition being 100 and the ISO sensitivity of the second imaging condition being 800, the correction unit 322 corresponding to the block 111a to which the reference pixel Pr to which the second imaging condition is applied belongs applies 100/800 to the main image data of that reference pixel Pr as the second correction process. This reduces the difference between the main image data due to the difference in the imaging conditions.
- (Example 2) For example, when only the frame rate differs between the first imaging condition and the second imaging condition (the charge accumulation time being the same), with the frame rate of the first imaging condition being 30 fps and the frame rate of the second imaging condition being 60 fps, the correction unit 322 corresponding to the block 111a to which the reference pixel Pr to which the second imaging condition is applied belongs adopts, for the main image data of that reference pixel Pr (that is, the main image data of the second imaging condition (60 fps)), the main image data of the frame image whose acquisition start timing is closest to that of the frame image acquired under the first imaging condition (30 fps); this is the second correction process. Thereby, the difference between the main image data due to the difference in the imaging conditions is reduced.
- alternatively, the second correction process may be an interpolation calculation of the main image data of frame images of the second imaging condition (60 fps) whose acquisition start timing is close to that of the frame image acquired under the first imaging condition (30 fps).
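- For the frame-rate case, one way to picture the selection and the interpolation is shown below: among the 60 fps frames, the sketch either adopts the one whose acquisition start time is closest to that of the 30 fps frame, or linearly interpolates between the two 60 fps frames that bracket it. The timestamps and frame contents are invented, and the linear interpolation is an assumption.

```python
import numpy as np

# Acquisition start times (ms) of frames under each condition (assumed values)
start_times_30fps = np.array([0.0, 33.3, 66.7])
start_times_60fps = np.array([0.0, 16.7, 33.3, 50.0, 66.7])
frames_60fps = [np.full((4, 4), v) for v in (10.0, 12.0, 14.0, 16.0, 18.0)]

def adopt_closest_frame(target_time):
    """Adopt the 60 fps frame whose acquisition start timing is closest
    to that of the 30 fps frame acquired under the first imaging condition."""
    idx = int(np.argmin(np.abs(start_times_60fps - target_time)))
    return frames_60fps[idx]

def interpolate_frames(target_time):
    """Alternatively, interpolate between the two 60 fps frames whose
    acquisition start timings bracket the 30 fps timing."""
    idx = int(np.searchsorted(start_times_60fps, target_time))
    idx = int(np.clip(idx, 1, len(frames_60fps) - 1))
    t0, t1 = start_times_60fps[idx - 1], start_times_60fps[idx]
    w = (target_time - t0) / (t1 - t0)
    return (1 - w) * frames_60fps[idx - 1] + w * frames_60fps[idx]

aligned = adopt_closest_frame(start_times_30fps[1])        # closest-timing frame
aligned_interp = interpolate_frames(start_times_30fps[1])  # interpolated frame
```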
- on the other hand, the correction unit 322 corresponding to the block 111a to which the reference pixel Pr to which the first imaging condition is applied belongs performs the second correction process on the main image data of that reference pixel Pr as in the above-described (Example 1) to (Example 3).
- similar to the generation unit 33c of the image processing unit 33 in the first embodiment, the generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing.
- FIG. 25 schematically shows the processing of main image data (hereinafter referred to as first image data) from each pixel included in a partial region of the imaging surface to which the first imaging condition is applied (hereinafter referred to as first imaging region 141) and of main image data (hereinafter referred to as second image data) from each pixel included in a partial region of the imaging surface to which the second imaging condition is applied (hereinafter referred to as second imaging region 142).
- the first image data captured under the first imaging condition is output from each pixel included in the first imaging region 141, and the second image data captured under the second imaging condition is output from each pixel included in the second imaging region 142.
- the first image data is output to the correction unit 322 corresponding to the block 111a to which the pixel that generated the first image data belongs, among the correction units 322 provided in the image processing chip 114.
- here, the plurality of correction units 322 respectively corresponding to the plurality of blocks 111a to which the pixels that generate the first image data belong are collectively referred to as the first processing unit 151.
- the first processing unit 151 performs the first correction process and the second correction process, or the first correction process or the second correction process on the first image data as necessary.
- the second image data is output to the correction unit 322 corresponding to the block 111a to which the pixel that generated the second image data belongs among the correction units 322 provided in the processing chip 114.
- likewise, the plurality of correction units 322 respectively corresponding to the plurality of blocks 111a to which the pixels that generate the second image data belong are collectively referred to as the second processing unit 152.
- the second processing unit 152 performs the first correction process and the second correction process, or the first correction process or the second correction process on the second image data as necessary.
- in the first correction process, for example, when the target block of the main image data is included in the first imaging region 141, the above-described first correction process, that is, the replacement process, is performed by the first processing unit 151 as shown in FIG. 25.
- the image data in which whiteout or blackout occurs in the target block of the main image data is replaced with the second image data from the reference block included in the second imaging area 142 of the processing image data.
- the first processing unit 151 receives, for example, the second image data from the reference block of the processing image data as information 182 from the second processing unit 152.
- in the second correction process, for example, the second image data from the reference pixels Pr included in the second imaging region 142 is subjected to the above-described second correction process by the second processing unit 152, as shown in FIG. 25.
- the second processing unit 152 receives, from the first processing unit 151, for example, information 181 about the first imaging condition necessary for reducing the difference between the image data due to the difference in the imaging condition.
- conversely, the first image data from the reference pixels Pr included in the first imaging region 141 is subjected to the above-described second correction process by the first processing unit 151.
- the first processing unit 151 receives information on the second imaging condition necessary for reducing the difference between the image data due to the difference in the imaging condition from the second processing unit 152.
- the first processing unit 151 does not perform the second correction process on the first image data from the reference pixel Pr.
- the second processing unit 152 does not perform the second correction process on the second image data from the reference pixel Pr.
- note that both the image data of the first imaging condition and the image data of the second imaging condition may be corrected by the first processing unit 151 and the second processing unit 152, respectively. That is, by performing the second correction process on each of the image data at the target position under the first imaging condition, the image data of the first imaging condition among the image data at the reference positions, and the image data of the second imaging condition among the image data at the reference positions, the discontinuity of the image based on the difference between the first imaging condition and the second imaging condition may be alleviated.
- for example, 400/100 is applied as the second correction process to the image data of the reference pixel Pr acquired under the first imaging condition (ISO sensitivity 100), and 400/800 is applied as the second correction process to the image data of the reference pixel Pr acquired under the second imaging condition (ISO sensitivity 800). In this way, the difference between the image data due to the difference in the imaging conditions is reduced.
- the pixel data of the pixel of interest undergoes a second correction process that is multiplied by 100/400 after the color interpolation process.
- the pixel data of the pixel of interest after the color interpolation process can be changed to the same value as when the image is captured under the first imaging condition. Furthermore, in the above (Example 1), the degree of the second correction process may be changed depending on the distance from the boundary between the first area and the second area. Compared to the case of (Example 1), the rate at which the image data increases or decreases by the second correction process can be reduced, and the noise generated by the second correction process can be reduced. Although the above (Example 1) has been described above, the above (Example 2) can be similarly applied.
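- Where the degree of the second correction process is varied with the distance from the boundary between the first area and the second area, a simple sketch is to blend the full correction coefficient toward 1 as the pixel moves away from the boundary. The fall-off width and the linear blend are assumptions.

```python
import numpy as np

def distance_weighted_correction(pixel_value, full_coefficient,
                                 distance_from_boundary, falloff=8.0):
    """Apply the second correction process with a strength that decreases with
    distance from the boundary: weight 1 at the boundary, 0 beyond `falloff` pixels."""
    weight = np.clip(1.0 - distance_from_boundary / falloff, 0.0, 1.0)
    coefficient = weight * full_coefficient + (1.0 - weight) * 1.0
    return pixel_value * coefficient

# Reference pixel of the second imaging condition (ISO 800); the full coefficient
# 400/800 would align it with ISO 400, but the correction weakens with distance.
corrected_near = distance_weighted_correction(960.0, 400 / 800, distance_from_boundary=1)
corrected_far = distance_weighted_correction(960.0, 400 / 800, distance_from_boundary=7)
```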
- the generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing based on the image data from the first processing unit 151 and the second processing unit 152, and outputs the image data after the image processing.
- note that, when the target pixel P is located in the second imaging region 142, the first processing unit 151 may perform the second correction process on the first image data from all the pixels included in the first imaging region 141, or may perform the second correction process only on the first image data from those pixels of the first imaging region 141 that may be used for the interpolation of the target pixel P in the second imaging region 142. Similarly, when the target pixel P is located in the first imaging region 141, the second processing unit 152 may perform the second correction process on the second image data from all the pixels included in the second imaging region 142, or may perform the second correction process only on the second image data from those pixels of the second imaging region 142 that may be used for the interpolation of the target pixel P in the first imaging region 141.
- the lens movement control unit 34d of the control unit 34 performs the focus detection process using signal data (image data) corresponding to a predetermined position (focus point) on the imaging screen. When different imaging conditions are set for the divided areas and the focus point of the AF operation is located at the boundary portion of the divided areas, that is, when the focus point straddles the first area and the second area, the lens movement control unit 34d of the control unit 34 causes the correction unit 322 to perform the second correction process on the signal data for focus detection of at least one of the areas, as described below in 2-2.
- the correction unit 322 performs the second correction process.
- the lens movement control unit 34d of the control unit 34 performs focus detection processing using the signal data from the focus detection pixels indicated by the frame 170 as they are.
- the lens movement control unit 34d of the control unit 34 causes the correction unit 322 corresponding to the block 111a to which the pixels to which the second imaging condition is applied belong, among the pixels in the frame 170, to perform the second correction process as in the following (Example 1) to (Example 3). Then, the lens movement control unit 34d of the control unit 34 performs the focus detection process using the signal data of the pixels to which the first imaging condition is applied and the signal data after the second correction process.
- (Example 1) For example, when only the ISO sensitivity differs between the first imaging condition and the second imaging condition, with the ISO sensitivity of the first imaging condition being 100 and the ISO sensitivity of the second imaging condition being 800, 100/800 is applied to the signal data of the second imaging condition as the second correction process. Thereby, the difference between the signal data due to the difference in the imaging conditions is reduced.
- (Example 2) For example, when only the frame rate differs between the first imaging condition and the second imaging condition (the charge accumulation time being the same), with the frame rate of the first imaging condition being 30 fps and the frame rate of the second imaging condition being 60 fps, the second correction process is to adopt, from the signal data of the second imaging condition (60 fps), the signal data of the frame image whose acquisition start timing is closest to that of the frame image acquired under the first imaging condition (30 fps). Thereby, the difference between the signal data due to the difference in the imaging conditions is reduced.
- alternatively, the second correction process may be an interpolation calculation of the signal data of frame images of the second imaging condition (60 fps) whose acquisition start timing is close to that of the frame image acquired under the first imaging condition (30 fps).
- note that imaging conditions with only a slight difference may be regarded as the same imaging condition.
- in the above example, the second correction process is performed on the signal data of the second imaging condition among the signal data; however, the second correction process may instead be performed on the signal data of the first imaging condition among the signal data. Alternatively, the second correction process may be performed on both the signal data of the first imaging condition and the signal data of the second imaging condition so as to reduce the difference between the two sets of signal data after the second correction process.
- FIG. 26 is a diagram schematically showing processing of the first signal data and the second signal data related to the focus detection processing.
- the first signal data captured under the first imaging condition is output from each pixel included in the first imaging region 141, and the second signal data captured under the second imaging condition is output from each pixel included in the second imaging region 142.
- the first signal data from the first imaging area 141 is output to the first processing unit 151.
- the second signal data from the second imaging region 142 is output to the second processing unit 152.
- the first processing unit 151 performs the first correction process and the second correction process, or the first correction process or the second correction process on the first signal data of the main image data as necessary.
- the second processing unit 152 performs the first correction process and the second correction process, or the first correction process or the second correction process on the second signal data of the main image data as necessary. .
- in the first correction process, for example, when the target block of the main image data is included in the first imaging region 141, the above-described first correction process, that is, the replacement process, is performed by the first processing unit 151 as shown in FIG. 26.
- the first signal data in which whiteout or blackout occurs in the target block of the main image data is replaced with the second signal data from the reference block included in the second imaging region 142 of the processing image data.
- the first processing unit 151 receives the second signal data from the reference block of the processing image data, for example, as the information 182 from the second processing unit 152.
- the second processing unit 152 performs processing.
- the second processing unit 152 performs the second correction process described above on the second signal data from the pixels included in the second imaging region 142.
- the second processing unit 152 receives, from the first processing unit 151, for example, information 181 about the first imaging condition necessary for reducing the difference between the signal data due to the difference in the imaging condition.
- the first processing unit 151 does not perform the second correction process on the first signal data.
- the first processing unit 151 performs processing.
- the first processing unit 151 performs the second correction process described above on the first signal data from the pixels included in the first imaging region 141. Note that the first processing unit 151 receives information about the second imaging condition necessary for reducing the difference between the signal data due to the difference in the imaging condition from the second processing unit 152.
- the second processing unit 152 does not perform the second correction process on the second signal data.
- the first processing unit 151 and the second processing unit 152 perform processing.
- the first processing unit 151 performs the above-described second correction process on the first signal data from the pixels included in the first imaging region 141, and the second processing unit 152 performs the above-described second correction process on the second signal data from the pixels included in the second imaging region 142.
- the lens movement control unit 34d performs the focus detection process based on the signal data from the first processing unit 151 and the second processing unit 152, and outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result.
- the object detection unit 34a of the control unit 34 causes the correction unit 322 to perform the second correction process on the image data of at least one region within the search range 190.
- the correction unit 322 does not perform the second correction process.
- the object detection unit 34a of the control unit 34 performs subject detection processing using image data constituting the search range 190 as it is.
- the object detection unit 34a of the control unit 34 causes the correction unit 322 corresponding to the block 111a to which the pixels to which the second imaging condition is applied belong, among the image data in the search range 190, to perform the second correction process as in the above-described (Example 1) to (Example 3) for the focus detection process. Then, the object detection unit 34a of the control unit 34 performs the subject detection process using the image data of the pixels to which the first imaging condition is applied and the image data after the second correction process.
- FIG. 27 is a diagram schematically showing processing of the first image data and the second image data related to the subject detection processing.
- the first processing unit 151 performs the first correction process and the second correction process, or the first correction process or the second correction process on the first image data of the main image data as necessary.
- the second processing unit 152 performs the first correction process and the second correction process, or the first correction process or the second correction process on the second image data of the main image data as necessary. .
- in the first correction process, for example, when the target block of the main image data is included in the first imaging region 141, the above-described first correction process, that is, the replacement process, is performed by the first processing unit 151 as shown in FIG. 27.
- the first image data in which whiteout or blackout occurs in the target block of the main image data is replaced with the second image data from the reference block included in the second imaging region 142 of the processing image data.
- the first processing unit 151 receives, for example, the second image data from the reference block of the processing image data as information 182 from the second processing unit 152.
- the second correction process performed by the first processing unit 151 and/or the second processing unit 152 is the same as the second correction process described above with reference to FIG. 26 for the case of performing the focus detection process.
- the object detection unit 34a performs processing for detecting a subject element based on the image data from the first processing unit 151 and the second processing unit 152, and outputs a detection result.
- the setting unit 34b of the control unit 34 causes the correction unit 322 corresponding to the block 111a to which the pixels to which the second imaging condition is applied belong to perform the second correction process as in the above-described (Example 1) to (Example 3) for the focus detection process. Then, the setting unit 34b of the control unit 34 performs the exposure calculation process using the image data after the second correction process.
- FIG. 28 is a diagram schematically illustrating processing of the first image data and the second image data according to setting of imaging conditions such as exposure calculation processing.
- in the first correction process, for example, when the target block of the main image data is included in the first imaging region 141, the above-described first correction process, that is, the replacement process, is performed by the first processing unit 151 as shown in FIG. 28.
- the first image data in which whiteout or blackout occurs in the target block of the main image data is replaced with the second image data from the reference block included in the second imaging region 142 of the processing image data.
- the first processing unit 151 receives, for example, the second image data from the reference block of the processing image data as information 182 from the second processing unit 152.
- the second correction process performed by the first processing unit 151 and/or the second processing unit 152 is the same as the second correction process described above with reference to FIG. 26 for the case of performing the focus detection process.
- the setting unit 34b performs imaging condition calculation processing such as exposure calculation processing based on the image data from the first processing unit 151 and the second processing unit 152, divides the imaging screen of the imaging unit 32 into a plurality of regions including the detected subject elements based on the calculation result, and resets the imaging conditions for the plurality of regions.
- according to the second embodiment described above, the following operational effects can be obtained. (1) Since the preprocessing (the first correction process and the second correction process) of the image data can be performed in parallel by the plurality of correction units 322, the processing load on each correction unit 322 can be reduced.
- (2) Since the preprocessing of the image data can be performed in parallel by the plurality of correction units 322, the processing burden on each correction unit 322 can be reduced, the preprocessing is completed in a short time owing to the parallel processing, and the time until the focus detection process in the lens movement control unit 34d starts can be shortened, which contributes to speeding up the focus detection process.
- (3) Since the preprocessing of the image data can be performed in parallel by the plurality of correction units 322, the processing burden on each correction unit 322 can be reduced, the preprocessing is completed in a short time owing to the parallel processing, and the time until the subject detection process in the object detection unit 34a starts can be shortened, which contributes to speeding up the subject detection process.
- (4) Since the preprocessing of the image data can be performed in parallel by the plurality of correction units 322, the processing burden on each correction unit 322 can be reduced, the preprocessing is completed in a short time owing to the parallel processing, and the time until the imaging condition setting process in the setting unit 34b starts can be shortened, which contributes to speeding up the imaging condition setting process.
- the control unit 34 uses the first image for display and the second image for detection.
- An imaging condition set in the first imaging area for capturing the first image is referred to as a first imaging condition
- an imaging condition set in the second imaging area for capturing the second image is referred to as a second imaging condition.
- the control unit 34 may make the first imaging condition different from the second imaging condition.
- FIG. 29 is a diagram schematically illustrating processing of the first image data and the second image data.
- the first image data captured under the first imaging condition is output from each pixel included in the first imaging region 141, and the second image data captured under the second imaging condition is output from each pixel included in the second imaging region 142.
- the first image data from the first imaging area 141 is output to the first processing unit 151.
- the second image data from the second imaging region 142 is output to the second processing unit 152.
- the first processing unit 151 performs the first correction process and the second correction process, or the first correction process or the second correction process on the first image data as necessary.
- the second processing unit 152 performs the first correction process and the second correction process, or the first correction process or the second correction process on the second image data as necessary.
- the first processing unit 151 does not perform the second correction process on the first image data from the reference pixels Pr included in the first imaging region.
- the second processing unit 152 does not perform the second correction process on the second image data used for the focus detection process, the subject detection process, and the exposure calculation process. However, the second processing unit 152 performs the second correction process for reducing the difference between the image data due to the difference between the first imaging condition and the second imaging condition on the second image data used for the interpolation of the first image data.
- the second processing unit 152 outputs the second image data after the second correction processing to the first processing unit 151 as indicated by an arrow 182. Note that the second processing unit 152 may output the second image data after the second correction processing to the generation unit 323 as indicated by a dashed arrow 183.
- the second processing unit 152 receives, from the first processing unit 151, for example, information 181 about the first imaging condition necessary for reducing the difference between the image data due to the difference in the imaging condition.
- The generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing based on the first image data from the first processing unit 151 and the second image data that has undergone the second correction process in the second processing unit 152, and outputs the image data after the image processing.
- the object detection unit 34a performs processing for detecting a subject element based on the second image data from the second processing unit 152, and outputs a detection result.
- The setting unit 34b performs imaging condition calculation processing such as exposure calculation processing based on the second image data from the second processing unit 152, and based on the calculation result, divides the imaging screen of the imaging unit 32 into a plurality of regions including the detected subject elements and resets the imaging conditions for the plurality of regions.
- The lens movement control unit 34d performs the focus detection process based on the second signal data from the second processing unit 152, and outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result.
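- As an illustration of the second correction used before interpolation, the sketch below assumes the imaging conditions differ only in ISO sensitivity and exposure time and that a simple ratio-based gain makes second-image pixels comparable with first-image pixels; the function and the numeric values are hypothetical, not the patented procedure.

```python
import numpy as np


def second_correction(value: float,
                      src_iso: float, src_exposure: float,
                      dst_iso: float, dst_exposure: float) -> float:
    # Scale a pixel captured under the source (second) imaging condition so
    # that it is comparable with pixels captured under the destination
    # (first) imaging condition.
    return value * (dst_iso * dst_exposure) / (src_iso * src_exposure)


# Interpolating a target pixel P of the first image from neighbouring
# second-image pixels after the second correction (values are illustrative).
neighbours = [800.0, 820.0, 790.0, 810.0]
corrected = [second_correction(v, src_iso=800, src_exposure=1 / 100,
                               dst_iso=100, dst_exposure=1 / 30)
             for v in neighbours]
interpolated_P = float(np.mean(corrected))
```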
- Next, a case where the first imaging condition set in the first imaging area differs depending on the area of the imaging screen while the second imaging condition set in the second imaging area is the same over the entire second imaging area of the imaging screen will be described with reference to FIG. 29.
- From each pixel included in the first imaging area 141, first image data captured under a first imaging condition that varies depending on the area of the imaging screen is output, and from each pixel included in the second imaging area 142, second image data captured under the second imaging condition that is the same over the entire second imaging area is output.
- the first image data from the first imaging area 141 is output to the first processing unit 151.
- the second image data from the second imaging region 142 is output to the second processing unit 152.
- The first processing unit 151 performs the first correction process and/or the second correction process on the first image data as necessary, and the second processing unit 152 performs the first correction process and/or the second correction process on the second image data as necessary.
- the first imaging condition set in the first imaging area 141 differs depending on the area of the imaging screen. That is, the first imaging condition differs depending on the partial area in the first imaging area 141.
- Therefore, on the first image data from a reference pixel Pr captured under an imaging condition different from that of the target pixel P, the first processing unit 151 performs a second correction process similar to the second correction process described in 1-2 above.
- On the first image data from a reference pixel Pr captured under the same imaging condition as the target pixel P, the first processing unit 151 does not perform the second correction process.
- Since the second imaging condition set in the second imaging area 142 is the same over the entire second imaging area of the imaging screen, the second processing unit 152 does not perform the second correction process on the second image data used for the focus detection process, the subject detection process, and the exposure calculation process. For the second image data used for interpolating the first image data, the second processing unit 152 performs a second correction process for reducing the difference between image data caused by the difference between the imaging condition for the target pixel P included in the first imaging area 141 and the second imaging condition. The second processing unit 152 outputs the second image data after the second correction process to the first processing unit 151 (arrow 182).
- the second processing unit 152 may output the second image data after the second correction processing to the generation unit 323 (arrow 183).
- The second processing unit 152 receives, for example from the first processing unit 151, the information 181 about the imaging condition for the target pixel P included in the first imaging area 141 that is necessary for reducing the difference between image data caused by the difference in imaging conditions.
- The generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing based on the first image data from the first processing unit 151 and the second image data that has undergone the second correction process in the second processing unit 152, and outputs the image data after the image processing.
- the object detection unit 34a performs processing for detecting a subject element based on the second image data from the second processing unit 152, and outputs a detection result.
- The setting unit 34b performs imaging condition calculation processing such as exposure calculation processing based on the second image data from the second processing unit 152, and based on the calculation result, divides the imaging screen of the imaging unit 32 into a plurality of regions including the detected subject elements and resets the imaging conditions for the plurality of regions.
- The lens movement control unit 34d performs the focus detection process based on the second signal data from the second processing unit 152, and outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result.
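- When the first imaging condition varies by partial area, the correction of a reference pixel Pr toward the condition of the target pixel P might look roughly like the following sketch; the Condition record, the area keys, and the gain formula are assumptions for illustration only, not the method of the specification.

```python
from dataclasses import dataclass


@dataclass
class Condition:
    iso: float
    exposure: float


def to_target_condition(value: float, ref: Condition, target: Condition) -> float:
    # Second-correction-style scaling of a reference pixel Pr so that it can be
    # combined with the target pixel P captured under a different condition.
    return value * (target.iso * target.exposure) / (ref.iso * ref.exposure)


conditions = {  # imaging condition per partial area of the first imaging area
    "area_a": Condition(iso=100, exposure=1 / 60),
    "area_b": Condition(iso=400, exposure=1 / 250),
}

pr_value, pr_area = 512.0, "area_b"   # reference pixel Pr
p_area = "area_a"                     # target pixel P

if pr_area != p_area:                 # conditions differ: apply the correction
    pr_value = to_target_condition(pr_value, conditions[pr_area], conditions[p_area])
```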
- Next, a case where the first imaging condition set in the first imaging area 141 is the same over the entire first imaging area 141 of the imaging screen while the second imaging condition set in the second imaging area 142 differs depending on the area of the imaging screen will be described with reference to FIG. 29.
- From each pixel included in the first imaging area 141, first image data captured under the first imaging condition that is the same over the entire first imaging area 141 of the imaging screen is output, and from each pixel included in the second imaging area 142, second image data captured under a second imaging condition that varies depending on the area of the imaging screen is output.
- the first image data from the first imaging area 141 is output to the first processing unit 151.
- the second image data from the second imaging region 142 is output to the second processing unit 152.
- The first processing unit 151 performs the first correction process and/or the second correction process on the first image data as necessary, and the second processing unit 152 performs the first correction process and/or the second correction process on the second image data as necessary.
- Since the first imaging condition set in the first imaging area 141 is the same over the entire first imaging area 141 of the imaging screen, the first processing unit 151 does not perform the second correction process on the first image data from the reference pixels Pr included in the first imaging area 141.
- Since the second imaging condition set in the second imaging area 142 differs depending on the area of the imaging screen, the second processing unit 152 performs the second correction process on the second image data as follows. For example, the second processing unit 152 performs the second correction process on second image data captured under a certain imaging condition so that the difference between the second image data after the second correction process and second image data captured under another imaging condition different from that certain imaging condition is reduced.
- Further, for the second image data used for interpolating the first image data, the second processing unit 152 performs a second correction process for reducing the difference between image data caused by the difference between the imaging condition for the target pixel P included in the first imaging area 141 and the second imaging condition.
- the second processing unit 152 outputs the second image data after the second correction processing to the first processing unit 151 (arrow 182).
- the second processing unit 152 may output the second image data after the second correction processing to the generation unit 323 (arrow 183).
- The second processing unit 152 receives, for example from the first processing unit 151, the information 181 about the imaging condition for the target pixel P included in the first imaging area that is necessary for reducing the difference between image data caused by the difference in imaging conditions.
- The generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing based on the first image data from the first processing unit 151 and the second image data that has undergone the second correction process in the second processing unit 152, and outputs the image data after the image processing.
- The object detection unit 34a performs the process of detecting the subject elements based on the second image data captured under a certain imaging condition and the second image data captured under another imaging condition, both of which have undergone the second correction process in the second processing unit 152, and outputs the detection result.
- The setting unit 34b performs imaging condition calculation processing such as exposure calculation processing based on the second image data captured under a certain imaging condition and the second image data captured under another imaging condition, both of which have undergone the second correction process in the second processing unit 152. Based on the calculation result, the setting unit 34b divides the imaging screen of the imaging unit 32 into a plurality of regions including the detected subject elements and resets the imaging conditions for the plurality of regions.
- The lens movement control unit 34d performs the focus detection process based on the second signal data captured under a certain imaging condition and the second signal data captured under another imaging condition, both of which have undergone the second correction process in the second processing unit 152, and outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result.
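- A sketch of normalizing second image data captured under two different conditions onto one common reference scale before it is handed to the detection, exposure calculation, or focus detection processes; the reference condition and the ratio model are assumptions for illustration, not values from the specification.

```python
import numpy as np


def normalize_to_common(data: np.ndarray, iso: float, exposure: float,
                        ref_iso: float, ref_exposure: float) -> np.ndarray:
    # Bring data captured under one condition onto the scale of a common
    # reference condition so evaluation values are not biased at boundaries.
    return data * (ref_iso * ref_exposure) / (iso * exposure)


tile_a = np.array([500.0, 520.0])     # captured under a certain imaging condition
tile_b = np.array([1900.0, 2100.0])   # captured under another imaging condition

ref = dict(ref_iso=100, ref_exposure=1 / 60)
evaluation_input = np.concatenate([
    normalize_to_common(tile_a, iso=100, exposure=1 / 60, **ref),
    normalize_to_common(tile_b, iso=400, exposure=1 / 60, **ref),
])
```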
- Next, a case where the first imaging condition set in the first imaging area 141 varies depending on the area of the imaging screen and the second imaging condition set in the second imaging area 142 also varies depending on the area of the imaging screen will be described with reference to FIG. 29.
- From each pixel included in the first imaging area 141, first image data captured under a first imaging condition that varies depending on the area of the imaging screen is output, and from each pixel included in the second imaging area 142, second image data captured under a second imaging condition that varies depending on the area of the imaging screen is output.
- the first image data from the first imaging area 141 is output to the first processing unit 151.
- the second image data from the second imaging region 142 is output to the second processing unit 152.
- The first processing unit 151 performs the first correction process and/or the second correction process on the first image data as necessary, and the second processing unit 152 performs the first correction process and/or the second correction process on the second image data as necessary.
- the first imaging condition set in the first imaging area 141 differs depending on the area of the imaging screen. That is, the first imaging condition differs depending on the partial area in the first imaging area 141.
- Therefore, on the first image data from a reference pixel Pr captured under an imaging condition different from that of the target pixel P, the first processing unit 151 performs a second correction process similar to the second correction process described in 1-2 above.
- On the first image data from a reference pixel Pr captured under the same imaging condition as the target pixel P, the first processing unit 151 does not perform the second correction process.
- Since the second imaging condition set in the second imaging area 142 differs depending on the area of the imaging screen, the second processing unit 152 performs the second correction process on the second image data in the same manner as in the example of 3. described above.
- The generation unit 323 performs image processing such as pixel defect correction processing, color interpolation processing, contour enhancement processing, and noise reduction processing based on the first image data from the first processing unit 151 and the second image data that has undergone the second correction process in the second processing unit 152, and outputs the image data after the image processing.
- The object detection unit 34a performs the process of detecting the subject elements based on the second image data captured under a certain imaging condition and the second image data captured under another imaging condition, both of which have undergone the second correction process in the second processing unit 152, and outputs the detection result.
- The setting unit 34b performs imaging condition calculation processing such as exposure calculation processing based on the second image data captured under a certain imaging condition and the second image data captured under another imaging condition, both of which have undergone the second correction process in the second processing unit 152. Based on the calculation result, the setting unit 34b divides the imaging screen of the imaging unit 32 into a plurality of regions including the detected subject elements and resets the imaging conditions for the plurality of regions.
- The lens movement control unit 34d performs the focus detection process based on the second signal data captured under a certain imaging condition and the second signal data captured under another imaging condition, both of which have undergone the second correction process in the second processing unit 152, and outputs a drive signal for moving the focus lens of the imaging optical system 31 to the in-focus position based on the calculation result.
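- Across the four cases above, the only question for each data stream is whether the imaging condition is uniform over the area that supplies it. A toy decision helper could look like the following; the condition labels are placeholders, not terms from the specification.

```python
def needs_second_correction(conditions_in_area: set[str]) -> bool:
    # If only one imaging condition applies to the whole area, its image data
    # can be used as-is; otherwise the second correction process is required.
    return len(conditions_in_area) > 1


first_area_conditions = {"condition_1a", "condition_1b"}   # varies by partial area
second_area_conditions = {"condition_2"}                    # uniform

print(needs_second_correction(first_area_conditions))   # True
print(needs_second_correction(second_area_conditions))  # False
```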
- In the above description, one correction unit 322 corresponds to one block 111a (unit section). However, one correction unit 322 may correspond to one composite block (composite section) having a plurality of blocks 111a (unit sections).
- In this case, the correction unit 322 sequentially corrects the image data from the pixels belonging to the plurality of blocks 111a included in the composite block. Even when a plurality of correction units 322 are provided so as to correspond to composite blocks each having a plurality of blocks 111a, the second correction process on the image data can be performed in parallel by the plurality of correction units 322, so the processing burden on each correction unit 322 can be reduced and an appropriate image can be generated in a short time from the image data generated in the regions with different imaging conditions.
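- A sketch of this composite-block variant, assuming a hypothetical correct_composite() call represents one correction unit 322 working through its blocks 111a sequentially while different composite blocks are processed in parallel; the clipping placeholder stands in for the actual correction.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def correct_composite(blocks: list[np.ndarray]) -> list[np.ndarray]:
    # One correction unit 322 corrects the image data of the blocks 111a in
    # its composite block one after another (placeholder correction).
    return [np.clip(b, 0, 4095) for b in blocks]


def correct_all(composites: list[list[np.ndarray]]) -> list[list[np.ndarray]]:
    # Different composite blocks are handled by different correction units,
    # so they can still be processed in parallel.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(correct_composite, composites))
```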
- In the above description, the generation unit 323 is provided inside the imaging unit 32A. However, the generation unit 323 may be provided outside the imaging unit 32A; even in that case, the same operational effects as described above can be obtained.
- The multilayer imaging element 100A described above includes, in addition to the backside-illuminated imaging chip 111, the signal processing chip 112, and the memory chip 113, an image processing chip 114 that performs the above-described preprocessing and image processing.
- However, instead of providing the image processing chip 114 in the multilayer imaging element 100A, the image processing function may be provided in the signal processing chip 112.
- In the above description, the second processing unit 152 receives, from the first processing unit 151, the information about the first imaging condition necessary for reducing the difference between image data caused by the difference in imaging conditions, and the first processing unit 151 receives, from the second processing unit 152, the information about the second imaging condition necessary for reducing the difference between image data caused by the difference in imaging conditions.
- However, the second processing unit 152 may receive the information about the first imaging condition from the driving unit 32b or the control unit 34, and likewise the first processing unit 151 may receive the information about the second imaging condition from the driving unit 32b or the control unit 34.
- the imaging optical system 31 described above may include a zoom lens or tilt lens.
- For example, the lens movement control unit 34d adjusts the angle of view of the imaging optical system 31 by moving the zoom lens in the optical axis direction. That is, by moving the zoom lens, the image formed by the imaging optical system 31 can be adjusted, for example to capture a wide range of subjects or to capture a distant subject at a larger size. Further, the lens movement control unit 34d can adjust the distortion of the image formed by the imaging optical system 31 by moving the tilt lens in a direction orthogonal to the optical axis. In order to adjust the state of the image formed by the imaging optical system 31 (for example, the state of the angle of view or the state of distortion of the image), it is preferable to use image data preprocessed as described above, and the above-described preprocessing may be performed on that basis.
- In the above description, the correction unit 33b corrects the main image data of a target block in which overexposure or blackout occurs, based on the processing image data.
- However, the correction unit 33b may also treat a block in which no overexposure or blackout occurs in the main image data as the target block.
- That is, the first correction process may also be performed on main image data for which the outputs of the pixels 85b and 85d show neither whiteout nor blackout.
- In the first correction process, part of the main image data is corrected using the processing image data.
- For example, as the first correction process, the outputs of the pixels 85b and 85d may be corrected using the imaging condition under which the processing image data was captured. Further, even when the outputs of the pixels 85b and 85d show neither whiteout nor blackout, the outputs of the pixels 85b and 85d that produced the main image data may be corrected using the signal values (pixel values) of the processing image data as the first correction process.
- In this correction, it suffices to correct the signal values of the pixels 85b and 85d of the block 85 for which the first imaging condition was set in capturing the main image data so that the difference between the corrected signal values and the signal values of the pixels of the block for which the fourth imaging condition was set becomes smaller (smoothed) than the difference between the signal values before correction and the signal values of the pixels of that block.
- When whiteout or blackout occurs in the image data (pixel signal values) of the block 85 that outputs the main image data, the control unit 34 corrects the image data of the block 85 using the pixel values of the processing image data. That is, when whiteout or blackout occurs in the image data of the block 85 that output the main image data, the control unit 34 selects the pixels that output the processing image data and replaces the image data in which whiteout or blackout occurred with the image data (pixel signal values) of those pixels. As a condition for using the pixel values of the processing image data, it may be required that the image data (pixel signal values) of the block 85 that outputs the main image data are equal to or higher than a first threshold or equal to or lower than a second threshold.
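- A minimal sketch of such a replacement; the threshold values, array shapes, and pixel values are illustrative assumptions, not values taken from the specification.

```python
import numpy as np

FIRST_THRESHOLD = 4095    # assumed saturation (whiteout) level
SECOND_THRESHOLD = 16     # assumed blackout level


def first_correction(main: np.ndarray, processing: np.ndarray) -> np.ndarray:
    # Replace main-image pixels that are washed out or crushed with the
    # corresponding pixels of the processing image data.
    out = main.copy()
    bad = (main >= FIRST_THRESHOLD) | (main <= SECOND_THRESHOLD)
    out[bad] = processing[bad]
    return out


main_block = np.array([4095, 2000, 10, 1800])         # block 85 of the main image
processing_block = np.array([3200, 2100, 220, 1750])  # same pixels in the processing image
print(first_correction(main_block, processing_block))  # [3200 2000  220 1800]
```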
- When the image data of the block 85 shows neither whiteout nor blackout, the control unit 34 uses the image data (pixel values) of the block 85 as it is.
- Even in that case, as the first correction process, the outputs of the pixels 85b and 85d that produced the main image data may be corrected using the imaging condition under which the processing image data was captured, or using the signal values (pixel values) of the processing image data.
- Further, the control unit 34 may select the pixels that output the processing image data and replace the image data of the block 85 with their image data (pixel signal values) even when the image data of the block 85 that output the main image data shows neither whiteout nor blackout.
- Further, the control unit 34 may perform subject recognition and perform the first correction process based on the recognition result.
- For example, the setting unit 34b sets an imaging condition different from the first imaging condition, the object detection unit 34a performs subject recognition, and the signal values of pixels that captured the same subject as the area to be corrected, which was imaged under the first imaging condition (for example, the pixels 85b and 85d), may then be used.
- By this first correction process, the image of the subject captured with the first imaging condition set is corrected as if it had been captured with the fourth imaging condition set.
Abstract
The present invention relates to an imaging device comprising: an imaging unit having a first imaging area for capturing an image under a first imaging condition and a second imaging area for capturing an image under a second imaging condition different from the first imaging condition; and a detection unit that detects a subject imaged in the first imaging area, the subject being detected on the basis of light imaged under the second imaging condition.