WO2012086127A1 - Image processing device, imaging device, and image processing method - Google Patents
Image processing device, imaging device, and image processing method
- Publication number
- WO2012086127A1 (application PCT/JP2011/006528)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/407—Control or modification of tonal gradation or of extreme levels, e.g. background level
- H04N1/4072—Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Definitions
- The present invention relates to a technique for reducing, by image processing after capture, the unnecessary light (flare) that appears in a captured image when a bright subject is photographed.
- When a bright subject is photographed, unnecessary light in the captured image becomes visible and may adversely affect image quality.
- When a diffractive lens is used, the unnecessary light may be larger than when an aspheric lens is used.
- Unnecessary light such as a double image may also be generated.
- As conventional techniques for reducing such unnecessary light by image processing, there are, for example, the techniques described in Patent Documents 1 and 2.
- In Patent Document 1, unnecessary light is estimated by convolving a PSF (Point Spread Function) image of an unnecessary diffraction order (0th-order or 2nd-order light) of a diffractive lens with the captured image. The unnecessary light is then reduced by subtracting the estimated unnecessary light component from the captured image.
- In Patent Document 2, a plurality of images with different exposure times are captured; unnecessary light is estimated from the short-exposure image, and the estimated unnecessary light component is subtracted from the long-exposure image to reduce the unnecessary light in it.
- With the method of Patent Document 1, however, when the luminance of the captured image is saturated, the estimated unnecessary light component becomes smaller than the actual one. Moreover, unnecessary light that is generated regardless of the diffraction order becomes more prominent as the subject becomes brighter. That is, when the subject is bright and the luminance of the captured image is saturated, it is difficult to appropriately reduce the unnecessary light.
- With the method of Patent Document 2, since a plurality of images with different exposure times must be captured, it may be difficult to accurately estimate unnecessary light for moving images, where the subject moves and the captured image varies with the shooting timing. In addition, since a plurality of images must be processed, the computational cost also increases.
- The present invention has been made to solve the above problem, and its object is to provide an image processing apparatus, an imaging apparatus, and an image processing method that can generate, from a single captured image of a bright subject, an output image in which unnecessary light is appropriately reduced compared with the captured image.
- To achieve this object, an image processing apparatus according to one aspect of the present invention includes: a luminance saturation position detection unit that detects a luminance saturation position, i.e., a position in a captured image where the luminance value is larger than a predetermined value; a luminance gradient detection unit that detects a luminance gradient in the periphery of the luminance saturation position; a light source image estimation unit that estimates, based on the image at the luminance saturation position, a PSF (Point Spread Function) image corresponding to the luminance saturation position, and the luminance gradient, the luminance distribution on the imaging surface formed by the subject imaged at the luminance saturation position; and an unnecessary light subtracting unit that subtracts the luminance value of unnecessary light from the captured image.
- an imaging apparatus includes the above-described image processing apparatus, and an imaging unit that includes an optical system and an imaging element and outputs the captured image.
- The present invention can be realized not only as such an image processing apparatus, but also as an image processing method whose steps correspond to the operations of the characteristic components of the apparatus.
- the present invention can also be realized as a program that causes a computer to execute each step included in the image processing method.
- Such a program can be distributed via a non-transitory recording medium such as a CD-ROM (Compact Disc Read-Only Memory) or a transmission medium such as the Internet.
- FIG. 1 is a schematic diagram showing a configuration example of an optical system in an embodiment of the present invention.
- FIG. 2A is a diagram showing a PSF image of the optical system according to the embodiment of the present invention.
- FIG. 2B is a diagram showing a PSF image of the optical system according to the embodiment of the present invention.
- FIG. 3A is a diagram showing a luminance transition of the PSF image of the optical system according to the embodiment of the present invention.
- FIG. 3B is a diagram showing a luminance transition of the PSF image of the optical system according to the embodiment of the present invention.
- FIG. 4 is a diagram showing a subject in the embodiment of the present invention.
- FIG. 5A is a diagram showing a captured image in the embodiment of the present invention.
- FIG. 5B is a diagram showing a captured image in the embodiment of the present invention.
- FIG. 6A is a diagram showing a luminance transition of a captured image in the embodiment of the present invention.
- FIG. 6B is a diagram showing a luminance transition of the captured image in the embodiment of the present invention.
- FIG. 7A is a block diagram illustrating a configuration of the imaging device according to the embodiment of the present invention.
- FIG. 7B is a block diagram showing an example of the configuration of the light source image estimation unit in the embodiment of the present invention.
- FIG. 8 is a diagram showing the luminance saturation position of the captured image in the embodiment of the present invention.
- FIG. 9A is a diagram showing a light source image model in the embodiment of the present invention.
- FIG. 9B is a diagram showing a luminance transition of the light source image model in the embodiment of the present invention.
- FIG. 10 is a diagram showing a transition of the differential value of the luminance value in the peripheral portion of the luminance saturation position of the captured image in the embodiment of the present invention.
- FIG. 11 is a graph showing the relationship between the luminance gradient of the captured image and the maximum luminance value of the light source image in the embodiment of the present invention.
- FIG. 12A is a diagram showing an output image in the embodiment of the present invention.
- FIG. 12B is a diagram showing a luminance transition of the output image in the embodiment of the present invention.
- FIG. 13A is a diagram showing another example of the subject in the embodiment of the present invention.
- FIG. 13B is a diagram showing another example of a captured image in the embodiment of the present invention.
- FIG. 13C is a diagram showing another example of an output image in the embodiment of the present invention.
- FIG. 13D is a diagram showing another example of the luminance transition of the output image in the embodiment of the present invention.
- FIG. 14A is a diagram showing another example of the subject in the embodiment of the present invention.
- FIG. 14B is a diagram showing another example of a captured image in the embodiment of the present invention.
- FIG. 14C is a diagram showing another example of the output image in the embodiment of the present invention.
- FIG. 14D is a diagram showing another example of the luminance transition of the output image in the embodiment of the present invention.
- FIG. 15A is a flowchart showing image processing in the embodiment of the present invention.
- FIG. 15B is a flowchart illustrating an example of a light source image estimation process in the embodiment of the present invention.
- An image processing apparatus according to one aspect of the present invention includes: a luminance saturation position detection unit that detects a luminance saturation position, i.e., a position in a captured image where the luminance value is larger than a predetermined value; a luminance gradient detection unit that detects a luminance gradient in the periphery of the luminance saturation position; a light source image estimation unit that estimates, based on the image at the luminance saturation position, the PSF image corresponding to the luminance saturation position, and the luminance gradient, the luminance distribution on the imaging surface formed by the subject imaged at the luminance saturation position, such that the estimated luminance value increases as the luminance gradient increases; and an unnecessary light subtracting unit that subtracts the luminance value of unnecessary light from the captured image using the estimated luminance distribution.
- the luminance distribution on the imaging surface formed by the subject imaged at the luminance saturation position can be estimated so that the luminance value increases as the luminance gradient of the captured image increases. That is, even when the correct luminance distribution of the subject cannot be obtained from the photographed image because the luminance is saturated, the luminance distribution on the imaging surface can be accurately estimated using the luminance gradient.
- As a result, an output image in which unnecessary light is appropriately reduced can be generated from a single captured image. That is, even when the luminance of the subject image is saturated in a captured image of a bright subject, unnecessary light can be appropriately reduced from the captured image.
- Preferably, the light source image estimation unit includes: a light source image model creation unit that creates a light source image model by convolving the image at the luminance saturation position with the PSF image corresponding to the luminance saturation position; and a light source image gain adjustment unit that estimates the luminance distribution on the imaging surface by adjusting the luminance of the light source image model such that the luminance value increases as the luminance gradient increases.
- a light source image model can be created by performing convolution integration between an image at a luminance saturation position and a PSF image.
- Preferably, the light source image gain adjustment unit estimates, using a predetermined relationship between the luminance gradient of the captured image and the maximum luminance value on the imaging surface, the maximum luminance value corresponding to the detected luminance gradient, and adjusts the luminance values of the light source image model using the estimated maximum luminance value.
- the maximum luminance value can be accurately estimated using a predetermined relationship between the luminance gradient of the captured image and the maximum luminance value on the imaging surface.
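Where a concrete feel helps, the gain-adjustment step described above can be sketched as follows. The patent only states that a predetermined relationship between the luminance gradient and the maximum luminance on the imaging surface is used (cf. FIG. 11); the linear form, its coefficients, and the profile data below are illustrative assumptions, not values from the patent.

```python
# Hypothetical gradient-to-peak-luminance relationship; the linear
# coefficients a and b are placeholders, not values from the patent.
def estimate_max_luminance(gradient, a=2.0, b=0.5):
    return a * gradient + b

def adjust_gain(model, gradient):
    """Scale a light source image model normalized to peak 1.0 by the
    estimated true maximum luminance on the imaging surface."""
    imax = estimate_max_luminance(gradient)
    return [v * imax for v in model]

model = [0.1, 0.5, 1.0, 0.5, 0.1]   # normalized light source image model
adjusted = adjust_gain(model, gradient=0.4)  # peak becomes a*0.4 + b = 1.3
```

The point of the scaling is that the saturated captured image caps the model's peak at the saturation level, while the gradient just outside the saturated region still carries information about how bright the source really was.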
- Preferably, the captured image is an image captured using an optical system including a diffractive optical element, and the luminance gradient detection unit detects, in the periphery of the luminance saturation position, the luminance gradient on the side opposite to the optical axis of the optical system.
- In an optical system including a diffractive optical element, large unnecessary light is generated on the optical axis side of the luminance saturation position. That is, the luminance gradient on the side opposite to the optical axis is less affected by unnecessary light than that on the optical axis side. Therefore, by detecting the luminance gradient on the side opposite to the optical axis, the influence of unnecessary light on the detected luminance gradient can be suppressed, and the luminance distribution on the imaging surface can be estimated more accurately.
- an image processing apparatus may be configured as an integrated circuit.
- In the following, a description will be given using a captured image in which, because a bright subject was photographed with an optical system including a diffractive optical element, the subject image is saturated and unnecessary light is visible.
- FIG. 1 schematically shows a configuration example of an optical system in an embodiment of the present invention.
- the optical system 200 includes a lens 201 having a negative power and a diffractive lens 202 having a positive power.
- the optical axis 210 of the optical system 200 intersects the imaging surface 209 of the imaging element 208.
- the diffractive lens 202 corresponds to a diffractive optical element.
- the diffractive lens 202 includes a first member 203 and a second member 204 made of different materials.
- One surface of the first member 203 is formed in an aspheric shape.
- a diffraction grating 206 is formed in an annular shape around the optical axis.
- the surface of the diffraction grating 206 is covered with an aspherical shape by the second member 204.
- an image of the subject is formed on the image pickup surface 209 of the image pickup device 208.
- the image of the subject formed on the imaging surface is captured by the image sensor 208 as a captured image.
- the image sensor 208 is configured by a CCD (Charge Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor), or the like.
- the diaphragm 211 adjusts light rays incident on the imaging surface 209 of the imaging element 208.
- The grating thickness d of the diffraction grating 206 is given by Expression (1), the first-order blaze condition d = mλ / (n1 − n2).
- n1 represents the refractive index of the first member 203
- n2 represents the refractive index of the second member 204.
- ⁇ is a wavelength.
- the optical system 200 is an optical system for imaging applications. Therefore, ⁇ takes a value in the visible wavelength range of about 400 nm to 700 nm.
- M is the diffraction order.
- Here, m = 1. That is, the grating thickness d of the diffraction grating 206 is designed to maximize the diffraction efficiency of the first-order diffracted light.
- It is known that, by choosing materials whose refractive indices n1 and n2 keep the required grating thickness d nearly constant over the entire visible wavelength range, the diffractive lens 202 can realize high first-order diffraction efficiency across that range (see, for example, Japanese Patent No. 4077508).
- In the optical system 200, the first member 203 and the second member 204 are made of materials whose refractive indices n1 and n2 form such a combination.
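As a rough numeric illustration of this design relationship, the sketch below uses the first-order blaze condition d = mλ / (n1 − n2), which is the standard form for a two-material diffraction grating; the text references Expression (1) without reproducing it, so treat this as an assumed form. The refractive-index values are made up for illustration, not the patent's materials.

```python
def grating_thickness(m, wavelength_nm, n1, n2):
    """Blaze thickness d = m * lambda / (n1 - n2) for a two-material
    diffraction grating (assumed form of Expression (1))."""
    return m * wavelength_nm / (n1 - n2)

# m = 1 and a mid-visible design wavelength; n1 and n2 are illustrative
# values with a small index gap, which is what drives the large thickness.
d = grating_thickness(1, 540.0, 1.623, 1.585)
```

A small index difference n1 − n2 forces a thick grating, which is why the choice of the two materials matters for keeping d usable across the whole visible range.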
- FIG. 2A shows a PSF image in the vicinity of an angle of view of 45 degrees of the optical system 200 of FIG.
- FIG. 2B shows a PSF image obtained by multiplying the brightness of the PSF image of FIG. 2A by 50 times so that the distribution of unnecessary light components in the PSF image is easy to see.
- the left direction of the image is the optical axis direction. That is, there is an image position corresponding to the position where the optical axis and the imaging surface intersect in the left direction of the image.
- PSF is a function representing the response of the optical system to a point light source.
- a PSF image is an image representing PSF. That is, the PSF image corresponds to an image obtained by photographing a point light source.
- FIG. 3A shows the luminance transition in the horizontal direction of the image around the luminance maximum position of the PSF image of FIG. 2A.
- the vertical axis represents the luminance value
- the horizontal axis represents the image position.
- FIG. 3B shows a luminance transition when the scale of the vertical axis in FIG. 3A is enlarged.
- Large unnecessary light is thus generated in an image captured using an optical system including a diffractive optical element. Here, unnecessary light is light that is not originally intended to appear in the image; it degrades image quality. Unnecessary light includes not only diffracted light of orders other than the design order, such as 0th-order or 2nd-order diffracted light (hereinafter also called "unnecessary diffracted light"), but also the unnecessary component of the first-order diffracted light, which is the design order.
- the unnecessary light of the first-order diffracted light is unnecessary light generated due to the influence of each annular zone of the diffraction grating. That is, even in the first-order diffracted light that is the designed order, large unnecessary light is generated in principle as compared with the aspherical lens.
- this large unnecessary light is conspicuous at a position closer to the optical axis than the position of the maximum luminance of the PSF image.
- When the luminance at the subject image position is saturated in a captured image of a bright subject, large unnecessary light is generated at positions on the optical axis side of the subject image position, adversely affecting the captured image.
- FIG. 4 shows a subject in the embodiment of the present invention.
- 5A and 5B show captured images in the embodiment of the present invention.
- FIGS. 5A and 5B show captured images when the bright subject shown in FIG. 4 is taken with the optical system of FIG.
- FIGS. 5A and 5B show a fluorescent lamp image captured at an angle of view of around 45 degrees.
- the fluorescent lamp is assumed to be a light source having a uniform rectangular luminance as shown in FIG.
- Note that the captured images shown in FIGS. 5A and 5B are simulated by convolving the subject image of FIG. 4 with the PSF image of FIG. 2A.
- FIG. 5A shows a photographed image when the maximum luminance value of the fluorescent lamp is 0.7.
- FIG. 5B shows a captured image when the maximum luminance value of the fluorescent lamp is 1.3.
- the luminance saturation level of the captured image is 1.0.
- the luminance of the captured image is saturated at an image position corresponding to a position where the luminance value exceeds 1.0 on the imaging surface 209.
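The simulation just described — the subject's luminance convolved with the PSF, then clipped at the saturation level 1.0 — can be sketched in one dimension. The profiles below are toy values, not the data behind FIGS. 5A and 5B.

```python
def convolve_1d(signal, psf):
    """Full discrete convolution of two luminance profiles."""
    out = [0.0] * (len(signal) + len(psf) - 1)
    for i, s in enumerate(signal):
        for j, p in enumerate(psf):
            out[i + j] += s * p
    return out

def simulate_capture(subject, psf, saturation=1.0):
    """Imaging model: PSF blur followed by sensor clipping."""
    return [min(v, saturation) for v in convolve_1d(subject, psf)]

# A "fluorescent lamp" profile with peak luminance 1.3 saturates after
# capture, as in FIG. 5B (toy numbers).
subject = [0.0, 1.3, 1.3, 1.3, 0.0]
psf = [0.1, 0.8, 0.1]   # normalized point spread
captured = simulate_capture(subject, psf)
```

The clipping step is what destroys the information the later stages must recover: every pixel above 1.0 ends up at exactly 1.0 regardless of its true luminance.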
- 6A and 6B show the luminance transition in the horizontal direction of the image in the vicinity of the center of the captured image of the fluorescent lamp in each of FIGS. 5A and 5B.
- the vertical axis represents the luminance value
- the horizontal axis represents the image position.
- At image positions where the luminance of the captured image is saturated (hereinafter, "luminance saturation positions"), the actual luminance distribution on the imaging surface 209 of the image sensor 208 is lost. That is, the luminance distribution at the luminance saturation position of the captured image differs from the actual luminance distribution on the imaging surface 209. Therefore, when the method of Patent Document 1 is used, unnecessary light is estimated from an incorrect subject brightness obtained at the luminance saturation position. As a result, it is difficult for the method of Patent Document 1 to appropriately estimate unnecessary light and to reduce it from the captured image.
- In the following, an image processing apparatus and an imaging apparatus capable of reducing the unnecessary light that is generated when the luminance of a captured image is saturated, as in FIGS. 5B and 6B, will be described.
- Each of the embodiments described below shows a preferred specific example of the present invention. The numerical values, shapes, materials, constituent elements, their arrangement and connection, steps, and order of steps shown in the following embodiments are examples and are not intended to limit the present invention.
- The scope of the present invention is defined by the claims. Therefore, among the constituent elements in the following embodiments, those not recited in the independent claims, which represent the broadest concept of the present invention, are not necessarily required to achieve the object of the invention but are described as elements constituting a more preferable form.
- FIG. 7A is a block diagram showing the configuration of the imaging apparatus 100 according to the embodiment of the present invention.
- FIG. 7B is a block diagram showing an example of the configuration of the light source image estimation unit in the embodiment of the present invention.
- the imaging device 100 includes an imaging unit 101 and an image processing device 102.
- the imaging unit 101 includes the optical system 200 and the imaging element 208 shown in FIG.
- the imaging unit 101 images a subject and outputs a captured image It (x, y).
- x and y represent image positions in the horizontal and vertical directions, respectively.
- the image position is a position on the captured image.
- the image position indicates the position of each pixel constituting the captured image.
- the image processing apparatus 102 reduces an unnecessary light component from a captured image and outputs an output image with little deterioration in image quality due to unnecessary light.
- Specifically, the image processing apparatus 102 regards, among the subject images in the captured image, a subject image whose luminance is equal to or greater than a predetermined threshold Is as a light source image, and estimates the luminance distribution that the light source forms on the imaging surface 209. The image processing apparatus 102 then subtracts the luminance value of unnecessary light from the captured image based on the estimated luminance distribution, thereby generating an output image with less image quality degradation due to unnecessary light than the captured image.
- the luminance value of unnecessary light is the luminance value of an image (unnecessary light image) formed by unnecessary light.
- the image processing apparatus 102 is not necessarily provided in the imaging apparatus 100.
- the image processing apparatus 102 may acquire a captured image from an imaging apparatus that includes the imaging unit 101.
- The image processing apparatus 102 includes a luminance saturation position detection unit 110, a luminance gradient detection unit 112, a light source image estimation unit 105, and an unnecessary light subtracting unit 106.
- the luminance saturation position detection unit 110 detects a luminance saturation position that is a position where the luminance value is larger than a predetermined value in the captured image. That is, the luminance saturation position detection unit 110 detects an image position having a luminance value larger than the threshold value as a luminance saturation position in the captured image.
- the luminance gradient detection unit 112 detects the luminance gradient at the periphery of the luminance saturation position.
- the luminance gradient indicates a spatial change rate of the luminance value in the captured image.
- the light source image estimation unit 105 uses the light source on the imaging surface 209 so that the luminance value increases as the luminance gradient increases, based on the image at the luminance saturation position, the PSF image corresponding to the luminance saturation position, and the luminance gradient. Estimate the image.
- a light source image is a luminance distribution formed by a light source. The light source corresponds to a subject imaged at a luminance saturation position.
- As shown in FIG. 7B, the light source image estimation unit 105 includes a light source image model creation unit 111, which has a convolution integration unit 115 and a PSF extraction unit 116, and a light source image gain adjustment unit 113. Note that the configuration of the light source image estimation unit 105 illustrated in FIG. 7B is an example, and the light source image estimation unit 105 need not be configured in exactly this manner.
- The unnecessary light subtracting unit 106 subtracts the luminance value of unnecessary light from the captured image using the estimated light source image on the imaging surface 209. That is, it generates an output image with less image degradation due to unnecessary light than the captured image by subtracting, from the captured image, the unnecessary light component obtained from the estimated light source image.
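A minimal sketch of this subtraction step, assuming the per-pixel unnecessary-light luminance has already been derived from the estimated light source image (both arrays below are illustrative):

```python
def subtract_unnecessary_light(captured, unwanted):
    """Subtract the estimated flare luminance per pixel, flooring at 0."""
    return [max(c - u, 0.0) for c, u in zip(captured, unwanted)]

captured = [0.05, 0.30, 1.00, 1.00, 0.25, 0.04]
unwanted = [0.05, 0.20, 0.00, 0.00, 0.15, 0.04]  # estimated flare component
output = subtract_unnecessary_light(captured, unwanted)
```

Flooring at zero matters: an overestimated flare component would otherwise produce negative luminance values in the output image.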
- the luminance saturation position detection unit 110 detects an image position having a luminance value larger than the luminance threshold Is in the captured image output from the imaging unit 101 as a luminance saturation position. Then, the luminance saturation position detection unit 110 stores data indicating the detected luminance saturation position in a memory or the like.
- the luminance threshold value Is is set to about 0.98 when the range of luminance values that can be expressed in the captured image is 0 to 1.0, for example.
- the luminance threshold Is may be set according to the imaging characteristics of the image sensor 208 and the like.
- the luminance saturation position detection unit 110 detects the subject imaged at the image position where the luminance is saturated as a light source.
- The luminance of a very bright subject, such as a fluorescent lamp or a lamp, is often saturated relative to other subjects. Therefore, regarding the subject imaged at the luminance saturation position as a light source is a valid assumption.
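The detection step can be sketched directly from the description: every pixel whose luminance exceeds the threshold Is (about 0.98 for a 0 to 1.0 range, per the text) is marked as a luminance saturation position. The image data below is illustrative.

```python
def detect_saturation(image, threshold=0.98):
    """Return (row, col) positions whose luminance exceeds the threshold."""
    return [(y, x)
            for y, row in enumerate(image)
            for x, value in enumerate(row)
            if value > threshold]

image = [[0.10, 0.99, 1.00],
         [0.20, 0.97, 0.99]]
positions = detect_saturation(image)   # treated as the light-source mask
```

The resulting position list is exactly what the light source shape image If(x, y) is built from in the next step: constant luminance Ic at these positions, zero elsewhere.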
- FIG. 8 shows the luminance saturation position detected from the captured image of FIG. 5B.
- the light source image model creation unit 111 creates a light source image model by performing convolution integration between the image at the luminance saturation position and the PSF image corresponding to the luminance saturation position.
- the light source image model creation unit 111 includes a convolution integration unit 115 and a PSF extraction unit 116.
- Specifically, the convolution integration unit 115 creates a light source shape image If(x, y) in which the luminance value at the luminance saturation position is set to a constant value Ic and the luminance values at all other image positions are set to zero. Then, for each image position, the convolution integration unit 115 creates the light source image model Im(x, y) by convolving the light source shape image If(x, y) with the PSF image Ipsf(x, y) extracted by the PSF extraction unit 116, as shown in Expression (2): Im(x, y) = Σu Σv If(u, v) · Ipsf(x − u, y − v).
- u and v represent image positions in the horizontal and vertical directions, respectively.
- As described above, the convolution integration unit 115 calculates Im(x, y) by direct convolution, but the calculation need not be performed in this way.
- For example, the convolution integration unit 115 may obtain Im(x, y) as follows. First, it applies a Fourier transform to If(x, y) and Ipsf(x, y) using an FFT (Fast Fourier Transform) or the like. It then multiplies the transformed data in the frequency domain. Finally, it calculates the light source image model Im(x, y) by applying an inverse Fourier transform to the product. The convolution integration unit 115 may decide whether to perform the calculation in the spatial domain or the frequency domain in consideration of the calculation amount.
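The two equivalent routes — direct convolution versus transform, multiply, inverse-transform — can be sketched with a naive DFT standing in for the FFT (a real implementation would use an FFT library, and 2-D images rather than the illustrative 1-D profiles used here):

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (stand-in for an FFT)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def convolve_via_dft(a, b):
    """Linear convolution via zero-padded circular convolution:
    IDFT(DFT(a) * DFT(b))."""
    n = len(a) + len(b) - 1
    a = list(a) + [0.0] * (n - len(a))
    b = list(b) + [0.0] * (n - len(b))
    fa, fb = dft(a), dft(b)
    return [v.real for v in dft([x * y for x, y in zip(fa, fb)], inverse=True)]

light_shape = [0.0, 1.0, 1.0, 0.0]   # If profile (constant Ic = 1.0)
psf = [0.25, 0.5, 0.25]              # Ipsf profile
model = convolve_via_dft(light_shape, psf)   # light source image model Im
```

Zero-padding both inputs to length len(a) + len(b) − 1 before the transforms makes the circular convolution coincide with the linear convolution of Expression (2).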
- the PSF extraction unit 116 extracts a PSF image corresponding to each image position from a plurality of PSF images stored in advance in a memory or the like.
- Alternatively, the PSF extraction unit 116 may extract, as the PSF image corresponding to each image position, the PSF image of the block to which that image position belongs, from among PSF images stored for each block (for example, 64 × 64 pixels) of image positions.
- a PSF image may be stored for each square block, rectangular block, or ring-shaped block centered on the optical axis. Thereby, the memory amount for storing the PSF image can be reduced.
- the block size is determined in consideration of the balance between the amount of memory installed in the imaging apparatus and the estimation accuracy of the light source image.
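Looking up a per-block PSF as described above amounts to integer division of the pixel coordinates (the block size 64 is the text's example; the key layout is an assumption for illustration):

```python
def psf_block_key(x, y, block_size=64):
    """Return the index of the block (e.g. 64x64 pixels) to which image
    position (x, y) belongs; a PSF image stored once per block would be
    fetched with this key instead of per pixel, reducing memory."""
    return (x // block_size, y // block_size)
```

A dictionary mapping such keys to PSF arrays would then replace per-pixel PSF storage.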
- FIG. 9A shows a light source image model created using Expression (2) from the PSF image shown in FIG. 2A and the light source shape image of FIG.
- FIG. 9B shows the luminance transition of the light source image model of FIG. 9A.
- the light source image model is normalized so that the maximum luminance value is 1.0. Normalization may be performed according to the system specifications at the time of implementation.
- The luminance gradient detection unit 112 detects the luminance gradient Dm of the captured image in the peripheral portion of the luminance saturation position detected by the luminance saturation position detection unit 110. Specifically, the luminance gradient detection unit 112 first calculates, as differential values, the absolute values of the differences between the luminance values of adjacent image positions.
- FIG. 10 shows the transition of the differential value of the luminance value at the peripheral portion of the luminance saturation position, which is calculated from the luminance transition of FIG.
- the luminance gradient detector 112 detects the maximum value of the differential value on the optical axis side of the luminance saturation position as the luminance gradient Dm.
- the luminance gradient Dm is not necessarily the maximum value of the differential value around the luminance saturation position.
- the luminance gradient Dm may be an average value of the differential values around the luminance saturation position.
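A one-dimensional sketch of detecting Dm (which side of the saturated span corresponds to the optical axis is scene-dependent and assumed here; using the mean instead of the max, as the text notes, is an equally valid variant):

```python
import numpy as np

def luminance_gradient(profile, sat_start):
    """Compute differential values (absolute differences of adjacent
    luminance values) over the region before the saturated span that
    starts at index sat_start, and return their maximum as Dm."""
    diffs = np.abs(np.diff(profile[:sat_start]))
    return float(diffs.max())
```

In a full 2-D implementation the same differencing would be applied along the direction toward (or away from) the optical axis at the edge of the saturated region.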
- FIG. 11 shows the relationship between the luminance gradient Dm and the actual maximum luminance value of the light source image on the imaging surface 209.
- the first data 301 indicates the relationship between the luminance gradient Dm and the actual maximum luminance value on the imaging surface 209 when the fluorescent lamp shown in FIG. 4 is imaged. It can be seen from the first data 301 that the luminance gradient Dm and the maximum luminance value are in a substantially proportional relationship.
- The second data 302 indicates the relationship between the luminance gradient Dm and the maximum luminance value when the fluorescent lamp shown in FIG. 4 is photographed with its width reduced to approximately half. From the second data 302, it can be seen that even when the width of the fluorescent lamp is halved, the relationship between the luminance gradient Dm and the maximum luminance value does not change significantly.
- Thus, by detecting the luminance gradient Dm in the peripheral portion of the luminance saturation position, the light source image gain adjustment unit 113 can estimate the actual maximum luminance value of the light source on the imaging surface 209 regardless of the shape of the light source.
- The light source image gain adjustment unit 113 can accurately estimate the maximum luminance value from the luminance gradient Dm if the relationship between the luminance gradient Dm and the maximum luminance value is stored in advance. Then, by adjusting the luminance value of the light source image model using the maximum luminance value estimated in this way, the image processing apparatus 102 can reduce unnecessary light from the captured image.
- the light source image gain adjustment unit 113 calculates the maximum luminance value Imax on the imaging surface 209 from the detected luminance gradient Dm using Equation (3).
- This equation (3) corresponds to a predetermined relationship between the luminance gradient of the captured image and the luminance maximum value on the imaging surface 209.
- the values of these parameters A and B are set so that the maximum luminance value Imax obtained from the luminance gradient Dm is about 10% smaller than the actual maximum luminance value (first data 301 in FIG. 11).
- the values of the parameters A and B may be set so that the maximum luminance value Imax obtained from the luminance gradient Dm is the same as the actual maximum luminance value (first data 301 in FIG. 11).
- the values of the parameters A and B are preferably set such that the maximum luminance value Imax is slightly smaller than the actual maximum luminance value indicated by the first data 301.
- the light source image gain adjustment unit 113 can suppress the estimation of the maximum luminance value larger than the actual value when the light source shape changes.
- the unnecessary light subtracting unit 106 can suppress excessively subtracting unnecessary light components from the captured image.
- That is, the value of parameter B in Equation (3) is set smaller than that of the approximate straight line obtained from the first data 301 in FIG. 11 using the least squares method or the like. This prevents the maximum luminance value Imax from being estimated larger than the actual maximum luminance value.
- The light source image gain adjustment unit 113 creates the estimated light source image Ie (x, y) by multiplying the light source image model Im (x, y) by the maximum luminance value Imax calculated from Equation (3). In other words, the light source image gain adjustment unit 113 estimates the light source image on the imaging surface 209 by adjusting the luminance value of the light source image model Im (x, y) so that the luminance value at the luminance saturation position matches the maximum luminance value Imax.
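Under the assumption that the light source image model is normalized to a peak of 1.0 (as in FIG. 9A), the gain adjustment with Equation (3) can be sketched as follows (the parameter values in the test are hypothetical, not calibrated ones):

```python
import numpy as np

def adjust_gain(model, dm, a, b):
    """Equation (3) sketch: estimate Imax = A*Dm + B from the detected
    luminance gradient Dm, then scale the max-normalized light source
    model so its saturated peak equals Imax, yielding Ie(x, y)."""
    imax = a * dm + b
    return imax * model
```

Because the model's peak is 1.0, the scaled image's peak is exactly the estimated Imax.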
- the predetermined relationship between the luminance gradient Dm and the maximum luminance value Imax is expressed by a linear function, but may be expressed by an appropriate polynomial such as a quadratic function as necessary.
- the relationship between the luminance gradient Dm and the luminance maximum value Imax may be represented by a table in which the luminance maximum value Imax is associated with the luminance gradient Dm.
- the light source image gain adjustment unit 113 may estimate the maximum luminance value Imax corresponding to the luminance gradient Dm with reference to the table. Note that the amount of memory can be reduced when the relationship between the luminance gradient Dm and the maximum luminance value Imax is stored as a mathematical expression, compared to when the relationship is stored as a table.
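When the relationship is stored as a table rather than as Equation (3), the maximum luminance value corresponding to a detected gradient could be looked up by interpolation (the table values below are hypothetical calibration data, not from the patent):

```python
import numpy as np

def imax_from_table(dm, dm_table, imax_table):
    """Estimate Imax for a detected gradient Dm by linear interpolation
    in a precomputed (Dm, Imax) table; this trades extra memory for
    flexibility compared with storing the relationship as a formula."""
    return float(np.interp(dm, dm_table, imax_table))
```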
- the predetermined relationship between the luminance gradient Dm and the maximum luminance value Imax is stored for each image position.
- the light source image gain adjustment unit 113 can estimate the maximum luminance value more accurately according to the luminance saturation position in the captured image.
- the predetermined relationship between the luminance gradient Dm and the maximum luminance value Imax may be stored for each block (for example, 64 ⁇ 64 pixels) that is a set of image positions. In this case, the amount of memory can be reduced as compared with the case where a predetermined relationship between the luminance gradient Dm and the maximum luminance value Imax is stored for each image position.
- In this way, the light source image gain adjustment unit 113 estimates the maximum luminance value on the imaging surface 209 corresponding to the detected luminance gradient by using a predetermined relationship between the luminance gradient and the maximum luminance value. The light source image gain adjustment unit 113 then adjusts the luminance value of the light source image model using the estimated maximum luminance value.
- The estimated light source image Ie (x, y) whose luminance value has been adjusted in this way can be regarded as a restoration of the luminance distribution on the imaging surface 209 formed by the light source, which was lost due to luminance saturation in the captured image.
- the unnecessary light subtracting unit 106 subtracts the estimated light source image Ie (x, y) from the captured image It (x, y) at an image position other than the luminance saturation position, thereby reducing unnecessary light compared to the captured image. Generate an output image. That is, the unnecessary light subtracting unit 106 subtracts the luminance value at the image position other than the luminance saturation position in the estimated light source image from the luminance value at the corresponding image position of the captured image.
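The subtraction step can be sketched as follows (the clamp at zero is a safeguard assumption, not stated in the text):

```python
import numpy as np

def subtract_unnecessary_light(captured, estimated, sat_mask):
    """Subtract the estimated light source image Ie(x, y) from the
    captured image It(x, y) only at image positions other than the
    luminance saturation position: the subtraction amount is 0 inside
    the saturated region, matching the luminance-transition plots."""
    out = captured.astype(float).copy()
    out[~sat_mask] -= estimated[~sat_mask]
    return np.clip(out, 0.0, None)
```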
- FIG. 12A shows an example of an output image in the embodiment of the present invention.
- FIG. 12B shows the luminance transition in the horizontal direction of the image near the center of the output image shown in FIG. 12A.
- the solid line 311 represents the luminance transition of the output image
- the dotted line 312 represents the luminance transition of the photographed image It (x, y)
- the broken line 313 represents the luminance transition of the unnecessary light obtained from the estimated light source image Ie (x, y) (the amount subtracted from the captured image).
- the subtraction amount is 0 at a position where the luminance of It (x, y) is saturated. From FIG. 12A, it can be seen that in the output image, unnecessary light in the captured image is greatly reduced.
- FIG. 13A shows another example of the subject in the embodiment of the present invention. Specifically, FIG. 13A shows a subject whose width is about 1 ⁇ 2 that of the subject shown in FIG. More specifically, FIG. 13A shows a light source in which the width of the fluorescent lamp shown in FIG.
- FIG. 13B shows another example of a captured image in the embodiment of the present invention. Specifically, FIG. 13B shows a captured image It (x, y) when the subject shown in FIG. 13A is captured. In the captured image of FIG. 13B, the luminance at the image position where the subject is imaged is saturated, as in FIG. 5B.
- FIG. 13C shows another example of the output image in the embodiment of the present invention. Specifically, FIG. 13C shows an output image after unnecessary light is reduced from the captured image shown in FIG. 13B.
- FIG. 13D shows the luminance transition in the horizontal direction of the image near the center of the output image of FIG. 13C.
- the vertical axis represents the luminance value
- the horizontal axis represents the image position.
- the solid line 321 represents the luminance transition of the output image
- the dotted line 322 represents the luminance transition of the captured image It (x, y)
- the broken line 323 represents the luminance transition of the unnecessary light obtained from the estimated light source image Ie (x, y) (the amount subtracted from the captured image).
- FIG. 14A shows another example of the subject in the embodiment of the present invention.
- FIG. 14A shows a circular light source such as a halogen bulb.
- FIG. 14B shows another example of a captured image in the embodiment of the present invention.
- FIG. 14B shows a captured image It (x, y) when the subject shown in FIG. 14A is captured.
- the luminance at the image position where the subject is imaged is saturated, as in FIG. 5B.
- FIG. 14C shows another example of the output image in the embodiment of the present invention. Specifically, FIG. 14C shows an output image after unnecessary light is reduced from the captured image shown in FIG. 14B.
- FIG. 14D shows the luminance transition in the horizontal direction of the image near the center of the output image of FIG. 14C.
- the vertical axis represents the luminance value
- the horizontal axis represents the image position.
- the solid line 331 represents the luminance transition of the output image
- the dotted line 332 represents the luminance transition of the captured image It (x, y)
- the broken line 333 represents the luminance transition of the unnecessary light obtained from the estimated light source image Ie (x, y) (the amount subtracted from the captured image).
- When generating the output images of FIG. 13C and FIG. 14C, the light source image gain adjustment unit 113 estimates the maximum luminance value using Equation (3) with the same parameters A and B as when generating the output image of FIG. 12A. As can be seen from the output images of FIG. 13C and FIG. 14C, even when the shape of the subject changes, unnecessary light in the captured image is greatly reduced in the output image.
- FIG. 15A is a flowchart showing image processing in the embodiment of the present invention.
- In FIG. 15A, it is assumed that the imaging unit 101 has already created a captured image captured through the optical system 200.
- FIG. 15B is a flowchart showing an example of a light source image estimation process in the embodiment of the present invention.
- the luminance saturation position detection unit 110 detects the luminance saturation position in the captured image (S102).
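Step S102 amounts to thresholding the luminance values; a minimal sketch (the threshold 250 for 8-bit data is an assumed value, not from the patent):

```python
import numpy as np

def detect_saturation(image, threshold=250):
    """Mark image positions whose luminance value exceeds a
    predetermined value as the luminance saturation position (S102)."""
    return image > threshold
```

The returned boolean mask is then used both to build the light source shape image and to exclude saturated positions from the subtraction in S108.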
- the luminance gradient detection unit 112 detects the luminance gradient Dm around the luminance saturation position detected in step S102 by performing a differentiation operation (S103).
- the light source image estimation unit 105 estimates a light source image on the imaging surface 209 based on the detected luminance gradient Dm (S104).
- An example of the details of the processing in step S104 in the present embodiment is shown in FIG. 15B. Note that the processing of step S104 need not be performed exactly as shown in FIG. 15B. That is, in step S104 it is only necessary to estimate, based on the image at the luminance saturation position, the PSF image corresponding to the luminance saturation position, and the luminance gradient, the luminance distribution on the imaging surface formed by the subject imaged at the luminance saturation position, such that the luminance value increases as the luminance gradient increases.
- the flowchart shown in FIG. 15B will be described below.
- First, the PSF extraction unit 116 extracts a PSF image corresponding to the luminance saturation position detected in step S102 from a plurality of PSF images stored in advance in a memory or the like (S105). Note that the PSF extraction unit 116 may create the PSF image corresponding to the luminance saturation position by rotating a PSF image stored in the memory as necessary. Alternatively, the PSF extraction unit 116 may store the same PSF image for each block in advance and read out the PSF image of the block to which the luminance saturation position belongs.
- Next, the convolution integration unit 115 creates a light source image model by performing convolution integration of two images: the light source shape image created based on the luminance saturation position detected in step S102, and the PSF image extracted in step S105 (S106). Note that, instead of the convolution integration, the convolution integration unit 115 may create the light source image model by Fourier-transforming each of the two images using an FFT or the like, multiplying the results in the frequency domain, and then performing an inverse Fourier transform.
- Then, the light source image gain adjustment unit 113 estimates the luminance distribution of the light source image by adjusting the luminance value of the light source image model so that the luminance value increases as the luminance gradient increases (S107). Specifically, the light source image gain adjustment unit 113 estimates the maximum luminance value Imax on the imaging surface corresponding to the detected luminance gradient Dm by using a predetermined relationship between the luminance gradient of the captured image and the maximum luminance value on the imaging surface. Then, the light source image gain adjustment unit 113 estimates the luminance distribution of the light source on the imaging surface 209 as the light source image based on the light source image model and the maximum luminance value Imax.
- Finally, the unnecessary light subtracting unit 106 generates an output image with reduced unnecessary light by subtracting, at image positions other than the luminance saturation position of the captured image, the luminance value of the light source image on the imaging surface 209 estimated in step S106 from the luminance value of the captured image (S108).
- As described above, even when unnecessary visible light is generated in the captured image of a bright subject, a light source image model is estimated from the image at the luminance saturation position of the captured image and a PSF image, using the captured image obtained by a single shot. Therefore, unnecessary light can be appropriately reduced.
- Furthermore, the luminance distribution on the imaging surface 209 formed by the subject imaged at the luminance saturation position can be estimated such that the luminance value increases as the luminance gradient of the captured image increases. That is, even when the correct luminance distribution of the subject cannot be obtained from the captured image because the luminance is saturated, the luminance distribution on the imaging surface 209 can be accurately estimated using the luminance gradient. By subtracting the luminance value of the unnecessary light from the captured image using the luminance distribution on the imaging surface 209 estimated in this manner, an output image in which unnecessary light is appropriately reduced can be generated from a single captured image.
- With the image processing apparatus 102, it is possible to subtract the luminance value of unnecessary light from the captured image using the luminance distribution on the imaging surface 209 estimated based on the PSF image. In other words, since the image processing does not focus only on unnecessary diffracted light of orders other than the design order, unnecessary light can be reduced from the captured image regardless of its order.
- the maximum luminance value can be accurately estimated by using a predetermined relationship between the luminance gradient of the captured image and the maximum luminance value on the imaging surface 209.
- By adjusting the luminance value of the light source image model using the maximum luminance value thus estimated, the luminance distribution on the imaging surface 209 formed by the subject can be accurately estimated. Therefore, an output image in which unnecessary light is appropriately reduced compared with the captured image can be generated.
- The imaging device 100 and the image processing device 102 have been described above based on the embodiments; however, the present invention is not limited to these embodiments. Various modifications conceived by those skilled in the art that do not depart from the spirit of the present invention are also included within its scope.
- For example, in the above embodiment, the luminance gradient detection unit 112 detects the luminance gradient on the optical axis 210 side of the peripheral portion of the luminance saturation position as shown in FIG. However, the luminance gradient on the side opposite to the optical axis 210 may be detected instead.
- the luminance gradient detecting unit 112 may detect the maximum value of the differential value on the right side of the luminance saturation position as the luminance gradient.
- large unnecessary light is generated on the optical axis 210 side of the luminance saturation position. That is, the luminance gradient on the side opposite to the optical axis 210 is less affected by unnecessary light than the luminance gradient on the optical axis 210 side.
- the luminance gradient detection unit 112 can suppress the influence of unnecessary light on the detected luminance gradient by detecting the luminance gradient on the side opposite to the optical axis 210.
- the light source image estimation unit 105 can estimate the light source image on the imaging surface 209 more accurately.
- the light source image estimation unit 105 adjusts the luminance value of the light source image model based on the luminance gradient, but it is not always necessary to adjust the luminance value of the light source image model.
- For example, the light source image estimation unit 105 may adjust, based on the luminance gradient, the luminance value of the PSF image or the luminance value of the image at the luminance saturation position in the captured image. That is, the light source image estimation unit 105 need not necessarily be configured as shown in FIG. 7B. Even in such a case, by adjusting the luminance value of the PSF image or of the image at the luminance saturation position so that it increases as the luminance gradient increases, the light source image estimation unit 105 can accurately estimate the light source image, as in the case of adjusting the luminance value of the light source image model.
- In the above embodiment, the light source image gain adjustment unit 113 estimates the maximum luminance value from the luminance gradient; however, a coefficient for adjusting the luminance value of the light source image model or the like may be estimated instead.
- the image processing apparatus 102 creates an image with reduced unnecessary light for each of the red, green, and blue images in the same manner as described above.
- the image processing apparatus 102 may create a color image by combining red, green, and blue images with reduced unnecessary light.
- the image processing apparatus 102 may estimate the light source image by using different PSF images for each color of red, green, and blue.
- The image processing apparatus 102 may calculate a light source image model for only one color, such as green, and reduce unnecessary light for each of red, green, and blue based on the luminance gradient around the luminance saturation position using that light source image model. In this case, the image processing apparatus 102 may determine in advance, for each color, the ratio of the maximum luminance value of the light source image, and set the amount of unnecessary light to be reduced for each color based on that ratio.
- the image processing apparatus 102 can reduce unnecessary light from the captured image even in a captured image captured using an optical system that does not include a diffractive optical element.
- the luminance gradient at the periphery of the saturated image area is proportional to the maximum luminance value of the saturated light source image.
- the magnitude of the luminance gradient at the periphery of the saturated image area does not change greatly.
- the present invention can also be applied to a photographed image photographed using an optical system that does not include a diffractive optical element.
- The parameters A and B in Equation (3), which expresses the predetermined relationship between the luminance gradient of the captured image and the maximum luminance value on the imaging surface 209, may be set appropriately according to the optical system. For example, a plurality of combinations of the luminance gradient of the captured image and the maximum luminance value on the imaging surface 209 may be measured in advance, and the parameters A and B set based on an approximate straight line calculated from the measured combinations using the least squares method or the like.
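Calibrating A and B from measured pairs as suggested here could be done with an ordinary least-squares fit (the optional margin mirrors the roughly 10% safety margin discussed earlier in the text; the sample data in the test are hypothetical):

```python
import numpy as np

def fit_imax_parameters(dm_samples, imax_samples, margin=0.0):
    """Fit Imax = A*Dm + B by least squares over measured (Dm, Imax)
    pairs; optionally shrink the fitted line by `margin` so the
    estimated Imax stays slightly below the actual maximum luminance
    value, preventing over-subtraction of unnecessary light."""
    a, b = np.polyfit(dm_samples, imax_samples, 1)
    return a * (1.0 - margin), b * (1.0 - margin)
```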
- the image processing apparatus 102 in the above embodiment may be configured by one system LSI (Large Scale Integration).
- the image processing apparatus 102 may be configured by a system LSI including a luminance saturation position detection unit 110, a luminance inclination detection unit 112, a light source image estimation unit 105, and an unnecessary light subtraction unit 106.
- A system LSI is an ultra-multifunctional LSI manufactured by integrating a plurality of components on a single chip. Specifically, it is a computer system including a microprocessor, a ROM (Read Only Memory), a RAM (Random Access Memory), and the like. A computer program is stored in the ROM, and the system LSI achieves its functions by the microprocessor operating according to the computer program.
- The system LSI may also be called an IC, an LSI, a super LSI, or an ultra LSI depending on the degree of integration.
- The method of circuit integration is not limited to LSI; implementation using a dedicated circuit or a general-purpose processor is also possible. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
- The present invention can be realized not only as an image processing apparatus including such characteristic processing units, but also as an image processing method having, as its steps, the processing performed by the characteristic processing units included in the image processing apparatus. It can also be realized as a computer program that causes a computer to execute the characteristic steps included in the image processing method. Needless to say, such a computer program can be distributed via a computer-readable non-transitory recording medium such as a CD-ROM, or via a communication network such as the Internet.
- the present invention is useful for an image processing apparatus that can reduce unnecessary light from a captured image, or an imaging apparatus such as a digital still camera or a digital video camera including the image processing apparatus.
Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
101 Imaging unit
102 Image processing device
105 Light source image estimation unit
106 Unnecessary light subtraction unit
110 Luminance saturation position detection unit
111 Light source image model creation unit
112 Luminance gradient detection unit
113 Light source image gain adjustment unit
115 Convolution integration unit
116 PSF extraction unit
200 Optical system
201 Lens
202 Diffractive lens
203 First member
204 Second member
206 Diffraction grating
208 Imaging element
209 Imaging surface
210 Optical axis
211 Diaphragm
Claims (8)
- An image processing apparatus comprising: a luminance saturation position detection unit that detects a luminance saturation position, which is a position in a captured image where the luminance value is greater than a predetermined value; a luminance gradient detection unit that detects a luminance gradient in a peripheral portion of the luminance saturation position; a light source image estimation unit that estimates, based on an image at the luminance saturation position, a PSF (Point Spread Function) image corresponding to the luminance saturation position, and the luminance gradient, a luminance distribution on an imaging surface formed by a subject imaged at the luminance saturation position, such that the luminance value increases as the luminance gradient increases; and an unnecessary light subtraction unit that subtracts a luminance value of unnecessary light from the captured image using the luminance distribution.
- The image processing apparatus according to claim 1, wherein the light source image estimation unit includes: a light source image model creation unit that creates a light source image model by performing convolution integration of the image at the luminance saturation position and the PSF image corresponding to the luminance saturation position; and a light source image gain adjustment unit that estimates the luminance distribution on the imaging surface by adjusting the luminance value of the light source image model such that the luminance value increases as the luminance gradient increases.
- The image processing apparatus according to claim 2, wherein the light source image gain adjustment unit estimates a maximum luminance value on the imaging surface corresponding to the detected luminance gradient by using a predetermined relationship between the luminance gradient of a captured image and the maximum luminance value on the imaging surface, and adjusts the luminance value of the light source image model using the estimated maximum luminance value.
- The image processing apparatus according to any one of claims 1 to 3, wherein the captured image is an image captured using an optical system including a diffractive optical element, and the luminance gradient detection unit detects the luminance gradient on the side of the peripheral portion of the luminance saturation position opposite to the optical axis of the optical system.
- The image processing apparatus according to any one of claims 1 to 4, wherein the image processing apparatus is configured as an integrated circuit.
- An imaging apparatus comprising: the image processing apparatus according to any one of claims 1 to 5; and an imaging unit that has an optical system and an imaging element and outputs the captured image.
- An image processing method comprising: a luminance saturation position detection step of detecting a luminance saturation position, which is a position in a captured image where the luminance value is greater than a predetermined value; a luminance gradient detection step of detecting a luminance gradient in a peripheral portion of the luminance saturation position; a light source image estimation step of estimating, based on an image at the luminance saturation position, a PSF (Point Spread Function) image corresponding to the luminance saturation position, and the luminance gradient, a luminance distribution on an imaging surface formed by a subject imaged at the luminance saturation position, such that the luminance value increases as the luminance gradient increases; and an unnecessary light subtraction step of subtracting a luminance value of unnecessary light from the captured image using the luminance distribution.
- A program for causing a computer to execute the image processing method according to claim 7.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012513111A JP5000030B1 (ja) | 2010-12-20 | 2011-11-24 | 画像処理装置、撮像装置、及び画像処理方法 |
CN2011800051633A CN102687501A (zh) | 2010-12-20 | 2011-11-24 | 图像处理装置、摄像装置及图像处理方法 |
US13/519,372 US8610797B2 (en) | 2010-12-20 | 2011-11-24 | Image processing device, imaging device, and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-283847 | 2010-12-20 | ||
JP2010283847 | 2010-12-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012086127A1 true WO2012086127A1 (ja) | 2012-06-28 |
Family
ID=46313422
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/006528 WO2012086127A1 (ja) | 2010-12-20 | 2011-11-24 | 画像処理装置、撮像装置、及び画像処理方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US8610797B2 (ja) |
JP (1) | JP5000030B1 (ja) |
CN (1) | CN102687501A (ja) |
WO (1) | WO2012086127A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2014010726A1 (ja) * | 2012-07-12 | 2016-06-23 | 株式会社ニコン | 画像処理装置及び画像処理プログラム |
KR20200094062A (ko) * | 2019-01-29 | 2020-08-06 | 한국과학기술원 | 렌즈리스 초분광 영상 이미징 방법 및 그 장치 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102932583B (zh) * | 2012-07-19 | 2015-05-06 | 湖南源信光电科技有限公司 | 折反射全聚焦成像方法 |
KR20150118731A (ko) | 2014-04-15 | 2015-10-23 | 삼성전자주식회사 | 초음파 영상 장치 및 그 제어 방법 |
FR3024261B1 (fr) * | 2014-07-25 | 2017-11-03 | Centre Nat D'etudes Spatiales (Cnes) | Procede et dispositif de traitement d'image numerique d'une scene comportant au moins un objet sur-eclaire |
JP6494328B2 (ja) * | 2015-03-02 | 2019-04-03 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理方法、画像処理プログラム、および、記憶媒体 |
JP2017118296A (ja) * | 2015-12-24 | 2017-06-29 | キヤノン株式会社 | 撮像装置、画像処理装置、画像処理方法、画像処理プログラム、および、記憶媒体 |
KR102688619B1 (ko) * | 2016-09-30 | 2024-07-26 | 삼성전자주식회사 | 이미지 처리 방법 및 이를 지원하는 전자 장치 |
KR102574649B1 (ko) * | 2018-11-29 | 2023-09-06 | 삼성전자주식회사 | 이미지 처리 방법 및 이를 지원하는 전자 장치 |
KR20220072529A (ko) | 2020-11-25 | 2022-06-02 | 삼성전자주식회사 | 광량을 획득하는 전자 장치 및 방법 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09238357A (ja) * | 1995-12-26 | 1997-09-09 | Olympus Optical Co Ltd | 電子撮像装置 |
JPH11355636A (ja) * | 1998-06-10 | 1999-12-24 | Toshiba Corp | 撮像装置 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0477508A (ja) | 1990-07-13 | 1992-03-11 | Mitsubishi Rayon Co Ltd | アクリロニトリル系共重合体及びその制電繊維 |
JP2004120487A (ja) | 2002-09-27 | 2004-04-15 | Fuji Photo Film Co Ltd | 撮像装置 |
JP4250513B2 (ja) | 2003-12-01 | 2009-04-08 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
JP4250506B2 (ja) | 2003-10-31 | 2009-04-08 | キヤノン株式会社 | 画像処理方法、画像処理装置、画像処理プログラムおよび撮像システム |
EP1528797B1 (en) | 2003-10-31 | 2015-07-08 | Canon Kabushiki Kaisha | Image processing apparatus, image-taking system and image processing method |
US8265378B2 (en) * | 2004-04-15 | 2012-09-11 | Dolby Laboratories Licensing Corporation | Methods and systems for converting images from low dynamic to high dynamic range |
CN101995594A (zh) | 2005-08-29 | 2011-03-30 | 松下电器产业株式会社 | 衍射光学元件以及使用衍射光学元件的摄像装置 |
JP4844052B2 (ja) * | 2005-08-30 | 2011-12-21 | ソニー株式会社 | 映像信号処理装置と撮像装置および映像信号処理方法とプログラム |
JP4998056B2 (ja) * | 2006-05-11 | 2012-08-15 | セイコーエプソン株式会社 | 撮像装置、撮像システム及び撮像方法 |
JP4325703B2 (ja) * | 2007-05-24 | 2009-09-02 | ソニー株式会社 | 固体撮像装置、固体撮像装置の信号処理装置および信号処理方法、ならびに撮像装置 |
KR20100017626A (ko) * | 2007-06-07 | 2010-02-16 | 소니 주식회사 | 신호 처리 방법 및 신호 처리 장치 |
JP5047055B2 (ja) | 2008-05-19 | 2012-10-10 | キヤノン株式会社 | 画像処理装置、撮像装置及び画像処理方法 |
JP4730412B2 (ja) * | 2008-08-12 | 2011-07-20 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
-
2011
- 2011-11-24 WO PCT/JP2011/006528 patent/WO2012086127A1/ja active Application Filing
- 2011-11-24 US US13/519,372 patent/US8610797B2/en active Active
- 2011-11-24 JP JP2012513111A patent/JP5000030B1/ja active Active
- 2011-11-24 CN CN2011800051633A patent/CN102687501A/zh active Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2014010726A1 (ja) * | 2012-07-12 | 2016-06-23 | Nikon Corporation | Image processing device and image processing program |
KR20200094062A (ko) * | 2019-01-29 | 2020-08-06 | Korea Advanced Institute of Science and Technology | Lensless hyperspectral imaging method and apparatus therefor |
KR102269229B1 (ko) | 2019-01-29 | 2021-06-25 | Korea Advanced Institute of Science and Technology | Lensless hyperspectral imaging method and apparatus therefor |
Also Published As
Publication number | Publication date |
---|---|
JPWO2012086127A1 (ja) | 2014-05-22 |
JP5000030B1 (ja) | 2012-08-15 |
US8610797B2 (en) | 2013-12-17 |
CN102687501A (zh) | 2012-09-19 |
US20120287307A1 (en) | 2012-11-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5000030B1 (ja) | Image processing device, imaging device, and image processing method | |
EP2594062B1 (en) | Flash system for multi-aperture imaging | |
US9495751B2 (en) | Processing multi-aperture image data | |
US8941762B2 (en) | Image processing apparatus and image pickup apparatus using the same | |
US7764319B2 (en) | Image processing apparatus, image-taking system, image processing method and image processing program | |
US20130033579A1 (en) | Processing multi-aperture image data | |
WO2010032409A1 (ja) | Image processing device, imaging device, evaluation device, image processing method, and optical system evaluation method |
US20160042522A1 (en) | Processing Multi-Aperture Image Data | |
WO2012127552A1 (ja) | Image processing device, imaging device, and image processing method |
KR20090027488A (ko) | Image restoration apparatus and method |
JP4466015B2 (ja) | Image processing device and image processing program |
US20150092091A1 (en) | Processing device, image pickup device and processing method | |
US20130329097A1 (en) | Image processing apparatus that corrects for chromatic aberration for taken image, image pickup apparatus, method of correcting for chromatic aberration of magnification therefor, and storage medium | |
JP7204357B2 (ja) | Imaging device and control method therefor |
JP6578960B2 (ja) | Imaging device, imaging method, imaging program, and recording medium on which the imaging program is recorded |
US10326951B2 (en) | Image processing apparatus, image processing method, image capturing apparatus and image processing program | |
JP4466017B2 (ja) | Image processing device and image processing program |
JP4466016B2 (ja) | Image processing device and image processing program |
JP4995359B1 (ja) | Image processing device, imaging device, and image processing method |
JP2020038319A (ja) | Imaging device and control method therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180005163.3 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012513111 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13519372 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11851196 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11851196 Country of ref document: EP Kind code of ref document: A1 |