JPH11331691A - Image pickup device - Google Patents
Image pickup device
Info
- Publication number
- JPH11331691A (related application numbers: JP10133137A, JP13313798A)
- Authority
- JP
- Japan
- Prior art keywords
- image
- luminance
- imaging
- image sensor
- image pickup
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Exposure Control For Cameras (AREA)
- Cameras In General (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
- Studio Devices (AREA)
Abstract
Description
[0001]
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image pickup apparatus, and more particularly to an image pickup apparatus having a lens formed of a diffraction grating in its image pickup optical system.
[0002]
2. Description of the Related Art
A lens that condenses light by the action of a diffraction grating (hereinafter referred to as a "diffractive lens") has useful features that conventionally known refractive lenses lack. For example, the following two features are known. (1) A diffractive lens can be formed on the surface of an ordinary refractive lens, so a single lens can be given both a diffractive action and a refractive action. (2) The quantity corresponding to the dispersion of a refractive lens has the opposite sign in a diffractive lens, so a diffractive lens can correct chromatic aberration effectively.
[0003]
[Problems to Be Solved by the Invention]
Chromatic aberration is generally corrected with a combination of two refractive lenses, one positive and one negative, but if a diffractive lens having features (1) and (2) above is formed on the surface of a refractive lens, chromatic aberration can be corrected with a single lens. On the other hand, although a diffractive lens has these useful features, the diffraction efficiency of a diffraction grating depends on wavelength. In the case of a blazed diffraction grating, for example, the diffraction efficiency is 1 at a specific wavelength, so the focal length is single-valued there; at other wavelengths, light of several diffraction orders is generated, giving rise to several focal lengths. Consequently, when a diffractive lens is used in an image pickup optical system, a single image is formed at the specific wavelength, but several images are formed at all other wavelengths because the diffraction efficiency is wavelength-dependent.
[0004] Japanese Patent Application Laid-Open No. 9-238357 proposes an image pickup apparatus that includes a diffractive lens in its image pickup optical system and eliminates the multiple images produced by the diffraction grating. That apparatus applies image processing to the captured image so that a single image is obtained with the multiple images removed. However, the multiple images produced by a diffractive lens become a real problem when the luminance difference within the object scene is very large. Since an ordinary image sensor can capture only a small luminance difference (that is, it has a small luminance dynamic range), the above apparatus cannot cope with an object scene that contains regions of large luminance difference.
[0005] The present invention has been made in view of these circumstances, and its object is to provide an image pickup apparatus that can obtain a good image even when the object scene contains regions of large luminance difference.
[0006]
[Means for Solving the Problems]
To achieve the above object, an image pickup apparatus according to a first aspect of the present invention is an image pickup apparatus having a lens composed of a diffraction grating in its image pickup optical system, wherein an image pickup signal obtained by setting the aperture, ND filter, mechanical shutter, or charge accumulation time of the image sensor so that the main subject in the object scene is imaged at an appropriate brightness is corrected using an image pickup signal obtained by setting the aperture, ND filter, mechanical shutter, or charge accumulation time of the image sensor so that the main subject is imaged darker than that appropriate brightness.
[0007] According to a second aspect of the present invention, in the configuration of the first aspect, diffraction efficiency information of the diffraction grating is used in correcting the image pickup signal.
[0008] According to a third aspect of the present invention, in the configuration of the first aspect, imaging information of the zero-order or second-order diffracted light of the diffraction grating is used in correcting the image pickup signal.
[0009]
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
An image pickup apparatus embodying the present invention will now be described with reference to the drawings. The block diagram of FIG. 7 shows the schematic configuration of the image pickup apparatus of this embodiment. The apparatus comprises an image pickup optical system 1, an image sensor 2, a control unit 3, and a memory 4. The image pickup optical system 1 includes a diffractive lens together with an aperture, an ND filter, a mechanical shutter, and the like, and forms a subject image on the light-receiving surface of the image sensor {for example, a CCD (Charge Coupled Device)} 2. The image sensor 2 photoelectrically converts the formed subject image and outputs a corresponding image pickup signal. The control unit 3 sets the aperture, ND filter, mechanical shutter, or charge accumulation time of the image sensor 2 to a predetermined state, and stores the image pickup signal output from the image sensor 2 in the memory 4.
[0010] FIG. 1 shows the diffraction efficiency of the diffractive lens used in the image pickup optical system 1. At the image plane where the first-order diffracted light is focused, the blurred point images formed by the zero-order and second-order diffracted light largely cover the point image of the first-order diffracted light and appear as flare. The degree of blur due to the zero-order and second-order diffracted light increases as the power contributed by the diffractive action of the lens increases, and the intensity of that blur increases in proportion to the diffraction efficiencies shown in FIG. 1.
[0011] FIG. 2 shows the spectral sensitivity of the image sensor 2 for each of the B, G, and R colors. The output intensity of the image sensor 2 due to the zero-order and second-order diffracted light is the product of the diffraction efficiency of that light (FIG. 1) and the spectral sensitivity of the image sensor 2 (FIG. 2). Table 1 shows the ratio of the sensor output intensity due to the zero-order and second-order diffracted light to the output intensity due to the first-order diffracted light. As Table 1 shows, the value is smallest for the G sensor elements; that is, the influence of the zero-order and second-order diffracted light is smallest in the G channel. If the sensitivity bandwidth of the G sensor elements is made still narrower, this influence becomes smaller still.
[0012]
[Table 1]
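The ratios of Table 1 follow from multiplying the two curves of FIGS. 1 and 2 and summing over wavelength. A minimal sketch of that computation is given below; the sampled efficiency and sensitivity values are illustrative placeholders, not the patent's measured data.

```python
# Sketch: relative sensor output per diffraction order as the sum over
# wavelength of (diffraction efficiency) x (spectral sensitivity).
# All sample values below are illustrative assumptions.

def order_output(efficiency, sensitivity):
    """Sum efficiency(lambda) * sensitivity(lambda) over the sampled band."""
    return sum(e * s for e, s in zip(efficiency, sensitivity))

# Illustrative curves sampled at five wavelengths (blue -> red):
g_sensitivity = [0.1, 0.9, 1.0, 0.8, 0.1]   # G channel peaks mid-band
eta_1st = [0.85, 0.97, 1.00, 0.96, 0.88]    # blazed grating: ~1 at design wavelength
eta_0th = [0.06, 0.01, 0.00, 0.02, 0.05]    # residual orders grow off-design
eta_2nd = [0.05, 0.01, 0.00, 0.01, 0.04]

flare = order_output(eta_0th, g_sensitivity) + order_output(eta_2nd, g_sensitivity)
signal = order_output(eta_1st, g_sensitivity)
ratio = flare / signal                       # Table-1-style ratio for G
print(round(ratio, 3))
```

A narrow G sensitivity band concentrated where the first-order efficiency is near 1 makes this ratio small, which is why Table 1 favors the G channel.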
[0013] Table 2 shows the performance of the image sensor 2: the luminance difference it can capture (that is, its luminance dynamic range), its maximum image signal value, and its signal resolution at that value. The luminance difference (EV) used here is the base-2 logarithm of the light intensity ratio; luminous efficiency (visibility) is not taken into account.
[0014]
[Table 2]
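The EV measure defined in [0013] is simply a base-2 logarithm of an intensity ratio; a one-line check (the function name is ours, not the patent's):

```python
# EV (luminance difference) as defined in the text:
# EV = log2(intensity ratio), with no visibility weighting.
import math

def luminance_difference_ev(i_bright, i_dark):
    """Base-2 log of the light intensity ratio between two areas."""
    return math.log2(i_bright / i_dark)

print(luminance_difference_ev(256.0, 1.0))   # a 256:1 intensity ratio is 8 EV
```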
[0015] FIG. 3 shows an example of an object scene. In this scene, a light source 7, which is a bright area, is present together with the main subject 6 against a dark background 5. FIG. 4 shows the image obtained when this scene (FIG. 3) is captured under an imaging condition (C1 in FIG. 6) whose exposure level renders the main subject 6 at an appropriate brightness, and FIG. 6 shows the luminance (BV) of each part along line x of FIG. 4. As the light source image 8 in FIG. 4 shows, flare due to the zero-order and second-order diffracted light of the light source 7 appears in the background 5 near the light source 7. In FIG. 6, B5 is the luminance of the background 5 (background luminance = BV1), B6 is the luminance of the main subject 6 (main subject luminance = BV3), B7 is the luminance of the light source 7 (light source luminance = BV11), B8 is the luminance of the background 5 near the light source 7 (flare background luminance = BV6), and A7 is the image luminance due to the zero-order and second-order diffracted light of the light source 7 (flare luminance).
[0016] Under imaging condition C1, the aperture, ND filter, mechanical shutter, or charge accumulation time of the image sensor 2 is set so that the main subject 6 is imaged at an appropriate brightness, but the light source luminance B7 and the flare background luminance B8 overflow the capturable luminance difference of the image sensor 2, so the image outputs corresponding to luminances B7 and B8 cannot be measured. The sensor output for the background 5 near the light source 7 corresponds to the flare background luminance B8, which is the sum of the background luminance B5 and the flare luminance A7. Therefore, if the flare luminance A7 is calculated using the data of Table 1 and an amount corresponding to A7 is subtracted from the sensor output for the background 5 near the light source 7 (corresponding to B8), the correct image output for the background luminance B5 can be obtained. To obtain the flare luminance A7, however, the true light source luminance B7 must be measured, and to determine the extent of the flare, the true size of the light source 7 must be measured.
[0017] In this embodiment, therefore, so that the light source luminance B7 and related values can be obtained even when the luminance difference between the background 5 and the light source 7 (B7 − B5) exceeds the capturable luminance difference of the image sensor 2, the imaging condition is changed through C1, C2, and C3 until the light source luminance B7 is obtained, yielding the sensor outputs (image pickup signals) of three images at different exposure levels (details are given later with reference to FIG. 8). When a second image is captured under imaging condition C2, whose exposure level is lower than that of C1 (that is, the image is darker) owing to a changed setting of the aperture, ND filter, mechanical shutter, or charge accumulation time of the image sensor 2, the flare background luminance B8 can be obtained. When a third image is captured under imaging condition C3, whose exposure level is lower still (the image is darker still) after a further change of those settings, the light source luminance B7 can be obtained. FIG. 5 shows the image corresponding to the pickup signal obtained at this point; the light source image 9 in FIG. 5 (free of flare) corresponds to the image formed by the first-order diffracted light of the light source 7. Note that the main subject 6 in FIG. 5 lies outside the capturable luminance range of imaging condition C3, so in practice it cannot be captured, and neither can the background 5.
[0018] Table 3 shows the luminance difference that the signal resolution of the image sensor 2 cannot resolve even when the flare luminance A7 is added to the original image output (background luminance B5), that is, the luminance difference that does not affect the sensor output. In a region whose luminance difference is small enough not to affect the captured image, the flare luminance A7 is so small that the sensor output at the flare background luminance B8 can be used without any problem as the image output of the background 5 near the light source 7. In this embodiment, therefore, processing that removes the influence of the zero-order and second-order diffracted light from the sensor output (hereinafter "flare removal") is performed for the B and R sensor outputs when the luminance difference is 1 EV or more, and for the G sensor output when the luminance difference is 4 EV or more. Details are given later with reference to FIGS. 9 and 10.
[0019]
[Table 3]
[0020] Table 4 shows the luminance difference at which, because the flare luminance A7 is so large, the signal resolution of the image sensor 2 can no longer resolve the correct image output. In a region of the object scene whose luminance difference equals or exceeds this unresolvable difference, the flare luminance A7 is too large for the correct image output of the background 5 near the light source 7 to be obtained from the sensor output at the flare background luminance B8. The B and R sensor elements cannot yield a correct image output at a luminance difference of 8 EV or more, and the G sensor elements cannot at 11 EV or more. In this embodiment, therefore, at luminance differences of 8 EV or more, the G sensor output is used to complement the B and R sensor outputs (hereinafter "image complement"). Details are given later with reference to FIGS. 9 and 11.
[0021]
[Table 4]
[0022] The G sensor output, for which both the non-affecting luminance difference (Table 3) and the unresolvable luminance difference (Table 4) are largest, represents the object-scene information most accurately and is therefore a suitable reference signal for correcting the image pickup signals. Table 5 shows the relationship between the luminance difference (B7 − B8) between the light source 7 and the nearby background 5 and the corresponding difference in G sensor output. The data in Table 5 were derived from the data in Table 1.
[0023]
[Table 5]
[0024] Next, the control performed by the control unit 3 will be described. The luminance differences in the following description correspond to differences in G sensor output. First, the capture of the object scene and the loading of images into the memory 4 will be described with reference to FIG. 8. In step #10, the aperture, ND filter, mechanical shutter, or charge accumulation time of the image sensor 2 is set so that the main subject 6 is imaged at an appropriate brightness, an image is captured, and the resulting pickup signal (the output of the image sensor 2) is stored in the memory 4. The exposure level at which the main subject 6 attains an appropriate brightness is set so that the main subject 6 is reproduced at the middle of the luminance dynamic range of the image sensor 2. This exposure setting corresponds to imaging condition C1 described above, and the resulting pickup signal corresponds to the data of the image of FIG. 4.
[0025] Next, in step #20, it is determined whether any region of the image sensor 2 has overflowed. If no sensor region has overflowed, the control proceeds to the next sequence (FIG. 9); if one has, the control proceeds to step #30. In step #30, an image is captured under an imaging condition in which the main subject 6 is imaged darker than the appropriate brightness (conditions C2, C3, ... described above), and the memory 4 is updated only for the pickup signals of the regions that overflowed in the previous capture. The data input of steps #20 and #30 is repeated until no overflowed region remains, after which the control proceeds to the next sequence (FIG. 9).
[0026] To image the main subject 6 darker than the appropriate brightness, the exposure level is lowered by stopping down the aperture by roughly the capturable luminance difference, increasing the density of the ND filter, raising the shutter speed of the mechanical shutter, or shortening the charge accumulation time of the image sensor 2. When the exposure level is changed so that the main subject 6 is imaged darker than the appropriate brightness, the capturable luminance ranges of imaging conditions C1, C2, C3, ... may be made to overlap one another.
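The capture loop of steps #10 through #30 can be sketched as follows. The sensor model, full-scale value, and 4 EV step per darker condition are illustrative assumptions standing in for the real aperture/ND/shutter controls, not the patent's implementation.

```python
# Sketch of the bracketing loop (steps #10-#30): capture at the proper
# exposure first, then keep lowering the exposure and re-capture only the
# pixels that overflowed, until none remain.

FULL_SCALE = 255  # assumed 8-bit sensor output

def capture(scene, exposure_shift_ev):
    """Toy sensor: scale the signal by 2^shift and clip at full scale."""
    gain = 2.0 ** exposure_shift_ev
    return [min(FULL_SCALE, int(v * gain)) for v in scene]

def bracketed_capture(scene):
    shift = 0                      # condition C1: proper exposure for the subject
    image = capture(scene, shift)
    ev_shift = [0] * len(scene)    # remember which exposure each pixel used
    while any(v >= FULL_SCALE for v in image):
        shift -= 4                 # conditions C2, C3, ...: darker each pass
        darker = capture(scene, shift)
        for i, v in enumerate(image):
            if v >= FULL_SCALE:    # update only the overflowed regions
                image[i] = darker[i]
                ev_shift[i] = shift
        if shift < -32:            # safety stop for the sketch
            break
    return image, ev_shift

# Dark background, main subject, flare area, and a very bright light source:
scene = [10, 120, 5000, 200000]
img, scale = bracketed_capture(scene)
print(img, scale)
```

Pixels already within range keep their condition-C1 values; only the overflowed ones are replaced by darker captures, which mirrors the selective memory update of step #30.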
[0027] Next, the processing of the images in memory will be described with reference to FIG. 9. In step #40, the sensor outputs of an adjacent bright area (light source luminance B7) and dark area (flare background luminance B8) are read from the memory 4. In step #50, it is determined whether the luminance difference (B7 − B8) is 1 EV or more. If it is less than 1 EV, the flare is negligible and the processing ends; if it is 1 EV or more, it is determined in step #60 whether the luminance difference (B7 − B8) is 4 EV or more.
[0028] If the determination of step #60 finds the luminance difference (B7 − B8) to be less than 4 EV, the control proceeds to step #70; if it is 4 EV or more, it is determined in step #90 whether the difference is 7 EV or more. In step #70, it is determined whether the input pickup signal is a G sensor output. If it is, the influence of flare is small, as described above, and the processing ends. If the input pickup signal is a B or R sensor output, flare removal (FIG. 10) is performed and the processing then ends.
[0029] If the determination of step #90 finds the luminance difference (B7 − B8) to be less than 7 EV, flare removal (FIG. 10) is performed in step #100 and the processing ends; if it is 7 EV or more, it is determined in step #110 whether the difference is 8 EV or more. If the determination of step #110 finds the difference to be less than 8 EV, the control proceeds to step #120; if it is 8 EV or more, it is determined in step #150 whether the pickup signal is a G sensor output.
[0030] After flare removal (FIG. 10) is performed in step #120, it is determined in step #130 whether the pickup signal is a G sensor output. If it is, the processing ends; if it is a B or R sensor output, image complement (FIG. 11) is performed and the processing then ends. If the determination of step #150 finds the pickup signal to be a B or R sensor output, the control proceeds to step #170. If it finds the signal to be a G sensor output, the control proceeds to step #160, where the stored pickup signal (the G sensor output) is replaced with a predetermined value, and then to step #170. After flare removal (FIG. 10) in step #170 and image complement (FIG. 11) in step #180, the processing ends.
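The branching of steps #40 through #180 reduces to a small decision table keyed on the luminance difference. The sketch below assumes the thresholds and branch outcomes as stated in paragraphs [0027] to [0030] (1, 4, 7, and 8 EV); the function and step names are ours, for illustration only.

```python
# Sketch of the FIG. 9 decision logic: given the bright/dark luminance
# difference in EV (measured as a G sensor output difference), list the
# processing steps applied to a given color channel.

def corrections(delta_ev, channel):
    """channel is 'B', 'G', or 'R'; returns the ordered processing steps."""
    steps = []
    if delta_ev < 1:
        return steps                       # step #50: flare negligible
    if delta_ev < 4:
        if channel in ('B', 'R'):          # step #70: G barely affected
            steps.append('flare_removal')
        return steps
    if delta_ev < 7:
        steps.append('flare_removal')      # step #100: all channels
        return steps
    if delta_ev < 8:
        steps.append('flare_removal')      # steps #120-#140
        if channel in ('B', 'R'):
            steps.append('image_complement')
        return steps
    if channel == 'G':                     # steps #150-#180
        steps.append('replace_with_predetermined')
    steps.append('flare_removal')
    steps.append('image_complement')
    return steps

print(corrections(2, 'B'), corrections(9, 'R'))
```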
[0031] Tables 6 to 10 show the relationship between the B, G, and R outputs of the image sensor 2 and flare removal and image complement in the sequence of FIG. 9.
[0032]
[Table 6]
[0033]
[Table 7]
[0034]
[Table 8]
[0035]
[Table 9]
[0036]
[Table 10]
[0037] Next, flare removal will be described with reference to FIG. 10. First, in step #200, the sensor output corresponding to the luminance of the dark area adjacent to the bright area (light source 7), that is, the flare background luminance B8, is read from the memory 4. As described above, this sensor output corresponds to a pickup signal whose luminance is the dark-area background luminance B5 plus the flare luminance A7. In step #210, the sensor output corresponding to the light source luminance B7 (the luminance of the bright area) is read from the memory 4. In step #220, the correction processing described below {Eqs. (8) to (10), the flare removal maps, and so on} is applied to the pickup signal corresponding to the flare background luminance B8, and the resulting image output is stored in the memory 4 as the image output of the background near the light source 7.
[0038] The correction processing of step #220 is described in detail below. First, to determine the size of the flare, the image pickup optical system 1 is assumed to be configured as follows: it is simplified to a single thin lens, with a diffractive lens formed on the thin lens surface. The powers of the refractive and diffractive components of the image pickup optical system 1 are then related by Eq. (1):
φ = φr + φDOE = 1 …(1)
where
φ: power of the image pickup optical system (taken as 1 for simplicity),
φr: power due to the refractive action of the refractive lens,
φDOE: power due to the diffractive action of the diffractive lens.
[0039] The condition for chromatic aberration to be corrected is given by Eq. (2):
(φr/νd) + (φDOE/νDOE) = 0 …(2)
where
νd: Abbe number of the glass of the refractive lens,
νDOE: Abbe-number equivalent of the diffractive lens.
[0040] The power φ0 of the image pickup optical system 1 for the zero-order diffracted light is given by Eq. (3) (the zero order receives no diffractive power, so φ0 equals the refractive power φr), and the power φ2 for the second-order diffracted light is given by Eq. (4):
φ0 = φr = 1 + {νDOE/(νd − νDOE)} ≈ 1 + (νDOE/νd) …(3)
φ2 = 1 − {νDOE/(νd − νDOE)} ≈ 1 − (νDOE/νd) …(4)
[0041] If, relative to the image plane of the image pickup optical system 1 with focal length f and F-number F, the focal length becomes f ± Δ, the size of the blur on the original image plane is given by Eq. (5). Since f = 1/φ, f ± Δ is given by Eq. (6). The size of the blur due to the zero-order and second-order diffracted light of the light source 7 (that is, the size of the flare) is therefore given by Eq. (7). For example, when f = 5, F = 4, νd = 60, and νDOE = −3.45, the flare size is approximately 70 (µm); if the image sensor 2 has a pixel size of 5 µm square and one RGB picture element is 10 µm square (one picture element = 2 × 2 pixels, four in total), the flare size corresponds to 7 picture elements.
[blur size] = Δ/F …(5)
f ± Δ ≈ 1/(1 ± νDOE/νd) ≈ 1 − νDOE/νd, 1 + νDOE/νd …(6)
[flare size] = −νDOE/νd/F …(7)
[0042] The flare removal calculation uses Eqs. (8) to (10) below and is performed over the entire pixel region of the bright area (light source luminance B7). The flare removal maps appearing in Eqs. (8) to (10) are shown in Tables 11 to 13. Each map is an arithmetic filter derived from the diffraction efficiency information (the data of Table 1) and the imaging information of the zero-order and second-order diffracted light, and it acts more strongly on pixel regions of higher luminance.
(dark-area B image output) = (dark-area B sensor output) − (B flare removal map) × (bright-area B image luminance) …(8)
(dark-area G image output) = (dark-area G sensor output) − (G flare removal map) × (bright-area G image luminance) …(9)
(dark-area R image output) = (dark-area R sensor output) − (R flare removal map) × (bright-area R image luminance) …(10)
[0043]
[Table 11]

[0044]
[Table 12]

[0045]
[Table 13]
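A minimal sketch of the per-channel subtraction in equations (8) to (10), using NumPy. The uniform flare-removal map below is a made-up placeholder, since Tables 11 to 13 are not reproduced in this extraction; in the patent the maps are derived from the Table 1 diffraction efficiencies and the 0th-/2nd-order imaging geometry.

```python
import numpy as np

def remove_flare(dark_sensor, flare_map, bright_luminance):
    """Eqs. (8)-(10): dark-part image output = dark-part sensor output
    minus (flare-removal map) x (bright-part image luminance).

    dark_sensor      : 2-D array, sensor output of the dark region
    flare_map        : 2-D array, per-pixel flare-removal weights
    bright_luminance : scalar (or 2-D array) luminance of the bright part
    """
    out = dark_sensor - flare_map * bright_luminance
    # The flare estimate may overshoot the actual signal; clamp at zero.
    return np.clip(out, 0, None)

# Hypothetical example: a uniform dark region of level 30 polluted by
# flare from a bright light source of luminance 200, with placeholder
# map weights of 0.1 (not the patent's table values).
dark = np.full((4, 4), 30.0)
fmap = np.full((4, 4), 0.1)
print(remove_flare(dark, fmap, 200.0))   # 30 - 0.1 * 200 = 10 everywhere
```

The same function is applied per channel (B, G, R), each with its own map, mirroring the three equations.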
[0046] Next, image complementation is described with reference to FIG. 11. First, in step #300 the G image output from the G image sensor is read from the memory 4, and in step #310 the B and R image outputs from the B and R image sensors are read from the memory 4. In step #320, using the flare-removed image sensor outputs (B, G and R image outputs), the correction expressed by equations (11) and (12) below is applied to the B and R image outputs, and the results are stored in the memory 4 as the new B and R image outputs.

(B image output) = k × (B image output) + (1 − k) × (G image output) …(11)
(R image output) = k × (R image output) + (1 − k) × (G image output) …(12)

where k = 0.1 to 0.8.
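Equations (11) and (12) blend each of the B and R outputs with the G output using a fixed weight k. A minimal sketch, with illustrative pixel values (not from the patent) and k chosen inside the stated 0.1 to 0.8 range:

```python
def complement_channel(chan, g, k=0.5):
    """Eqs. (11)/(12): replace a B or R image output with a weighted
    mix of itself and the G image output (k = 0.1 .. 0.8 per the text)."""
    assert 0.1 <= k <= 0.8, "k must lie in the range given in the text"
    return k * chan + (1 - k) * g

# Hypothetical per-pixel values for illustration.
b_out = complement_channel(80.0, 120.0, k=0.5)    # -> 100.0
r_out = complement_channel(60.0, 120.0, k=0.25)   # -> 105.0
print(b_out, r_out)
```

A smaller k pulls the channel more strongly toward the G output; applied over the whole frame (e.g. to NumPy arrays, for which the same expression works elementwise), this suppresses residual chromatic flare in the B and R channels.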
[0047] According to the control described above, the image pickup signal corresponding to the image shown in FIG. 4 is corrected using the image pickup signal corresponding to the image shown in FIG. 5 and the like, so that an image pickup signal corresponding to the object scene of FIG. 3 can be obtained. Therefore, even if the object scene contains a region with a large luminance difference, a good image such as that shown in FIG. 3 can be obtained.
[0048] Effects of the Invention: As described above, according to the image pickup apparatus of the present invention, even if the object scene contains a region with a large luminance difference, correction is performed using an image pickup signal obtained by setting the aperture and the like so that the main subject is imaged darker than the appropriate brightness, and a good image can therefore be obtained.
FIG. 1 is a graph showing the diffraction efficiency of the diffraction lens used in the imaging optical system.
FIG. 2 is a graph showing the spectral sensitivity of the image sensor.
FIG. 3 is a diagram showing an example of an object scene.
FIG. 4 is a diagram showing the image obtained when the object scene of FIG. 3 is imaged at an exposure level at which the main subject has appropriate brightness.
FIG. 5 is a diagram showing the image obtained when the object scene of FIG. 3 is imaged at a lower exposure level than in FIG. 4.
FIG. 6 is a graph showing the luminance of each part along line x in FIG. 4.
FIG. 7 is a block diagram showing the schematic configuration of the imaging apparatus of the embodiment.
FIG. 8 is a flowchart showing the sequence of imaging the object scene and storing the result in memory.
FIG. 9 is a flowchart showing the processing sequence for the stored image.
FIG. 10 is a flowchart showing the flare-removal sequence.
FIG. 11 is a flowchart showing the image-complementation sequence.
DESCRIPTION OF SYMBOLS
1 … Imaging optical system
2 … Image sensor
3 … Control unit
4 … Memory
5 … Background
6 … Main subject
7 … Light source
8 … Light source image (including flare)
9 … Light source image (not including flare)
Claims (3)

1. An image pickup apparatus having a lens formed of a diffraction grating in an image pickup optical system, wherein an image pickup signal obtained by setting an aperture, an ND filter, a mechanical shutter, or the charge accumulation time of an image sensor so that a main subject in an object scene is imaged with appropriate brightness is corrected by using an image pickup signal obtained by setting the aperture, the ND filter, the mechanical shutter, or the charge accumulation time of the image sensor so that the main subject is imaged darker than the appropriate brightness.

2. The image pickup apparatus according to claim 1, wherein diffraction efficiency information of the diffraction grating is used for the correction of the image pickup signal.

3. The image pickup apparatus according to claim 1, wherein imaging information based on the 0th-order or 2nd-order diffracted light of the diffraction grating is used for the correction of the image pickup signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP10133137A JPH11331691A (en) | 1998-05-15 | 1998-05-15 | Image pickup device |
Publications (1)
Publication Number | Publication Date |
---|---|
JPH11331691A true JPH11331691A (en) | 1999-11-30 |
Family
ID=15097638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP10133137A Pending JPH11331691A (en) | 1998-05-15 | 1998-05-15 | Image pickup device |
Country Status (1)
Country | Link |
---|---|
JP (1) | JPH11331691A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005136852A (en) * | 2003-10-31 | 2005-05-26 | Canon Inc | Image processing method, image processing apparatus and image processing program |
JP2008070427A (en) * | 2006-09-12 | 2008-03-27 | Nikon Corp | Optical detecting device, photometry apparatus and camera |