WO2010116683A1 - Imaging apparatus and imaging method - Google Patents
Imaging apparatus and imaging method
- Publication number
- WO2010116683A1 (application PCT/JP2010/002315)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- imaging
- video
- processing unit
- pixel
- Prior art date
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/08—Stereoscopic photography by simultaneous recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
- H04N25/41—Extracting pixel data from a plurality of image sensors simultaneously picking up an image, e.g. for increasing the field of view by combining the outputs of a plurality of sensors
Definitions
- the present invention relates to an imaging apparatus and an imaging method.
- This application claims priority based on Japanese Patent Application No. 2009-083276 filed in Japan on March 30, 2009, the contents of which are incorporated herein by reference.
- An image pickup apparatus, typified by a digital camera, is composed of an image pickup element, an imaging optical system (lens optical system), an image processor, a buffer memory, a flash memory (card-type memory), an image monitor, and the electronic circuits and mechanical mechanisms that control them.
- A solid-state electronic device such as a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor is usually used as the image sensor.
- the light amount distribution imaged on the image sensor is photoelectrically converted, and the obtained electric signal is processed by an image processor and a buffer memory.
- As the image processor, a DSP (Digital Signal Processor) or the like is used.
- As the buffer memory, a DRAM (Dynamic Random Access Memory) or the like is used.
- the captured image is recorded and accumulated in a card type flash memory or the like, and the recorded and accumulated image can be displayed on a monitor.
- An optical system for forming an image on an image sensor is usually composed of several aspheric lenses in order to remove aberrations.
- In addition, a driving mechanism (actuator) that changes the focal length of the combined lens and the distance between the lens and the image sensor is necessary.
- Imaging devices are acquiring more pixels and higher definition, imaging optical systems are achieving lower aberration and higher precision, and advanced functions such as zoom, autofocus, and camera-shake correction continue to advance.
- As a result, the imaging device becomes large, and it is difficult to reduce its size and thickness.
- To address this, the imaging device can be made smaller and thinner by adopting a compound-eye structure in the imaging optical system or by incorporating a non-solid lens such as a liquid crystal lens or a liquid lens.
- an imaging lens device configured with a solid lens array arranged in a planar shape, a liquid crystal lens array, and one imaging element has been proposed (for example, Patent Document 1).
- This imaging lens apparatus is composed of a lens system, comprising a fixed-focal-length lens array 2001 and a variable-focus liquid crystal lens array 2002 with the same number of lenses, and a single image pickup element 2003 that captures the optical images formed through the lens system.
- With this configuration, the same number of images as there are lenses in the lens array 2001 are formed on divided regions of the single image sensor 2003.
- a plurality of images obtained from the image sensor 2003 are subjected to image processing by the arithmetic unit 2004 to reconstruct the entire image.
- focus information is detected from the arithmetic unit 2004, and each liquid crystal lens of the liquid crystal lens array 2002 is driven via the liquid crystal drive unit 2005 to perform auto focus.
- the liquid crystal lens and the solid lens are combined to realize an autofocus function and a zoom function, and to achieve miniaturization.
- An image pickup apparatus including one non-solid lens (a liquid lens or a liquid crystal lens), a solid lens array, and one image pickup device has also been proposed (for example, Patent Document 2).
- the imaging apparatus includes a liquid crystal lens 2131, a compound eye optical system 2120, an image synthesizer 2115, and a drive voltage calculator 2142. Similar to Patent Document 1, this imaging apparatus forms the same number of images as the number of lens arrays on a single imaging element 2105, and reconstructs the image by image processing.
- a small and thin focus adjustment function is realized by combining one non-solid lens (liquid lens, liquid crystal lens) and a solid lens array.
- A method of increasing the resolution of the composite image is also known (for example, Patent Document 3). This method solves the problem that the resolution cannot be improved at certain subject distances by providing a diaphragm in one of the sub-cameras and blocking light for half a pixel with this diaphragm.
- Patent Document 3 also combines a liquid lens whose focal length can be controlled by applying a voltage from the outside; by changing the focal length, the image formation position and the pixel phase are changed simultaneously, and the resolution of the composite image is thereby increased.
- a high-definition composite image is realized by combining the imaging lens array and the imaging device having the light shielding unit. Further, by combining a liquid lens with the imaging lens array and the imaging element, high definition of the composite image is realized.
- an image generation method and apparatus for performing super-resolution interpolation processing on a specific region where the parallax of the stereo image is small based on image information of a plurality of imaging means and mapping an image to a spatial model are known (for example, Patent Document 4).
- This apparatus solves the problem that, when a spatial model is generated in the course of producing a viewpoint-converted image from images captured by a plurality of imaging means, the image data pasted onto distant parts of the spatial model lacks definition.
- JP 2006-251613 A; JP 2006-217131 A; published Japanese translations of PCT applications Nos. 2007-520166 and 2006-119843
- The present invention has been made in view of these circumstances, and an object thereof is to provide an imaging device and an imaging method in which, in order to realize a high-quality image pickup apparatus, the relative position of the optical system and the image pickup element can be adjusted easily and without manual work. It is another object of the present invention to provide an imaging apparatus and an imaging method capable of generating a high-quality, high-definition two-dimensional image regardless of the parallax of the stereo image, that is, regardless of the shooting distance.
- An imaging apparatus according to the present invention includes: a plurality of imaging elements; a plurality of solid lenses that form images on each of the plurality of imaging elements; a plurality of optical axis control units that control the direction of the optical axis of light incident on each of the plurality of imaging elements; a plurality of video processing units that convert the photoelectric conversion signals output from the plurality of imaging elements into video signals; a stereo image processing unit that performs stereo matching processing on the basis of the plurality of video signals converted by the plurality of video processing units to obtain a shift amount for each pixel, and that generates a synthesis parameter by normalizing, with the pixel pitch, shift amounts exceeding the pixel pitch of the plurality of imaging elements; and a video synthesis processing unit that synthesizes the video signals converted by each of the plurality of video processing units on the basis of the synthesis parameter generated by the stereo image processing unit, thereby generating a high-definition video.
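- As a concrete illustration of the synthesis parameter described above, the per-pixel shift obtained by stereo matching can be split into a whole-pixel part and a sub-pixel remainder normalized by the pixel pitch. The Python sketch below is a minimal, hypothetical example (the function and variable names are not from the patent; the 6 µm pitch is the value quoted later for the CMOS sensor):

```python
import numpy as np

def normalize_shift(shift_um, pixel_pitch_um=6.0):
    """Split a per-pixel image shift (in micrometres) into a whole number of
    pixels and a sub-pixel fraction of the pixel pitch.

    shift_um may be a scalar or a NumPy array holding one shift per pixel.
    """
    shift = np.asarray(shift_um, dtype=float)
    whole_pixels = np.floor(shift / pixel_pitch_um)     # integer-pixel part
    sub_pixel = shift / pixel_pitch_um - whole_pixels   # remainder in [0, 1) pixel
    return whole_pixels, sub_pixel

# Example: a 15.3 um shift with a 6 um pixel pitch -> 2 whole pixels + 0.55 pixel
whole, frac = normalize_shift(15.3)
print(whole, frac)
```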
- The imaging apparatus may further include a stereo image noise reduction processing unit that reduces noise in the parallax image used for the stereo matching process, based on the synthesis parameter generated by the stereo image processing unit.
- the video composition processing unit may increase the definition of only a predetermined area based on the parallax image generated by the stereo image processing unit.
- In an imaging method according to the present invention, the direction of the optical axis of light incident on each of the plurality of imaging elements is controlled; the photoelectric conversion signal output from each of the plurality of imaging elements is converted into a video signal; stereo matching is performed on the basis of the converted video signals to obtain a shift amount for each pixel; a synthesis parameter is generated by normalizing, with the pixel pitch, shift amounts exceeding the pixel pitch of the plurality of image sensors; and the video signals are synthesized on the basis of the synthesis parameter, thereby generating a high-definition video.
- According to the present invention, since the direction of the optical axis is controlled based on the relative position between the imaging target and the plurality of optical axis control units, the optical axis can be set at an arbitrary position on the imaging element surface, and an imaging device with a wide focus adjustment range can be realized.
- In addition, since a stereo image processing unit that obtains a shift amount for each pixel and generates a synthesis parameter by normalizing, with the pixel pitch, shift amounts exceeding the pixel pitch of the plurality of image sensors, and a stereo image noise reduction processing unit that reduces noise in the parallax image used for the stereo matching process based on that synthesis parameter are further provided, noise in the stereo matching process can be removed.
- the video composition processing unit increases the definition of only a predetermined area based on the parallax image generated by the stereo image processing unit, the high-definition processing can be speeded up.
- FIG. 1 is a block diagram illustrating the configuration of an imaging apparatus according to a first embodiment of the present invention. The remaining drawings include a detailed block diagram of the unit imaging unit of the imaging device according to the first embodiment shown in FIG. 1, a front view and a cross-sectional view of the liquid crystal lens according to the first embodiment, schematic diagrams explaining the function of the liquid crystal lens used in the imaging device according to the first embodiment, a schematic diagram explaining the image pickup element of the imaging device shown in FIG. 1, and a detailed schematic diagram of the image sensor.
- They further include a block diagram showing the overall structure of the imaging device, a detailed block diagram of the video processing unit of the imaging apparatus according to the first embodiment, and a detailed block diagram of the video composition processing unit of the imaging device according to the first embodiment.
- They also include schematic diagrams of the case where an image sensor is attached with a displacement due to an attachment error, and a schematic diagram showing the related operation of the imaging device.
- FIG. 1 is a functional block diagram showing the overall configuration of the imaging apparatus according to the first embodiment of the present invention.
- The imaging apparatus 1 shown in FIG. 1 includes six unit imaging units 2 to 7.
- the unit imaging unit 2 includes an imaging lens 8 and an imaging element 14.
- the unit imaging unit 3 includes an imaging lens 9 and an imaging element 15.
- the unit imaging unit 4 includes an imaging lens 10 and an imaging element 16.
- the unit imaging unit 5 includes an imaging lens 11 and an imaging element 17.
- the unit imaging unit 6 includes an imaging lens 12 and an imaging element 18.
- the unit imaging unit 7 includes an imaging lens 13 and an imaging element 19.
- Each of the imaging lenses 8 to 13 forms an image of light from the imaging target on the corresponding imaging elements 14 to 19, respectively.
- Reference numerals 20 to 25 shown in FIG. 1 indicate optical axes of light incident on the image sensors 14 to 19, respectively.
- the image formed by the imaging lens 9 is photoelectrically converted by the imaging element 15 to convert the optical signal into an electrical signal.
- the electrical signal converted by the image sensor 15 is converted into a video signal by the video processing unit 27 according to preset parameters.
- the video processing unit 27 outputs the converted video signal to the video composition processing unit 38.
- a video signal obtained by converting the electrical signals output from the other unit imaging units 2 and 4 to 7 by the corresponding video processing units 26 and 28 to 31 is input to the video composition processing unit 38.
- the video composition processing unit 38 synthesizes the six video signals picked up by the unit image pickup units 2 to 7 into one video signal while synchronizing them, and outputs it as a high-definition video.
- the video composition processing unit 38 synthesizes a high-definition video based on the result of stereo image processing described later.
- When the synthesized high-definition video falls below a predetermined determination value, the video composition processing unit 38 generates a control signal based on the determination result and outputs the control signal to the six control units 32 to 37.
- the control units 32 to 37 perform optical axis control of the corresponding imaging lenses 8 to 13 based on the input control signal.
- After this control, the video composition processing unit 38 evaluates the high-definition video again. If the determination result is good, the video composition processing unit 38 outputs the high-definition video; if it is bad, the operation of controlling the imaging lenses 8 to 13 is repeated.
- the unit imaging unit 3 includes a liquid crystal lens (non-solid lens) 301 and an optical lens (solid lens) 302.
- the control unit 33 includes four voltage control units 33a, 33b, 33c, and 33d that control the voltage applied to the liquid crystal lens 301.
- The voltage control units 33a, 33b, 33c, and 33d determine the voltages to be applied to the liquid crystal lens 301 based on the control signal generated by the video composition processing unit 38 and control the liquid crystal lens 301. Since the imaging lenses 8 and 10 to 13 and the control units 32 and 34 to 37 of the other unit imaging units 2 and 4 to 7 shown in FIG. 1 have the same configuration as the imaging lens 9 and the control unit 33, their detailed explanation is omitted here.
- FIG. 3A is a front view of the liquid crystal lens 301 according to the first embodiment.
- FIG. 3B is a cross-sectional view of the liquid crystal lens 301 according to the first embodiment.
- The liquid crystal lens 301 in this embodiment includes a transparent first electrode 303, a second electrode 304, a transparent third electrode 305, a liquid crystal layer 306, a first insulating layer 307, a second insulating layer 308, a third insulating layer 311, and a fourth insulating layer 312.
- the liquid crystal layer 306 is disposed between the second electrode 304 and the third electrode 305.
- the first insulating layer 307 is disposed between the first electrode 303 and the second electrode 304.
- the second insulating layer 308 is disposed between the second electrode 304 and the third electrode 305.
- the third insulating layer 311 is disposed outside the first electrode 303.
- the fourth insulating layer 312 is disposed outside the third electrode 305.
- The second electrode 304 has a circular hole and is constituted by four electrodes 304a, 304b, 304c, and 304d divided vertically and horizontally, as shown in the front view of FIG. 3A.
- A voltage can be applied independently to each of the electrodes 304a, 304b, 304c, and 304d.
- In the liquid crystal layer 306, the liquid crystal molecules are aligned in one direction facing the third electrode 305, and their orientation is controlled by applying voltages between the electrodes 303, 304, and 305 sandwiching the liquid crystal layer 306.
- the insulating layer 308 is made of, for example, transparent glass having a thickness of about several hundreds of micrometers in order to increase the diameter.
- the dimensions of the liquid crystal lens 301 are shown below.
- The diameter of the circular hole of the second electrode 304 is about 2 mm.
- the distance between the second electrode 304 and the first electrode 303 is 70 ⁇ m.
- the thickness of the second insulating layer 308 is 700 ⁇ m.
- the thickness of the liquid crystal layer 306 is 60 ⁇ m.
- the first electrode 303 and the second electrode 304 are different layers, but may be formed on the same surface.
- the shape of the first electrode 303 is a circle having a smaller size than the circular hole of the second electrode 304, and is arranged at the hole position of the second electrode 304.
- Each electrode is provided with an electrode lead-out portion. In this case as well, voltage control can be performed independently on the first electrode 303 and on the electrodes 304a, 304b, 304c, and 304d constituting the second electrode. With such a structure, the overall thickness can be reduced.
- the operation of the liquid crystal lens 301 shown in FIGS. 3A and 3B will be described.
- a voltage is applied between the transparent third electrode 305 and the second electrode 304 made of an aluminum thin film or the like.
- a voltage is applied between the first electrode 303 and the second electrode 304.
- an axial electric field gradient can be formed on the central axis 309 of the second electrode 304 having a circular hole.
- The liquid crystal molecules of the liquid crystal layer 306 are aligned along the direction of the electric field gradient by the axially symmetric electric field gradient thus formed around the edge of the circular electrode.
- the refractive index distribution of the extraordinary light changes from the center to the periphery of the circular electrode due to the change in the orientation distribution of the liquid crystal layer 306, so that it can function as a lens.
- In this way, the refractive index distribution of the liquid crystal layer 306 can be changed freely by the voltages applied to the first electrode 303 and the second electrode 304, and optical characteristics such as those of a convex lens or a concave lens can be controlled freely.
- an effective voltage of 20 Vrms is applied between the first electrode 303 and the second electrode 304, and an effective voltage of 70 Vrms is applied between the second electrode 304 and the third electrode 305.
- An effective voltage of 90 Vrms is applied between the first electrode 303 and the third electrode 305 to function as a convex lens.
- the liquid crystal driving voltage (voltage applied between the electrodes) is a sine wave or a rectangular wave AC waveform with a duty ratio of 50%.
- the voltage value to be applied is represented by an effective voltage (rms: root mean square value).
- For example, an AC sine wave of 100 Vrms has a voltage waveform with a peak value of about ±141 V.
- 1 kHz is used as the frequency of the AC voltage.
- different voltages are applied between the electrodes 304 a, 304 b, 304 c, and 304 d constituting the second electrode 304 and the third electrode 305.
- As a result, the refractive index distribution, which is axially symmetric when the same voltage is applied to all the electrodes, becomes an asymmetric distribution whose axis is shifted with respect to the central axis 309 of the circular hole of the second electrode, and the effect of deflecting the incident light from its straight path is obtained.
- the direction of deflection of incident light can be changed by appropriately changing the voltage applied between the divided second electrode 304 and third electrode 305.
- the optical axis position is shifted to the position indicated by reference numeral 310.
- the shift amount is 3 ⁇ m, for example.
- FIG. 4 is a schematic diagram for explaining the optical axis shift function of the liquid crystal lens 301.
- the voltage applied between the electrodes 304a, 304b, 304c, and 304d constituting the second electrode and the third electrode 305 is controlled for each of the electrodes 304a, 304b, 304c, and 304d.
- This makes it possible to shift the central axis of the refractive index distribution of the liquid crystal lens relative to the central axis of the image sensor, which is equivalent to displacing the lens in the xy plane with respect to the imaging element surface A01. The light beam input to the image sensor can therefore be deflected in the u and v directions.
- FIG. 5 shows a detailed configuration of the unit imaging unit 3 shown in FIG.
- the optical lens 302 in the unit imaging unit 3 includes two optical lenses 302a and 302b.
- the liquid crystal lens 301 is disposed between the optical lenses 302a and 302b.
- Each of the optical lenses 302a and 302b includes one or a plurality of lenses.
- Light rays incident from the object plane A02 (see FIG. 4) are collected by the optical lens 302a disposed on the object-plane side of the liquid crystal lens 301 and enter the liquid crystal lens 301 with a reduced spot size. At this point, the rays enter the liquid crystal lens 301 nearly parallel to the optical axis.
- the light rays emitted from the liquid crystal lens 301 are imaged on the surface of the image sensor 15 by the optical lens 302b disposed on the image sensor 15 side of the liquid crystal lens 301.
- Because the diameter of the liquid crystal lens 301 can be reduced, the voltage applied to the liquid crystal lens 301 can be lowered, the lens effect is increased, and the second insulating layer 308 can be made thinner, so the overall lens thickness can be reduced.
- the imaging apparatus 1 shown in FIG. 1 has a configuration in which one imaging lens is arranged for one imaging element.
- a plurality of second electrodes 304 may be formed on the same substrate, and a plurality of liquid crystal lenses may be integrated. That is, in the liquid crystal lens 301, the hole portion of the second electrode 304 corresponds to the lens. Therefore, by arranging a plurality of patterns of the second electrodes 304 on a single substrate, each hole portion of the second electrode 304 has a lens effect. Therefore, by arranging the plurality of second electrodes 304 on the same substrate in accordance with the arrangement of the plurality of imaging elements, it is possible to deal with all the imaging elements with a single liquid crystal lens unit.
- the number of liquid crystal layers is one.
- Although a four-division electrode is used here as an example, the number of electrode divisions can be changed according to the directions in which the optical axis is to be shifted.
- the image sensor 15 includes pixels 501 that are two-dimensionally arranged.
- the pixel size of the CMOS image sensor of the present embodiment is 5.6 ⁇ m ⁇ 5.6 ⁇ m
- the pixel pitch is 6 ⁇ m ⁇ 6 ⁇ m
- the effective number of pixels is 640 (horizontal) ⁇ 480 (vertical).
- the pixel is a minimum unit of an imaging operation performed by the imaging device.
- one pixel corresponds to one photoelectric conversion element (for example, a photodiode).
- the averaging time is controlled by an electronic or mechanical shutter or the like, and its operating frequency generally matches the frame frequency of the video signal output from the imaging device 1 and is, for example, 60 Hz.
- FIG. 7 shows a detailed configuration of the image sensor 15.
- the pixel 501 of the CMOS image sensor 15 amplifies the signal charge photoelectrically converted by the photodiode 515 by the amplifier 516.
- The signal of each pixel is selected by a vertical/horizontal addressing scheme, with the switch 517 controlled by the vertical scanning circuit 511 and the horizontal scanning circuit 512, and is taken out as a voltage or current through a CDS 518 (Correlated Double Sampling) circuit, a switch 519, and an amplifier 520 as the signal S01.
- the switch 517 is connected to the horizontal scanning line 513 and the vertical scanning line 514.
- The CDS 518 is a circuit that performs correlated double sampling and can suppress 1/f noise among the random noise generated by the amplifier 516 and the like. Pixels other than the pixel 501 have the same configuration and function. In addition, since CMOS image sensors can be mass-produced using CMOS logic LSI manufacturing processes, they are cheaper than CCD image sensors, which require high-voltage analog circuits, consume less power because of their smaller elements, and in principle have the advantage that smear and blooming do not occur.
- the monochrome CMOS image sensor 15 is used. However, a color-compatible CMOS image sensor in which R, G, and B color filters are individually attached to each pixel can also be used. By using a Bayer structure in which repetitions of R, G, G, and B are arranged in a checkered pattern, colorization can be easily realized with one image sensor.
- Reference symbol P001 denotes a CPU (Central Processing Unit) that controls the overall processing operation of the imaging apparatus 1; it may also be called a microcontroller (microcomputer).
- Symbol P002 denotes a ROM (Read Only Memory) composed of non-volatile memory, which stores the program of the CPU P001 and the setting values required by each processing unit.
- Reference numeral P003 denotes a RAM (Random Access Memory) that stores temporary data of the CPU.
- Reference numeral P004 denotes a VideoRAM, which mainly stores video signals and image signals in the middle of calculation, and is composed of SDRAM (Synchronous Dynamic RAM) or the like.
- In this embodiment, the RAM P003 is used for the programs and work data of the CPU P001, and the VideoRAM P004 is used for storing images; however, the two RAM blocks may also be unified into the VideoRAM P004.
- Reference numeral P005 denotes a system bus to which the CPU P001, the ROM P002, the RAM P003, the VideoRAM P004, the video processing unit 27, the video composition processing unit 38, and the control unit 33 are connected.
- the system bus P005 is also connected to internal blocks of the video processing unit 27, the video composition processing unit 38, and the control unit 33, which will be described later.
- the CPU P001 controls the system bus P005 as a host, and setting data necessary for video processing, image processing, and optical axis control flows bidirectionally.
- The system bus P005 is also used when an image being processed by the video composition processing unit 38 is stored in the VideoRAM P004. Different bus lines may be used for the image signal bus, which requires a high transfer speed, and for the low-speed data bus.
- the system bus P005 is connected to an external interface such as a USB or flash memory card (not shown) and a display drive controller of a liquid crystal display as a viewfinder.
- The video composition processing unit 38 performs video composition processing on the signal S02 input from the other video processing units, and outputs the signal S03 to the other control units or to the outside.
- FIG. 9 is a block diagram illustrating a configuration of the video processing unit 27.
- the video processing unit 27 includes a video input processing unit 601, a correction processing unit 602, and a calibration parameter storage unit 603.
- the video input processing unit 601 captures a video signal from the unit imaging unit 3, performs signal processing such as knee processing and gamma processing, and also performs white balance control.
- the output of the video input processing unit 601 is output to the correction processing unit 602, and distortion correction processing based on calibration parameters obtained by a calibration procedure described later is performed.
- the correction processing unit 602 calibrates distortion caused by an attachment error of the image sensor 15.
- The calibration parameter storage unit 603 is a RAM (Random Access Memory) that stores calibration values.
- the corrected video signal that is output from the correction processing unit 602 is output to the video composition processing unit 38.
- The data stored in the calibration parameter storage unit 603 is updated by the CPU P001 (FIG. 8), for example, when the imaging apparatus 1 is turned on.
- the calibration parameter storage unit 603 may be a ROM (Read Only Memory), and the stored data may be determined by a calibration procedure at the time of factory shipment and stored in the ROM.
- the video input processing unit 601, the correction processing unit 602, and the calibration parameter storage unit 603 are each connected to the system bus P005.
- the above-described gamma processing characteristics of the video input processing unit 601 are stored in the ROM P002.
- the video input processing unit 601 receives data stored in the ROM P002 (FIG. 8) via the system bus P005 by the program of the CPU P001.
- The correction processing unit 602 writes intermediate image data to the VideoRAM P004 via the system bus P005, or reads it back from the VideoRAM P004.
- the monochrome CMOS image sensor 15 is used, but a color CMOS image sensor may be used.
- In that case, the video input processing unit 601 performs a Bayer interpolation process.
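- The patent only states that a Bayer interpolation process is performed when a color sensor is used. As one common realization, the sketch below performs bilinear demosaicing of an RGGB Bayer mosaic; the kernels and the function name are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (even rows: R G, odd rows: G B).
    raw is a 2-D float array; returns an H x W x 3 RGB image."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1.0 - r_mask - b_mask

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # interpolate missing greens
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # interpolate missing reds/blues

    g = convolve2d(raw * g_mask, k_g,  mode='same')
    r = convolve2d(raw * r_mask, k_rb, mode='same')
    b = convolve2d(raw * b_mask, k_rb, mode='same')
    return np.dstack([r, g, b])
```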
- FIG. 10 is a block diagram showing a configuration of the video composition processing unit 38.
- the video composition processing unit 38 includes a composition processing unit 701, a composition parameter storage unit 702, a determination unit 703, and a stereo image processing unit 704.
- the composition processing unit 701 performs composition processing on the imaging results (the signal S02 input from the video processing unit) of the plurality of unit imaging units 2 to 7 (FIG. 1). As described later, the resolution of the image can be improved by the synthesis processing by the synthesis processing unit 701.
- the synthesis parameter storage unit 702 stores image shift amount data obtained from, for example, three-dimensional coordinates between unit imaging units derived by calibration described later.
- the determination unit 703 generates a signal S03 to the control unit based on the video composition result.
- the stereo image processing unit 704 obtains a shift amount for each pixel (shift parameter for each pixel) from each captured image of the plurality of unit imaging units 2 to 7. In addition, the stereo image processing unit 704 obtains data normalized by the pixel pitch of the image sensor according to the imaging condition (distance).
- the composition processing unit 701 shifts the image based on this shift amount and composes it.
- the determination unit 703 detects the power of the high-band component of the video signal by, for example, Fourier transforming the result of the synthesis process.
- the synthesis processing unit 701 performs synthesis processing of four unit imaging units.
- the image sensor is assumed to be wide VGA (854 pixels ⁇ 480 pixels).
- the video signal S04 that is the output of the video composition processing unit 38 is a high-vision signal (1920 pixels ⁇ 1080 pixels).
- the frequency band determined by the determination unit 703 is approximately 20 MHz to 30 MHz.
- the upper limit of the video frequency band at which a wide VGA video signal can be reproduced is approximately 10 MHz to 15 MHz.
- the synthesis processing unit 701 performs synthesis processing to restore a component of 20 MHz to 30 MHz.
- the image sensor is a wide VGA.
- An imaging optical system mainly composed of the imaging lenses 8 to 13 (FIG. 1) needs to have characteristics that do not deteriorate the band of the HDTV signal.
- the video composition processing unit 38 controls the control unit 32 to the control unit 37 so that the power of the frequency band (20 MHz to 30 MHz component in the above example) of the synthesized video signal S04 is maximized.
- the determination unit 703 performs a Fourier transform process, and determines the magnitude of energy of a specific frequency or higher (for example, 20 MHz) as a result.
- the effect of restoring the video signal band that exceeds the band of the image sensor changes depending on the phase when the image formed on the image sensor is sampled within a range determined by the size of the pixel.
- Therefore, the control units 32 to 37 are used to control the imaging lenses 8 to 13.
- the control unit 33 controls the liquid crystal lens 301 included in the imaging lens 9.
- The ideal state of the control result is a state in which the sampling phases of the imaging results of the unit imaging units are shifted in the horizontal, vertical, and diagonal directions by 1/2 of the pixel size. In such an ideal state, the energy of the high-band component in the result of the Fourier transform is maximized. That is, the control unit 33 performs control so that the energy in the result of the Fourier transform is maximized, through the feedback loop of controlling the liquid crystal lens and judging the resulting synthesis processing.
- With the video signal from the video processing unit 27 as a reference, the imaging lenses of the other unit imaging units 2 and 4 to 7 are controlled via the control units 32 and 34 to 37 (FIG. 1) other than the control unit 33.
- For example, the optical axis phase of the imaging lens of the unit imaging unit 2 is controlled by the control unit 32.
- The optical axis phase is controlled similarly for the other unit imaging units 4 to 7.
- the phase offset averaged by the image sensor is optimized. In other words, when sampling an image formed on the image sensor with pixels, the sampling phase is controlled to an ideal state for high definition by controlling the optical axis phase.
- the determination unit 703 determines the synthesis processing result, and maintains a control value if a high-definition and high-quality video signal can be synthesized.
- In that case, the synthesis processing unit 701 outputs the high-definition, high-quality video signal as the video signal S04. On the other hand, if a high-definition, high-quality video signal cannot be synthesized, the imaging lenses are controlled again.
- the output of the video composition processing unit 38 is, for example, a video signal S04, which is output to a display (not shown), is output to an image recording unit (not shown), and is recorded on a magnetic tape or an IC card.
- the synthesis processing unit 701, the synthesis parameter storage unit 702, the determination unit 703, and the stereo image processing unit 704 are each connected to the system bus P005.
- the synthesis parameter storage unit 702 is configured by a RAM.
- The synthesis parameter storage unit 702 is updated by the CPU P001 via the system bus P005 when the imaging apparatus 1 is powered on. Further, the composition processing unit 701 writes intermediate image data to the VideoRAM P004 via the system bus P005, or reads it back from the VideoRAM P004.
- The stereo image processing unit 704 obtains the shift amount for each pixel (a shift parameter for each pixel) and data normalized by the pixel pitch of the image sensor. Synthesizing a video using multiple image shift amounts (a shift amount for each pixel) within one screen of the captured video is effective when an in-focus video is wanted for subjects ranging from short to long shooting distances; that is, an image with a deep depth of field can be taken. Conversely, when a single image shift amount is applied to the whole screen instead of a per-pixel shift amount, a video with a shallow depth of field can be taken.
- the control unit 33 includes a voltage control unit 801 and a liquid crystal lens parameter storage unit 802.
- the voltage control unit 801 controls the voltage of each electrode of the liquid crystal lens 301 included in the imaging lens 9 in accordance with a control signal input from the determination unit 703 of the video composition processing unit 38.
- the voltage to be controlled is determined by the voltage control unit 801 based on the parameter value read from the liquid crystal lens parameter storage unit 802.
- In this way, the electric field distribution of the liquid crystal lens 301 is controlled ideally, and the optical axis is controlled as described above.
- photoelectric conversion is performed in the image sensor 15 with the capture phase corrected.
- the phase of the pixel is ideally controlled.
- As a result, the resolution of the video output signal is improved. If the control result of the control unit 33 is in the ideal state, the energy detected in the result of the Fourier transform performed by the determination unit 703 is maximized. To achieve this state, the control unit 33 performs control through a feedback loop formed by the imaging lens 9, the video processing unit 27, and the video synthesis processing unit 38 so that a large amount of high-frequency energy is obtained.
- the voltage control unit 801 and the liquid crystal lens parameter storage unit 802 are each connected to the system bus P005.
- the liquid crystal lens parameter storage unit 802 is configured by, for example, a RAM, and is updated by the CPU P001 via the system bus P005 when the imaging apparatus 1 is turned on.
- The calibration parameter storage unit 603, the synthesis parameter storage unit 702, and the liquid crystal lens parameter storage unit 802 shown in FIGS. 9 to 11 may be implemented in the same RAM or ROM and used selectively according to their stored addresses. A configuration that uses some addresses of the ROM P002 and the RAM P003 may also be adopted.
- FIG. 12 is a flowchart showing the operation of the imaging apparatus 1.
- the correction processing unit 602 reads calibration parameters from the calibration parameter storage unit 603 (step S901).
- the correction processing unit 602 performs correction for each of the unit imaging units 2 to 7 based on the read calibration parameters (step S902). This correction is to remove distortion for each of the unit imaging units 2 to 7 described later.
- the synthesis processing unit 701 reads a synthesis parameter from the synthesis parameter storage unit 702 (step S903).
- Next, the stereo image processing unit 704 obtains the shift amount for each pixel (a shift parameter for each pixel) and data normalized by the pixel pitch of the image sensor. The synthesis processing unit 701 then executes sub-pixel video synthesis high-definition processing based on the read synthesis parameters, the per-pixel shift amounts, and the data normalized by the pixel pitch of the image sensor (step S904). As will be described later, the composition processing unit 701 constructs a high-definition image based on information having different phases in units of sub-pixels.
- the determination unit 703 executes high-definition determination (step S905) and determines whether or not it is high-definition (step S906).
- the determination unit 703 holds a determination threshold value therein, determines the degree of high definition, and outputs information on the determination result to each of the control units 32 to 37.
- If the video is judged to be high-definition, each of the control units 32 to 37 keeps the control voltage unchanged and maintains the current liquid crystal lens parameter values (step S907).
- Otherwise, the control units 32 to 37 change the control voltage of the liquid crystal lens 301 (step S908).
- The CPU P001 manages the control end condition; for example, it determines whether the power-off condition of the imaging apparatus 1 is satisfied (step S909). If the control end condition is not satisfied in step S909, the CPU P001 returns to step S903 and repeats the above-described processing. If the control end condition is satisfied in step S909, the CPU P001 ends the processing of the flowchart shown in FIG. 12. The control end condition may also be set in advance, for example to ten high-definition determinations when the imaging apparatus 1 is powered on, so that the processing of steps S903 to S909 is repeated the specified number of times.
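- The flow of FIG. 12 can be summarized in code-like form. The sketch below mirrors steps S901 to S909 with hypothetical method names; it is an outline of the loop described above, not the patent's implementation:

```python
def imaging_loop(device):
    calib = device.read_calibration_parameters()               # S901
    device.correct_unit_images(calib)                           # S902: per-unit distortion removal
    while True:
        params = device.read_synthesis_parameters()             # S903
        shifts = device.stereo_per_pixel_shifts()                # per-pixel shift, normalized by pixel pitch
        frame  = device.synthesize_subpixel(params, shifts)      # S904: sub-pixel high-definition synthesis
        if device.judge_high_definition(frame):                   # S905/S906: e.g. high-band FFT energy vs. threshold
            device.keep_lens_voltages()                            # S907
        else:
            device.update_lens_voltages()                          # S908
        if device.end_condition_met():                             # S909: e.g. power-off or iteration limit
            break
```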
- the image size, magnification, rotation amount, and shift amount are the synthesis parameter B01, and are read from the synthesis parameter storage unit 702 in the synthesis parameter reading process (step S903).
- a coordinate B02 is determined based on the image size and magnification of the synthesis parameter B01.
- a conversion operation B03 is performed based on the coordinate B02 and the rotation amount and shift amount of the synthesis parameter B01.
- one high-definition image is obtained from four unit imaging units.
- the four images B11 to B14 captured by the individual unit imaging units are superimposed on one coordinate system B20 using the rotation amount and shift amount parameters.
- a filter operation is performed using the four images B11 to B14 and the weighting coefficient based on the distance. For example, cubic (third order approximation) is used as a filter.
- the weight w acquired from the pixel at the distance d is as follows.
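- The equation for the weight w appears to have been dropped from this text. A commonly used form for cubic (third-order) interpolation is the cubic-convolution (Keys) kernel sketched below; the exact formula and the parameter a (here -1.0) are assumptions, not values confirmed by the patent:

```python
def cubic_weight(d, a=-1.0):
    """Cubic-convolution (Keys) weight for a sample at distance d from the
    interpolation point, with d expressed in output-pixel units.
    The parameter a (commonly -0.5 or -1.0) is an assumption."""
    d = abs(d)
    if d <= 1.0:
        return (a + 2.0) * d**3 - (a + 3.0) * d**2 + 1.0
    if d < 2.0:
        return a * d**3 - 5.0 * a * d**2 + 8.0 * a * d - 4.0 * a
    return 0.0
```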
- the determination unit 703 extracts a signal within the defined range (step S1001). For example, when one screen in a frame unit is defined as a definition range, signals for one screen are stored in advance by a frame memory block (not shown). For example, in the case of VGA resolution, one screen is two-dimensional information composed of 640 ⁇ 480 pixels. The determination unit 703 performs Fourier transform on the two-dimensional information to convert time-axis information into frequency-axis information (step S1002). Next, a high-frequency signal is extracted by an HPF (High-pass filter) (step S1003).
- For example, assume that the image sensor has an aspect ratio of 4:3 and outputs a 60 fps (frames per second) progressive VGA signal (640 × 480 pixels), and that the video output signal of the video composition processing unit is Quad-VGA. Assume further that the limiting resolution of the VGA signal is about 8 MHz and that a signal of 10 to 16 MHz is reproduced by the synthesis process. In this case, the HPF is given a characteristic that passes components of, for example, 10 MHz or more.
- the determination unit 703 performs determination by comparing the signal of 10 MHz or higher with a threshold value (step S1004). For example, when the DC (direct current) component as a result of Fourier transform is 1, a threshold value of energy of 10 MHz or higher is set to 0.5 and compared with the threshold value.
- the case where Fourier transform is performed on the basis of an image for one frame of an imaging result with a certain resolution has been described.
- Alternatively, the evaluation range may be defined in units of lines (the horizontal synchronization repetition unit; for a high-definition signal, 1920 effective pixels per line).
- In that case, the frame memory block becomes unnecessary and the circuit scale can be reduced.
- The degree of high definition of one screen can then be determined by executing the Fourier transform repeatedly, for example 1080 times for the number of lines, and combining the 1080 per-line threshold comparison judgments. The determination may also be made using the threshold comparison results of several frames for each screen.
- For the threshold determination, a fixed threshold may be used, or the threshold may be changed adaptively.
- a feature of the image being determined may be separately extracted, and the threshold value may be switched based on the result. For example, image features may be extracted by histogram detection. Further, the current threshold value may be changed in conjunction with the past determination result.
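- A minimal sketch of this determination (2-D Fourier transform, selection of the high band, and comparison of its energy against a threshold relative to the DC component) is shown below. Mapping a video frequency such as 10 MHz to a spatial-frequency cut-off is simplified here to a fraction of the Nyquist limit, and all names are illustrative:

```python
import numpy as np

def is_high_definition(frame, cutoff_fraction=0.6, threshold=0.5):
    """Judge definition from the energy above a spatial-frequency cut-off.

    frame           : 2-D grayscale image (one synthesized screen or one line block)
    cutoff_fraction : fraction of the Nyquist frequency treated as the 'high band'
    threshold       : required ratio of high-band energy to the DC component
    """
    spectrum = np.fft.fftshift(np.fft.fft2(frame))
    power = np.abs(spectrum) ** 2

    h, w = frame.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    # radial frequency normalized so that 1.0 corresponds to the Nyquist limit
    radius = np.sqrt(((yy - cy) / (h / 2.0)) ** 2 + ((xx - cx) / (w / 2.0)) ** 2)

    dc_energy = power[cy, cx]
    high_energy = power[radius >= cutoff_fraction].sum()
    return (high_energy / dc_energy) >= threshold
```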
- Next, the control voltage change (step S908 in FIG. 12) executed by the control units 32 to 37 will be described.
- the processing operation of the control unit 33 will be described as an example, but the processing operations of the control units 32 and 34 to 37 are the same.
- First, the voltage control unit 801 (FIG. 11) reads the current liquid crystal lens parameter values from the liquid crystal lens parameter storage unit 802 (step S1101). Then, the voltage control unit 801 updates the liquid crystal lens parameter values (step S1102). The liquid crystal lens parameters include their past history.
- For example, suppose that, for the current four voltage control units 33a, 33b, 33c, and 33d, the past history shows that the voltage of the voltage control unit 33a has been raised in 5 V steps, from 40 V to 45 V to 50 V. Based on this history and on the determination that the current video is not yet high-definition, it is decided that the voltage should be raised further, and the voltage of the voltage control unit 33a is updated to 55 V while the voltages of the voltage control units 33b, 33c, and 33d are kept unchanged. In this manner, the voltages applied to the four divided electrodes 304a, 304b, 304c, and 304d of the liquid crystal lens are updated sequentially, and the liquid crystal lens parameters are updated as a history.
- In this way, the captured images of the plurality of unit imaging units 2 to 7 are synthesized in sub-pixel units, the degree of high definition is determined, and the control voltages are changed so as to maintain high-definition performance.
- That is, by applying different voltages to the divided electrodes 304a, 304b, 304c, and 304d, the imaging device 1 changes the sampling phase with which the image formed on each image sensor by the imaging lenses 8 to 13 is sampled by the pixels of that image sensor.
- The ideal state of this control is one in which the sampling phases of the imaging results of the unit imaging units are shifted in the horizontal, vertical, and diagonal directions by 1/2 of the pixel size.
- the determination unit 703 determines whether the state is ideal.
- This processing operation is, for example, processing performed at the time of factory production of the imaging apparatus 1, and is performed by performing a specific operation such as simultaneously pressing a plurality of operation buttons when the imaging apparatus is turned on.
- This camera calibration process is executed by the CPU P001.
- First, an operator adjusting the image pickup apparatus 1 prepares a checker-pattern (checkered) test chart with a known pattern pitch, changes its posture and angle, and captures images of the checker pattern in 30 different postures (step S1201).
- the CPU P001 analyzes the captured image for each of the unit imaging units 2 to 7, and derives an external parameter value and an internal parameter value for each of the unit imaging units 2 to 7 (step S1202).
- The parameters are derived using a general camera model known as the pinhole camera model.
- The external parameters consist of six values: the rotation information and the translation information of the camera posture in three dimensions.
- The process of deriving such parameters is calibration.
- In this general camera model, the six external parameters are a three-axis vector of yaw, pitch, and roll indicating the camera attitude with respect to the world coordinates, and the three components of a translation vector indicating the translation.
- the internal parameters are the image center (u0, v0) where the optical axis of the camera intersects the image sensor, the angle and aspect ratio of the coordinates assumed on the image sensor, and the focal length.
- the CPU P001 stores the obtained parameters in the calibration parameter storage unit 603 (step S1203).
- the individual camera distortion of the unit imaging units 2 to 7 is corrected by using this parameter in the correction processing of the unit imaging units 2 to 7 (step S902 shown in FIG. 12).
- For example, a checker pattern that is originally made of straight lines may be captured as curves due to camera distortion.
- The parameters for restoring such a checker pattern to straight lines are derived by this camera calibration process and are used to correct the unit imaging units 2 to 7.
- Next, the CPU P001 derives the external parameters between the unit imaging units 2 to 7 (step S1204). Then, the parameters stored in the synthesis parameter storage unit 702 and the liquid crystal lens parameter storage unit 802 are updated (steps S1205 and S1206). These values are used in the sub-pixel video synthesis high-definition processing (step S904) and in the control voltage change (step S908).
- The case where the CPU P001 or a microcomputer in the imaging apparatus 1 has the camera calibration function has been described here.
- Alternatively, a separate personal computer may be prepared, the same processing executed on that personal computer, and only the obtained parameters downloaded to the imaging apparatus 1.
- a pinhole camera model as shown in FIG. 17 is used for the state of projection by the camera.
- all the light reaching the image plane passes through the pinhole C01, which is one point at the center of the lens, and forms an image at a position intersecting the image plane C02.
- a coordinate system in which the intersection of the optical axis and the image plane C02 is the origin and the X axis and the Y axis are aligned with the arrangement axis of the camera element is called an image coordinate system.
- A coordinate system whose origin is the camera lens center, whose Z axis is the optical axis, and whose X and Y axes are parallel to the X and Y axes of the image coordinate system is referred to as the camera coordinate system.
- The relationship between the three-dimensional coordinates M = [X, Y, Z]^T of a point in the world coordinate system (Xw, Yw, Zw), which is the coordinate system representing the space, and its projection m = [x, y]^T in the image coordinate system is given by equation (1), s·m̃ = A[R t]·M̃, where m̃ and M̃ denote the homogeneous coordinates of m and M and s is a scale factor.
- A is the internal parameter matrix; as shown in equation (2), it has the form A = [[α, γ, u0], [0, β, v0], [0, 0, 1]].
- α and β are scale factors formed by the product of the pixel size and the focal length.
- (u0, v0) is the image center.
- γ is a parameter representing the skew (distortion) of the image coordinate axes.
- [R t] is the external parameter matrix, a 3 × 4 matrix in which the 3 × 3 rotation matrix R and the translation vector t are arranged side by side.
- B = A^(-T)A^(-1) is a 3 × 3 symmetric matrix, as shown in equation (8); it contains six unknowns, and two equations can be set up for each homography H. Therefore, if three or more homographies H are obtained, the internal parameter matrix A can be determined.
- Since A^(-T)A^(-1) is symmetric, a vector b in which the independent elements of B given by equation (8) are arranged is defined as in equation (9).
- Equation (6) and Equation (7) become the following Equation (12).
- V is a 2n ⁇ 6 matrix.
- The vector b is obtained as the eigenvector corresponding to the minimum eigenvalue of V^T V.
- When n = 2, a solution can be obtained by adding the constraint γ = 0 to equation (13).
- When n = 1, only two internal parameters can be obtained; in that case, for example, only α and β are treated as unknowns and the remaining internal parameters are assumed to be known.
- More accurate parameters can then be obtained by refining the parameters with the nonlinear least-squares method, using the values obtained so far as initial values.
- camera calibration can be performed by using three or more images taken with the internal parameters fixed from different viewpoints. At this time, generally, the larger the number of images, the higher the parameter estimation accuracy. Also, the error increases when the rotation between images used for calibration is small.
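- The procedure described here (imaging a known checker pattern in many postures and estimating internal and external parameters) corresponds closely to Zhang's calibration method as implemented in OpenCV. The sketch below is one way to obtain such parameters and is offered only as an illustration; the pattern size and square pitch are assumptions about the test chart:

```python
import cv2
import numpy as np

def calibrate_from_checker_images(images, pattern_size=(9, 6), square_mm=10.0):
    """Estimate the internal parameters (camera matrix, distortion coefficients)
    and the external parameters (rotation and translation per view) from
    grayscale images of a planar checker pattern."""
    # 3-D coordinates of the inner corners on the planar chart (Z = 0)
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_mm

    obj_points, img_points = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, images[0].shape[::-1], None, None)
    return camera_matrix, dist_coeffs, rvecs, tvecs
```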
- FIG. 18 illustrates the imaging of a point M on the target object plane D03 using the basic image sensor 15 (referred to as the basic camera D01) and the adjacent image sensor 16 (referred to as the adjacent camera D02).
- FIG. 19 shows the situation of FIG. 18 expressed using the pinhole camera model shown in FIG. 17. In FIG. 19, the symbols are as follows.
- symbol D06 has shown the pinhole which is the center of the camera lens of the basic camera D01.
- Reference sign D07 denotes the pinhole at the center of the camera lens of the adjacent camera D02.
- Reference sign D08 represents the image plane of the basic camera D01, and Z1 represents the optical axis of the basic camera D01.
- Reference sign D09 indicates the image plane of the adjacent camera D02, and Z2 indicates the optical axis of the adjacent camera D02.
- From equation (1), the relationship between the point M in the world coordinate system and the point m in the image coordinate system can be expressed as the following equation (16).
- Here, the central projection matrix of the basic camera D01 is denoted P1, and the central projection matrix of the adjacent camera D02 is denoted P2.
- To obtain the point m2 on the image plane D09 that corresponds to a point m1 on the image plane D08, the following method is used. First, from m1, the point M in three-dimensional space is obtained from equation (16) as the following equation (17); since the central projection matrix P is a 3 × 4 matrix, its pseudo-inverse is used.
- Next, the corresponding point m2 in the adjacent image is obtained by the following equation (18), using the central projection matrix P2 of the adjacent camera.
- The corresponding point m2 between the basic image and the adjacent image calculated in this way is obtained with sub-pixel precision.
- Corresponding point matching using camera parameters has an advantage that the corresponding points can be instantaneously calculated only by matrix calculation because the camera parameters have already been obtained.
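The matrix-only corresponding-point computation above can be sketched as follows (an illustrative sketch; P1 and P2 are the 3 × 4 central projection matrices assumed to be known from calibration, and the commented-out usage values are hypothetical):

```python
import numpy as np

def corresponding_point(m1_uv, P1, P2):
    """Map a point m1 in the basic image to its corresponding point m2 in the
    adjacent image via the object point M, in the spirit of equations (16)-(18).
    Note: the pseudo-inverse returns one point on the back-projection ray of m1;
    strictly, M is only determined up to that ray unless the depth is known."""
    m1 = np.array([m1_uv[0], m1_uv[1], 1.0])   # homogeneous image point
    M = np.linalg.pinv(P1) @ m1                # eq. (17): M ~ P1^+ m1
    m2 = P2 @ M                                # eq. (18): project with P2
    return m2[:2] / m2[2]                      # sub-pixel image coordinates

# Hypothetical usage:
# P1 = A1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
# P2 = A2 @ np.hstack([R, t.reshape(3, 1)])
# print(corresponding_point((320.0, 240.0), P1, P2))
```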
- (xu, yu) are the image coordinates produced by an ideal lens without distortion.
- (xd, yd) are the image coordinates produced by a lens having distortion.
- The coordinate systems of both are the X and Y axes of the image coordinate system described above.
- r is the distance from the image center to (xu, yu).
- The image center is determined by the internal parameters u0 and v0 described above. Under this model, if the coefficients k1 to k5 and the internal parameters are derived by calibration, the difference in imaging coordinates caused by the presence or absence of distortion can be obtained, and the distortion caused by the real lens can be corrected.
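A sketch of applying such a distortion model is shown below. The exact form of the patent's k1 to k5 terms is not reproduced here, so a widely used radial/tangential (Brown-Conrady style) model with five coefficients is assumed purely for illustration.

```python
import numpy as np

def distort(xu, yu, u0, v0, fx, fy, k):
    """Map ideal (distortion-free) pixel coordinates (xu, yu) to distorted
    coordinates (xd, yd). k = (k1, k2, p1, p2, k3) is an assumed radial +
    tangential coefficient set, not the patent's exact equations."""
    k1, k2, p1, p2, k3 = k
    # normalize about the image center (u0, v0)
    x = (xu - u0) / fx
    y = (yu - v0) / fy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd * fx + u0, yd * fy + v0

# Undistortion (the inverse mapping) is typically done iteratively or with a
# precomputed remapping table once the coefficients are known from calibration.
```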
- FIG. 20 is a schematic diagram illustrating an imaging state of the imaging apparatus 1.
- the unit imaging unit 3 including the imaging element 15 and the imaging lens 9 images the imaging range E01.
- the unit imaging unit 4 including the imaging element 16 and the imaging lens 10 images the imaging range E02.
- the two unit imaging units 3 and 4 image substantially the same imaging range.
- the arrangement interval of the imaging devices 15 and 16 is 12 mm
- the focal length of the unit imaging units 3 and 4 is 5 mm
- the distance to the imaging range is 600 mm
- the optical axes of the unit imaging units 3 and 4 are parallel to each other.
- Under these conditions, the non-overlapping portion of the imaging ranges E01 and E02 is about 3% of their area. In this way, substantially the same region is imaged by both units, and the composition processing unit 38 performs high-definition processing on it.
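A rough check of the ~3% figure (a back-of-the-envelope sketch; the sensor width of about 3.6 mm is an assumption, not stated above): the parallax between the two units on the sensor is approximately B·f/H.

```python
# Back-of-the-envelope check (sensor width ~3.6 mm is an assumption).
B, f, H = 12.0, 5.0, 600.0      # mm: baseline, focal length, imaging distance
sensor_width = 3.6              # mm (assumed VGA-class sensor)
parallax = B * f / H            # ~0.1 mm shift between the two views
print(parallax / sensor_width)  # ~0.028, i.e. roughly 3 % non-overlap
```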
- Waveform 1 in FIG. 21 shows the contour of the subject.
- Waveform 2 in FIG. 21 shows the result of imaging with a single unit imaging unit.
- Waveform 3 in FIG. 21 shows the result of imaging with a different single unit imaging unit.
- Waveform 4 in FIG. 21 shows the output of the composition processing unit.
- the horizontal axis indicates the extent of the space.
- The extent of the space refers both to the real space and to the virtual spatial extent on the image sensor. The two are synonymous because they can be converted into each other using the external and internal parameters.
- When the waveforms are regarded as video signals read out sequentially from the image sensor, the horizontal axis in FIG. 21 is a time axis.
- Since such a signal, when displayed, is perceived by the observer as a spatial extent, the time axis of the video signal is synonymous with the extent of the space.
- The vertical axis in FIG. 21 represents amplitude or intensity. Since the intensity of the light reflected by the object is photoelectrically converted by the pixels of the image sensor and output as a voltage level, it may be regarded as an amplitude.
- the contour is a contour of an object in the real space.
- The contour, that is, the intensity of the light reflected by the object, is integrated over the spread of each pixel of the image sensor. Therefore, the unit imaging units 2 to 7 capture waveform 2 as shown in FIG. 21.
- This integration corresponds to low-pass filtering (LPF: Low Pass Filter).
- An arrow F01 in the waveform 2 in FIG. 21 indicates the spread of the pixels of the image sensor.
- Waveform 3 in FIG. 21 is the result of imaging with a different one of the unit imaging units 2 to 7; the light is integrated over the pixel spread indicated by the arrow F02 in waveform 3 of FIG. 21.
- the contour (profile) of reflected light below the spread determined by the resolution (pixel size) of the image sensor cannot be reproduced by the image sensor.
- The feature of this embodiment is that an offset is given to the phase relationship between waveform 2 and waveform 3 in FIG. 21.
- As a result, the contour of waveform 1 in FIG. 21 is reproduced most faithfully by waveform 4 in FIG. 21, which corresponds to the width of the arrow F03 in waveform 4 of FIG. 21.
- In other words, by using a plurality of unit imaging units, each composed of a non-solid lens typified by a liquid crystal lens and an imaging element, it becomes possible to obtain a video output exceeding the resolution limit imposed by the above-described averaging (LPF-like integration).
- FIG. 22 is a schematic diagram illustrating a relative phase relationship between two unit imaging units.
- Here, sampling refers to the process of extracting an analog signal at discrete positions.
- In FIG. 22, two unit imaging units are assumed, so the phase relationship of 0.5 pixel size G01 is ideal, as in state 1 of FIG. 22. As shown in state 1 of FIG. 22, light G02 is incident on each of the two unit imaging units. However, state 2 or state 3 of FIG. 22 may occur depending on the imaging distance or on how the imaging apparatus 1 is assembled.
- the one-dimensional phase relationship has been described.
- the phase control of the two-dimensional space can be performed by the operation shown in FIG.
- Two-dimensional phase control may be realized by controlling the phase of one unit imaging unit relative to the reference unit in two dimensions (horizontal, vertical, and horizontal + vertical).
- a case is assumed where four unit imaging units are used to capture substantially the same imaging target (subject) to obtain four images.
- The individual images are Fourier-transformed to determine feature points on the frequency axis, the rotation amount and shift amount relative to the reference image are calculated, and interpolation filtering is performed using that rotation amount and shift amount, so that a high-definition image can be obtained.
- the number of pixels of the image sensor is VGA (640 ⁇ 480 pixels)
- a quad-VGA (1280 ⁇ 960 pixels) high-definition image can be obtained by four VGA unit imaging units.
- a cubic (third order approximation) method is used.
- the resolution limit of the image sensor 1 is VGA
- the imaging lens has the ability to pass the Quad-VGA band, and the Quad-VGA band components above VGA are imaged at the VGA resolution as aliasing distortion. Using this aliasing distortion, the high-band components of Quad-VGA are restored by the video composition processing.
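The idea that aliased VGA captures taken with a half-pixel offset carry Quad-VGA information can be illustrated with a one-dimensional toy sketch (illustrative only; the actual reconstruction described above uses Fourier-domain registration and cubic interpolation filtering rather than simple interleaving):

```python
import numpy as np

# High-band 1-D "scene" on a fine grid (stand-in for the Quad-VGA band).
fine = np.sin(2 * np.pi * 0.4 * np.arange(64))

# Each unit imaging unit averages over a pixel two fine samples wide (the
# LPF-like integration), then samples every 2nd position; unit B is offset
# by half a pixel (one fine sample).
pixel = np.ones(2) / 2.0
blurred = np.convolve(fine, pixel, mode="same")
cam_a = blurred[0::2]                 # sampling phase 0
cam_b = blurred[1::2]                 # sampling phase 0.5 pixel

# Interleaving the two phase-shifted captures restores samples on the fine
# grid, doubling the effective sampling rate (the 2x reconstruction).
restored = np.empty(cam_a.size + cam_b.size)
restored[0::2] = cam_a
restored[1::2] = cam_b
```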
- FIGS. 23A to 23C are diagrams showing the relationship between the imaging target (subject) and its captured image.
- symbol I01 indicates an image light intensity distribution image.
- a symbol I02 indicates a corresponding point of P1.
- a symbol I03 indicates a pixel of the image sensor M.
- Reference numeral I04 represents a pixel of the image sensor N.
- As indicated by reference sign I05, the amount of light averaged within a pixel differs depending on the phase relationship between the corresponding point and the pixel, and this information is used to increase the resolution.
- As indicated by reference sign I06, the corresponding points are superimposed on each other by the image shift.
- In FIG. 23C, reference sign I02 indicates the corresponding point of P1.
- As indicated by reference sign I07 in FIG. 23C, the optical axis is shifted by the liquid crystal lens.
- FIG. 23A is a schematic diagram illustrating the case where one image is captured by the two unit imaging units of the image sensors M and N.
- FIG. 23B shows a state where the image of P1 is formed on the pixels of the image sensors. In this way, the phase between a pixel and the formed image is determined. This phase is determined by the positional relationship of the image sensors (baseline length B), the focal length f, and the imaging distance H.
- the phases may coincide with each other as shown in FIG. 23C.
- the light intensity distribution image in FIG. 23B schematically shows the light intensity for a certain spread. With respect to such light input, the image sensor averages within the range of pixel expansion. As shown in FIG. 23B, when the two unit imaging units capture with different phases, the same light intensity distribution is averaged with different phases. Therefore, a high-band component (for example, if the imaging device has a VGA resolution, a high band higher than the VGA resolution) can be reproduced by the later-stage combining process.
- a phase shift of 0.5 pixels is ideal.
- FIGS. 24A and 24B are schematic diagrams for explaining the operation of the imaging apparatus 1.
- FIGS. 24A and 24B illustrate a state in which an image is picked up by an imaging apparatus including two unit imaging units.
- a symbol Mn indicates a pixel of the image sensor M.
- a symbol Nn indicates a pixel of the image sensor N.
- Each image sensor is shown enlarged in pixel units for convenience of explanation.
- the plane of the imaging element is defined in two dimensions u and v, and FIG. 24A corresponds to a cross section of the u axis.
- The imaging targets P0 and P1 are at the same imaging distance H. The images of P0 are formed at u0 and u'0, respectively.
- u0 and u'0 are distances on the image sensors measured from the respective optical axes.
- In FIG. 24A, P0 lies on the optical axis of the image sensor M, so u0 = 0.
- The distances of the respective images of P1 from the optical axes are u1 and u'1.
- the relative phase with respect to the pixels of the image sensors M and N at the positions where P0 and P1 are imaged on the image sensors M and N determines the image shift performance. This relationship is determined by the imaging distance H, the focal length f, and the baseline length B that is the distance between the optical axes of the imaging elements.
- In FIGS. 24A and 24B, the positions where the images are formed, that is, u0 and u'0, are shifted from each other by half the pixel size.
- u'0 is imaged at the boundary between pixels of the image sensor N; that is, it is shifted by half the pixel size.
- Similarly, u1 and u'1 are shifted by half the pixel size.
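This geometry follows directly from the pinhole model (a sketch under the stated assumption of parallel optical axes): a point at lateral position X and distance H images at u = f·X/H on sensor M and at u' = f·(X - B)/H on sensor N, so the offset between the two samplings is u - u' = B·f/H, and the sampling phase is that offset taken modulo the pixel pitch p.

```python
def sampling_phase(B, f, H, p):
    """Relative sampling phase (in pixels) between the two image sensors for an
    object at distance H, from the pinhole relation u - u' = B*f/H."""
    return ((B * f / H) / p) % 1.0

# Example with figures used elsewhere in the text (a 6 um pixel pitch assumed):
print(sampling_phase(B=12.0, f=5.0, H=600.0, p=0.006))   # ~0.67 pixel
```

The half-pixel condition discussed above corresponds to this phase being 0.5.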
- FIG. 24B is a schematic diagram of the operation of restoring and generating one image by computation from the mutually corresponding captured images.
- Pu indicates the pixel size in the u direction
- Pv indicates the pixel size in the v direction.
- a region indicated by a rectangle indicates a pixel.
- FIG. 24B shows a relationship in which the pixels are shifted by half of each other, which is an ideal state for performing image shift and generating a high-definition image.
- FIGS. 25A and 25B are schematic diagrams of the case where, compared with FIGS. 24A and 24B, the image sensor N is attached with a deviation of half the pixel size from its design position due to, for example, an attachment error.
- a symbol Mn indicates a pixel of the image sensor M.
- a symbol Nn indicates a pixel of the image sensor N.
- the area indicated by a rectangle indicates a pixel.
- the symbol Pu indicates the pixel size in the u direction
- the symbol Pv indicates the pixel size in the v direction.
- In this case, u1 and u'1 have the same phase with respect to the pixels of the respective image sensors.
- FIGS. 26A and 26B are schematic diagrams of the case where the optical axis shift of this embodiment is applied to the situation of FIGS. 25A and 25B.
- a symbol Mn indicates a pixel of the image sensor M.
- a symbol Nn indicates a pixel of the image sensor N.
- the area indicated by a rectangle indicates a pixel.
- the symbol Pu indicates the pixel size in the u direction
- the symbol Pv indicates the pixel size in the v direction.
- The movement of the pinhole O' to the right, labeled optical axis shift J01 in FIG. 26A, illustrates this operation.
- FIGS. 27A and 27B are schematic diagrams for explaining a case where the subject is switched to the object P1 at the distance H1 from the state in which P0 is captured at the imaging distance H0.
- FIG. 27A a symbol Mn indicates a pixel of the image sensor M.
- a symbol Nn indicates a pixel of the image sensor N.
- FIG. 27B the area indicated by a rectangle indicates a pixel.
- the symbol Pu indicates the pixel size in the u direction
- the symbol Pv indicates the pixel size in the v direction.
- FIG. 27B is a schematic diagram illustrating the phase relationship between the image sensors when the subject is P1. After the subject is changed to P1 as shown in FIG. 27B, the phases substantially coincide with each other.
- a symbol Mn indicates a pixel of the image sensor M.
- a symbol Nn indicates a pixel of the image sensor N.
- the area indicated by a rectangle indicates a pixel.
- the symbol Pu indicates the pixel size in the u direction
- the symbol Pv indicates the pixel size in the v direction.
- a distance measuring unit for measuring the distance may be provided separately. Alternatively, the distance may be measured with the imaging apparatus of the present embodiment.
- Measuring distance using a plurality of cameras is common in surveying and the like.
- The ranging performance is proportional to the baseline length, which is the distance between the cameras, and to the focal length of the cameras, and is inversely proportional to the distance to the object being measured.
- the imaging apparatus of the present embodiment has, for example, an eight-eye configuration, that is, a configuration including eight unit imaging units.
- The measurement distance, that is, the distance to the subject, is assumed to be 500 mm.
- For example, four of the eight cameras whose optical axes are close together (short baseline lengths) are assigned to imaging and image shift processing, while the remaining cameras have long baseline lengths.
- The high-resolution image shift processing may also be performed using all eight eyes.
- Alternatively, the amount of blur may be determined by analyzing the resolution of a captured image, and the distance may be estimated from it.
- the accuracy of distance measurement may be improved by using another distance measuring means such as TOF (Time-of-Flight) together.
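The stated dependence of ranging performance on baseline, focal length, and distance follows from the stereo disparity relation (a standard relation shown here as a sketch, not the patent's own equation): the disparity is d = B·f/H, so a disparity measurement error δd maps to a distance error of roughly δH ≈ H²·δd/(B·f), which shrinks with longer baselines and focal lengths and grows with distance.

```python
def depth_error(B, f, H, delta_d):
    """Approximate distance error for a disparity error delta_d
    (same length units throughout), from d = B*f/H."""
    return H * H * delta_d / (B * f)

# Hypothetical numbers: 500 mm subject distance, a long-baseline pair of the
# eight-eye configuration (B = 36 mm assumed), f = 5 mm, 1/4-pixel matching
# accuracy with a 6 um pixel:
print(depth_error(B=36.0, f=5.0, H=500.0, delta_d=0.25 * 0.006))  # ~2.1 mm
```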
- FIG. 29A a symbol Mn indicates a pixel of the image sensor M.
- a symbol Nn indicates a pixel of the image sensor N.
- In FIG. 29B, the horizontal axis indicates the distance from the center (unit: pixels), and the vertical axis indicates Δr (unit: mm).
- FIG. 29A is a schematic diagram illustrating the case where P1 and P2 are imaged taking the depth Δr into consideration. The difference between their distances from the respective optical axes, (u1 - u2), is given by equation (22).
- u1-u2 is a value determined by the base line length B, the imaging distance H, and the focal length f.
- these conditions B, H, and f are fixed and regarded as constants.
- It is assumed that the optical axis shift means maintains the ideal optical axis relationship.
- The relationship between Δr and the position of the pixel is expressed by equation (23).
- FIG. 29B shows a condition in which the influence of depth falls within the range of one pixel, assuming a pixel size of 6 ⁇ m, an imaging distance of 600 mm, and a focal length of 5 mm as an example. Under the condition that the influence of depth falls within the range of one pixel, the effect of image shift is sufficiently obtained. Therefore, for example, if the angle of view is narrowed, depending on the application, image shift performance deterioration due to depth can be avoided.
- As shown in FIGS. 29A and 29B, when Δr is small (the depth of field is shallow), high-definition processing may be performed by applying the same image shift amount over the entire screen.
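One way to read the one-pixel condition plotted in FIG. 29B (a sketch of the geometry, not equation (23) itself, so this interpretation is an assumption): a point imaged at a distance u from the image center shifts by roughly u·Δr/H when its depth changes by Δr, so the depth range that stays within one pixel of size p is approximately Δr ≤ p·H/u. With p = 6 µm and H = 600 mm, this tolerance shrinks toward the edge of the field, which is consistent with the remark that narrowing the angle of view relaxes the problem.

```python
def depth_tolerance(u_pixels, H_mm=600.0):
    """Depth range (mm) over which a point u_pixels from the image center moves
    by less than one pixel, under the simple scaling reading sketched above."""
    return H_mm / max(u_pixels, 1e-9)

print(depth_tolerance(100))   # ~6 mm at 100 pixels from the center
```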
- The case where Δr is large (the depth of field is deep) will be described with reference to FIGS. 27A, 27B, and 30.
- FIG. 30 is a flowchart showing the processing operation of the stereo image processing unit 704 shown in FIG.
- The sampling phase shift between the pixels of a plurality of image sensors having a certain baseline length varies depending on the imaging distance. Therefore, in order to achieve high definition at any imaging distance, the image shift amount must be changed according to the imaging distance.
- the imaging distance and the amount of movement of the point imaged on the imaging device are expressed by equation (24).
- the stereo image processing unit 704 obtains data normalized by the shift amount for each pixel (shift parameter for each pixel) and the pixel pitch of the image sensor.
- the stereo image processing unit 704 performs stereo matching using two captured images corrected based on camera parameters obtained in advance (step S3001). Corresponding feature points in the image are obtained by stereo matching, and a shift amount for each pixel (shift parameter for each pixel) is calculated therefrom (step S3002).
- the stereo image processing unit 704 compares the shift amount for each pixel (shift parameter for each pixel) with the pixel pitch of the image sensor (step S3003).
- If the shift amount for each pixel is smaller than the pixel pitch of the image sensor, that shift amount is used as the composition parameter as it is (step S3004).
- Otherwise, data normalized by the pixel pitch of the image sensor is obtained, and that data is used as the composition parameter (step S3005).
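The branch in steps S3003 to S3005 can be sketched as follows (an illustrative sketch; the stereo matching itself is assumed to be provided elsewhere, and "normalized" is taken here as the sub-pixel remainder after removing whole pixel pitches, which is one possible reading of the normalization):

```python
import numpy as np

def composition_parameters(shift_per_pixel, pixel_pitch):
    """Steps S3003-S3005: use the per-pixel shift directly when it is smaller
    than the pixel pitch, otherwise keep only the sub-pixel (fractional) part so
    that it can drive the high-definition composition."""
    shift = np.asarray(shift_per_pixel, dtype=float)
    small = np.abs(shift) < pixel_pitch          # step S3003 comparison
    normalized = np.mod(shift, pixel_pitch)      # step S3005 normalization
    return np.where(small, shift, normalized)    # step S3004 / S3005 selection
```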
- Stereo matching is a process of searching for a projection point of the same spatial point from another image with respect to a pixel at a position (u, v) in the image on the basis of one image.
- Camera parameters required for the camera projection model are obtained in advance by camera calibration. Therefore, the search for corresponding points can be limited to a straight line (epipolar line).
- The epipolar line K01 is a straight line on the same horizontal line, as shown in FIG. 31.
- Since the corresponding points on the other image with respect to the reference image are limited to the epipolar line K01, in stereo matching only the epipolar line K01 needs to be searched. This is important for reducing matching errors and for speeding up the processing. Note that the square on the left side of FIG. 31 indicates the reference image.
- Specific search methods include area-based matching and feature-based matching.
- area-based matching as shown in FIG. 32, corresponding points are obtained using a template. Note that the square on the left side of FIG. 32 indicates the reference image.
- feature-based matching is to extract feature points such as edges and corners of each image and obtain correspondence between the feature points.
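A minimal sketch of area-based matching along a horizontal epipolar line, using an SSD cost as in the description (illustrative only; rectified grayscale images, the window half-size, and the search range are assumptions):

```python
import numpy as np

def match_along_epipolar(ref, other, u, v, half=3, max_disp=64):
    """Find the disparity at (u, v) of the reference image by sliding an SSD
    template along the same row (the epipolar line) of the other image."""
    tmpl = ref[v - half:v + half + 1, u - half:u + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        uu = u - d
        if uu - half < 0:
            break
        cand = other[v - half:v + half + 1, uu - half:uu + half + 1].astype(float)
        cost = np.sum((tmpl - cand) ** 2)        # SSD evaluation function
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```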
- Multi-baseline stereo is a method for obtaining more accurate corresponding points.
- It uses not only stereo matching by a single pair of cameras but a plurality of stereo image pairs formed by more cameras.
- a stereo image is obtained by using a pair of stereo cameras having a base line (baseline) in various lengths and directions with respect to a reference camera.
- By dividing each parallax by the corresponding baseline length, the parallaxes of the plurality of image pairs become values that correspond to the distance in the depth direction.
- The stereo matching information obtained from each stereo image pair, specifically an evaluation function such as the SSD (Sum of Squared Differences) representing the likelihood of correspondence at each parallax/baseline-length value, is summed, and the corresponding location is determined from the sum. That is, when the change in the SSSD (Sum of SSD), the sum of the SSDs over parallax/baseline length, is examined, a clearer minimum value appears. This makes it possible to reduce stereo matching correspondence errors and improve estimation accuracy.
- The occlusion problem, in which a part visible to one camera is hidden behind another object and cannot be seen by another camera, can also be reduced.
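Multi-baseline stereo can be sketched by summing the SSD curves of several camera pairs once they are expressed over a common parallax/baseline-length axis (illustrative; `match_costs` below stands for per-pair SSD curves already resampled onto a shared inverse-depth axis):

```python
import numpy as np

def sssd_minimum(match_costs):
    """match_costs: list of 1-D arrays, one SSD curve per stereo pair, all
    sampled over the same parallax / baseline-length axis. Summing them (SSSD)
    usually yields a sharper, less ambiguous minimum than any single pair."""
    sssd = np.sum(np.stack(match_costs, axis=0), axis=0)
    return int(np.argmin(sssd)), sssd
```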
- FIG. 33 shows an example of a parallax image.
- Image 1 in FIG. 33 is an original image (reference image).
- Image 2 in FIG. 33 is a parallax image obtained as a result of obtaining the parallax for each pixel in image 1 in FIG. 33.
- The higher the luminance in the parallax image, the larger the parallax, that is, the closer the imaged object is to the camera.
- The lower the luminance, the smaller the parallax, that is, the farther the imaged object is from the camera.
- FIG. 34 is a block diagram illustrating a configuration of the video composition processing unit 38 in the case of performing noise removal in stereo image processing.
- the video synthesis processing unit 38 shown in FIG. 34 is different from the video synthesis processing unit 38 shown in FIG. 10 in that a stereo image noise reduction processing unit 705 is provided.
- the operation of the video composition processing unit 38 shown in FIG. 34 will be described with reference to the flowchart of the noise removal processing operation in the stereo image processing shown in FIG.
- the processing operations of steps S3001 to S3005 are the same as steps S3001 to S3005 performed by the stereo image processing unit 704 shown in FIG.
- When the shift amount of the per-pixel composition parameter obtained in step S3105 differs significantly from the shift amounts of the surrounding composition parameters, the stereo image noise reduction processing unit 705 removes the noise by replacing it with the most frequent value among the shift amounts of the adjacent pixels (step S3106).
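A sketch of the substitution in step S3106 (illustrative; the 3 × 3 neighbourhood and the fixed outlier threshold are assumptions):

```python
import numpy as np

def denoise_shift_map(shift, threshold):
    """Replace a per-pixel shift that differs strongly from its neighbours by
    the most frequent (mode) value of the surrounding 3x3 window."""
    out = shift.copy()
    h, w = shift.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = shift[y - 1:y + 2, x - 1:x + 2].ravel()
            neighbours = np.delete(window, 4)            # drop the centre value
            values, counts = np.unique(neighbours, return_counts=True)
            mode = values[np.argmax(counts)]
            if abs(shift[y, x] - mode) > threshold:
                out[y, x] = mode
    return out
```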
- Next, the operation for reducing the amount of processing will be described.
- Usually, the entire image is subjected to high-definition processing.
- However, the amount of processing can be reduced by increasing the definition of only the face portion of image 1 in FIG. 33 (the portion where the luminance of the parallax image is high) and not increasing the definition of the background mountain portion (the portion where the luminance of the parallax image is low).
- In this process, the image portion containing the face (the portion that is close to the camera and where the luminance of the parallax image is high) is extracted from the parallax image, and high definition is achieved in the same manner using the image data of that portion and the composition parameters generated by the stereo image processing unit.
- As a result, power consumption can be reduced, which is effective in a portable device that operates on a battery or the like.
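The selective processing can be sketched as masking the high-definition path with a threshold on the parallax image (illustrative; `compose_high_definition` stands for the sub-pixel composition of the earlier steps and is an assumed helper, not a function defined here):

```python
import numpy as np

def selective_high_definition(base, parallax, compose_high_definition, thresh):
    """Run the costly sub-pixel composition only where the parallax (disparity)
    is large, i.e. where the subject is close, and keep the plain image for the
    distant background."""
    mask = parallax > thresh                       # near subject: high luminance
    refined = compose_high_definition(base, mask)  # assumed helper
    return np.where(mask, refined, base)
```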
- With the imaging apparatus of the present embodiment, crosstalk can be eliminated by controlling the optical axis of the light incident on the image sensors, and an imaging apparatus capable of obtaining high-quality images can be realized.
- In a configuration in which the image formed on the image sensor is extracted by image processing, the resolution of the image sensor needs to be larger than the required imaging resolution.
- With the imaging apparatus of the present embodiment, it is possible not only to control the optical axis direction of the liquid crystal lens but also to set the optical axis of the light incident on the image sensor at an arbitrary position. Therefore, the size of the image sensor can be reduced, and the apparatus can be mounted on a portable terminal or the like that is required to be light and thin. In addition, a high-quality, high-definition two-dimensional image can be generated regardless of the shooting distance. Furthermore, noise caused by stereo matching can be removed, and the high-definition processing can be sped up.
- the present invention can be applied to an imaging device that can generate a high-quality and high-definition two-dimensional image regardless of the parallax of the stereo image, that is, regardless of the shooting distance.
- 1 ... imaging apparatus, 2 to 7 ... unit imaging units, 8 to 13 ... imaging lenses, 14 to 19 ... image sensors, 20 to 25 ... optical axes, 26 to 31 ... video processing units, 32 to 37 ... control units, 38 ... video composition processing unit
Abstract
Description
This application claims priority based on Japanese Patent Application No. 2009-083276 filed in Japan on March 30, 2009, the contents of which are incorporated herein by reference.
A further object of the present invention is to provide an imaging apparatus and an imaging method capable of generating a high-quality, high-definition two-dimensional image regardless of the parallax of the stereo image, that is, regardless of the shooting distance.
The liquid crystal lens 301 in this embodiment is composed of a transparent first electrode 303, a second electrode 304, a transparent third electrode 305, a liquid crystal layer 306, a first insulating layer 307, a second insulating layer 308, a third insulating layer 311, and a fourth insulating layer 312.
The liquid crystal layer 306 is disposed between the second electrode 304 and the third electrode 305. The first insulating layer 307 is disposed between the first electrode 303 and the second electrode 304. The second insulating layer 308 is disposed between the second electrode 304 and the third electrode 305. The third insulating layer 311 is disposed outside the first electrode 303. The fourth insulating layer 312 is disposed outside the third electrode 305.
Here, the second electrode 304 has a circular hole and, as shown in the front view of FIG. 3A, is composed of four electrodes 304a, 304b, 304c, and 304d divided vertically and horizontally. A voltage can be applied to each of the electrodes 304a, 304b, 304c, and 304d independently. In the liquid crystal layer 306, the liquid crystal molecules are aligned in one direction so as to face the third electrode 305, and their alignment is controlled by applying voltages between the electrodes 303, 304, and 305 sandwiching the liquid crystal layer 306. For the insulating layer 308, transparent glass or the like with a thickness of, for example, several hundred μm is used to allow a large aperture.
For example, this system bus P005 is used when an intermediate image produced during computation by the video composition processing unit 38 is stored in the VideoRAM P004. The bus for image signals, which requires a high transfer rate, and the low-speed data bus may be provided as separate bus lines. An external interface such as a USB port or a flash memory card (not shown) and a display drive controller for a liquid crystal display serving as a viewfinder are connected to the system bus P005.
The video composition processing unit 38 performs video composition processing on the signal S02 input from the other video processing units, and outputs the result to the other control units as a signal S03 or to the outside as a video signal S04.
The composition processing unit 701 combines the imaging results of the plurality of unit imaging units 2 to 7 (FIG. 1), that is, the signals S02 input from the video processing units. As described later, this composition processing can improve the resolution of the image. The composition parameter storage unit 702 stores image shift amount data obtained from the three-dimensional coordinates between the unit imaging units derived, for example, by the calibration described later. The determination unit 703 generates the signal S03 to the control units based on the video composition result. The stereo image processing unit 704 obtains a shift amount for each pixel (a shift parameter for each pixel) from the images captured by the plurality of unit imaging units 2 to 7. The stereo image processing unit 704 also obtains data normalized by the pixel pitch of the image sensor according to the imaging condition (distance).
Next, the composition processing unit 701 reads the composition parameters from the composition parameter storage unit 702 (step S903). The stereo image processing unit 704 obtains the shift amount for each pixel (shift parameter for each pixel) and the data normalized by the pixel pitch of the image sensor. The composition processing unit 701 then executes sub-pixel video composition high-definition processing based on the read composition parameters, the shift amount for each pixel (shift parameter for each pixel), and the data normalized by the pixel pitch of the image sensor (step S904). As described later, the composition processing unit 701 constructs a high-definition image from information whose phase differs in sub-pixel units.
Here, a case is assumed where one high-definition image is obtained from four unit imaging units. The four images B11 to B14 captured by the individual unit imaging units are superimposed on a single coordinate system B20 using the rotation amount and shift amount parameters. A filter operation is then performed on the four images B11 to B14 with weighting coefficients that depend on distance. For example, a cubic (third-order approximation) filter is used. The weight w applied to a pixel at distance d is given by:
w = 1 - 2d² + d³ (0 ≤ d < 1)
w = 4 - 8d + 5d² - d³ (1 ≤ d < 2)
w = 0 (2 ≤ d)
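The cubic weight above can be written directly as a function (a minimal sketch; note that the 0 ≤ d < 1 branch shown here is the standard third-order-approximation kernel consistent with the branch given for 1 ≤ d < 2, since only the latter appears explicitly in the text above):

```python
def cubic_weight(d):
    """Weight applied to a sample at distance d from the interpolation point
    (third-order approximation kernel)."""
    d = abs(d)
    if d < 1.0:
        return 1.0 - 2.0 * d**2 + d**3
    if d < 2.0:
        return 4.0 - 8.0 * d + 5.0 * d**2 - d**3
    return 0.0

# Usage: samples within |distance| < 2 of the interpolation point contribute
# with these weights, and the weighted sum is divided by the sum of weights.
```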
FIG. 18 shows a case where a point M on the target object plane D03 is projected (imaged), through the liquid crystal lenses D04 and D05, onto a point m1 or m2 on each of the image sensors 15 and 16, using the basic image sensor 15 (referred to as the basic camera D01) and the adjacent image sensor 16 (referred to as the adjacent camera D02).
FIG. 19 illustrates FIG. 18 using the pinhole camera model shown in FIG. 17. In FIG. 19, reference sign D06 denotes the pinhole at the center of the camera lens of the basic camera D01, and reference sign D07 denotes the pinhole at the center of the camera lens of the adjacent camera D02. Reference sign D08 denotes the image plane of the basic camera D01, and Z1 denotes its optical axis; reference sign D09 denotes the image plane of the adjacent camera D02, and Z2 denotes its optical axis.
Considering the mobility of the cameras and the like, the relationship between a point M in the world coordinate system and a point m in the image coordinate system can be expressed, using the central projection matrix P and starting from equation (1), as the following equation (16).
(1) From equation (16), the point M in three-dimensional space is obtained from m1 by the following equation (17). Since the central projection matrix P is a 3 × 4 matrix, the pseudo-inverse of P is used.
Waveform 1 in FIG. 21 shows the contour of the subject. Waveform 2 in FIG. 21 shows the result of imaging with a single unit imaging unit. Waveform 3 in FIG. 21 shows the result of imaging with another single unit imaging unit. Waveform 4 in FIG. 21 shows the output of the composition processing unit.
In FIG. 21, the horizontal axis represents the extent of the space. This spatial extent refers both to the real space and to the virtual spatial extent on the image sensor; the two are synonymous because they can be converted into each other using the external and internal parameters. When the waveforms are regarded as video signals read out sequentially from the image sensor, the horizontal axis in FIG. 21 is a time axis. In that case, too, the signal is perceived by the observer as a spatial extent when displayed on a display, so the time axis of the video signal is synonymous with the extent of the space. The vertical axis in FIG. 21 represents amplitude or intensity. Since the intensity of the light reflected by the object is photoelectrically converted by the pixels of the image sensor and output as a voltage level, it may be regarded as an amplitude.
However, depending on the imaging distance or on how the imaging apparatus 1 is assembled, state 2 or state 3 in FIG. 22 may occur. In that case, even if image processing operations are performed using only the averaged video signals, signals that have already been averaged with the phase relationship of state 2 or state 3 in FIG. 22 cannot be restored. It is therefore necessary to control the phase relationship of state 2 or state 3 in FIG. 22 with high precision, as shown in state 4 in FIG. 22. In this embodiment, as indicated by arrows G03 and G04, this control is realized by the optical axis shift of the liquid crystal lens shown in FIG. 4. Through the above processing, an ideal phase relationship is always maintained, so that an optimal image can be provided to the observer.
In the interpolation filtering described above, for example, the cubic (third-order approximation) method is used. This is a weighting process based on the distance to the interpolation point. Although the resolution limit of the image sensor 1 is VGA, the imaging lens has the ability to pass the Quad-VGA band, and the Quad-VGA band components above VGA are imaged at the VGA resolution as aliasing distortion. Using this aliasing distortion, the high-band components of Quad-VGA are restored by the video composition processing.
In FIG. 23B, reference sign I01 denotes the light intensity distribution image of the formed image, reference sign I02 denotes the corresponding point of P1, reference sign I03 denotes a pixel of the image sensor M, and reference sign I04 denotes a pixel of the image sensor N.
In FIG. 23B, as indicated by reference sign I05, the amount of light averaged within a pixel differs depending on the phase relationship between the corresponding point and the pixel, and this information is used to increase the resolution. As indicated by reference sign I06, the corresponding points are superimposed on each other by the image shift.
In FIG. 23C, reference sign I02 denotes the corresponding point of P1. In FIG. 23C, as indicated by reference sign I07, the optical axis is shifted by the liquid crystal lens.
FIGS. 23A to 23C are based on a pinhole model that ignores lens distortion. An imaging apparatus with small lens distortion can be described by this model, using geometrical optics alone. In FIG. 23A, P1 is the imaging target, located at the imaging distance H from the imaging apparatus. The pinholes O and O' correspond to the imaging lenses of the two unit imaging units. FIG. 23A is a schematic diagram of the case where one image is captured by the two unit imaging units of the image sensors M and N. FIG. 23B shows how the image of P1 is formed on the pixels of the image sensors. In this way, the phase between a pixel and the formed image is determined. This phase is determined by the positional relationship of the image sensors (baseline length B), the focal length f, and the imaging distance H.
For convenience of explanation, each image sensor is drawn enlarged in pixel units. The plane of the image sensor is defined in the two dimensions u and v, and FIG. 24A corresponds to a cross section along the u axis. The imaging targets P0 and P1 are at the same imaging distance H. The images of P0 are formed at u0 and u'0, respectively. u0 and u'0 are distances on the image sensors measured from the respective optical axes. In FIG. 24A, P0 lies on the optical axis of the image sensor M, so u0 = 0. The distances of the respective images of P1 from the optical axes are u1 and u'1. Here, the relative phase, with respect to the pixels of the image sensors M and N, of the positions at which P0 and P1 are imaged on the image sensors M and N determines the image shift performance. This relationship is determined by the imaging distance H, the focal length f, and the baseline length B, which is the distance between the optical axes of the image sensors.
In FIG. 25A, reference sign Mn denotes a pixel of the image sensor M, and reference sign Nn denotes a pixel of the image sensor N.
In FIG. 25B, the regions indicated by rectangles denote pixels; reference sign Pu denotes the pixel size in the u direction, and reference sign Pv denotes the pixel size in the v direction.
In this case, u1 and u'1 have the same phase with respect to the pixels of the respective image sensors. In FIG. 25A, both are imaged at positions shifted toward the left side of the pixel. The same applies to the relationship between u0 (= 0) and u'0. Therefore, as shown in FIG. 25B, the phases substantially coincide with each other.
In FIG. 26A, reference sign Mn denotes a pixel of the image sensor M, and reference sign Nn denotes a pixel of the image sensor N.
In FIG. 26B, the regions indicated by rectangles denote pixels; reference sign Pu denotes the pixel size in the u direction, and reference sign Pv denotes the pixel size in the v direction.
The movement of the pinhole O' to the right, labeled optical axis shift J01 in FIG. 26A, illustrates this operation. By shifting the pinhole O' with the optical axis shift means in this way, the position at which the imaging target is imaged can be controlled relative to the pixels of the image sensor. As a result, the ideal phase relationship shown in FIG. 26B can be achieved.
In FIG. 27A, reference sign Mn denotes a pixel of the image sensor M, and reference sign Nn denotes a pixel of the image sensor N.
In FIG. 27B, the regions indicated by rectangles denote pixels; reference sign Pu denotes the pixel size in the u direction, and reference sign Pv denotes the pixel size in the v direction.
FIGS. 27A and 27B are schematic diagrams illustrating the case where the subject is switched from P0, imaged at the imaging distance H0, to the object P1 at the distance H1. In FIG. 27A, P0 and P1 are both assumed to lie on the optical axis of the image sensor M, so u0 = 0 and u1 = 0. Attention is paid to the relationship between the pixels of the image sensor N and the images of P0 and P1 when P0 and P1 are imaged on the image sensor N. P0 is imaged at the center of a pixel of the image sensor M, whereas on the image sensor N it is imaged at the boundary between pixels. It can therefore be said that the phase relationship was optimal while P0 was being imaged. FIG. 27B is a schematic diagram showing the phase relationship between the image sensors when the subject is P1. After the subject is changed to P1 as shown in FIG. 27B, the phases substantially coincide with each other.
In FIG. 28A, reference sign Mn denotes a pixel of the image sensor M, and reference sign Nn denotes a pixel of the image sensor N.
In FIG. 28B, the regions indicated by rectangles denote pixels; reference sign Pu denotes the pixel size in the u direction, and reference sign Pv denotes the pixel size in the v direction.
Here, in order to obtain information on the imaging distance, a distance measuring means for measuring the distance may be provided separately. Alternatively, the distance may be measured by the imaging apparatus of this embodiment itself. Measuring distance using a plurality of cameras (unit imaging units) is common in surveying and the like. The ranging performance is proportional to the baseline length, which is the distance between the cameras, and to the focal length of the cameras, and is inversely proportional to the distance to the object being measured.
In FIG. 29A, reference sign Mn denotes a pixel of the image sensor M, and reference sign Nn denotes a pixel of the image sensor N.
In FIG. 29B, the horizontal axis indicates the distance from the center (unit: pixels), and the vertical axis indicates Δr (unit: mm).
FIG. 29A is a schematic diagram illustrating the case where P1 and P2 are imaged taking the depth Δr into consideration. The difference between their distances from the respective optical axes, (u1 - u2), is given by equation (22).
Since the corresponding point on the other image with respect to the reference image is thus restricted to the epipolar line K01, in stereo matching only the epipolar line K01 needs to be searched. This is important for reducing matching errors and for speeding up the processing. The square on the left side of FIG. 31 indicates the reference image.
Feature-based matching, on the other hand, extracts feature points such as edges and corners from each image and finds correspondences between those feature points.
2 to 7 ... unit imaging units,
8 to 13 ... imaging lenses,
14 to 19 ... image sensors,
20 to 25 ... optical axes,
26 to 31 ... video processing units,
32 to 37 ... control units,
38 ... video composition processing unit
Claims (4)
- A plurality of image sensors;
a plurality of solid lenses that form images on the respective image sensors;
a plurality of optical axis control units that control the directions of the optical axes of the light incident on the respective image sensors;
a plurality of video processing units that convert the photoelectric conversion signals output from the respective image sensors into video signals;
a stereo image processing unit that obtains a shift amount for each pixel by performing stereo matching processing based on the plurality of video signals converted by the plurality of video processing units, and that generates a composition parameter in which a shift amount exceeding the pixel pitch of the plurality of image sensors is normalized by the pixel pitch; and
a video composition processing unit that generates a high-definition video by combining the video signals converted by the respective video processing units based on the composition parameter generated by the stereo image processing unit;
an imaging apparatus comprising the above. - The imaging apparatus according to claim 1, further comprising a stereo image noise reduction processing unit that reduces noise in the parallax image used in the stereo matching processing, based on the composition parameter generated by the stereo image processing unit.
- The imaging apparatus according to claim 1 or 2, wherein the video composition processing unit increases the definition of only a predetermined region based on the parallax image generated by the stereo image processing unit.
- Controlling the directions of the optical axes of the light incident on each of a plurality of image sensors;
converting the photoelectric conversion signals output from the respective image sensors into video signals;
obtaining a shift amount for each pixel by performing stereo matching processing based on the plurality of converted video signals, and generating a composition parameter in which a shift amount exceeding the pixel pitch of the plurality of image sensors is normalized by the pixel pitch; and
generating a high-definition video by combining the video signals based on the composition parameter; an imaging method comprising the above.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/260,857 US20120026297A1 (en) | 2009-03-30 | 2010-03-30 | Imaging apparatus and imaging method |
CN201080014012XA CN102365859A (zh) | 2009-03-30 | 2010-03-30 | 摄像装置和摄像方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-083276 | 2009-03-30 | ||
JP2009083276A JP4529010B1 (ja) | 2009-03-30 | 2009-03-30 | 撮像装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010116683A1 true WO2010116683A1 (ja) | 2010-10-14 |
Family
ID=42767901
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/002315 WO2010116683A1 (ja) | 2009-03-30 | 2010-03-30 | 撮像装置および撮像方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20120026297A1 (ja) |
JP (1) | JP4529010B1 (ja) |
CN (1) | CN102365859A (ja) |
WO (1) | WO2010116683A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2970573A1 (fr) * | 2011-01-18 | 2012-07-20 | Inst Telecom Telecom Bretagne | Dispositif de capture d'images stereoscopiques |
JP2013061850A (ja) * | 2011-09-14 | 2013-04-04 | Canon Inc | ノイズ低減のための画像処理装置及び画像処理方法 |
JP2017161245A (ja) * | 2016-03-07 | 2017-09-14 | 株式会社明電舎 | ラインセンサカメラのステレオキャリブレーション装置及びステレオキャリブレーション方法 |
Families Citing this family (74)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8866920B2 (en) | 2008-05-20 | 2014-10-21 | Pelican Imaging Corporation | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
CN103501416B (zh) | 2008-05-20 | 2017-04-12 | 派力肯成像公司 | 成像系统 |
US11792538B2 (en) | 2008-05-20 | 2023-10-17 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
WO2011063347A2 (en) | 2009-11-20 | 2011-05-26 | Pelican Imaging Corporation | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
SG185500A1 (en) | 2010-05-12 | 2012-12-28 | Pelican Imaging Corp | Architectures for imager arrays and array cameras |
US8878950B2 (en) | 2010-12-14 | 2014-11-04 | Pelican Imaging Corporation | Systems and methods for synthesizing high resolution images using super-resolution processes |
WO2012155119A1 (en) | 2011-05-11 | 2012-11-15 | Pelican Imaging Corporation | Systems and methods for transmitting and receiving array camera image data |
WO2013003276A1 (en) | 2011-06-28 | 2013-01-03 | Pelican Imaging Corporation | Optical arrangements for use with an array camera |
US20130265459A1 (en) | 2011-06-28 | 2013-10-10 | Pelican Imaging Corporation | Optical arrangements for use with an array camera |
WO2013043761A1 (en) | 2011-09-19 | 2013-03-28 | Pelican Imaging Corporation | Determining depth from multiple views of a scene that include aliasing using hypothesized fusion |
WO2013049699A1 (en) | 2011-09-28 | 2013-04-04 | Pelican Imaging Corporation | Systems and methods for encoding and decoding light field image files |
US9225959B2 (en) | 2012-01-10 | 2015-12-29 | Samsung Electronics Co., Ltd. | Method and apparatus for recovering depth value of depth image |
EP2817955B1 (en) | 2012-02-21 | 2018-04-11 | FotoNation Cayman Limited | Systems and methods for the manipulation of captured light field image data |
US9210392B2 (en) | 2012-05-01 | 2015-12-08 | Pelican Imaging Coporation | Camera modules patterned with pi filter groups |
WO2014005123A1 (en) | 2012-06-28 | 2014-01-03 | Pelican Imaging Corporation | Systems and methods for detecting defective camera arrays, optic arrays, and sensors |
US20140002674A1 (en) | 2012-06-30 | 2014-01-02 | Pelican Imaging Corporation | Systems and Methods for Manufacturing Camera Modules Using Active Alignment of Lens Stack Arrays and Sensors |
EP2888720B1 (en) | 2012-08-21 | 2021-03-17 | FotoNation Limited | System and method for depth estimation from images captured using array cameras |
US20140055632A1 (en) | 2012-08-23 | 2014-02-27 | Pelican Imaging Corporation | Feature based high resolution motion estimation from low resolution images captured using an array source |
US9214013B2 (en) | 2012-09-14 | 2015-12-15 | Pelican Imaging Corporation | Systems and methods for correcting user identified artifacts in light field images |
CN104685860A (zh) | 2012-09-28 | 2015-06-03 | 派力肯影像公司 | 利用虚拟视点从光场生成图像 |
US9288395B2 (en) * | 2012-11-08 | 2016-03-15 | Apple Inc. | Super-resolution based on optical image stabilization |
US9143711B2 (en) | 2012-11-13 | 2015-09-22 | Pelican Imaging Corporation | Systems and methods for array camera focal plane control |
WO2014130849A1 (en) | 2013-02-21 | 2014-08-28 | Pelican Imaging Corporation | Generating compressed light field representation data |
WO2014133974A1 (en) | 2013-02-24 | 2014-09-04 | Pelican Imaging Corporation | Thin form computational and modular array cameras |
US9638883B1 (en) | 2013-03-04 | 2017-05-02 | Fotonation Cayman Limited | Passive alignment of array camera modules constructed from lens stack arrays and sensors based upon alignment information obtained during manufacture of array camera modules using an active alignment process |
WO2014138697A1 (en) | 2013-03-08 | 2014-09-12 | Pelican Imaging Corporation | Systems and methods for high dynamic range imaging using array cameras |
US8866912B2 (en) | 2013-03-10 | 2014-10-21 | Pelican Imaging Corporation | System and methods for calibration of an array camera using a single captured image |
WO2014164909A1 (en) | 2013-03-13 | 2014-10-09 | Pelican Imaging Corporation | Array camera architecture implementing quantum film sensors |
US9106784B2 (en) | 2013-03-13 | 2015-08-11 | Pelican Imaging Corporation | Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing |
WO2014165244A1 (en) | 2013-03-13 | 2014-10-09 | Pelican Imaging Corporation | Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies |
WO2014164550A2 (en) | 2013-03-13 | 2014-10-09 | Pelican Imaging Corporation | System and methods for calibration of an array camera |
WO2014159779A1 (en) | 2013-03-14 | 2014-10-02 | Pelican Imaging Corporation | Systems and methods for reducing motion blur in images or video in ultra low light with array cameras |
US9100586B2 (en) | 2013-03-14 | 2015-08-04 | Pelican Imaging Corporation | Systems and methods for photometric normalization in array cameras |
US9497370B2 (en) | 2013-03-15 | 2016-11-15 | Pelican Imaging Corporation | Array camera architecture implementing quantum dot color filters |
US9497429B2 (en) | 2013-03-15 | 2016-11-15 | Pelican Imaging Corporation | Extended color processing on pelican array cameras |
US9445003B1 (en) | 2013-03-15 | 2016-09-13 | Pelican Imaging Corporation | Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information |
JP2016524125A (ja) | 2013-03-15 | 2016-08-12 | ペリカン イメージング コーポレイション | カメラアレイを用いた立体撮像のためのシステムおよび方法 |
US10122993B2 (en) | 2013-03-15 | 2018-11-06 | Fotonation Limited | Autofocus system for a conventional camera that uses depth information from an array camera |
US9633442B2 (en) | 2013-03-15 | 2017-04-25 | Fotonation Cayman Limited | Array cameras including an array camera module augmented with a separate camera |
US9161020B2 (en) * | 2013-04-26 | 2015-10-13 | B12-Vision Co., Ltd. | 3D video shooting control system, 3D video shooting control method and program |
US9915850B2 (en) * | 2013-07-30 | 2018-03-13 | Nokia Technologies Oy | Optical beams |
WO2015048694A2 (en) | 2013-09-27 | 2015-04-02 | Pelican Imaging Corporation | Systems and methods for depth-assisted perspective distortion correction |
EP3066690A4 (en) | 2013-11-07 | 2017-04-05 | Pelican Imaging Corporation | Methods of manufacturing array camera modules incorporating independently aligned lens stacks |
US10119808B2 (en) | 2013-11-18 | 2018-11-06 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
US9426361B2 (en) | 2013-11-26 | 2016-08-23 | Pelican Imaging Corporation | Array camera configurations incorporating multiple constituent array cameras |
WO2015134996A1 (en) | 2014-03-07 | 2015-09-11 | Pelican Imaging Corporation | System and methods for depth regularization and semiautomatic interactive matting using rgb-d images |
DE102014104028B4 (de) * | 2014-03-24 | 2016-02-18 | Sick Ag | Optoelektronische Vorrichtung und Verfahren zum Justieren |
TWI538476B (zh) * | 2014-03-24 | 2016-06-11 | 立普思股份有限公司 | 立體攝影系統及其方法 |
US9247117B2 (en) | 2014-04-07 | 2016-01-26 | Pelican Imaging Corporation | Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array |
US9521319B2 (en) | 2014-06-18 | 2016-12-13 | Pelican Imaging Corporation | Array cameras and array camera modules including spectral filters disposed outside of a constituent image sensor |
CN113256730B (zh) | 2014-09-29 | 2023-09-05 | 快图有限公司 | 用于阵列相机的动态校准的系统和方法 |
CN104539934A (zh) | 2015-01-05 | 2015-04-22 | 京东方科技集团股份有限公司 | 图像采集装置和图像处理方法、系统 |
JP6482308B2 (ja) * | 2015-02-09 | 2019-03-13 | キヤノン株式会社 | 光学装置および撮像装置 |
US9942474B2 (en) | 2015-04-17 | 2018-04-10 | Fotonation Cayman Limited | Systems and methods for performing high speed video capture and depth estimation using array cameras |
US10495843B2 (en) * | 2015-08-25 | 2019-12-03 | Electronics And Telecommunications Research Institute | Imaging apparatus with adjustable lens and method for operating the same |
KR101822895B1 (ko) * | 2016-04-07 | 2018-01-29 | 엘지전자 주식회사 | 차량 운전 보조 장치 및 차량 |
KR101822894B1 (ko) * | 2016-04-07 | 2018-01-29 | 엘지전자 주식회사 | 차량 운전 보조 장치 및 차량 |
CN105827922B (zh) * | 2016-05-25 | 2019-04-19 | 京东方科技集团股份有限公司 | 一种摄像装置及其拍摄方法 |
EP3264741A1 (en) | 2016-06-30 | 2018-01-03 | Thomson Licensing | Plenoptic sub aperture view shuffling with improved resolution |
CN112782846B (zh) * | 2016-10-31 | 2023-03-31 | Lg伊诺特有限公司 | 控制用于驱动液体透镜的电压的电路、包括其的相机模块及光学装置 |
US10482618B2 (en) | 2017-08-21 | 2019-11-19 | Fotonation Limited | Systems and methods for hybrid depth regularization |
KR102646521B1 (ko) | 2019-09-17 | 2024-03-21 | 인트린식 이노베이션 엘엘씨 | 편광 큐를 이용한 표면 모델링 시스템 및 방법 |
CA3157197A1 (en) | 2019-10-07 | 2021-04-15 | Boston Polarimetrics, Inc. | Systems and methods for surface normals sensing with polarization |
MX2022005289A (es) | 2019-11-30 | 2022-08-08 | Boston Polarimetrics Inc | Sistemas y metodos para segmentacion de objetos transparentes usando se?ales de polarizacion. |
US11195303B2 (en) | 2020-01-29 | 2021-12-07 | Boston Polarimetrics, Inc. | Systems and methods for characterizing object pose detection and measurement systems |
CN115428028A (zh) | 2020-01-30 | 2022-12-02 | 因思创新有限责任公司 | 用于合成用于在包括偏振图像的不同成像模态下训练统计模型的数据的系统和方法 |
US11953700B2 (en) | 2020-05-27 | 2024-04-09 | Intrinsic Innovation Llc | Multi-aperture polarization optical systems using beam splitters |
US12020455B2 (en) | 2021-03-10 | 2024-06-25 | Intrinsic Innovation Llc | Systems and methods for high dynamic range image reconstruction |
US12069227B2 (en) | 2021-03-10 | 2024-08-20 | Intrinsic Innovation Llc | Multi-modal and multi-spectral stereo camera arrays |
US11290658B1 (en) | 2021-04-15 | 2022-03-29 | Boston Polarimetrics, Inc. | Systems and methods for camera exposure control |
US11954886B2 (en) | 2021-04-15 | 2024-04-09 | Intrinsic Innovation Llc | Systems and methods for six-degree of freedom pose estimation of deformable objects |
US12067746B2 (en) | 2021-05-07 | 2024-08-20 | Intrinsic Innovation Llc | Systems and methods for using computer vision to pick up small objects |
US11689813B2 (en) | 2021-07-01 | 2023-06-27 | Intrinsic Innovation Llc | Systems and methods for high dynamic range imaging using crossed polarizers |
CN116156284A (zh) * | 2021-11-17 | 2023-05-23 | 深圳Tcl新技术有限公司 | 成像装置及智能显示设备 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08307776A (ja) * | 1995-04-27 | 1996-11-22 | Hitachi Ltd | 撮像装置 |
JP2006119843A (ja) * | 2004-10-20 | 2006-05-11 | Olympus Corp | 画像生成方法およびその装置 |
JP2006217131A (ja) * | 2005-02-02 | 2006-08-17 | Matsushita Electric Ind Co Ltd | 撮像装置 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3542397B2 (ja) * | 1995-03-20 | 2004-07-14 | キヤノン株式会社 | 撮像装置 |
JP4377673B2 (ja) * | 2003-12-19 | 2009-12-02 | 日本放送協会 | 立体画像撮像装置および立体画像表示装置 |
ES2731852T3 (es) * | 2005-04-29 | 2019-11-19 | Koninklijke Philips Nv | Un aparato de visualización estereoscópica |
JP4102854B2 (ja) * | 2006-03-22 | 2008-06-18 | 松下電器産業株式会社 | 撮像装置 |
-
2009
- 2009-03-30 JP JP2009083276A patent/JP4529010B1/ja not_active Expired - Fee Related
-
2010
- 2010-03-30 CN CN201080014012XA patent/CN102365859A/zh active Pending
- 2010-03-30 US US13/260,857 patent/US20120026297A1/en not_active Abandoned
- 2010-03-30 WO PCT/JP2010/002315 patent/WO2010116683A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP2010239290A (ja) | 2010-10-21 |
JP4529010B1 (ja) | 2010-08-25 |
US20120026297A1 (en) | 2012-02-02 |
CN102365859A (zh) | 2012-02-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2010116683A1 (ja) | 撮像装置および撮像方法 | |
JP4413261B2 (ja) | 撮像装置及び光軸制御方法 | |
US11570423B2 (en) | System and methods for calibration of an array camera | |
Venkataraman et al. | Picam: An ultra-thin high performance monolithic camera array | |
Perwass et al. | Single lens 3D-camera with extended depth-of-field | |
US8824833B2 (en) | Image data fusion systems and methods | |
US20120147150A1 (en) | Electronic equipment | |
US9473700B2 (en) | Camera systems and methods for gigapixel computational imaging | |
JP5677366B2 (ja) | 撮像装置 | |
JP2012249070A (ja) | 撮像装置及び撮像方法 | |
US20120230549A1 (en) | Image processing device, image processing method and recording medium | |
JP2013061850A (ja) | ノイズ低減のための画像処理装置及び画像処理方法 | |
CN107979716B (zh) | 相机模块和包括该相机模块的电子装置 | |
JP4322921B2 (ja) | カメラモジュールおよびそれを備えた電子機器 | |
JP6544978B2 (ja) | 画像出力装置およびその制御方法、撮像装置、プログラム | |
Ueno et al. | Compound-Eye Camera Module as Small as 8.5$\times $8.5$\times $6.0 mm for 26 k-Resolution Depth Map and 2-Mpix 2D Imaging | |
WO2009088068A1 (ja) | 撮像装置及び光軸制御方法 | |
CN113395413A (zh) | 相机模块、成像设备和图像处理方法 | |
KR20210114846A (ko) | 고정된 기하학적 특성을 이용한 카메라 모듈, 촬상 장치 및 이미지 처리 방법 | |
JP2013157713A (ja) | 画像処理装置および画像処理方法、プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080014012.X Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10761389 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13260857 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 10761389 Country of ref document: EP Kind code of ref document: A1 |