WO2012147368A1 - Image capturing apparatus - Google Patents
Image capturing apparatus
- Publication number
- WO2012147368A1 (PCT/JP2012/002934)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- imaging
- unnecessary
- unit
- area
- Prior art date
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B17/00—Details of cameras or camera bodies; Accessories therefor
- G03B17/18—Signals indicating condition of a camera member or suitability of light
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B37/00—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
Definitions
- The present invention relates to a photographing apparatus, and more particularly to a photographing apparatus, such as a digital camera or an electronic device with a camera function, whose casing a user grips to photograph a subject.
- Photographing apparatuses such as digital cameras, and electronic devices with camera functions, are widely used.
- Mobile communication terminals such as mobile phones and smartphones are also generally equipped with a camera function.
- This type of photographing apparatus has been reduced in size to improve portability, and a problem arises in that so-called "finger capture," in which the user's finger appears in the captured image, is likely to occur at the time of shooting.
- In particular, a finger tends to wrap around to the lens side when the user tries to hold the apparatus firmly, and because the lens is often located at the end of the terminal, finger capture is likely to occur.
- Patent Document 1 discloses an imaging device that determines whether an unnecessary image has been captured by comparing a focus evaluation value, the signal level of a color component, and the signal level of a luminance component between an unnecessary-image detection region and a non-detection region, and that warns the photographer when it is determined that an unnecessary image is included.
- Patent Document 2 discloses an imaging apparatus that determines that the photographer's finger is included in the imaging target area when, based on temporal changes in the position of a predetermined low-luminance area across a plurality of live-view images, both a moving area and a non-moving low-luminance area exist.
- In Patent Document 1, however, the focus evaluation value must be acquired by moving the lens from the infinity end to the macro end.
- Patent Document 2 requires information on the temporal change in position of the low-luminance region in the captured image. Since both of these affect shooting responsiveness, there is a problem in that determining the presence or absence of finger capture takes time.
- In addition, finger capture may be erroneously determined. For example, when a desk appears in the lower part of the captured image at a close distance from the photographing device, the contrast value of the captured image increases on the macro side, and because the position of the desk does not change over time, the desk is erroneously determined to be a finger.
- The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an imaging apparatus capable of accurately determining finger capture in a short time.
- The imaging device of the present invention includes: an imaging unit that captures a subject image; an imaging data storage unit that stores imaging data of the subject acquired by the imaging unit; a motion detection unit that detects motion of the imaging device; an image parameter recognition unit that recognizes the distribution of image parameters of the imaging data; an unnecessary image determination unit that, based on the image parameters of a plurality of sets of imaging data acquired while the imaging device is in motion, determines that a region in which substantially the same image remains at substantially the same position in the imaging data is an unnecessary image region containing an unnecessary image; and a notification unit that notifies the user that an unnecessary image region has been detected.
- FIG. 1 is a block diagram showing the configuration of the main part of a photographing apparatus according to the first embodiment of the present invention.
- FIG. 2 is a flowchart showing an operation procedure of the photographing apparatus according to the first embodiment.
- FIG. 3 is a block diagram showing the configuration of the main part of a photographing apparatus according to the second embodiment of the present invention.
- FIGS. 4(A) to 4(C) are schematic diagrams for explaining the operation of the photographing apparatus according to the second embodiment.
- FIG. 5 is a flowchart showing an operation procedure of the photographing apparatus according to the second embodiment.
- FIGS. 6(A) to 6(C) are schematic diagrams for explaining the operation of the photographing apparatus according to the third embodiment.
- FIG. 7 is a flowchart showing an operation procedure of the photographing apparatus according to the third embodiment.
- FIG. 8 is a block diagram showing the configuration of the main part of a photographing apparatus according to the fourth embodiment of the present invention.
- FIGS. 9(A) to 9(C) are schematic diagrams for explaining the operation of the photographing apparatus according to the fourth embodiment.
- FIGS. 10 and 11 are schematic diagrams for explaining the operation of the photographing apparatus according to the fifth embodiment.
- FIG. 12 is a flowchart showing an operation procedure of the photographing apparatus according to the fifth embodiment.
- In the following embodiments, a configuration applied to an electronic device having a shooting function, such as a digital camera, a mobile phone device, or a mobile terminal with a camera function such as a smartphone, is shown as an example of the photographing apparatus.
- FIG. 1 is a block diagram showing the configuration of the main part of the photographing apparatus according to the first embodiment of the present invention.
- the imaging apparatus includes an imaging unit 11, an imaging data storage unit 12, a color distribution recognition unit 13, a motion detection unit 14, an unnecessary image determination unit 15, a display unit 16, a control unit 17, and an operation unit 18.
- The imaging unit 11 includes an imaging optical system having one or more lenses and an imaging element such as a CMOS image sensor or a CCD image sensor, and photoelectrically converts the subject image formed by the imaging optical system to acquire imaging data of the subject.
- the imaging data storage unit 12 includes a memory, and stores imaging data acquired by the imaging unit 11.
- The imaging data storage unit 12 stores, as imaging data, image data of a moving image or a still image in the preview before shooting, image data of a still image or a moving image shot by a user's shooting operation, and the like.
- the color distribution recognizing unit 13 functions as an example of an image parameter recognizing unit, and recognizes the color distribution in the image with respect to the imaging data stored in the imaging data storage unit 12. In the present embodiment, the color distribution recognizing unit 13 recognizes the color distribution of the captured image data of the preview image before photographing when the motion detecting unit 14 detects the motion of the photographing apparatus.
- The motion detection unit 14 includes an acceleration sensor, a gyroscope, and the like, and detects the motion of the photographing apparatus. Note that the motion detection unit 14 is not limited to directly detecting the movement of the photographing apparatus main body; it may instead detect the movement indirectly by detecting motion within the images of the imaging data.
- the unnecessary image determination unit 15 determines whether or not an unnecessary image exists in the image to be captured based on the recognition result of the color distribution of the imaging data at the time of motion detection by the color distribution recognition unit 13.
- Specifically, the unnecessary image determination unit 15 determines that an unnecessary image exists in a region when the same (substantially the same) color distribution continues at the same (substantially the same) position in the moving image of the preview during motion detection, that is, even though the photographing apparatus is moving. Since such an unnecessary image is considered to be a finger of the user holding the photographing apparatus, it is determined that finger capture has occurred.
- Various image parameters can be used as information for determining an unnecessary image in the unnecessary image determination unit 15. For example, using RGB, YCbCr, L*a*b*, XYZ, L*u*v*, HSV, or the like as image parameters, the distribution of color (including hue and color difference) or brightness in the image data can be recognized, and an unnecessary image can be identified from the recognition result.
- the display unit 16 functions as an example of a notification unit, and includes a liquid crystal display, an LED, a lamp, and the like, and displays a determination result of an unnecessary image to notify the user of the presence of the unnecessary image.
- the control unit 17 controls the operation of each unit of the photographing apparatus, and particularly controls the operation of the imaging unit 11, the color distribution recognition unit 13, and the unnecessary image determination unit 15.
- the operation unit 18 is configured by an operation device including a switch having an operation member such as a push button or a touch panel, and performs various operation inputs in the photographing apparatus based on user operation instructions.
- The functions of the color distribution recognition unit 13, the unnecessary image determination unit 15, and the control unit 17 are realized by executing a predetermined program in a processing unit including a computer having a processor, a memory, and the like that perform various kinds of information processing.
- FIG. 2 is a flowchart showing an operation procedure of the photographing apparatus according to the first embodiment.
- The color distribution recognizing unit 13 determines whether movement of the photographing apparatus has been detected by the motion detecting unit 14 (step S11). If movement is detected, the imaging data of the two most recent frames of the preview moving image is read from the imaging data storage unit 12 (step S12). Any number of frames may be read, as long as it is two or more. Subsequently, the color distribution recognition unit 13 recognizes the color distribution of the imaging data of each frame (step S13).
- The unnecessary image determination unit 15 determines, based on the color distributions recognized by the color distribution recognition unit 13, whether a region having the same (substantially the same) color distribution exists in the imaging data of the plurality of frames, that is, whether the same (substantially the same) color distribution exists at the same (substantially the same) position in images of different frames (step S14). If regions having the same color distribution exist in the two frames (YES in step S14), it is determined that an unnecessary image exists. In this case, the unnecessary image determination unit 15 outputs a signal including control information, display information, and the like to the display unit 16, and causes the display unit 16 to indicate that an unnecessary image has been detected (step S15). On the other hand, if it is determined in step S14 that no region having the same color distribution exists in the imaging data of the plurality of frames (NO in step S14), it is determined that no unnecessary image exists, and this process ends.
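The comparison in steps S13 and S14 can be sketched roughly as follows. This is a minimal numpy illustration, not the patent's implementation: the grid size, quantization levels, and cell threshold are illustrative assumptions, and the dominant-color-per-cell comparison is one simple way to realize "substantially the same color distribution at substantially the same position."

```python
import numpy as np

def quantize(block, levels=8):
    """Return the dominant quantized RGB color of an image block."""
    q = (block // (256 // levels)).reshape(-1, 3)
    colors, counts = np.unique(q, axis=0, return_counts=True)
    return tuple(colors[counts.argmax()])

def find_static_color_regions(frames, grid=8):
    """Grid cells whose dominant color is identical in every frame,
    even though the camera is known to be moving."""
    h, w, _ = frames[0].shape
    bh, bw = h // grid, w // grid
    static = []
    for r in range(grid):
        for c in range(grid):
            doms = {quantize(f[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw])
                    for f in frames}
            if len(doms) == 1:  # same dominant color at the same position
                static.append((r, c))
    return static

def finger_suspected(frames, min_cells=1):
    """Step-S14 style decision: any static-color cell raises the flag."""
    return len(find_static_color_regions(frames)) >= min_cells
```

A real implementation would also tolerate small positional drift (the "substantially the same" qualifier) rather than requiring exact cell-wise equality, and would typically exclude large uniform backgrounds such as sky.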
- According to the first embodiment, when a plurality of frames of preview imaging data acquired while movement of the photographing apparatus is detected are compared and a specific color region is found whose position does not change despite the movement, it is determined that an unnecessary image due to finger capture exists. As a result, the presence or absence of an unnecessary image can be determined immediately, and finger capture can be determined in a short time. In addition, since the determination is performed only when the photographing apparatus is moving, erroneous determination can be prevented when, for example, the photographing apparatus is placed on a desk for fixed-point shooting.
- FIG. 3 is a block diagram showing the configuration of the main part of the photographing apparatus according to the second embodiment of the present invention.
- the second embodiment is an example of a photographing apparatus having a monocular 3D photographing function of sequentially photographing two images (left eye image and right eye image) having parallax with one photographing optical system.
- the imaging apparatus includes an imaging unit 21, an imaging data storage unit 22, a luminance distribution recognition unit 23, a luminance distribution storage unit 24, an unnecessary image determination unit 25, a display unit 26, a control unit 27, and an operation unit 28.
- In monocular 3D shooting, after shooting one of the left-eye image and the right-eye image at an arbitrary position (first shooting position), the user moves the shooting device horizontally by a predetermined amount and shoots the other image at a position having the desired parallax (second shooting position).
- Hereinafter, a case will be described in which the left-eye image is shot as the first image and the right-eye image as the second image.
- the imaging data storage unit 22 sequentially stores image data of a preview image before shooting as imaging data acquired by the imaging unit 21, and stores image data of a left eye image and a right eye image at the time of stereoscopic shooting.
- the luminance distribution recognizing unit 23 functions as an example of an image parameter recognizing unit, and recognizes the luminance distribution in the image with respect to the imaging data stored in the imaging data storage unit 22.
- When stereoscopic imaging by monocular 3D shooting is performed, the luminance distribution recognizing unit 23 recognizes the luminance distribution of both the captured left-eye image and either the preview image before the right-eye image is captured or the captured right-eye image.
- In monocular 3D shooting, the user moves the imaging device horizontally by a predetermined amount to capture two images with parallax. Therefore, the imaging device can detect that it is in a moving state simply by recognizing that the monocular 3D shooting mode is active.
- the luminance distribution storage unit 24 stores the luminance distribution of the imaging data recognized by the luminance distribution recognition unit 23.
- the luminance distribution of the imaging data of the left-eye image previously captured is stored.
- the unnecessary image determination unit 25 determines whether or not an unnecessary image exists in the image to be captured based on the recognition result of the luminance distribution of the imaging data by the luminance distribution recognition unit 23.
- The unnecessary image determination unit 25 compares the luminance distributions of the captured left-eye image and of either the preview image before the right-eye image is captured or the captured right-eye image; if a low-luminance region exists at the same (substantially the same) position in both, it determines that an unnecessary image exists in that region. That is, it is determined that an unnecessary image exists when a low-luminance region remains at the same position in the image despite the movement of the imaging device for monocular 3D shooting. Since an unnecessary image in a low-luminance region is considered to be a finger of the user holding the photographing apparatus, it is determined that finger capture has occurred.
- Instead of the luminance distribution, a color distribution may be used in the same manner as in the first embodiment, and various other image parameters can also be used. For example, using RGB, YCbCr, L*a*b*, XYZ, L*u*v*, HSV, or the like as image parameters, the distribution of color (including hue and color difference) or brightness in the image data can be recognized, and an unnecessary image can be identified from the recognition result.
- the display unit 26 functions as an example of a notification unit, and includes a liquid crystal display, an LED, a lamp, and the like, and displays a determination result of an unnecessary image to notify the user of the presence of the unnecessary image.
- the control unit 27 controls the operation of each unit of the photographing apparatus, and in particular controls the operation of the imaging unit 21, the luminance distribution recognition unit 23, and the unnecessary image determination unit 25.
- the operation unit 28 includes a switch having an operation member such as a push button, or an operation device including a touch panel, and performs various operation inputs in the photographing apparatus based on user operation instructions.
- The functions of the luminance distribution recognition unit 23, the luminance distribution storage unit 24, the unnecessary image determination unit 25, and the control unit 27 are realized by executing a predetermined program in a processing unit including a processor, a memory, and the like that perform various kinds of information processing.
- FIGS. 4A to 4C are schematic diagrams for explaining the operation of the photographing apparatus according to the second embodiment.
- FIG. 4A is an external view of a monocular 3D photographing camera.
- FIG. 4B is a diagram illustrating the position of the camera with respect to a subject.
- FIG. 4C shows examples of a left-eye image and a right-eye image with and without finger capture.
- a camera 30 having a monocular 3D photographing function has a lens 31 of one photographing optical system.
- As shown in FIG. 4B, in stereoscopic shooting by monocular 3D shooting, after the left-eye image of the subject is shot, the camera 30 is moved rightward in the horizontal direction by a predetermined amount so that a predetermined parallax is obtained, and the right-eye image is then shot.
- FIG. 5 is a flowchart showing an operation procedure of the photographing apparatus according to the second embodiment.
- the luminance distribution recognition unit 23 reads the imaging data of the left eye image from the imaging data storage unit 22 (step S21), and recognizes the luminance distribution of the left eye image (step S22). Then, the luminance distribution recognition unit 23 stores the recognized luminance distribution of the left eye image in the luminance distribution storage unit 24 (step S23).
- the luminance distribution recognition unit 23 reads the imaging data of the right eye image from the imaging data storage unit 22 (step S24), and recognizes the luminance distribution of the right eye image (step S25). Note that instead of the imaging data of the right eye image, the luminance distribution of the preview image before the right eye image is captured may be recognized. Further, the luminance distribution of the right eye image may be stored in the luminance distribution storage unit 24 as in the case of the left eye image.
- The unnecessary image determination unit 25 determines, based on the luminance distributions recognized by the luminance distribution recognition unit 23, whether a low-luminance region exists at the same (substantially the same) position in the imaging data of both the left-eye image and the right-eye image (step S26). If a low-luminance region exists at the same position in the two images (YES in step S26), it is determined that an unnecessary image exists. In this case, the unnecessary image determination unit 25 outputs a signal including control information, display information, and the like to the display unit 26, and causes the display unit 26 to indicate that an unnecessary image has been detected (step S27).
- On the other hand, if it is determined in step S26 that there is no low-luminance region at the same position in the imaging data of the left-eye image and the right-eye image (NO in step S26), it is determined that no unnecessary image exists, and this process ends.
- According to the second embodiment, the imaging data of the left-eye image and the right-eye image are compared, and if a low-luminance region exists at the same position despite the movement of the imaging device, it is determined that an unnecessary image exists. As a result, the presence or absence of an unnecessary image can be determined immediately and easily, and finger capture can be determined in a short time without error.
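The step-S26 check, a low-luminance region occupying substantially the same position in both the left-eye and right-eye data, can be sketched as below. The luma coefficients, darkness threshold, and minimum overlap fraction are illustrative assumptions, not values from the patent.

```python
import numpy as np

def low_luminance_mask(img_rgb, thresh=40):
    """Boolean mask of pixels darker than `thresh` on a 0-255 luma scale."""
    luma = (0.299 * img_rgb[..., 0] + 0.587 * img_rgb[..., 1]
            + 0.114 * img_rgb[..., 2])
    return luma < thresh

def shared_dark_region(left, right, min_fraction=0.01):
    """True when a low-luminance region occupies substantially the same
    position in both images despite the camera having moved between them."""
    overlap = low_luminance_mask(left) & low_luminance_mask(right)
    return overlap.mean() >= min_fraction
```

Because the camera has moved between the two shots, any genuinely dark scene object shifts position, so only a finger resting over the lens produces a dark region at the same coordinates in both images.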
- the third embodiment is an example of a photographing apparatus having a compound eye 3D photographing function for simultaneously photographing two images (a left-eye image and a right-eye image) having parallax by two photographing optical systems.
- the configuration of the photographing apparatus of the third embodiment is the same as that of the second embodiment, and some functions are different from those of the second embodiment. A description will be given centering on differences from the second embodiment.
- FIGS. 6A to 6C are schematic diagrams for explaining the operation of the photographing apparatus according to the third embodiment.
- FIG. 6A is an external view of a compound-eye 3D shooting camera.
- FIG. 6B is a diagram illustrating the position of the camera with respect to the subject.
- FIG. 6C shows examples of a left-eye image and a right-eye image with and without finger capture.
- a camera 40 having a compound-eye 3D imaging function has two imaging optical system lenses 41 and 42.
- FIG. 6B in the case of performing stereoscopic imaging by compound eye 3D imaging, the left eye image and the right eye image of the subject are simultaneously captured by the two lenses 41 and 42, respectively.
- FIG. 7 is a flowchart showing an operation procedure of the photographing apparatus according to the third embodiment.
- the imaging data of the left eye image and the right eye image are simultaneously acquired and stored in the imaging data storage unit 22 respectively.
- the luminance distribution recognition unit 23 reads the imaging data of the left eye image from the imaging data storage unit 22 (step S31), and recognizes the luminance distribution of the left eye image (step S32). Then, the luminance distribution recognition unit 23 stores the recognized luminance distribution of the left eye image in the luminance distribution storage unit 24 (step S33).
- the luminance distribution recognition unit 23 reads the imaging data of the right eye image from the imaging data storage unit 22 (step S34), and recognizes the luminance distribution of the right eye image (step S35). Note that the luminance distribution of the right eye image may be stored in the luminance distribution storage unit 24 as in the case of the left eye image.
- The unnecessary image determination unit 25 determines, based on the luminance distributions recognized by the luminance distribution recognition unit 23, whether a low-luminance region exists in only one of the left-eye image and the right-eye image (step S36). If only one image has a low-luminance region (YES in step S36), it is determined that an unnecessary image exists. In this case, the unnecessary image determination unit 25 outputs a signal including control information, display information, and the like to the display unit 26, and causes the display unit 26 to indicate that an unnecessary image has been detected (step S37).
- On the other hand, if it is determined in step S36 that a low-luminance region exists in neither image, or that it exists in both the left-eye image and the right-eye image, it is determined that no unnecessary image exists.
- Note that, in the imaging device of the third embodiment as well, when movement of the imaging device is detected, the presence or absence of an unnecessary image can also be determined by recognizing the color distribution of the imaging data and detecting a region whose position in the image does not change, as in the first embodiment.
- According to the third embodiment, the imaging data of the left-eye image and the right-eye image are compared, and if a low-luminance region exists in only one of the images, it is determined that an unnecessary image due to finger capture exists. As a result, the presence or absence of an unnecessary image can be determined immediately and easily, and finger capture can be determined in a short time without error.
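The step-S36 criterion for compound-eye 3D, a sizable low-luminance region in exactly one of the two simultaneously captured images, could be sketched as follows. The darkness threshold and minimum area fraction are illustrative assumptions.

```python
import numpy as np

def dark_fraction(img_rgb, thresh=40):
    """Fraction of pixels whose (rough) luminance falls below `thresh`."""
    return (img_rgb.mean(axis=2) < thresh).mean()

def finger_over_one_lens(left, right, min_fraction=0.05):
    """A sizable low-luminance region in exactly one image suggests a
    finger covering that lens; a dark region present in both images is
    likely part of the scene, and in neither there is nothing to flag."""
    left_dark = dark_fraction(left) >= min_fraction
    right_dark = dark_fraction(right) >= min_fraction
    return left_dark != right_dark  # XOR: only one image is darkened
```

The XOR reflects the reasoning in the flowchart: with two lenses capturing at the same instant, a real dark object appears in both views, whereas a finger blocks only the lens it touches.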
- FIG. 8 is a block diagram showing the configuration of the main part of the photographing apparatus according to the fourth embodiment of the present invention.
- the imaging apparatus of the fourth embodiment includes an image processing unit 51 that performs image processing on a captured image, and an image storage unit 52 that stores image data after image processing. Others are the same as those of the first embodiment shown in FIG. 1, and here, the description will focus on differences from the first embodiment. It should be noted that the fourth embodiment can be combined with the second embodiment or the third embodiment.
- the image processing unit 51 corrects the captured image so as to eliminate the unnecessary image by performing an unnecessary image region complementing process or the like.
- the image storage unit 52 stores the image data of the captured image after the image processing by the image processing unit 51. Note that the captured image after image processing can be displayed on the display unit 16.
- FIGS. 9(A) to 9(C) are schematic diagrams for explaining the operation of the photographing apparatus according to the fourth embodiment.
- FIG. 9(A) is a diagram showing image data of a plurality of frames of a moving image.
- FIG. 9(B) is a diagram showing an example of image correction using image data of other frames.
- FIG. 9(C) is a diagram explaining a method of calculating motion vectors of the image data.
- Assume that the imaging data storage unit 12 stores imaging data of a moving image in which a region with the same color distribution exists in the (N−1)th frame, the Nth frame, the (N+1)th frame, the (N+2)th frame, and so on. In this case, the color distribution recognition unit 13 and the unnecessary image determination unit 15 determine that an unnecessary image exists.
- the imaging data corresponds to image data at the time of moving image shooting or image data at the time of preview display.
- the image processing unit 51 performs an unnecessary image region complementing process using the imaging data of two or more frames.
- the pixels in the unnecessary image area are complemented with image data of other frames as shown on the right side.
- the image processing unit 51 extracts feature points P1, P2, and P3 common to the images of the two frames, and calculates a motion vector of the feature points. From this motion vector, the amount of motion in the area corresponding to the unnecessary image area is known.
- the corresponding region S of the (N ⁇ 1) th frame is obtained from the unnecessary image region S ′ by the inverse vector of the calculated motion vector of the feature point.
- the imaging data used for the correction process of the unnecessary image area may be a frame of a captured image being recorded at the time of shooting or a frame of a preview image.
- the present invention can be applied not only to moving image imaging data but also to still image imaging data.
- the imaging data used for pixel complementation may use not only the previous frame of the correction target frame but also the subsequent frame, or both the front and rear frames.
- According to the fourth embodiment, when finger capture occurs, image correction can be performed by appropriately complementing the pixels in the unnecessary image region, and the influence of finger capture on the captured image can be eliminated. Further, since trimming of the unnecessary image region or the like is not required for the image correction, the problem of the angle of view of the captured image being narrowed can be prevented.
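The complementation described above (estimate the motion between frames, then fill the unnecessary image region S′ from the corresponding region S of another frame via the inverse vector) might look like the following numpy sketch. The patent computes per-feature-point motion vectors; this sketch substitutes a single global translation estimated by phase correlation, which is an assumption made for brevity.

```python
import numpy as np

def estimate_shift(prev_gray, cur_gray):
    """Global translation (dy, dx) such that prev[p + (dy, dx)] ~ cur[p],
    estimated by phase correlation (a stand-in for the patent's
    feature-point motion vectors P1, P2, P3)."""
    spec = np.fft.fft2(prev_gray) * np.conj(np.fft.fft2(cur_gray))
    corr = np.fft.ifft2(spec / (np.abs(spec) + 1e-9)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    h, w = prev_gray.shape
    if dy > h // 2:
        dy -= h          # unwrap circular shift to a signed offset
    if dx > w // 2:
        dx -= w
    return dy, dx

def complement_region(cur, prev, mask):
    """Fill masked (unnecessary) pixels of `cur` with motion-compensated
    pixels taken from `prev` -- the inverse-vector lookup of region S."""
    dy, dx = estimate_shift(prev.mean(axis=2), cur.mean(axis=2))
    out = cur.copy()
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    out[ys, xs] = prev[np.clip(ys + dy, 0, h - 1),
                       np.clip(xs + dx, 0, w - 1)]
    return out
```

As the text notes, the reference frame may equally be a later frame or a preview frame; a per-region motion model would be needed when the scene motion is not a pure translation.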
- FIGS. 10(A) and 10(B) and FIGS. 11(A) and 11(B) are schematic diagrams for explaining the operation of the photographing apparatus according to the fifth embodiment.
- FIG. 10(A) is a diagram illustrating a state in which an unnecessary image region is designated.
- FIG. 10(B) is a diagram illustrating imaging data of a plurality of frames of a moving image.
- FIGS. 11(A) and 11(B) are diagrams illustrating examples of image correction.
- FIG. 11(A) is a diagram illustrating an example of an image when complementing is completed.
- FIG. 11(B) is a diagram illustrating an example of a clipping process when complementing cannot be completed.
- the user designates, as an unnecessary image area 61, an area where a finger is captured or an area where finger capture is highly likely.
- both an area where finger capture actually occurs and an area where finger capture is highly likely may be designated.
- an unnecessary image area is set in advance, and either image correction of the unnecessary image area is performed, or an image is generated by cutting out another area that does not include the unnecessary image area.
- FIG. 12 is a flowchart showing an operation procedure of the photographing apparatus according to the fifth embodiment.
- the control unit 17 transfers the area information of the designated unnecessary image area to the image processing unit 51 (step S41).
- the image processing unit 51 stores area information of unnecessary image areas.
- the operation unit 18 and the control unit 17 realize the function of the unnecessary image region setting unit.
- the image processing unit 51 specifies an area that needs to be complemented in the imaging data stored in the imaging data storage unit 12 (step S42).
- it is assumed that the imaging data storage unit 12 stores imaging data of a plurality of frames of the moving image, such as the (N−1)th frame, the Nth frame, the (N+1)th frame, the (N+2)th frame, and so on.
- the imaging data corresponds to image data at the time of moving image shooting or image data at the time of preview display.
- the image processing unit 51 performs a complementing process when the unnecessary image region can be complemented using image data of other frames.
- the image processing unit 51 complements unnecessary image regions in the preview image (step S43).
- imaging data of a plurality of frames can thereby be acquired while the apparatus is in motion.
- based on an instruction from the control unit 17, the image processing unit 51 generates a captured image of the subject using the imaging data stored in the imaging data storage unit 12 (step S44).
- the image processing unit 51 determines whether the complementing process in step S43 has completed normally (step S45). If it has, a captured image with the full angle of view is generated and output to the image storage unit 52 and the display unit 16 (step S46). As shown in FIG. 11A, for the image data of the image 71 at the completion of complementation, the image processing unit 51 generates a captured image with the full angle of view as it is.
- otherwise, the image processing unit 51 generates a captured image by cutting out a region of the image without finger capture so as to exclude the unnecessary image region, and outputs it to the image storage unit 52 and the display unit 16 (step S47).
- the image processing unit 51 cuts out an area excluding the unnecessary image area and generates a captured image.
- as illustrated, there are two methods for extracting the region without finger capture. In the first method, a captured image is generated by cutting out the region without finger capture as it is, like the region 73 indicated by the broken-line frame. In the second method, like the region 74 indicated by the dashed-dotted line, the region without finger capture is cut out while the aspect ratio of the original image is maintained in the horizontal and vertical directions.
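As a purely illustrative sketch (not part of the original disclosure), the two clipping methods can be expressed as follows; the function name and the rectangle representation are invented here, and the unnecessary image area is assumed to touch the right edge of the frame:

```python
def crop_excluding(width, height, region):
    """Return two candidate crop rectangles (x0, y0, x1, y1) that
    exclude the unnecessary image area.

    Method 1 (broken-line frame 73): the widest rectangle to the left
    of the unnecessary area, cropped as it is.
    Method 2 (dashed-dotted frame 74): the same, shrunk vertically so
    that the aspect ratio of the original image is maintained.
    """
    x0, y0, x1, y1 = region          # unnecessary area at the right edge
    direct = (0, 0, x0, height)      # method 1: crop as it is
    new_h = round(x0 * height / width)   # method 2: keep width/height
    top = (height - new_h) // 2
    ratio = (0, top, x0, top + new_h)
    return direct, ratio
```

For a 160x120 frame with the unnecessary area occupying x ≥ 120, method 1 yields a 120x120 crop, while method 2 yields a 120x90 crop centered vertically, matching the original 4:3 ratio.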
- by setting in advance, as the unnecessary image area, the area where finger capture occurs or is highly likely to occur, and performing image correction, the user can intentionally have the unnecessary image area corrected, thereby eliminating the influence of finger capture on the captured image.
- the unnecessary image area is corrected by complementing its pixels; when complementation cannot be performed, the unnecessary image area is appropriately trimmed away to obtain a captured image without finger capture.
- An imaging apparatus comprising: an imaging unit that captures a subject image; an imaging data storage unit that stores imaging data of the subject acquired by the imaging unit; a motion detection unit that detects motion of the imaging apparatus; an image parameter identification unit that identifies the distribution of an image parameter of the imaging data; an unnecessary image determination unit that, based on the image parameters of a plurality of pieces of imaging data acquired while the imaging apparatus is in motion, determines that a region is an unnecessary image region containing an unnecessary image when substantially the same image parameter distribution exists in substantially the same region of the imaging data; and a notification unit that notifies that the unnecessary image region has been detected.
- With the above configuration, image parameter distributions such as the color distribution and the luminance distribution are recognized for a plurality of pieces of imaging data acquired while the imaging apparatus is moving, such as preview images before shooting; when substantially the same image parameter distribution exists in substantially the same region, that region can be determined to be an unnecessary image area. Therefore, an unnecessary image area due to finger capture can be determined in a short time, improving shooting responsiveness. Further, since no unnecessary image area is determined when the apparatus is not moving, erroneous determination during fixed-point shooting can be suppressed.
- the imaging apparatus wherein the image parameter identification unit recognizes a color distribution of imaging data as the image parameter.
- the imaging apparatus, wherein the unnecessary image determination unit determines that a region is an unnecessary image region containing an unnecessary image when regions having substantially the same color distribution exist in the color distributions of the plurality of pieces of imaging data.
- the imaging apparatus wherein the image parameter identification unit recognizes a luminance distribution of imaging data as the image parameter.
- the imaging apparatus, wherein the imaging unit can perform monocular 3D imaging in which two images with parallax are captured sequentially by a single imaging optical system; the image parameter identification unit recognizes the luminance distribution of the imaging data as the image parameter; and the unnecessary image determination unit determines that a region is an unnecessary image region containing an unnecessary image when a low-luminance region exists at substantially the same position in both pieces of image data of the two images. With the above configuration, when performing monocular 3D imaging, an unnecessary image area can be determined by recognizing a low-luminance area at substantially the same position in the two pieces of parallax imaging data acquired sequentially by moving the imaging apparatus.
- the imaging apparatus, wherein the imaging unit can perform compound-eye 3D imaging in which two images with parallax are captured simultaneously by two imaging optical systems; the image parameter identification unit recognizes the luminance distribution of the imaging data as the image parameter; and the unnecessary image determination unit determines that a region is an unnecessary image region containing an unnecessary image when a low-luminance region exists in only one of the two pieces of image data. With the above configuration, when performing compound-eye 3D imaging, an unnecessary image region can be determined by recognizing a low-luminance region in only one of the two pieces of parallax imaging data acquired simultaneously.
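The compound-eye criterion, low luminance in only one of the two simultaneous images, can be sketched as follows. This is illustrative only and not part of the disclosure; block-wise mean luminance values and the threshold of 40 are assumptions made for this example:

```python
def finger_blocked(lum_left, lum_right, threshold=40):
    """Compound-eye 3D check: a block that is dark in exactly one of
    the two simultaneously captured images is judged to belong to an
    unnecessary image area (a finger covers only one lens).

    lum_left, lum_right: 2D lists of per-block mean luminance (0-255).
    Returns the set of (row, col) blocks flagged as unnecessary.
    """
    flagged = set()
    for r, (row_l, row_r) in enumerate(zip(lum_left, lum_right)):
        for c, (lv, rv) in enumerate(zip(row_l, row_r)):
            if (lv < threshold) != (rv < threshold):
                flagged.add((r, c))
    return flagged
```

A genuinely dark subject appears dark in both images and is therefore not flagged.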
- the imaging apparatus, further comprising an image processing unit that performs image processing on the imaging data, wherein, for imaging data containing the unnecessary image region, the image processing unit calculates a motion vector of the subject image using other imaging data stored in the imaging data storage unit, and complements the pixels in the unnecessary image area from the other imaging data according to the motion vector.
- the imaging apparatus, further comprising an unnecessary image area setting unit that presets an area that is likely to become the unnecessary image area.
- the imaging apparatus, wherein, when the pixels of imaging data containing the unnecessary image region cannot be complemented from the other imaging data, the image processing unit generates a captured image by cutting out the imaging data so as to exclude the unnecessary image region.
- An imaging method in an imaging device including an imaging unit that captures a subject image, the method comprising: storing imaging data of the subject acquired by the imaging unit; detecting motion of the imaging device; identifying the distribution of an image parameter of the imaging data; determining, based on the image parameters of a plurality of pieces of imaging data acquired while the imaging device is in motion, that a region is an unnecessary image region containing an unnecessary image when substantially the same image parameter distribution exists in substantially the same region of the imaging data; and notifying that the unnecessary image region has been detected.
- A program that causes a computer to execute each procedure of the above imaging method.
- the present invention has the effect of enabling finger capture to be determined accurately in a short time in a photographing apparatus.
- the present invention is useful as a photographing apparatus, such as a digital camera or an electronic device with a camera function, with which a user photographs a subject.
Abstract
Provided is an image capturing apparatus equipped with: an imaging data storing unit (12) for storing imaging data of a captured subject acquired by an imaging unit (11); a motion detection unit (14) for detecting a motion of the image capturing apparatus; a color distribution identification unit (13) for identifying a color distribution of the imaging data; and an unnecessary image determination unit (15). Based on image parameters of a plurality of imaging data acquired while the image capturing apparatus is in motion, when an approximately identical color distribution exists in an approximately identical region in the imaging data, said unnecessary image determination unit (15) determines that this region is an unnecessary image region containing an unnecessary image due to a finger being captured within an image.
Description
The present invention relates to a photographing apparatus, and more particularly to a photographing apparatus, such as a digital camera or an electronic device with a camera function, whose user grips the housing to photograph a subject.
Electronic devices with photographing functions, such as digital cameras and camera-equipped electronic devices, are in widespread use. As an example of an electronic device with a camera function, mobile communication terminals such as mobile phones and smartphones are now commonly equipped with a camera function.
This type of photographing apparatus has been progressively miniaturized to improve portability, which makes it prone to so-called "finger capture," a phenomenon in which the user's finger appears in the photographed image at the time of shooting. With some apparatuses, trying to hold the device so that a finger does not cover the lens prevents a firm grip on the housing, making camera shake more likely. In a smartphone or the like, the housing is thin, so a finger easily wraps around toward the lens when the device is held firmly, and because the lens is located at an edge of the terminal, finger capture occurs easily.
As solutions to the finger-capture problem, Patent Document 1 discloses an imaging apparatus that compares the focus evaluation value and the signal levels of the color and luminance components between an unnecessary-image detection region and a non-detection region, and warns the photographer when it determines that an unnecessary image is captured. Patent Document 2 discloses an imaging apparatus that determines that the photographer's finger is included in the imaging target area when, based on temporal changes in the positions of predetermined low-luminance regions across a plurality of live view images, both a moving region and a non-moving low-luminance region exist.
In the conventional example of Patent Document 1, the lens must be moved from the infinity end to the macro end to acquire focus evaluation values. The conventional example of Patent Document 2 requires information on the temporal change of the position of a low-luminance region in the captured image. Since both affect shooting responsiveness, they have the problem that shooting while determining the presence or absence of finger capture takes time.
Depending on the shooting situation, when a low-luminance region persists at a fixed position in the captured image, finger capture may be erroneously determined. For example, assume that the photographing apparatus is placed on a desk for fixed-point shooting. The desk appearing at the bottom of the image is at close range from the apparatus, so the contrast value of the image becomes large on the macro side, and the position of the desk does not change over time. In such a case, the conventional examples erroneously determine the desk to be a captured finger.
The present invention has been made in view of the above circumstances, and an object thereof is to provide a photographing apparatus capable of determining finger capture accurately in a short time.
The photographing apparatus of the present invention includes: an imaging unit that captures a subject image; an imaging data storage unit that stores imaging data of the subject acquired by the imaging unit; a motion detection unit that detects motion of the photographing apparatus; an image parameter identification unit that identifies the distribution of an image parameter of the imaging data; an unnecessary image determination unit that, based on the image parameters of a plurality of pieces of imaging data acquired while the photographing apparatus is in motion, determines that a region is an unnecessary image region containing an unnecessary image when substantially the same image parameter distribution exists in substantially the same region of the imaging data; and a notification unit that notifies that the unnecessary image region has been detected.
According to the present invention, finger capture can be determined accurately in a short time in a photographing apparatus.
In the present embodiments, configuration examples are shown in which the photographing apparatus is applied to an electronic device having a photographing function, such as a digital camera, or a camera-equipped mobile terminal such as a mobile phone or smartphone.
(First Embodiment)
FIG. 1 is a block diagram showing the configuration of the main part of the photographing apparatus according to the first embodiment of the present invention.
The photographing apparatus includes an imaging unit 11, an imaging data storage unit 12, a color distribution recognition unit 13, a motion detection unit 14, an unnecessary image determination unit 15, a display unit 16, a control unit 17, and an operation unit 18.
The imaging unit 11 includes a photographing optical system having one or more lenses and an imaging element such as a CMOS or CCD image sensor; the subject image formed by the photographing optical system is photoelectrically converted by the imaging element, and imaging data of the subject is output.
The imaging data storage unit 12 includes a memory and stores the imaging data acquired by the imaging unit 11. The stored imaging data includes image data of a moving or still preview image before shooting, and image data of a still or moving image captured by the user's shooting operation.
The color distribution recognition unit 13 functions as an example of an image parameter recognition unit and recognizes the color distribution in the image for the imaging data stored in the imaging data storage unit 12. In the present embodiment, the color distribution recognition unit 13 recognizes the color distribution of the moving-image data of the preview image before shooting while the motion detection unit 14 detects motion of the photographing apparatus.
The motion detection unit 14 includes an acceleration sensor, a gyroscope, or the like, and detects the motion of the photographing apparatus. The motion detection unit 14 is not limited to directly detecting the motion of the apparatus body; it may instead detect the motion of the apparatus indirectly by detecting motion within the image of the imaging data.
The unnecessary image determination unit 15 determines whether an unnecessary image exists in the image to be captured, based on the color distribution recognized by the color distribution recognition unit 13 while motion is detected. In the present embodiment, when the moving preview image contains a region whose color distribution remains the same (substantially identical), that is, when substantially the same color distribution persists at substantially the same position in the image even though the photographing apparatus is moving, the unnecessary image determination unit 15 determines that an unnecessary image exists in that region. Since such an unnecessary image is considered to be, for example, a finger of the user gripping the apparatus, this determination detects that finger capture has occurred.
In addition to the color distribution, various image parameters can be used by the unnecessary image determination unit 15 as information for determining an unnecessary image. For example, using RGB, YCbCr, L*a*b*, XYZ, L*u*v*, HSV, or the like as the image parameter, the distribution of color (including hue and color difference) or luminance of the imaging data can be recognized, and an unnecessary image in the image can be determined based on the recognition result.
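As an illustrative sketch of how such a distribution might be computed (not part of the original disclosure; the coarse quantization into 4 levels per channel is an assumption for this example), a region's RGB color distribution can be reduced to a small histogram that is cheap to compare between frames:

```python
def color_histogram(pixels, bins=4):
    """Coarse RGB color distribution of an image region.

    Each channel is quantized into `bins` levels, giving at most
    bins**3 buckets; the result maps bucket -> relative frequency.
    pixels: iterable of (r, g, b) tuples with values 0-255.
    """
    step = 256 // bins
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    total = sum(hist.values())
    return {k: count / total for k, count in hist.items()}
```

Two frames' histograms for the same region can then be compared, for example by summing the absolute bucket differences; a near-zero difference while the apparatus moves suggests an unnecessary image.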
The display unit 16 functions as an example of a notification unit; it includes a liquid crystal display, an LED, a lamp, or the like, and displays the unnecessary image determination result to notify the user of the presence of an unnecessary image.
The control unit 17 controls the operation of each part of the photographing apparatus, in particular the imaging unit 11, the color distribution recognition unit 13, and the unnecessary image determination unit 15. The operation unit 18 includes an operation device such as switches with operation members (e.g., push buttons) or a touch panel, and performs various operation inputs to the photographing apparatus based on the user's operation instructions.
The functions of the color distribution recognition unit 13, the unnecessary image determination unit 15, and the control unit 17 are realized by executing a predetermined program in a processing unit, such as a computer, that includes a processor and a memory for various information processing.
Next, the operation of the photographing apparatus in the first embodiment will be described. FIG. 2 is a flowchart showing the operation procedure of the photographing apparatus according to the first embodiment.
The color distribution recognition unit 13 determines whether the motion detection unit 14 is detecting motion of the photographing apparatus (step S11). If motion is detected, the imaging data of the two most recent frames of the moving preview image is read from the imaging data storage unit 12 (step S12). The number of frames read may be any plural number of two or more. Subsequently, the color distribution recognition unit 13 recognizes the color distribution of the imaging data of each frame (step S13).
Next, based on the recognition result of the color distribution recognition unit 13, the unnecessary image determination unit 15 determines whether there is a region with the same (substantially identical) color distribution across the plural frames of imaging data, that is, whether the same (substantially identical) color distribution exists at the same (substantially identical) position in the images of different frames (step S14). If such a region exists in the two frames (YES in step S14), it is determined that an unnecessary image exists. In this case, the unnecessary image determination unit 15 outputs a signal including control information, display information, and the like to the display unit 16, causing it to display that an unnecessary image has been detected (step S15). If no region with the same color distribution exists across the plural frames (NO in step S14), it is determined that there is no unnecessary image, and the process ends as is.
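The flow of steps S11 to S15 can be sketched as follows. This is an illustrative example, not the patented implementation: regions are reduced to a hashable color-distribution summary keyed by a region id, an assumption made here for brevity.

```python
def detect_unnecessary_region(frame_a, frame_b, motion_detected):
    """Steps S11-S15 of the first embodiment, in miniature.

    frame_a, frame_b: dicts mapping a region id to a summary of that
    region's color distribution (any hashable value).
    Returns the region ids whose color distribution stayed the same
    between the two frames even though the apparatus was moving.
    """
    if not motion_detected:   # S11: no motion -> no judgement, which
        return set()          # avoids false positives at a fixed point
    # S14: same region, same color distribution in both frames
    return {rid for rid, dist in frame_a.items()
            if frame_b.get(rid) == dist}
```

A non-empty result corresponds to step S15, displaying that an unnecessary image was detected.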
In the first embodiment, the imaging data of a plurality of preview-image frames acquired while motion of the photographing apparatus is detected is compared, and when a specific color region whose position does not change with the motion of the apparatus exists, it is determined that an unnecessary image due to finger capture exists. This enables immediate determination of the presence or absence of an unnecessary image, so finger capture can be determined in a short time. Moreover, since the determination is performed only when the photographing apparatus is moving, erroneous determination can be prevented, for example, during fixed-point shooting with the apparatus placed on a desk.
(Second Embodiment)
FIG. 3 is a block diagram showing the configuration of the main part of the photographing apparatus according to the second embodiment of the present invention.
The second embodiment is an example of a photographing apparatus with a monocular 3D shooting function, in which two images with parallax (a left-eye image and a right-eye image) are captured sequentially by a single photographing optical system. The photographing apparatus includes an imaging unit 21, an imaging data storage unit 22, a luminance distribution recognition unit 23, a luminance distribution storage unit 24, an unnecessary image determination unit 25, a display unit 26, a control unit 27, and an operation unit 28.
In stereoscopic shooting by monocular 3D, one of the left-eye and right-eye images is captured at an arbitrary position (the first shooting location); the user then moves the photographing apparatus horizontally by a predetermined amount and captures the other image at a position with a predetermined parallax (the second shooting location). The following description assumes that the left-eye image is captured first and the right-eye image second.
The imaging data storage unit 22 sequentially stores, as the imaging data acquired by the imaging unit 21, the image data of the preview image before shooting, and also stores the image data of the left-eye and right-eye images at the time of stereoscopic shooting.
The luminance distribution recognition unit 23 functions as an example of an image parameter recognition unit and recognizes the luminance distribution in the image for the imaging data stored in the imaging data storage unit 22. In the present embodiment, during stereoscopic shooting by monocular 3D, the luminance distribution recognition unit 23 recognizes the luminance distributions of both the captured left-eye image and either the preview image before capturing the right-eye image or the captured right-eye image. In monocular 3D shooting, the user moves the photographing apparatus horizontally by a predetermined amount to capture the two parallax images, so motion of the apparatus necessarily occurs. The photographing apparatus therefore detects that it is in motion by recognizing that it is in the monocular 3D shooting mode.
The luminance distribution storage unit 24 stores the luminance distribution of the imaging data recognized by the luminance distribution recognition unit 23. In the present embodiment, the luminance distribution of the imaging data of the previously captured left-eye image is stored. The luminance distributions of both the left-eye and right-eye imaging data may also be stored.
The unnecessary image determination unit 25 determines whether an unnecessary image exists in the image to be captured, based on the luminance distribution recognized by the luminance distribution recognition unit 23. In the present embodiment, the unnecessary image determination unit 25 compares the luminance distributions of the captured left-eye image and of either the preview image before capturing the right-eye image or the captured right-eye image; when a low-luminance region exists at the same (substantially identical) position in both, it determines that an unnecessary image exists in that region. That is, when a low-luminance region remains at the same position in the image even though the photographing apparatus has moved for monocular 3D shooting, it determines that an unnecessary image exists. Since the unnecessary image in the low-luminance region is considered to be, for example, a finger of the user gripping the apparatus, this determination detects that finger capture has occurred.
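The monocular 3D criterion, a low-luminance region at substantially the same position in both the left-eye and right-eye data, can be sketched as follows (illustrative only, not part of the disclosure; block-wise mean luminance values and the threshold are assumptions for this example):

```python
def monocular_3d_check(lum_left, lum_right, threshold=40):
    """Monocular 3D check: the two images are taken sequentially with
    the apparatus moved sideways, so a block that stays dark at the
    same position in BOTH images is judged an unnecessary image area.

    lum_left, lum_right: 2D lists of per-block mean luminance (0-255).
    Returns the set of (row, col) blocks flagged as unnecessary.
    """
    return {(r, c)
            for r, row in enumerate(lum_left)
            for c, lv in enumerate(row)
            if lv < threshold and lum_right[r][c] < threshold}
```

This mirrors the compound-eye case: with sequential capture the finger moves with the apparatus, so the dark region appears at the same position in both images, whereas a dark part of the scene shifts with the parallax.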
Note that, as the information used by the unnecessary image determination unit 25 to determine an unnecessary image, a color distribution may be used instead of the luminance distribution, as in the first embodiment. Various other image parameters can also be used. For example, using image parameters such as RGB, YCbCr, L*a*b*, XYZ, L*u*v*, or HSV, the distribution of color (including hue, color difference, and the like), luminance, and so on of the imaging data can be recognized, and an unnecessary image in the image can be determined based on the recognition result.
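As one illustrative interpretation of recognizing a luminance distribution from RGB imaging data, the sketch below converts each pixel to luma with the Rec. 601 weights and builds a coarse histogram. This is a hypothetical pure-Python sketch, not part of the disclosed apparatus; the function names and the bin count are assumptions for illustration.

```python
def rgb_to_luma(r, g, b):
    """Approximate luminance of one RGB pixel (Rec. 601 weighting)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_histogram(pixels, bins=8, max_value=256):
    """Coarse luminance histogram over an iterable of (r, g, b) pixels.

    Returns a list of `bins` counts; bin 0 collects the darkest pixels,
    which is where a finger covering the lens would concentrate.
    """
    hist = [0] * bins
    step = max_value / bins
    for r, g, b in pixels:
        y = rgb_to_luma(r, g, b)
        hist[min(int(y / step), bins - 1)] += 1
    return hist
```

A distribution of this kind, computed per image region rather than per whole image, would give the per-position luminance information that the determination units compare.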
The display unit 26 functions as an example of a notification unit, is constituted by a liquid crystal display, LEDs, lamps, or the like, and displays the determination result of the unnecessary image to notify the user of the presence of the unnecessary image.
The control unit 27 controls the operation of each unit of the imaging apparatus, and in particular controls the operation of the imaging unit 21, the luminance distribution recognition unit 23, and the unnecessary image determination unit 25. The operation unit 28 is constituted by an operation device such as a switch having operation members such as push buttons, or a touch panel, and receives various operation inputs to the imaging apparatus based on the user's operation instructions.
The functions of the luminance distribution recognition unit 23, the luminance distribution storage unit 24, the unnecessary image determination unit 25, and the control unit 27 are realized by executing a predetermined program in a processing unit, such as a computer, that includes a processor for performing various types of information processing, a memory, and the like.
Next, the operation of the imaging apparatus in the second embodiment will be described. FIGS. 4(A) to 4(C) are schematic diagrams illustrating the operation of the imaging apparatus according to the second embodiment. In FIG. 4, (A) is an external view of a monocular 3D shooting camera, (B) is a diagram showing the position of the camera relative to the subject, and (C) shows examples of left-eye and right-eye images with and without finger capture.
As shown in FIG. 4(A), a camera 30 having a monocular 3D shooting function has a lens 31 of a single shooting optical system. As shown in FIG. 4(B), when stereoscopic shooting is performed by monocular 3D shooting, after the left-eye image of the subject is captured, the camera 30 is moved horizontally to the right by a predetermined amount so that a predetermined parallax is obtained, and the right-eye image is then captured.
As shown in the upper part of FIG. 4(C), when there is no finger capture, no low-luminance region exists at the same position in the first (left-eye) image and the second (right-eye) image. A stereoscopic image can therefore be generated from the two captured images, which have the predetermined parallax and no low-luminance region at the same position. As shown in the lower part of FIG. 4(C), when the user's finger covers part of the camera lens and finger capture occurs, a low-luminance region exists at the same position in both the first (left-eye) image and the second (right-eye) image. If a low-luminance region remains at the same position even though the camera position relative to the subject has moved to give parallax between the two images, that low-luminance region can be determined to be an unnecessary image caused by finger capture.
FIG. 5 is a flowchart showing the operation procedure of the imaging apparatus according to the second embodiment.
The luminance distribution recognition unit 23 reads the imaging data of the left-eye image from the imaging data storage unit 22 (step S21) and recognizes the luminance distribution of the left-eye image (step S22). The luminance distribution recognition unit 23 then stores the recognized luminance distribution of the left-eye image in the luminance distribution storage unit 24 (step S23).
Subsequently, the luminance distribution recognition unit 23 reads the imaging data of the right-eye image from the imaging data storage unit 22 (step S24) and recognizes the luminance distribution of the right-eye image (step S25). Note that, instead of the imaging data of the right-eye image, the luminance distribution of the preview image before the right-eye image is captured may be recognized. Further, the luminance distribution of the right-eye image may be stored in the luminance distribution storage unit 24, as with the left-eye image.
Next, based on the result of the luminance distribution recognition by the luminance distribution recognition unit 23, the unnecessary image determination unit 25 determines whether a low-luminance region exists at the same position (substantially the same position) in the luminance distributions of the imaging data of both the left-eye image and the right-eye image (step S26). If a low-luminance region exists at the same position in the two images (YES in step S26), it is determined that an unnecessary image exists. In this case, the unnecessary image determination unit 25 outputs a signal including control information, display information, and the like to the display unit 26, and causes the display unit 26 to indicate that an unnecessary image has been detected (step S27). On the other hand, if no low-luminance region exists at the same position in the imaging data of the left-eye image and the right-eye image (NO in step S26), it is determined that no unnecessary image exists, and the process ends.
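The disclosure describes the decision of step S26 only at the block-diagram level; one way it could be sketched is shown below. This is a hypothetical pure-Python sketch, with images represented as 2D lists of luminance values; the threshold, the minimum overlap area, and the function names are assumptions for illustration.

```python
LUMA_THRESHOLD = 48       # assumed cutoff for "low luminance"
MIN_OVERLAP_PIXELS = 4    # assumed minimum area to count as a region

def low_luminance_positions(image, threshold=LUMA_THRESHOLD):
    """Set of (row, col) positions whose luminance is below the threshold."""
    return {(y, x)
            for y, row in enumerate(image)
            for x, value in enumerate(row)
            if value < threshold}

def finger_capture_monocular(left_image, right_image):
    """Step S26 sketch: an unnecessary image is assumed when low-luminance
    pixels occupy (substantially) the same positions in both images, even
    though the camera moved between the two exposures."""
    overlap = (low_luminance_positions(left_image)
               & low_luminance_positions(right_image))
    return len(overlap) >= MIN_OVERLAP_PIXELS
```

A production implementation would likely tolerate small positional offsets (the "substantially the same position" condition) rather than requiring exact pixel coincidence.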
In the second embodiment, when stereoscopic shooting is performed by monocular 3D shooting, the imaging data of the left-eye image and the right-eye image are compared, and if a low-luminance region remains at the same position despite the movement of the imaging apparatus, it is determined that an unnecessary image caused by finger capture exists. This makes it possible to determine the presence or absence of an unnecessary image immediately and easily, and to determine finger capture in a short time without error.
(Third embodiment)
The third embodiment is an example of an imaging apparatus having a compound-eye 3D shooting function that simultaneously captures two images with parallax (a left-eye image and a right-eye image) using two shooting optical systems. The configuration of the imaging apparatus of the third embodiment is the same as that of the second embodiment, with some functions differing. The description below focuses on the differences from the second embodiment.
FIGS. 6(A) to 6(C) are schematic diagrams illustrating the operation of the imaging apparatus according to the third embodiment. In FIG. 6, (A) is an external view of a compound-eye 3D shooting camera, (B) is a diagram showing the position of the camera relative to the subject, and (C) shows examples of left-eye and right-eye images with and without finger capture.
As shown in FIG. 6(A), a camera 40 having a compound-eye 3D shooting function has lenses 41 and 42 of two shooting optical systems. As shown in FIG. 6(B), when stereoscopic shooting is performed by compound-eye 3D shooting, the left-eye image and the right-eye image of the subject are captured simultaneously by the two lenses 41 and 42, respectively.
As shown in the upper part of FIG. 6(C), when there is no finger capture, a low-luminance region does not appear in only one of the left-eye image and the right-eye image. A stereoscopic image can therefore be generated from the two captured images, which have the predetermined parallax and no low-luminance region confined to one of them. As shown in the lower part of FIG. 6(C), when finger capture occurs, the user's finger usually covers part of only one of the camera's lenses, so a low-luminance region exists in either the left-eye image or the right-eye image. If a low-luminance region exists in only one of the images, that low-luminance region can be determined to be an unnecessary image caused by finger capture.
FIG. 7 is a flowchart showing the operation procedure of the imaging apparatus according to the third embodiment. When stereoscopic shooting is performed by compound-eye 3D shooting, the imaging data of the left-eye image and the right-eye image are acquired simultaneously and each stored in the imaging data storage unit 22.
The luminance distribution recognition unit 23 reads the imaging data of the left-eye image from the imaging data storage unit 22 (step S31) and recognizes the luminance distribution of the left-eye image (step S32). The luminance distribution recognition unit 23 then stores the recognized luminance distribution of the left-eye image in the luminance distribution storage unit 24 (step S33).
Subsequently, the luminance distribution recognition unit 23 reads the imaging data of the right-eye image from the imaging data storage unit 22 (step S34) and recognizes the luminance distribution of the right-eye image (step S35). Note that the luminance distribution of the right-eye image may be stored in the luminance distribution storage unit 24, as with the left-eye image.
Next, based on the result of the luminance distribution recognition by the luminance distribution recognition unit 23, the unnecessary image determination unit 25 determines whether a low-luminance region exists in only one of the luminance distributions of the imaging data of the left-eye image and the right-eye image (step S36). If only one of the images has a low-luminance region (YES in step S36), it is determined that an unnecessary image exists. In this case, the unnecessary image determination unit 25 outputs a signal including control information, display information, and the like to the display unit 26, and causes the display unit 26 to indicate that an unnecessary image has been detected (step S37). On the other hand, if a low-luminance region exists in neither or in both of the imaging data of the left-eye image and the right-eye image (NO in step S36), it is determined that no unnecessary image exists, and the process ends.
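The step S36 decision differs from the monocular case in that it fires only when the low-luminance region appears in exactly one of the two simultaneously captured images. A hypothetical sketch follows, again with images as 2D lists of luminance values; the threshold, minimum region size, and function names are illustrative assumptions.

```python
LUMA_THRESHOLD = 48     # assumed cutoff for "low luminance"
MIN_REGION_PIXELS = 4   # assumed minimum area to count as a region

def has_low_luminance_region(image, threshold=LUMA_THRESHOLD,
                             min_pixels=MIN_REGION_PIXELS):
    """True when the image contains at least `min_pixels` dark pixels."""
    count = sum(1 for row in image for value in row if value < threshold)
    return count >= min_pixels

def finger_capture_compound_eye(left_image, right_image):
    """Step S36 sketch: with two lenses exposed simultaneously, a finger
    normally covers only one lens, so an unnecessary image is assumed
    exactly when one image, and not the other, has a low-luminance region."""
    return (has_low_luminance_region(left_image)
            != has_low_luminance_region(right_image))
```

The inequality captures the NO branch of step S36 as well: when both images are dark (for example, a genuinely dark scene) or both are clean, no unnecessary image is reported.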
Note that, in the imaging apparatus of the third embodiment, as in the first embodiment, it is also possible to determine the presence or absence of an unnecessary image by recognizing the color distribution of the imaging data when movement of the imaging apparatus is detected, and detecting a color region whose position in the image does not change.
In the third embodiment, when stereoscopic shooting is performed by compound-eye 3D shooting, the imaging data of the left-eye image and the right-eye image are compared, and if only one of the images has a low-luminance region, it is determined that an unnecessary image caused by finger capture exists. This makes it possible to determine the presence or absence of an unnecessary image immediately and easily, and to determine finger capture in a short time without error.
(Fourth embodiment)
As the fourth embodiment, a first example of an image correction operation for an unnecessary image region caused by finger capture in a captured image will be described.
FIG. 8 is a block diagram showing the configuration of the main part of the imaging apparatus according to the fourth embodiment of the present invention. The imaging apparatus of the fourth embodiment includes an image processing unit 51 that performs image processing on captured images and an image storage unit 52 that stores the image data after image processing. The rest of the configuration is the same as the first embodiment shown in FIG. 1, and the description here focuses on the differences from the first embodiment. Note that the fourth embodiment can also be combined with the second or third embodiment.
When an unnecessary image exists in the imaging data stored in the imaging data storage unit 12, the image processing unit 51 corrects the captured image so as to eliminate the unnecessary image, by performing complementing processing and the like on the unnecessary image region. The image storage unit 52 stores the image data of the captured image after the image processing by the image processing unit 51. Note that the captured image after image processing can also be displayed on the display unit 16.
FIGS. 9(A) to 9(C) are schematic diagrams illustrating the operation of the imaging apparatus according to the fourth embodiment. In FIG. 9, (A) shows imaging data of a plurality of frames of a moving image, (B) shows an example of image correction using the imaging data of another frame, and (C) illustrates a method of calculating a motion vector in the images of the imaging data.
As shown in FIG. 9(A), assume that the imaging data storage unit 12 stores imaging data of a moving image in which the same color distribution region exists across a plurality of frames: the (N−1)th frame, the Nth frame, the (N+1)th frame, the (N+2)th frame, and so on. In this case, the color distribution recognition unit 13 and the unnecessary image determination unit 15 determine that an unnecessary image exists. The imaging data corresponds to image data during moving image shooting or image data during preview display.
The image processing unit 51 performs the complementing processing of the unnecessary image region using the imaging data of two or more frames. In the example shown in FIG. 9(B), the imaging data of the (N−1)th frame and the Nth frame are used, and the pixels of the unnecessary image region are complemented with the imaging data of the other frame, as shown on the right side. At this time, as shown in FIG. 9(C), the image processing unit 51 extracts feature points P1, P2, and P3 common to the images of the two frames and calculates the motion vector of the feature points. This motion vector gives the amount of motion of the area corresponding to the unnecessary image region. To complement the unnecessary image region S′ of the Nth frame, the corresponding region S of the (N−1)th frame is obtained from the unnecessary image region S′ using the inverse of the calculated motion vector of the feature points. By applying the pixel data of the corresponding region S to the unnecessary image region S′, the image of the subject portion hidden in the unnecessary image region S′ can be restored.
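The complementation described above, averaging the feature-point motion and following its inverse back into the previous frame, could be sketched as follows. This is a hypothetical pure-Python sketch with frames as 2D lists of pixel values; the rounding, the simple average over matches, and the function names are illustrative assumptions (a real implementation would use robust matching over many feature points).

```python
def average_motion_vector(matches):
    """Average displacement (dy, dx) of matched feature points, each match
    given as ((prev_y, prev_x), (cur_y, cur_x)), e.g. P1, P2, P3 in FIG. 9(C)."""
    n = len(matches)
    dy = sum(cy - py for (py, px), (cy, cx) in matches) / n
    dx = sum(cx - px for (py, px), (cy, cx) in matches) / n
    return dy, dx

def complement_region(cur_frame, prev_frame, region, motion):
    """Fill the unnecessary region S' of the current frame by following the
    inverse motion vector back to the corresponding region S of the
    previous frame (FIG. 9(B)); out-of-bounds pixels are left untouched."""
    dy, dx = motion
    for y, x in region:
        sy, sx = int(round(y - dy)), int(round(x - dx))
        if 0 <= sy < len(prev_frame) and 0 <= sx < len(prev_frame[0]):
            cur_frame[y][x] = prev_frame[sy][sx]
    return cur_frame
```

For example, if the scene has shifted one pixel to the right between frames, a pixel hidden in the Nth frame is recovered from the position one pixel to its left in the (N−1)th frame.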
Note that the imaging data used for the correction processing of the unnecessary image region may be a frame of the captured image being recorded at the time of shooting, or a frame of the preview image. The processing is also applicable not only to moving image data but also to still image data. The imaging data used for pixel complementation may be not only the frame preceding the frame to be corrected but also the following frame, or both the preceding and following frames.
According to the fourth embodiment, when finger capture occurs, the pixels of the unnecessary image region are appropriately complemented to correct the image, so that the influence of finger capture on the captured image can be eliminated. Further, since trimming of the unnecessary image region or the like is not required for the image correction, the problem of the angle of view of the captured image being narrowed can be prevented.
(Fifth embodiment)
As the fifth embodiment, a second example of an image correction operation for an unnecessary image region caused by finger capture in a captured image will be described. The configuration of the imaging apparatus of the fifth embodiment is the same as that of the fourth embodiment; this embodiment shows another example of the image correction operation.
FIGS. 10(A) and 10(B) and FIGS. 11(A) and 11(B) are schematic diagrams illustrating the operation of the imaging apparatus according to the fifth embodiment. In FIG. 10, (A) shows a state in which an unnecessary image region is being designated, and (B) shows imaging data of a plurality of frames of a moving image. FIG. 11 shows multiple examples of image correction: (A) shows an example image when complementation is completed, and (B) shows an example of cut-out processing when complementation is not completed.
As shown in FIG. 10(A), at the time of shooting, the user designates, on the finder screen of the display unit 16 of the imaging apparatus, a region where a finger is captured, or where finger capture is highly likely, as the unnecessary image region 61. Depending on the shape and size of the imaging apparatus, the layout of the operation unit, and the like, a region where finger capture occurs or is highly likely may be identifiable in advance. In the present embodiment, assuming such a case, the unnecessary image region is set in advance, and either image correction of the unnecessary image region is performed, or a captured image is generated by cutting out an image of another region without using the unnecessary image region.
FIG. 12 is a flowchart showing the operation procedure of the imaging apparatus according to the fifth embodiment. When the user operates the operation unit 18 (a touch panel in the example of FIG. 10(A)) to designate the finger capture region as the unnecessary image region, the control unit 17 transfers the region information of the designated unnecessary image region to the image processing unit 51 (step S41). The image processing unit 51 stores the region information of the unnecessary image region. Here, the operation unit 18 and the control unit 17 realize the function of an unnecessary image region setting unit.
Next, the image processing unit 51 identifies the region requiring complementation in the imaging data stored in the imaging data storage unit 12 (step S42). As shown in FIG. 10(B), assume that the imaging data storage unit 12 stores imaging data of a plurality of frames of a moving image: the (N−1)th frame, the Nth frame, the (N+1)th frame, the (N+2)th frame, and so on. The imaging data corresponds to image data during moving image shooting or image data during preview display. As in the fourth embodiment, when the unnecessary image region can be complemented using the imaging data of other frames, the image processing unit 51 performs the complementing processing. In the processing example of FIG. 12, the image processing unit 51 complements the unnecessary image region in the preview image (step S43). At this time, in order to obtain imaging data for complementation, it is also possible to prompt the user to move the imaging apparatus by a display, sound, or other notification. This makes it possible to acquire imaging data of a plurality of frames while the apparatus is in motion.
Thereafter, when the user presses the shutter button of the operation unit 18 to instruct execution of shooting, the image processing unit 51, based on an instruction from the control unit 17, generates a captured image of the subject from the imaging data stored in the imaging data storage unit 12 (step S44).
The image processing unit 51 determines whether the complementing processing in step S43 has completed normally (step S45). If the complementing processing has completed normally, the image processing unit 51 generates a captured image with the full angle of view and outputs it to the image storage unit 52 and the display unit 16 (step S46). As shown in FIG. 11(A), for the imaging data of the image 71 for which complementation is completed, the image processing unit 51 generates a captured image with the full angle of view as it is.
On the other hand, if the complementing processing has not completed normally, the image processing unit 51 generates a captured image by cutting out the finger-free region of the image so as to exclude the region including the unnecessary image region, and outputs it to the image storage unit 52 and the display unit 16 (step S47). As shown in FIG. 11(B), for the imaging data of the image 72 for which complementation is not completed, the image processing unit 51 cuts out the region excluding the unnecessary image region to generate the captured image. Two methods of cutting out the finger-free region are illustrated. In the first method, as in the region 73 indicated by the broken-line frame, the finger-free region is cut out as it is to generate the captured image. In the second method, as in the region 74 indicated by the dash-dot-line frame, trimming is performed in the horizontal and vertical directions while matching the aspect ratio of the original image, and the finger-free region is cut out to generate the captured image.
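The two cut-out methods of FIG. 11(B) could be sketched as a single crop computation. This is a hypothetical sketch that assumes the common case where the unnecessary region touches the right edge of the frame starting at a known x coordinate; the function name and parameters are illustrative, not part of the disclosed apparatus.

```python
def crop_avoiding_right_edge(width, height, finger_left_x, keep_aspect=True):
    """Sketch of the two cut-out methods in FIG. 11(B), assuming the
    unnecessary region touches the right edge starting at x = finger_left_x.

    keep_aspect=False: cut out the whole finger-free strip (region 73).
    keep_aspect=True:  additionally trim top and bottom so the crop keeps
    the original aspect ratio (region 74)."""
    new_w = finger_left_x
    if not keep_aspect:
        return (0, 0, new_w, height)      # (left, top, right, bottom)
    new_h = new_w * height // width       # same width-to-height ratio
    top = (height - new_h) // 2           # center the crop vertically
    return (0, top, new_w, top + new_h)
```

For a 400x300 frame with the finger region starting at x = 320, the aspect-preserving crop is 320x240, centered vertically, keeping the original 4:3 ratio.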
According to the fifth embodiment, by setting in advance a region where finger capture has occurred, or is highly likely to occur, as the unnecessary image region and performing image correction, the user can intentionally have the unnecessary image region corrected, and the influence of finger capture on the captured image can be eliminated. The unnecessary image region is corrected by pixel complementation, and when complementation is not possible, a captured image free of finger capture can be obtained by appropriately trimming away the unnecessary image region.
The embodiments according to the present invention include the following various aspects.
An imaging apparatus comprising: an imaging unit that captures a subject image; an imaging data storage unit that stores the imaging data of the subject acquired by the imaging unit; a motion detection unit that detects movement of the imaging apparatus; an image parameter recognition unit that recognizes a distribution of an image parameter of the imaging data; an unnecessary image determination unit that, based on the image parameters of a plurality of pieces of imaging data acquired while the imaging apparatus is in motion, determines that a region is an unnecessary image region containing an unnecessary image when substantially the same image parameter distribution exists in substantially the same region of the imaging data; and a notification unit that notifies that the unnecessary image region has been detected.

With the above configuration, for a plurality of pieces of imaging data acquired while the imaging apparatus is in motion, such as preview images before shooting, distributions of image parameters such as color distribution and luminance distribution are recognized, and when substantially the same image parameter distribution exists in substantially the same region, that region can be determined to be an unnecessary image region. Therefore, an unnecessary image region caused by finger capture can be determined in a short time, and the shooting response can be improved. Further, when the imaging apparatus is not moving, the unnecessary image region determination is not performed, so that erroneous determination during fixed-point shooting and the like can be suppressed.
上記構成により、撮影前のプレビュー画像などの撮影装置に動きのある状態で取得した複数の撮像データについて、色分布、輝度分布等の画像パラメータの分布を認識し、略同一の領域に略同一の画像パラメータの分布がある場合、不要画像領域であると判定できる。このため、短時間に指写りによる不要画像領域を判定でき、撮影レスポンスを向上できる。また、撮影装置に動きが無い場合は、不要画像領域を判定しないため、定点撮影時などでの誤判定を抑制できる。 An imaging unit that captures a subject image, an imaging data storage unit that stores imaging data of the subject acquired by the imaging unit, a motion detection unit that detects movement of the imaging device, and an image parameter distribution of the imaging data Based on the image parameter identification unit for identifying and image parameters of a plurality of imaging data acquired in a state where the imaging device is in motion, when there is a distribution of substantially the same image parameters in substantially the same region in the imaging data, An imaging apparatus comprising: an unnecessary image determination unit that determines that the region is an unnecessary image region including an unnecessary image; and a notification unit that notifies that the unnecessary image region has been detected.
With the above configuration, image parameter distributions such as color distribution and luminance distribution are recognized for a plurality of imaging data acquired in a moving state in the imaging apparatus such as a preview image before imaging, and substantially the same in substantially the same region. When there is an image parameter distribution, it can be determined that the image area is an unnecessary image area. Therefore, it is possible to determine an unnecessary image area due to finger shooting in a short time, and to improve the shooting response. Further, when there is no movement in the photographing apparatus, an unnecessary image area is not determined, so that erroneous determination at the time of fixed-point shooting can be suppressed.
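The core determination described above (a region whose image parameters remain substantially the same across frames even though the apparatus is moving) can be sketched as follows. This is an illustrative Python/NumPy reconstruction, not the patent's implementation; the block size, tolerance value, and function name are assumptions.

```python
import numpy as np

def detect_static_regions(frames, block=8, tol=10.0):
    """Flag blocks whose mean intensity is nearly identical in every frame.

    `frames` is a list of equally sized 2-D grayscale arrays captured
    while the device is known (from its motion detection unit) to be
    moving. Blocks that do not change despite the motion are candidate
    unnecessary image regions, e.g. a finger over the lens.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    gh, gw = stack.shape[1] // block, stack.shape[2] // block
    grid = np.zeros((gh, gw), dtype=bool)
    for by in range(gh):
        for bx in range(gw):
            patch = stack[:, by * block:(by + 1) * block,
                             bx * block:(bx + 1) * block]
            means = patch.mean(axis=(1, 2))          # one mean per frame
            grid[by, bx] = means.max() - means.min() < tol
    return grid
```

A full implementation would compare per-region color or luminance distributions, as the text describes, rather than simple block means.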
The above imaging apparatus, wherein the image parameter identification unit recognizes a color distribution of the imaging data as the image parameter.
With this configuration, the color distribution of the imaging data can be used to recognize a non-moving region of substantially the same color distribution across a plurality of pieces of imaging data acquired while the apparatus is moving, and thereby determine the presence or absence of an unnecessary image region.
The above imaging apparatus, wherein the unnecessary image determination unit determines, when a region of substantially the same color distribution exists in the color distributions of the plurality of pieces of imaging data, that the region is an unnecessary image region containing an unnecessary image.
With this configuration, the color distribution of the imaging data can be used to recognize a non-moving region of substantially the same color distribution across a plurality of pieces of imaging data acquired while the apparatus is moving, and thereby determine the presence or absence of an unnecessary image region.
The above imaging apparatus, wherein the image parameter identification unit recognizes a luminance distribution of the imaging data as the image parameter.
With this configuration, the luminance distribution of the imaging data can be used to recognize a non-moving low-luminance region across a plurality of pieces of imaging data acquired while the apparatus is moving, and thereby determine the presence or absence of an unnecessary image region.
The above imaging apparatus, wherein the imaging unit is capable of monocular 3D shooting, in which two images with parallax are captured sequentially by a single imaging optical system; the image parameter identification unit recognizes a luminance distribution of the imaging data as the image parameter; and the unnecessary image determination unit determines, when a low-luminance region exists at substantially the same position in the imaging data of both of the two images, that the region is an unnecessary image region containing an unnecessary image.
With this configuration, when performing monocular 3D shooting, an unnecessary image region can be determined by recognizing a low-luminance region at substantially the same position in the two pieces of parallax imaging data acquired sequentially while moving the imaging apparatus.
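This monocular 3D check can be sketched roughly as follows (hypothetical Python/NumPy; the luminance threshold and minimum-area fraction are assumed values). Because the two exposures are taken at different times while the camera moves, real scene content shifts between them, whereas an obstruction on the lens does not.

```python
import numpy as np

def dark_in_both(first, second, thresh=40, min_frac=0.02):
    """Return True if a low-luminance region occupies substantially the
    same position in both sequentially captured parallax images.

    Scene content shifts between the two exposures (the camera moved),
    so a dark area that stays put in both is likely an obstruction such
    as a finger, not a dark subject.
    """
    both_dark = (first < thresh) & (second < thresh)
    return bool(both_dark.mean() > min_frac)
```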
The above imaging apparatus, wherein the imaging unit is capable of compound-eye 3D shooting, in which two images with parallax are captured simultaneously by two imaging optical systems; the image parameter identification unit recognizes a luminance distribution of the imaging data as the image parameter; and the unnecessary image determination unit determines, when a low-luminance region exists in the imaging data of only one of the two images, that the region is an unnecessary image region containing an unnecessary image.
With this configuration, when performing compound-eye 3D shooting, an unnecessary image region can be determined by recognizing a low-luminance region in only one of the two pieces of parallax imaging data acquired simultaneously.
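The compound-eye case inverts the test: the two views are captured at the same instant, so a genuinely dark subject should appear dark in both, and a region dark in only one view points to an obstruction on that lens. A minimal sketch follows (thresholds and names are illustrative assumptions):

```python
import numpy as np

def one_sided_dark_region(left, right, thresh=40, min_frac=0.02):
    """Report which lens, if either, shows a dark region absent from the
    other simultaneously captured view.

    A dark subject darkens both views at once; a finger over one lens
    darkens only that view.
    """
    dark_l = left < thresh
    dark_r = right < thresh
    frac_l = (dark_l & ~dark_r).mean()   # dark in left view only
    frac_r = (dark_r & ~dark_l).mean()   # dark in right view only
    if frac_l > min_frac and frac_l > frac_r:
        return "left"
    if frac_r > min_frac and frac_r > frac_l:
        return "right"
    return None
```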
The above imaging apparatus, further comprising an image processing unit that performs image processing on the imaging data, wherein, for imaging data containing the unnecessary image region, the image processing unit calculates a motion vector of the subject image using other imaging data stored in the imaging data storage unit, and complements the pixels of the unnecessary image region from the other imaging data according to the motion vector.
With this configuration, complementing the unnecessary image region using other imaging data prevents the captured image from being affected even if a finger partially covers the lens. Moreover, since the unnecessary image region is not trimmed away, narrowing of the angle of view can be prevented.
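The complementing step could look roughly like this (a minimal sketch; a real implementation would estimate the motion vector by block matching between frames and handle borders and occlusion, which this illustration does not):

```python
import numpy as np

def complement_from_other_frame(img, mask, other, motion):
    """Fill masked (obstructed) pixels of `img` with pixels taken from
    `other`, offset by the estimated subject motion vector (dy, dx).

    Here the motion vector is given; estimating it from the frames is
    the part the patent attributes to the image processing unit.
    """
    dy, dx = motion
    out = img.copy()
    h, w = img.shape[:2]
    ys, xs = np.nonzero(mask)
    src_y = np.clip(ys + dy, 0, h - 1)   # where the same scene content
    src_x = np.clip(xs + dx, 0, w - 1)   # sits in the other frame
    out[ys, xs] = other[src_y, src_x]
    return out
```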
The above imaging apparatus, further comprising an unnecessary image region setting unit that sets in advance a region likely to become the unnecessary image region.
With this configuration, setting the unnecessary image region in advance allows countermeasures against finger capture, such as image correction of the unnecessary image region, to function deliberately and effectively.
The above imaging apparatus, wherein, when the pixels of imaging data containing the unnecessary image region cannot be complemented from the other imaging data, the image processing unit generates a captured image by cutting out the imaging data so as to exclude the region containing the unnecessary image region.
With this configuration, when the unnecessary image region cannot be complemented using other imaging data, a captured image can be generated by cutting out imaging data that does not contain the unnecessary image region.
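The cropping fallback can be sketched as picking the largest edge-aligned slice of the frame that avoids the flagged region (an illustrative simplification; the text does not specify how the cut-out is chosen):

```python
import numpy as np

def crop_away_region(img, mask):
    """Return the largest crop of `img`, trimmed from one edge, that
    excludes every masked pixel.

    Only the four single-edge trims are considered; a real device might
    also constrain the crop to a standard aspect ratio.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return img
    candidates = [
        img[:, :xs.min()],        # keep everything left of the region
        img[:, xs.max() + 1:],    # right of it
        img[:ys.min(), :],        # above it
        img[ys.max() + 1:, :],    # below it
    ]
    return max(candidates, key=lambda c: c.size)
```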
An imaging method for an imaging apparatus including an imaging unit that captures a subject image, the method comprising: storing imaging data of the subject acquired by the imaging unit; detecting movement of the imaging apparatus; identifying a distribution of image parameters of the imaging data; determining, based on image parameters of a plurality of pieces of imaging data acquired while the imaging apparatus is in motion, that a region is an unnecessary image region containing an unnecessary image when a distribution of substantially the same image parameters appears in substantially the same region of the imaging data; and notifying that the unnecessary image region has been detected.
A program that causes a computer to execute each step of the above imaging method.
Various modifications and applications made by those skilled in the art based on the description of the specification and well-known techniques, without departing from the spirit and scope of the present invention, are also contemplated by the present invention and fall within the scope for which protection is sought. The constituent elements of the above embodiments may also be combined arbitrarily without departing from the spirit of the invention.
This application is based on Japanese Patent Application No. 2011-101617, filed on April 28, 2011, the contents of which are incorporated herein by reference.
INDUSTRIAL APPLICABILITY
The present invention has the effect of enabling accurate determination of finger capture in a short time in an imaging apparatus, and is useful as an imaging apparatus, such as a digital camera or an electronic device with a camera function, whose housing the user grips while photographing a subject.
DESCRIPTION OF SYMBOLS
11, 21 Imaging unit
12, 22 Imaging data storage unit
13 Color distribution recognition unit
14 Motion detection unit
15, 25 Unnecessary image determination unit
16, 26 Display unit
17, 27 Control unit
18, 28 Operation unit
23 Luminance distribution recognition unit
24 Luminance distribution storage unit
51 Image processing unit
52 Image storage unit
Claims (11)
- An imaging apparatus comprising: an imaging unit that captures a subject image; an imaging data storage unit that stores imaging data of the subject acquired by the imaging unit; a motion detection unit that detects movement of the imaging apparatus; an image parameter identification unit that identifies a distribution of image parameters of the imaging data; an unnecessary image determination unit that, based on image parameters of a plurality of pieces of imaging data acquired while the imaging apparatus is in motion, determines that a region is an unnecessary image region containing an unnecessary image when a distribution of substantially the same image parameters appears in substantially the same region of the imaging data; and a notification unit that notifies that the unnecessary image region has been detected.
- The imaging apparatus according to claim 1, wherein the image parameter identification unit recognizes a color distribution of the imaging data as the image parameter.
- The imaging apparatus according to claim 2, wherein the unnecessary image determination unit determines, when a region of substantially the same color distribution exists in the color distributions of the plurality of pieces of imaging data, that the region is an unnecessary image region containing an unnecessary image.
- The imaging apparatus according to claim 1, wherein the image parameter identification unit recognizes a luminance distribution of the imaging data as the image parameter.
- The imaging apparatus according to claim 1, wherein the imaging unit is capable of monocular 3D shooting, in which two images with parallax are captured sequentially by a single imaging optical system; the image parameter identification unit recognizes a luminance distribution of the imaging data as the image parameter; and the unnecessary image determination unit determines, when a low-luminance region exists at substantially the same position in the imaging data of both of the two images, that the region is an unnecessary image region containing an unnecessary image.
- The imaging apparatus according to claim 1, wherein the imaging unit is capable of compound-eye 3D shooting, in which two images with parallax are captured simultaneously by two imaging optical systems; the image parameter identification unit recognizes a luminance distribution of the imaging data as the image parameter; and the unnecessary image determination unit determines, when a low-luminance region exists in the imaging data of only one of the two images, that the region is an unnecessary image region containing an unnecessary image.
- The imaging apparatus according to claim 1, further comprising an image processing unit that performs image processing on the imaging data, wherein, for imaging data containing the unnecessary image region, the image processing unit calculates a motion vector of the subject image using other imaging data stored in the imaging data storage unit, and complements the pixels of the unnecessary image region from the other imaging data according to the motion vector.
- The imaging apparatus according to claim 7, further comprising an unnecessary image region setting unit that sets in advance a region likely to become the unnecessary image region.
- The imaging apparatus according to claim 7 or 8, wherein, when the pixels of imaging data containing the unnecessary image region cannot be complemented from the other imaging data, the image processing unit generates a captured image by cutting out the imaging data so as to exclude the region containing the unnecessary image region.
- An imaging method for an imaging apparatus including an imaging unit that captures a subject image, the method comprising: storing imaging data of the subject acquired by the imaging unit; detecting movement of the imaging apparatus; identifying a distribution of image parameters of the imaging data; determining, based on image parameters of a plurality of pieces of imaging data acquired while the imaging apparatus is in motion, that a region is an unnecessary image region containing an unnecessary image when a distribution of substantially the same image parameters appears in substantially the same region of the imaging data; and notifying that the unnecessary image region has been detected.
- A program that causes a computer to execute each step of the imaging method according to claim 10.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011101617A JP2012235257A (en) | 2011-04-28 | 2011-04-28 | Photographing device |
JP2011-101617 | 2011-04-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012147368A1 true WO2012147368A1 (en) | 2012-11-01 |
Family
ID=47071898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/002934 WO2012147368A1 (en) | 2011-04-28 | 2012-04-27 | Image capturing apparatus |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2012235257A (en) |
WO (1) | WO2012147368A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2816797A1 (en) * | 2013-06-19 | 2014-12-24 | BlackBerry Limited | Device for detecting a camera obstruction |
US9055210B2 (en) | 2013-06-19 | 2015-06-09 | Blackberry Limited | Device for detecting a camera obstruction |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6766086B2 (en) | 2017-09-28 | 2020-10-07 | キヤノン株式会社 | Imaging device and its control method |
JP2019117375A (en) * | 2017-12-26 | 2019-07-18 | キヤノン株式会社 | Imaging apparatus, control method of the same, and program |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004040712A (en) * | 2002-07-08 | 2004-02-05 | Minolta Co Ltd | Imaging apparatus |
JP2009139427A (en) * | 2007-12-03 | 2009-06-25 | Fujifilm Corp | Camera and photographing control method |
-
2011
- 2011-04-28 JP JP2011101617A patent/JP2012235257A/en not_active Withdrawn
-
2012
- 2012-04-27 WO PCT/JP2012/002934 patent/WO2012147368A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004040712A (en) * | 2002-07-08 | 2004-02-05 | Minolta Co Ltd | Imaging apparatus |
JP2009139427A (en) * | 2007-12-03 | 2009-06-25 | Fujifilm Corp | Camera and photographing control method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2816797A1 (en) * | 2013-06-19 | 2014-12-24 | BlackBerry Limited | Device for detecting a camera obstruction |
US9055210B2 (en) | 2013-06-19 | 2015-06-09 | Blackberry Limited | Device for detecting a camera obstruction |
Also Published As
Publication number | Publication date |
---|---|
JP2012235257A (en) | 2012-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8792019B2 (en) | Video creation device, video creation method and non-transitory computer-readable storage medium | |
US7747159B2 (en) | Focusing device and image-capturing device provided with the same | |
US10021307B2 (en) | Processing apparatus for camera shake correction | |
JP2013070164A (en) | Imaging device and imaging method | |
US8576320B2 (en) | Digital photographing apparatus and method of controlling the same | |
JP2008009263A (en) | Imaging device and program therefor | |
JP2007201534A (en) | Imaging apparatus | |
KR101728042B1 (en) | Digital photographing apparatus and control method thereof | |
JP6389342B2 (en) | Imaging apparatus and control method thereof | |
US9451149B2 (en) | Processing apparatus, processing method, and program | |
JP7154758B2 (en) | Image processing device and its control method | |
CN114882543A (en) | Image processing apparatus, image processing method, and computer-readable storage medium | |
JP2011217103A (en) | Compound eye photographing method and apparatus | |
US11178337B2 (en) | Image capturing apparatus, method of controlling the same, and non-transitory computer readable storage medium for calculating a depth width for an object area based on depth information | |
CN108289170B (en) | Photographing apparatus, method and computer readable medium capable of detecting measurement area | |
WO2012147368A1 (en) | Image capturing apparatus | |
JP2020107956A (en) | Imaging apparatus, imaging method, and program | |
JP2009171428A (en) | Control method and program for digital camera apparatus and electronic zoom | |
JP2013110754A (en) | Camera device, and photographing method and program of the same | |
JP6115024B2 (en) | Imaging apparatus, imaging processing method, and program | |
JP6645711B2 (en) | Image processing apparatus, image processing method, and program | |
JP6460310B2 (en) | Imaging apparatus, image display method, and program | |
JP6346484B2 (en) | Image processing apparatus and control method thereof | |
JP2013205675A (en) | Imaging apparatus | |
JP7458769B2 (en) | Image processing device, imaging device, image processing method, program and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12776110 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12776110 Country of ref document: EP Kind code of ref document: A1 |