CN114143420B - Dual-sensor camera system and privacy protection camera method thereof - Google Patents
- Publication number
- CN114143420B · Application CN202011625515.2A
- Authority
- CN
- China
- Prior art keywords
- image
- color
- infrared
- details
- sensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/743—Bracketing, i.e. taking a series of images with varying exposure conditions
Abstract
The invention provides a dual-sensor imaging system and a privacy-preserving imaging method thereof. The system controls a color sensor and an infrared sensor to acquire a plurality of color images and a plurality of infrared images, respectively, under a plurality of exposure conditions suited to the imaging scene; adaptively selects a combination of one color image and one infrared image that reveals the details of the imaging scene; detects, in the selected color image, a feature region bearing at least one feature of an object of interest; fuses the selected color image and infrared image to generate a fused image with the imaging-scene details; and then crops the feature-region image out of the fused image and replaces it with an image not derived from the infrared image, thereby generating the scene image.
Description
Technical Field
The present disclosure relates to an image capturing system and method, and more particularly, to a dual-sensor image capturing system and privacy protection image capturing method thereof.
Background
The exposure conditions of a camera (including aperture, shutter speed, and sensitivity) affect the quality of the captured image, and many cameras automatically adjust these conditions during capture to obtain a clear and bright image. However, in high-contrast situations such as low-light or backlit scenes, automatic exposure adjustment may produce excessive noise or overexpose parts of the frame, so that good image quality cannot be achieved across all regions of the image.
In view of this, the prior art adopts a new image sensor architecture that exploits the high photosensitivity of Infrared (IR) sensors by interleaving IR pixels among the color pixels of the image sensor to assist in brightness detection. For example, FIG. 1 is a schematic diagram of conventional image acquisition using such an image sensor. Referring to FIG. 1, a conventional image sensor 10 interleaves infrared (I) pixels among its red (R), green (G), and blue (B) pixels. The image sensor 10 can thus combine the color information 12 acquired by the R, G, B color pixels with the luminance information 14 acquired by the I pixels to obtain an image 16 with moderate color and luminance.
However, under this single-image-sensor architecture, every pixel in the image sensor shares the same exposure condition, so an exposure condition suited to either the color pixels or the infrared pixels must be chosen for the whole capture. As a result, the characteristics of the two pixel types cannot both be exploited to improve the quality of the acquired image.
Disclosure of Invention
The invention provides a dual-sensor imaging system and a privacy-preserving imaging method thereof, which can generate a scene image with the details of the imaging scene without violating the privacy of the imaged subject.
The dual-sensor imaging system includes at least one color sensor, at least one infrared sensor, a storage device, and a processor coupled to the color sensor, the infrared sensor, and the storage device. The processor is configured to load and execute a computer program stored in the storage device to: control the color sensor and the infrared sensor to acquire a plurality of color images and a plurality of infrared images, respectively, under a plurality of exposure conditions suited to the imaging scene; adaptively select a combination of one color image and one infrared image that reveals the details of the imaging scene; detect, based on at least one feature of an object of interest, a feature region in the selected color image that bears the feature; fuse the selected color image and infrared image to generate a fused image with the imaging-scene details; and crop the feature-region image out of the fused image and replace it with an image not derived from the infrared image to generate the scene image.
The privacy-preserving imaging method of the invention is suitable for a dual-sensor imaging system that includes at least one color sensor, at least one infrared sensor, and a processor. The method includes the following steps: controlling the color sensor and the infrared sensor to acquire a plurality of color images and a plurality of infrared images, respectively, under a plurality of exposure conditions suited to the imaging scene; adaptively selecting a combination of one color image and one infrared image that reveals the details of the imaging scene; detecting, based on at least one feature of an object of interest, a feature region in the selected color image that bears the feature; fusing the selected color image and infrared image to generate a fused image with the imaging-scene details; and cropping the feature-region image out of the fused image and replacing it with an image not derived from the infrared image to generate the scene image.
Based on the above, the dual-sensor imaging system and privacy-preserving imaging method of the invention use independently configured color and infrared sensors to acquire a plurality of images under different exposure conditions suited to the current imaging scene, select a combination of color and infrared images that reveals the details of the scene for fusion, and replace the sensitive region therein with a non-infrared image (for example, a high dynamic range image), thereby generating a scene image with the imaging-scene details without violating the privacy of the imaged subject.
In order to make the present disclosure more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a schematic diagram of a prior art image acquisition using an image sensor;
FIG. 2 is a schematic diagram illustrating the use of an image sensor to acquire an image in accordance with an embodiment of the present invention;
FIG. 3 is a block diagram of a dual sensor camera system according to one embodiment of the present invention;
FIG. 4 is a flow chart of a privacy preserving imaging method of a dual sensor imaging system according to an embodiment of the present invention;
FIG. 5 is a flow chart of a privacy preserving imaging method of a dual sensor imaging system in accordance with an embodiment of the present invention;
- FIG. 6 is an example of a privacy preserving imaging method of a dual sensor imaging system according to an embodiment of the present invention;
- FIG. 7 is a flowchart of a privacy preserving imaging method of a dual sensor imaging system according to an embodiment of the present invention.
Symbol description
10. 20: Image sensor
12: Color information
14: Luminance information
16: Image
22. 32: Color sensor
22A, 62: color image
24. 34: Infrared sensor
24A, 64: infrared image
26. 66: Scene image
30: Dual sensor camera system
36: Storage device
38: Processor
62A: face region
R, G, B, I: pixel arrangement
S402 to S408, S502 to S510, S702 to S720: Steps
Detailed Description
The embodiments of the invention disclose a dual-sensor imaging system and a privacy-preserving imaging method that use independently configured color and infrared sensors to acquire a plurality of images under different exposure conditions, and select color and infrared images with suitable exposure conditions to fuse into a result image, thereby supplementing the texture details of the color image and improving the quality of the captured image. Because the infrared image acquired by the infrared sensor may compromise the privacy of the imaged subject (for example, it may reveal details of the subject's body beneath clothing), the imaging method of the embodiments processes such specific regions so that image quality is improved without causing this harm.
FIG. 2 is a schematic diagram illustrating image capture using an image sensor according to an embodiment of the invention. Referring to FIG. 2, the image sensor 20 of this embodiment adopts a dual-sensor architecture in which a color sensor 22 and an Infrared (IR) sensor 24 are independently configured. Exploiting the respective characteristics of the color sensor 22 and the infrared sensor 24, a plurality of images are acquired under a plurality of exposure conditions suited to the current imaging scene; a color image 22a and an infrared image 24a with appropriate exposure conditions are selected; and the infrared image 24a complements, through image fusion, the texture details lacking in the color image 22a, thereby yielding a scene image 26 with good color and texture details.
Fig. 3 is a block diagram of a dual sensor camera system according to an embodiment of the present invention. Referring to fig. 3, the dual-sensor camera system 30 of the present embodiment can be configured in an electronic device such as a mobile phone, a tablet computer, a notebook computer, a navigation device, a driving recorder, a digital camera, a digital video camera, etc. for providing a camera function. The dual sensor camera system 30 includes at least one color sensor 32, at least one infrared sensor 34, a memory device 36, and a processor 38, the functions of which are as follows:
Color sensor 32 may, for example, comprise a charge coupled device (Charge Coupled Device, CCD), a complementary metal oxide semiconductor (Complementary Metal-Oxide Semiconductor, CMOS) device, or other type of photosensitive device, and may sense light intensity to produce an image of the camera scene. The color sensor 32 is, for example, a red, green and blue (RGB) image sensor, which includes red (R), green (G) and blue (B) color pixels, and is configured to acquire color information such as red light, green light and blue light in the imaging scene, and combine the color information to generate a color image of the imaging scene.
The infrared sensor 34 includes, for example, a CCD, a CMOS device, or other kind of photosensitive device, which is capable of sensing infrared light by adjusting a wavelength sensing range of the photosensitive device. The infrared sensor 34 acquires infrared light information in the imaging scene using the above-described photosensitive device as a pixel, for example, and synthesizes the infrared light information to generate an infrared image of the imaging scene.
The storage device 36 is, for example, any type of fixed or removable Random Access Memory (RAM), Read-Only Memory (ROM), Flash memory, hard disk, or the like, or a combination thereof, for storing a computer program executable by the processor 38. In some embodiments, the storage device 36 may also store the color images acquired by the color sensor 32 and the infrared images acquired by the infrared sensor 34.
The processor 38 is, for example, a Central Processing Unit (CPU), or another programmable general-purpose or special-purpose Microprocessor, Microcontroller, Digital Signal Processor (DSP), programmable controller, Application Specific Integrated Circuit (ASIC), Programmable Logic Device (PLD), or other similar device or combination of devices; the invention is not limited in this regard. In this embodiment, the processor 38 may load a computer program from the storage device 36 to perform the privacy-preserving imaging method of the dual-sensor imaging system of the embodiments of the invention.
Fig. 4 is a flowchart of a privacy preserving imaging method of a dual sensor imaging system according to an embodiment of the present invention. Referring to fig. 3 and fig. 4, the method of the present embodiment is applicable to the dual-sensor image capturing system 30, and the following describes the detailed steps of the privacy-preserving image capturing method of the present embodiment with respect to each device of the dual-sensor image capturing system 30.
In step S402, the processor 38 controls the color sensor 32 and the infrared sensor 34 to acquire a plurality of color images and a plurality of infrared images, respectively, using a plurality of exposure conditions suitable for the current imaging scene.
In some embodiments, the processor 38, for example, controls at least one of the color sensor 32 and the infrared sensor 34 to capture at least one standard image of the imaging scene under standard exposure conditions and uses these standard images to identify the imaging scene. The standard exposure conditions include parameters such as aperture, shutter, and sensitivity determined by existing photometry techniques, and the processor 38 identifies the imaging scene from the intensity or distribution of image parameters such as hue, brightness (value), chroma, and white balance of the image acquired under those conditions, including the location of the scene (indoor or outdoor), its light source (strong or weak), its contrast (high or low), and the type (object or portrait) or state (dynamic or static) of the imaged subject. In other embodiments, the processor 38 may instead identify the imaging scene using a positioning method, or directly receive a user operation that sets the imaging scene; no limitation is imposed here.
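As a rough sketch of this kind of scene identification, a scene could be tagged from luminance statistics of a standard image alone. The function name and thresholds below are illustrative assumptions, not values from the patent:

```python
def classify_scene(pixels, low_light_mean=60, high_contrast_spread=180):
    """Tag a scene from 8-bit luminance samples of a standard-exposure image.

    The thresholds are illustrative: a dim mean suggests a low-light scene,
    a wide min-to-max spread suggests a high-contrast scene.
    """
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    tags = []
    if mean < low_light_mean:
        tags.append("low light")
    if spread > high_contrast_spread:
        tags.append("high contrast")
    return tags or ["normal"]
```

A real implementation would also weigh hue, chroma, and white-balance statistics as the text describes; this shows only the brightness/contrast part of the decision.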
In some embodiments, the processor 38 controls the color sensor 32 and the infrared sensor 34 to acquire images with exposure times shorter or longer than the exposure time of the standard exposure conditions, the difference between exposures being, for example, any Exposure Value (EV) between −3 and 3; no limitation is imposed here. By the usual convention, a difference of +1 EV corresponds to a doubling of brightness: if image A is twice as bright as image B, image A is 1 EV above image B. EV differences may also be fractional (e.g., +0.3 EV), without limitation.
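Under that convention, a bracket of exposure times around a base exposure follows directly from the EV offsets. A minimal sketch (function and parameter names are illustrative):

```python
def bracketed_exposure_times(base_time_s, ev_offsets):
    """Return one exposure time per EV offset relative to the base time.

    +1 EV doubles the exposure time (twice as bright), -1 EV halves it.
    """
    return [base_time_s * (2.0 ** ev) for ev in ev_offsets]

# Bracket from -3 EV to +3 EV around a 1/100 s base exposure.
times = bracketed_exposure_times(0.01, [-3, -1, 0, +1, +3])
```

Fractional offsets such as +0.3 EV work unchanged, since `2.0 ** ev` accepts any real exponent.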
In step S404, the processor 38 adaptively selects a combination of the color image and the infrared image that reveals the details of the imaging scene. In some embodiments, the processor 38 may, for example, control the color sensor 32 to acquire the color image with an appropriate exposure time, so that part of the color details of the imaged scene is preserved and the later fused image is ensured to reveal the color details of the scene. The appropriate exposure time is, for example, shorter than an exposure time that would overexpose the acquired image by a preset length, for example any value from 0.01 to 1 second; no limitation is imposed here.
In some embodiments, the processor 38 may, for example, select one of the color images as a reference image according to the color details of each color image, identify at least one defective area lacking texture details in the reference image, and select one of the infrared images, according to the texture details of each infrared image's content corresponding to the defective area, as the image to fuse with the reference image.
In detail, because the color sensor 32 acquires each color image under a single exposure condition, a color image may contain high-noise, overexposed, or underexposed regions (the defective areas mentioned above) when the imaging scene is low-light or high-contrast. In that case, the processor 38 can exploit the high photosensitivity of the infrared sensor 34 by selecting, from the plurality of previously acquired infrared images, one whose corresponding region retains the texture details of the defective area, and use it to complement the texture details of the defective area in the color image.
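A minimal sketch of selecting the infrared image with the most texture for a defective area follows, using pixel-value variance as a stand-in texture measure; both the variance criterion and the names are assumptions for illustration, not the patent's stated method:

```python
def texture_score(region):
    """Variance of pixel values in a crop; flat (detail-less) regions score ~0."""
    flat = [p for row in region for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

def pick_ir_for_region(ir_regions):
    """Index of the IR crop (one per candidate IR image, all covering the
    defective area) with the most texture, i.e. the highest variance."""
    scores = [texture_score(r) for r in ir_regions]
    return scores.index(max(scores))
```

In practice the crops would come from IR images captured at different EV offsets, so an overexposed defective area in the color image might be best served by a shorter-exposure IR frame.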
In step S406, the processor 38 detects, based on at least one feature of an object of interest, a feature region in the selected color image that bears the feature. The feature is, for example, a bodily feature of a person, such as the face, torso, or limbs, or a feature of a person's attire, such as a mask, clothing, or pants; no limitation is imposed here.
In some embodiments, processor 38 may identify objects of interest in the color image to detect the feature region, for example, using a machine learning model. The machine learning model is trained by using a plurality of color images including the object of interest and recognition results of the object of interest in each color image.
In detail, the machine learning model is, for example, a Convolutional Neural Network (CNN) comprising an input layer, at least one hidden layer, and an output layer, a Deep Neural Network (DNN), a Recurrent Neural Network (RNN), or another learning model; no limitation is imposed here. The processor 38, for example, sequentially feeds a plurality of color images containing the object of interest into the input layer, and the neurons of each hidden layer compute the current output from the preceding layer's output using an excitation function, such as a sigmoid or hyperbolic tangent (tanh) function, without limitation. The output layer then converts the hidden layer's output into a prediction of the object of interest using a transfer function such as the normalized exponential (softmax) function. The processor 38 compares the prediction with the known recognition result of the currently input color image and updates the weights of the hidden-layer neurons according to the comparison. For example, the processor 38 computes a loss function from the model's prediction and the true recognition result to measure whether the prediction is accurate enough, and updates the hidden-layer weights accordingly. In other embodiments, the processor 38 may also update the weights using Gradient Descent (GD) or Backpropagation (BP), without limitation. Finally, the processor 38 repeats the above steps to train the machine learning model to recognize the object of interest, and can then obtain the region occupied by the object of interest in the color image as the feature region.
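The excitation-function and weight-update machinery can be illustrated, far below CNN scale, with a single sigmoid neuron trained by gradient descent on a squared loss. Everything here is a toy stand-in for the hidden-layer updates the text describes, not the patent's actual model:

```python
import math

def sigmoid(x):
    """Sigmoid excitation function, one of the options named in the text."""
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, epochs=2000, lr=0.5):
    """Fit one neuron (weight w, bias b) to (feature, label) pairs by
    gradient descent on squared loss: the loss-then-update cycle in miniature."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(w * x + b)          # forward pass (prediction)
            grad = (p - y) * p * (1 - p)    # dLoss/d(pre-activation)
            w -= lr * grad * x              # update weight
            b -= lr * grad                  # update bias
    return w, b

# Learn a simple threshold: label 1 when the feature is positive.
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
w, b = train_neuron(data)
```

A CNN repeats this pattern across many neurons and layers (with backpropagation chaining the gradients), but the compare-predict-update loop is the same.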
In step S408, the processor 38 fuses the selected color image and infrared image to generate a fused image with the details of the imaging scene, then crops the feature-region image out of the fused image and replaces it with an image not derived from the infrared image to generate the scene image. The image not derived from the infrared image is, for example, the color image described above, or an image generated from a plurality of color images by High Dynamic Range (HDR) processing; no limitation is imposed here.
In some embodiments, the processor 38 directly fuses the entire selected color image with the entire infrared image, for example by computing an average or weighted average of the pixel values of corresponding pixels, or by other image-fusion means. In some embodiments, the processor 38 may instead use only the part of the infrared image corresponding to the defective area to fill in or replace the defective area of the color image; no limitation is imposed here.
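The weighted-average option can be sketched per pixel as below, operating on luminance planes stored as nested lists; the function name and the 50/50 default weight are illustrative choices:

```python
def fuse(color_y, ir_y, w_ir=0.5):
    """Per-pixel weighted average of the color image's luminance plane and
    the IR image; w_ir controls how much the IR texture contributes."""
    return [
        [(1 - w_ir) * c + w_ir * i for c, i in zip(crow, irow)]
        for crow, irow in zip(color_y, ir_y)
    ]
```

Setting `w_ir` higher in defective areas and lower elsewhere would approximate the region-limited variant mentioned in the same paragraph.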
In some embodiments, the processor 38 may, for example, first crop the feature region out of the selected color image and infrared image, fuse the cropped images, and then attach an image other than the infrared image onto the feature region of the fused image to generate the scene image. The amount of computation required for fusion can thereby be reduced.
In some embodiments, the processor 38 may, for example, control the color sensor 32 to acquire multiple color images with exposure times longer and shorter than that of the selected color image, perform high dynamic range processing to generate a high dynamic range image with the details of the feature region, and use this high dynamic range image to replace the cropped feature-region image in the fused image.
In detail, the processor 38 controls the color sensor 32 to acquire one color image with a shorter exposure time and one with a longer exposure time, relative to the exposure time of the selected color image, and performs HDR processing on them together with the color image acquired at the original exposure time. That is, the regions with the best color and texture details among the three color images are selected to complement the detail-less regions of the others, yielding a high dynamic range image with good detail in both bright and dark areas.
In some embodiments, the processor 38 may select the exposure times used to acquire the multiple color images according to the details of the feature region of the selected color image, so that high dynamic range processing of the acquired images yields a high dynamic range image with those feature-region details. For example, if the feature region of the selected color image lacks color and texture details due to overexposure, the processor 38 may acquire the color images with several shorter exposure times before performing high dynamic range processing, thereby generating a high dynamic range image with color and texture details. Likewise, if the feature region lacks color and texture details due to underexposure, the processor 38 acquires the color images with several longer exposure times before performing high dynamic range processing.
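One naive way to sketch the merge step, keeping, per pixel, the sample closest to mid-gray (i.e. the best-exposed value across the bracket), is shown below. Real HDR pipelines are considerably more involved (radiance estimation, tone mapping); this is only an illustration under that simplifying assumption:

```python
def hdr_merge(exposures, mid=128):
    """Merge same-size 8-bit images: per pixel, keep the candidate value
    closest to mid-gray, since near-black/near-white samples carry no detail."""
    h, w = len(exposures[0]), len(exposures[0][0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            candidates = [img[y][x] for img in exposures]
            row.append(min(candidates, key=lambda v: abs(v - mid)))
        out.append(row)
    return out
```

With a bracket of short, normal, and long exposures, overexposed feature-region pixels are recovered from the short frame and underexposed ones from the long frame, matching the behavior described above.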
In some embodiments, the processor 38 may apply Noise Reduction (NR) processing, such as two-dimensional spatial noise reduction (2D spatial denoise), to the high dynamic range image to reduce its noise and improve the image quality of the final output image.
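In its simplest form, 2D spatial noise reduction is a small averaging filter over each pixel's neighborhood. The box filter below is an illustrative stand-in, not the patent's NR algorithm:

```python
def box_denoise(img):
    """3x3 mean filter over a nested-list image; border pixels average over
    whatever neighbors exist. Smooths isolated noise at some cost in sharpness."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [
                img[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
            ]
            out[y][x] = sum(vals) / len(vals)
    return out
```

Edge-preserving filters (e.g. bilateral filtering) are the more common choice in practice, since plain averaging also blurs the texture details the system worked to recover.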
Through the above method, the dual-sensor imaging system 30 can not only generate an image containing all the details (color and texture) of the imaging scene, but also replace the feature-region image therein with an image not derived from the infrared image (e.g., a high dynamic range image), thereby improving the quality of the captured image without violating the privacy of the imaged subject.
Fig. 5 is a flowchart of a privacy preserving imaging method of a dual sensor imaging system according to an embodiment of the present invention. Referring to fig. 3 and 5, the present embodiment further illustrates a detailed implementation of the above embodiment for fusing the whole image. The method of the present embodiment is applicable to the dual-sensor imaging system 30, and the detailed steps of the privacy-preserving imaging method of the present embodiment will be described below with respect to each element of the dual-sensor imaging system 30.
In step S502, the processor 38 selects one of the color images as a reference image according to the color details of each color image. In one embodiment, the processor 38 selects, for example, the color image with the most color detail as the reference image. The amount of color detail may be judged, for example, from the size of the overexposed or underexposed areas in the color image: pixels in overexposed areas are close to white and pixels in underexposed areas are close to black, so such areas carry little color detail, and a color image containing more of them has less color detail overall. The processor 38 can thereby determine which color image has the most color detail and use it as the reference image. In other embodiments, the processor 38 may instead judge the amount of color detail from the contrast, saturation, or other image parameters of each color image, without limitation.
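Reference-image selection by penalizing near-white and near-black pixels could be sketched as follows; the thresholds and names are illustrative assumptions:

```python
def detail_score(img, low=10, high=245):
    """Fraction of pixels that are neither near-black (underexposed) nor
    near-white (overexposed); higher means more usable color detail."""
    flat = [p for row in img for p in row]
    ok = sum(1 for p in flat if low <= p <= high)
    return ok / len(flat)

def pick_reference(color_images):
    """Index of the candidate color image with the most color detail."""
    return max(range(len(color_images)),
               key=lambda i: detail_score(color_images[i]))
```

Contrast- or saturation-based scores, as mentioned in the same paragraph, could be swapped in for `detail_score` without changing the selection logic.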
In step S504, the processor 38 identifies at least one defective area lacking texture details in the reference image. The defective area is, for example, an overexposed or underexposed area as described above, or a high-noise area acquired under low light; no limitation is imposed here.
In step S506, the processor 38 selects one of the infrared images according to the texture details of each infrared image's content corresponding to the defective area. In an embodiment, the processor 38, for example, selects the infrared image whose content corresponding to the defective area has the most texture detail as the image to fuse with the reference image. The processor 38 may judge the amount of texture detail from, for example, the contrast of each infrared image or other image parameters; no limitation is imposed here.
In step S508, the processor 38 performs feature extraction on the selected color image and infrared image to obtain a plurality of features in each, and aligns the color image and the infrared image according to the correspondence between the extracted features. Note that feature extraction and matching are merely an example; in other embodiments, the processor 38 may align the color image and the infrared image by other image-alignment methods, without limitation.
In step S510, the processor 38 fuses the aligned infrared image with the reference image to generate a scene image that complements the texture details of the defective region.
In some embodiments, processor 38 performs image fusion of the infrared image with the reference image, for example, by calculating an average or weighted average of pixel values of corresponding pixels in the color image and the entire image of the infrared image.
In some embodiments, the processor 38, for example, converts the color space of the reference image from RGB to YUV, replaces the luminance (Y) component of the converted reference image with the luminance component of the infrared image, and then converts the reference image back to the RGB color space to generate the scene image. In other embodiments, the processor 38 may instead convert the reference image to YCbCr, CMYK, or another color space and convert back to the original color space after replacing the luminance component; the embodiment is not limited in this respect.
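The luminance-replacement step can be sketched with BT.601-style YUV formulas; the exact conversion constants are an assumption here, since the patent does not specify them:

```python
def rgb_to_yuv(r, g, b):
    """BT.601 luma plus scaled color-difference chroma components."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.492 * (b - y), 0.877 * (r - y)

def yuv_to_rgb(y, u, v):
    """Exact inverse of rgb_to_yuv."""
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

def replace_luma(rgb_pixel, ir_luma):
    """Keep the color pixel's chroma (U, V) but take luminance from the
    infrared image, as the text describes; applied per pixel."""
    _, u, v = rgb_to_yuv(*rgb_pixel)
    return yuv_to_rgb(ir_luma, u, v)
```

Because only Y is swapped, the scene keeps the reference image's colors while inheriting the infrared image's better-SNR texture, which is exactly the rationale given in the following paragraph.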
In detail, since the luminance component of the infrared image has a better signal-to-noise ratio (SNR) and includes more texture details of the imaging scene, the luminance component of the infrared image is directly substituted for the luminance component of the reference image, so that the texture details in the reference image can be greatly increased.
By the above method, the dual-sensor image capturing system 30 can use the infrared image to increase the texture details of the color image, especially for the region with insufficient texture details, so as to improve the image quality of the captured image.
For example, fig. 6 is an example of a privacy preserving imaging method of a dual sensor imaging system according to an embodiment of the present invention. Referring to fig. 6, in the present embodiment, by using the privacy preserving image capturing method of fig. 5, a color image 62 with the greatest color details is selected as a reference image, and for a defect area (e.g., a face area 62 a) lacking texture details in the color image 62, an infrared image 64 with the greatest texture details of the defect area is selected from a plurality of infrared images obtained under different exposure conditions, so as to perform image fusion with the color image 62, thereby obtaining a scene image 66 with more color details and texture details.
In some embodiments, processor 38 converts the color space of the reference image from, for example, an RGB color space to a YUV color space, and replaces the luminance component of the image of the defective area of the converted reference image with the luminance component of the infrared image corresponding to the defective area, and then converts the color space of the replaced reference image back to the RGB color space to generate the scene image. In other embodiments, the processor 38 may also convert the color space of the reference image to YCbCr, CMYK or other color space, and convert the color space back to the original color space after replacing the luminance component, which is not limited in the embodiment.
By the above method, the dual-sensor image capturing system 30 can use the infrared image to complement the area with insufficient texture details in the color image, thereby improving the image quality of the captured image.
It should be noted that, in some embodiments, the texture details of certain defect areas in the color image cannot be enhanced or complemented by the infrared image due to specific factors. For example, parallax between the color sensor 32 and the infrared sensor 34 may cause part of the view of the infrared sensor 34 to be occluded. In this case, embodiments of the present invention provide an alternative way to increase the texture details of the defect area so as to maximize the image quality of the captured image.
Fig. 7 is a flowchart of a privacy preserving imaging method of a dual sensor imaging system according to an embodiment of the present invention. Referring to fig. 3 and fig. 7, the method of the present embodiment is applicable to the dual-sensor image capturing system 30, and the following describes the detailed steps of the privacy-preserving image capturing method of the present embodiment with respect to each device of the dual-sensor image capturing system 30.
In step S702, at least one of the color sensor 32 and the infrared sensor 34 is controlled by the processor 38 to acquire at least one standard image of the imaging scene using standard exposure conditions, and to identify the imaging scene using these standard images. The definition of the standard exposure condition and the identification manner of the imaging scene are as described in the foregoing embodiments, and are not described herein.
In step S704, the processor 38 controls the color sensor 32 and the infrared sensor 34 to acquire a plurality of color images and a plurality of infrared images, respectively, using a plurality of exposure conditions suitable for the recognized imaging scene. In step S706, the processor 38 selects one of the color images as a reference image according to the color details of each color image. In step S708, the processor 38 detects, according to at least one feature of an object of interest, a feature region in the selected color image having the feature. In step S710, the processor 38 controls the color sensor 32 to acquire a plurality of color images with a plurality of exposure times longer or shorter than that of the selected color image, and performs high dynamic range processing to generate a high dynamic range image with details of the feature region. In step S712, the processor 38 identifies at least one defect area in the reference image lacking texture details. The implementations of the above steps are the same as or similar to steps S402 to S408 and S502 to S504 of the foregoing embodiments, so details thereof are not repeated here.
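The high dynamic range processing of step S710 is not detailed in the text. As a hypothetical stand-in, a simple per-pixel "well-exposedness" weighting in the spirit of Mertens-style exposure fusion could look like this (the function name and `sigma` value are assumptions):

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Merge differently exposed grayscale images (values in [0, 1]) by
    weighting each pixel by its closeness to mid-gray (0.5) - a simplified,
    illustrative form of high-dynamic-range processing."""
    stack = np.stack(stack)                          # (N, H, W)
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True)                # normalize per pixel
    return (w * stack).sum(axis=0)
```

Pixels that are well exposed in the short exposure dominate in highlights, and vice versa in shadows, which is how the feature-region details are preserved.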
Unlike the previous embodiment, in step S714, the processor 38 determines whether any of the plurality of infrared images includes the texture details of the defect area in the reference image. For example, the processor 38 examines the areas of the infrared images corresponding to the defect area to determine whether the view of the infrared sensor 34 is occluded, and hence whether the infrared images can be used to fill in the texture details of the defect area in the reference image.
If an infrared image includes the texture details of the defect area, in step S716, the processor 38 replaces the luminance component of the image of the defect area in the reference image with the luminance component of the infrared image corresponding to the defect area, to generate a fused image that complements the texture details of the defect area.
If no infrared image includes the texture details of the defect area, then in step S718, the processor 38 replaces the image of the defect area in the reference image with the image corresponding to the defect area in the high dynamic range image, to generate a fused image with the texture details of the defect area.
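The branch between steps S716 and S718 can be sketched on single-channel luminance as follows; the local-variance test and its threshold are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def fill_defect(ref_luma, ir_luma, hdr_luma, defect_mask, tex_thresh=1e-3):
    """If the infrared image shows texture inside the defect area (variance
    above `tex_thresh`), take its luminance there (cf. S716); otherwise fall
    back to the high-dynamic-range image (cf. S718)."""
    out = ref_luma.copy()
    if ir_luma[defect_mask].var() > tex_thresh:   # IR has usable texture
        out[defect_mask] = ir_luma[defect_mask]
    else:                                         # IR occluded: use HDR detail
        out[defect_mask] = hdr_luma[defect_mask]
    return out
```

The per-region decision described in step S720's surrounding text follows by calling this once per defect area with its own mask.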
In some embodiments, the processor 38 may combine the processing manners of steps S716 and S718 to individually select an appropriate processing manner for the defect areas in the reference image, so as to maximize the details of the reference image, thereby improving the image quality of the captured image.
Finally, in step S720, the processor 38 crops the image of the feature region in the fused image and replaces it with an image not derived from the infrared image, so as to generate the scene image.
By the above method, the dual-sensor image capturing system 30 not only can complement the texture details by using the infrared image or the high dynamic range image for the defect area with insufficient texture details in the color image, but also can replace the image of the feature area in the fused image with the high dynamic range image, thereby improving the image quality of the captured image without invading the privacy of the captured object.
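The final privacy-preserving replacement of the feature region amounts to a masked paste from a source image that contains no infrared content (e.g., the high dynamic range image). A minimal sketch, with names assumed:

```python
import numpy as np

def protect_feature_region(fused, non_ir_source, feature_mask):
    """Paste pixels from a non-infrared source image into the feature region
    (e.g., a face) of the fused image, so infrared-revealed detail never
    appears in the privacy-sensitive area."""
    out = fused.copy()
    out[feature_mask] = non_ir_source[feature_mask]  # 2-D bool mask on (H, W, 3)
    return out
```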
In summary, in the dual-sensor image capturing system and the privacy-preserving image capturing method thereof according to the embodiments of the invention, the color sensor and the infrared sensor are independently configured to capture multiple images, and images with appropriate exposure conditions are selected from them for fusion, so that the infrared images are used to fill in or increase the texture details lacking in the color images. Meanwhile, the feature regions in the fused image that may infringe the privacy of the captured subject are replaced with images not derived from the infrared images, so that a scene image with the details of the captured scene can be generated without infringing the privacy of the captured subject.
While the present disclosure has been described with reference to the exemplary embodiments, it should be understood that the invention is not limited thereto, but may be embodied with various changes and modifications without departing from the spirit or scope of the present disclosure as defined by the appended claims and their equivalents.
Claims (20)
1. A dual sensor camera system comprising:
at least one color sensor;
at least one infrared sensor;
a storage device storing a computer program; and
a processor, coupled to the at least one color sensor, the at least one infrared sensor, and the storage device, configured to load and execute the computer program to:
control the at least one color sensor and the at least one infrared sensor to acquire a plurality of color images and a plurality of infrared images, respectively, using a plurality of exposure conditions suitable for an imaging scene;
select one of the color images as a reference image according to the color details of each color image;
identify at least one defect area in the reference image lacking texture details;
judge whether each of the infrared images includes the texture details of the at least one defect area, and when at least one of the infrared images includes the texture details:
adaptively select a combination of the color images and the infrared images that reveals details of the imaging scene;
detect a feature region having the feature in the selected color image according to at least one feature of an object of interest; and
fuse the selected color image and the selected infrared image to generate a fused image with the details of the imaging scene, and crop the image of the feature region in the fused image and replace it with an image not belonging to the infrared images, to generate a scene image.
2. The dual sensor camera system of claim 1, wherein the processor is further configured to:
control the at least one color sensor to acquire a plurality of color images with a plurality of exposure times longer or shorter than that of the selected color image, perform high dynamic range processing to generate a high dynamic range image with details of the feature region, and replace the image of the feature region in the fused image with the high dynamic range image.
3. The dual sensor camera system of claim 2, wherein the processor is configured to:
select the exposure times for acquiring the plurality of color images according to the details of the feature region of each color image, so that the acquired color images are subjected to high dynamic range processing to generate the high dynamic range image with the details of the feature region.
4. The dual sensor camera system of claim 1, wherein the processor is configured to:
control at least one of the at least one color sensor and the at least one infrared sensor to acquire at least one standard image of the imaging scene using standard exposure conditions, and identify the imaging scene using the at least one standard image.
5. The dual sensor camera system of claim 1, wherein the processor is configured to:
select one of the infrared images as the infrared image to be fused with the reference image according to the texture details of the image corresponding to the at least one defect area in each infrared image.
6. The dual sensor camera system of claim 5, wherein the processor is configured to:
select the color image with the most color details as the reference image; and
select the infrared image whose image corresponding to the at least one defect area has the most texture details as the infrared image to be fused with the reference image.
7. The dual sensor camera system of claim 5, wherein the processor is configured to:
replace a luminance component of the image of the at least one defect area in the reference image with a luminance component of the image, corresponding to the at least one defect area, of the infrared image, to generate the scene image complementing the texture details of the at least one defect area.
8. The dual sensor camera system of claim 5, wherein the processor is further configured to:
control the at least one color sensor to acquire a plurality of color images with a plurality of exposure times longer or shorter than that of the selected color image, and perform high dynamic range processing to generate a high dynamic range image with details of the feature region;
judge whether each infrared image includes the texture details of the at least one defect area; and
when none of the infrared images includes the texture details, replace the image of the at least one defect area in the reference image with the image corresponding to the at least one defect area in the high dynamic range image, to generate the scene image with the texture details of the at least one defect area.
9. The dual sensor camera system of claim 1, wherein the processor is further configured to:
identify the object of interest in the color images using a machine learning model to detect the feature region, wherein the machine learning model is trained using a plurality of color images including the object of interest and an identification result of the object of interest in each of the color images.
10. The dual-sensor camera system of claim 9, wherein the machine learning model comprises an input layer, at least one hidden layer, and an output layer, and the processor is configured to:
sequentially input the color images into the input layer, calculate, by a plurality of neurons of each of the at least one hidden layer, a current output for the output of the input layer using an excitation function, and convert, by the output layer, the current output of the hidden layer into a prediction result of the object of interest;
compare the prediction result with the identification result of the currently input color image to update the weight of each neuron of the hidden layer according to the comparison result; and
repeat the above steps to train the machine learning model to identify the object of interest.
11. A privacy preserving camera method of a dual sensor camera system comprising at least one color sensor, at least one infrared sensor, and a processor, the method comprising the steps of:
controlling the at least one color sensor and the at least one infrared sensor to acquire a plurality of color images and a plurality of infrared images respectively by adopting a plurality of exposure conditions suitable for a shooting scene;
selecting one of the color images as a reference image according to the color details of each color image;
identifying at least one defect area in the reference image lacking texture details;
judging whether each of the infrared images includes the texture details of the at least one defect area, and when at least one of the infrared images includes the texture details:
adaptively selecting a combination of the color images and the infrared images that reveals details of the imaging scene;
detecting a feature region having the feature in the selected color image according to at least one feature of an object of interest; and
fusing the selected color image and the selected infrared image to generate a fused image with the details of the imaging scene, and cropping the image of the feature region in the fused image and replacing it with an image not belonging to the infrared images, to generate a scene image.
12. The method of claim 11, further comprising:
controlling the at least one color sensor to acquire a plurality of color images with a plurality of exposure times longer or shorter than that of the selected color image, performing high dynamic range processing to generate a high dynamic range image with details of the feature region, and replacing the image of the feature region in the fused image with the high dynamic range image.
13. The method of claim 11, wherein identifying the camera scene of the dual sensor camera system comprises:
controlling at least one of the at least one color sensor and the at least one infrared sensor to acquire at least one standard image of the imaging scene using standard exposure conditions, and identifying the imaging scene using the at least one standard image.
14. The method of claim 11, wherein adaptively selecting the combination of the color image and the infrared image that reveals details of the camera scene comprises:
selecting one of the infrared images as the infrared image to be fused with the reference image according to the texture details of the image corresponding to the at least one defect area in each infrared image.
15. The method of claim 14, wherein adaptively selecting the combination of the color image and the infrared image that reveals details of the camera scene comprises:
selecting the color image with the most color details as the reference image; and
selecting the infrared image whose image corresponding to the at least one defect area has the most texture details as the infrared image to be fused with the reference image.
16. The method of claim 14, wherein fusing the selected color image and the infrared image to generate the scene image with the details of the camera scene comprises:
replacing a luminance component of the image of the at least one defect area in the reference image with a luminance component of the image, corresponding to the at least one defect area, of the infrared image, to generate the scene image complementing the texture details of the at least one defect area.
17. The method of claim 14, wherein prior to the step of fusing the selected color image and the infrared image to generate a scene image with the details of the camera scene, the method further comprises:
controlling the at least one color sensor to acquire a plurality of color images with a plurality of exposure times longer or shorter than the selected color image and perform high dynamic range processing to generate a high dynamic range image with details of the feature region;
judging whether each infrared image comprises the texture details of the at least one defect area; and
when none of the infrared images includes the texture details, replacing the image of the at least one defect area in the reference image with the image corresponding to the at least one defect area in the high dynamic range image, so as to generate the scene image with the texture details of the at least one defect area.
18. The method of claim 11, wherein detecting the feature region having the feature in the selected color image according to at least one feature of an object of interest comprises:
identifying, using a machine learning model, the object of interest in the color images to detect the feature region, wherein the machine learning model is trained using a plurality of color images including the object of interest and an identification result of the object of interest in each of the color images.
19. The method of claim 18, wherein the machine learning model comprises an input layer, at least one hidden layer, and an output layer, and before the step of detecting the feature region having the feature in the selected color image according to at least one feature of an object of interest, the method further comprises:
sequentially inputting the color images into the input layer, calculating, by a plurality of neurons of each of the at least one hidden layer, a current output for the output of the input layer using an excitation function, and converting, by the output layer, the current output of the hidden layer into a prediction result of the object of interest;
comparing the prediction result with the identification result of the currently input color image to update the weight of each neuron of the hidden layer according to the comparison result; and
repeating the above steps to train the machine learning model to identify the object of interest.
20. The method of claim 11, wherein controlling the at least one color sensor to acquire a plurality of color images with a plurality of exposure times that are longer or shorter than the selected color image and perform high dynamic range processing to generate a high dynamic range image with details of the feature region comprises:
selecting the exposure times for acquiring the plurality of color images according to the details of the feature region of each color image, so that high dynamic range processing is performed on the acquired color images to generate the high dynamic range image with the details of the feature region.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063074477P | 2020-09-04 | 2020-09-04 | |
US63/074,477 | 2020-09-04 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114143420A CN114143420A (en) | 2022-03-04 |
CN114143420B true CN114143420B (en) | 2024-05-03 |
Family
ID=80438521
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011541300.2A Active CN114143418B (en) | 2020-09-04 | 2020-12-23 | Dual-sensor imaging system and imaging method thereof |
CN202011540274.1A Active CN114143443B (en) | 2020-09-04 | 2020-12-23 | Dual-sensor imaging system and imaging method thereof |
CN202011622478.XA Active CN114143419B (en) | 2020-09-04 | 2020-12-30 | Dual-sensor camera system and depth map calculation method thereof |
CN202011625552.3A Active CN114143421B (en) | 2020-09-04 | 2020-12-30 | Dual-sensor camera system and calibration method thereof |
CN202011625515.2A Active CN114143420B (en) | 2020-09-04 | 2020-12-30 | Dual-sensor camera system and privacy protection camera method thereof |
Country Status (2)
Country | Link |
---|---|
CN (5) | CN114143418B (en) |
TW (5) | TWI778476B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116091341B (en) * | 2022-12-15 | 2024-04-02 | 南京信息工程大学 | Exposure difference enhancement method and device for low-light image |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008176195A (en) * | 2007-01-22 | 2008-07-31 | Seiko Epson Corp | Projector |
CN101404060A (en) * | 2008-11-10 | 2009-04-08 | 北京航空航天大学 | Human face recognition method based on visible light and near-infrared Gabor information amalgamation |
JP2013115679A (en) * | 2011-11-30 | 2013-06-10 | Fujitsu General Ltd | Imaging apparatus |
JP2016039409A (en) * | 2014-08-05 | 2016-03-22 | キヤノン株式会社 | Image processing device, its control method, program, and storage medium |
JP2018207497A (en) * | 2018-07-19 | 2018-12-27 | キヤノン株式会社 | Image processing apparatus and image processing method, imaging apparatus, program, and storage medium |
TW201931847A (en) * | 2018-01-09 | 2019-08-01 | 呂官諭 | Image sensor of enhancing image recognition resolution and application thereof greatly reducing usage costs of image sensors and further saving mechanical costs |
CN110349117A (en) * | 2019-06-28 | 2019-10-18 | 重庆工商大学 | A kind of infrared image and visible light image fusion method, device and storage medium |
CN111383206A (en) * | 2020-06-01 | 2020-07-07 | 浙江大华技术股份有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111527743A (en) * | 2017-12-28 | 2020-08-11 | 伟摩有限责任公司 | Multiple modes of operation with extended dynamic range |
IN202021032940A (en) * | 2020-07-31 | 2020-08-28 | .Us Priyadarsan |
Family Cites Families (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004246252A (en) * | 2003-02-17 | 2004-09-02 | Takenaka Komuten Co Ltd | Apparatus and method for collecting image information |
JP2005091434A (en) * | 2003-09-12 | 2005-04-07 | Noritsu Koki Co Ltd | Position adjusting method and image reader with damage compensation function using the same |
JP4244018B2 (en) * | 2004-03-25 | 2009-03-25 | ノーリツ鋼機株式会社 | Defective pixel correction method, program, and defective pixel correction system for implementing the method |
US9307212B2 (en) * | 2007-03-05 | 2016-04-05 | Fotonation Limited | Tone mapping for low-light video frame enhancement |
US8866920B2 (en) * | 2008-05-20 | 2014-10-21 | Pelican Imaging Corporation | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
US8902321B2 (en) * | 2008-05-20 | 2014-12-02 | Pelican Imaging Corporation | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
US8749635B2 (en) * | 2009-06-03 | 2014-06-10 | Flir Systems, Inc. | Infrared camera systems and methods for dual sensor applications |
US8681216B2 (en) * | 2009-03-12 | 2014-03-25 | Hewlett-Packard Development Company, L.P. | Depth-sensing camera system |
JP5670456B2 (en) * | 2009-08-25 | 2015-02-18 | アイピーリンク・リミテッド | Reduce noise in color images |
US8478123B2 (en) * | 2011-01-25 | 2013-07-02 | Aptina Imaging Corporation | Imaging devices having arrays of image sensors and lenses with multiple aperture sizes |
US10848731B2 (en) * | 2012-02-24 | 2020-11-24 | Matterport, Inc. | Capturing and aligning panoramic image and depth data |
TW201401186A (en) * | 2012-06-25 | 2014-01-01 | Psp Security Co Ltd | System and method for identifying human face |
US20150245062A1 (en) * | 2012-09-25 | 2015-08-27 | Nippon Telegraph And Telephone Corporation | Picture encoding method, picture decoding method, picture encoding apparatus, picture decoding apparatus, picture encoding program, picture decoding program and recording medium |
KR102070778B1 (en) * | 2012-11-23 | 2020-03-02 | 엘지전자 주식회사 | Rgb-ir sensor with pixels array and apparatus and method for obtaining 3d image using the same |
WO2014100783A1 (en) * | 2012-12-21 | 2014-06-26 | Flir Systems, Inc. | Time spaced infrared image enhancement |
TWM458748U (en) * | 2012-12-26 | 2013-08-01 | Chunghwa Telecom Co Ltd | Image type depth information retrieval device |
JP6055681B2 (en) * | 2013-01-10 | 2016-12-27 | 株式会社 日立産業制御ソリューションズ | Imaging device |
CN104661008B (en) * | 2013-11-18 | 2017-10-31 | 深圳中兴力维技术有限公司 | The treating method and apparatus that color image quality is lifted under low light conditions |
CN104021548A (en) * | 2014-05-16 | 2014-09-03 | 中国科学院西安光学精密机械研究所 | Method for acquiring scene 4D information |
US9516295B2 (en) * | 2014-06-30 | 2016-12-06 | Aquifi, Inc. | Systems and methods for multi-channel imaging based on multiple exposure settings |
US10462390B2 (en) * | 2014-12-10 | 2019-10-29 | Sony Corporation | Image pickup apparatus, image pickup method, program, and image processing apparatus |
WO2016158196A1 (en) * | 2015-03-31 | 2016-10-06 | 富士フイルム株式会社 | Image pickup device and image processing method and program for image processing device |
WO2016192437A1 (en) * | 2015-06-05 | 2016-12-08 | 深圳奥比中光科技有限公司 | 3d image capturing apparatus and capturing method, and 3d image system |
JP2017011634A (en) * | 2015-06-26 | 2017-01-12 | キヤノン株式会社 | Imaging device, control method for the same and program |
CN105049829B (en) * | 2015-07-10 | 2018-12-25 | 上海图漾信息科技有限公司 | Optical filter, imaging sensor, imaging device and 3-D imaging system |
CN105069768B (en) * | 2015-08-05 | 2017-12-29 | 武汉高德红外股份有限公司 | A kind of visible images and infrared image fusion processing system and fusion method |
US10523855B2 (en) * | 2015-09-24 | 2019-12-31 | Intel Corporation | Infrared and visible light dual sensor imaging system |
TW201721269A (en) * | 2015-12-11 | 2017-06-16 | 宏碁股份有限公司 | Automatic exposure system and auto exposure method thereof |
JP2017112401A (en) * | 2015-12-14 | 2017-06-22 | ソニー株式会社 | Imaging device, apparatus and method for image processing, and program |
CN206117865U (en) * | 2016-01-16 | 2017-04-19 | 上海图漾信息科技有限公司 | Range data monitoring device |
JP2017163297A (en) * | 2016-03-09 | 2017-09-14 | キヤノン株式会社 | Imaging apparatus |
KR101747603B1 (en) * | 2016-05-11 | 2017-06-16 | 재단법인 다차원 스마트 아이티 융합시스템 연구단 | Color night vision system and operation method thereof |
CN106815826A (en) * | 2016-12-27 | 2017-06-09 | 上海交通大学 | Night vision image Color Fusion based on scene Recognition |
CN108280807A (en) * | 2017-01-05 | 2018-07-13 | 浙江舜宇智能光学技术有限公司 | Monocular depth image collecting device and system and its image processing method |
JP6974873B2 (en) * | 2017-02-06 | 2021-12-01 | フォトニック センサーズ アンド アルゴリズムス,エセ.エレ. | Devices and methods for retrieving depth information from the scene |
CN108419062B (en) * | 2017-02-10 | 2020-10-02 | 杭州海康威视数字技术股份有限公司 | Image fusion apparatus and image fusion method |
CN109474770B (en) * | 2017-09-07 | 2021-09-14 | 华为技术有限公司 | Imaging device and imaging method |
CN109712102B (en) * | 2017-10-25 | 2020-11-27 | 杭州海康威视数字技术股份有限公司 | Image fusion method and device and image acquisition equipment |
CN107846537B (en) * | 2017-11-08 | 2019-11-26 | 维沃移动通信有限公司 | A kind of CCD camera assembly, image acquiring method and mobile terminal |
CN112788249B (en) * | 2017-12-20 | 2022-12-06 | 杭州海康威视数字技术股份有限公司 | Image fusion method and device, electronic equipment and computer readable storage medium |
US10748247B2 (en) * | 2017-12-26 | 2020-08-18 | Facebook, Inc. | Computing high-resolution depth images using machine learning techniques |
CN110136183B (en) * | 2018-02-09 | 2021-05-18 | 华为技术有限公司 | Image processing method and device and camera device |
CN108965654B (en) * | 2018-02-11 | 2020-12-25 | 浙江宇视科技有限公司 | Double-spectrum camera system based on single sensor and image processing method |
CN110572583A (en) * | 2018-05-18 | 2019-12-13 | 杭州海康威视数字技术股份有限公司 | method for shooting image and camera |
CN108961195B (en) * | 2018-06-06 | 2021-03-23 | Oppo广东移动通信有限公司 | Image processing method and device, image acquisition device, readable storage medium and computer equipment |
JP7254461B2 (en) * | 2018-08-01 | 2023-04-10 | キヤノン株式会社 | IMAGING DEVICE, CONTROL METHOD, RECORDING MEDIUM, AND INFORMATION PROCESSING DEVICE |
CN109035193A (en) * | 2018-08-29 | 2018-12-18 | 成都臻识科技发展有限公司 | A kind of image processing method and imaging processing system based on binocular solid camera |
CN112602316B (en) * | 2018-09-14 | 2022-06-24 | 浙江宇视科技有限公司 | Automatic exposure method and device for double-light image, double-light image camera and machine storage medium |
JP2020052001A (en) * | 2018-09-28 | 2020-04-02 | パナソニックIpマネジメント株式会社 | Depth acquisition device, depth acquisition method, and program |
US11176694B2 (en) * | 2018-10-19 | 2021-11-16 | Samsung Electronics Co., Ltd | Method and apparatus for active depth sensing and calibration method thereof |
CN109636732B (en) * | 2018-10-24 | 2023-06-23 | 深圳先进技术研究院 | Hole repairing method of depth image and image processing device |
CN110248105B (en) * | 2018-12-10 | 2020-12-08 | 浙江大华技术股份有限公司 | Image processing method, camera and computer storage medium |
US11120536B2 (en) * | 2018-12-12 | 2021-09-14 | Samsung Electronics Co., Ltd | Apparatus and method for determining image sharpness |
CN113170048A (en) * | 2019-02-19 | 2021-07-23 | 华为技术有限公司 | Image processing device and method |
US10972649B2 (en) * | 2019-02-27 | 2021-04-06 | X Development Llc | Infrared and visible imaging system for device identification and tracking |
JP7316809B2 (en) * | 2019-03-11 | 2023-07-28 | キヤノン株式会社 | Image processing device, image processing device control method, system, and program |
CN110706178B (en) * | 2019-09-30 | 2023-01-06 | 杭州海康威视数字技术股份有限公司 | Image fusion device, method, equipment and storage medium |
CN111524175A (en) * | 2020-04-16 | 2020-08-11 | 东莞市东全智能科技有限公司 | Depth reconstruction and eye movement tracking method and system for asymmetric multiple cameras |
CN111540003A (en) * | 2020-04-27 | 2020-08-14 | 浙江光珀智能科技有限公司 | Depth image generation method and device |
CN111586314B (en) * | 2020-05-25 | 2021-09-10 | 浙江大华技术股份有限公司 | Image fusion method and device and computer storage medium |
Filing history (2020):
- 2020-12-23: CN 202011541300.2A (granted as CN114143418B); CN 202011540274.1A (CN114143443B); TW 109145614 (TWI778476B); TW 109145632 (TWI767468B)
- 2020-12-30: TW 109146764 (TWI764484B); TW 109146831 (TWI797528B); TW 109146922 (TWI767484B); CN 202011622478.XA (CN114143419B); CN 202011625552.3A (CN114143421B); CN 202011625515.2A (CN114143420B)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008176195A (en) * | 2007-01-22 | 2008-07-31 | Seiko Epson Corp | Projector |
CN101404060A (en) * | 2008-11-10 | 2009-04-08 | 北京航空航天大学 | Human face recognition method based on visible light and near-infrared Gabor information amalgamation |
JP2013115679A (en) * | 2011-11-30 | 2013-06-10 | Fujitsu General Ltd | Imaging apparatus |
JP2016039409A (en) * | 2014-08-05 | 2016-03-22 | キヤノン株式会社 | Image processing device, its control method, program, and storage medium |
CN111527743A (en) * | 2017-12-28 | 2020-08-11 | 伟摩有限责任公司 | Multiple modes of operation with extended dynamic range |
TW201931847A (en) * | 2018-01-09 | 2019-08-01 | 呂官諭 | Image sensor for enhancing image recognition resolution and application thereof, greatly reducing usage costs of image sensors and further saving mechanical costs |
JP2018207497A (en) * | 2018-07-19 | 2018-12-27 | キヤノン株式会社 | Image processing apparatus and image processing method, imaging apparatus, program, and storage medium |
CN110349117A (en) * | 2019-06-28 | 2019-10-18 | 重庆工商大学 | An infrared image and visible light image fusion method, device, and storage medium |
CN111383206A (en) * | 2020-06-01 | 2020-07-07 | 浙江大华技术股份有限公司 | Image processing method and device, electronic equipment and storage medium |
IN202021032940A (en) * | 2020-07-31 | 2020-08-28 | .Us Priyadarsan |
Non-Patent Citations (1)
Title |
---|
Concealed weapon detection technology based on color image fusion; Wang Yajie; Ran Xiaoyan; Ye Yongsheng; Shi Xiangbin; Opto-Electronic Engineering (02); full text * |
Also Published As
Publication number | Publication date |
---|---|
TWI778476B (en) | 2022-09-21 |
CN114143418B (en) | 2023-12-01 |
CN114143443B (en) | 2024-04-05 |
CN114143420A (en) | 2022-03-04 |
TW202211673A (en) | 2022-03-16 |
TW202211165A (en) | 2022-03-16 |
CN114143419B (en) | 2023-12-26 |
CN114143421A (en) | 2022-03-04 |
CN114143421B (en) | 2024-04-05 |
CN114143443A (en) | 2022-03-04 |
TW202211160A (en) | 2022-03-16 |
CN114143418A (en) | 2022-03-04 |
TW202211161A (en) | 2022-03-16 |
TWI764484B (en) | 2022-05-11 |
TWI767468B (en) | 2022-06-11 |
TWI797528B (en) | 2023-04-01 |
CN114143419A (en) | 2022-03-04 |
TWI767484B (en) | 2022-06-11 |
TW202211674A (en) | 2022-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110519489B (en) | Image acquisition method and device | |
US8811733B2 (en) | Method of chromatic classification of pixels and method of adaptive enhancement of a color image | |
US11689822B2 (en) | Dual sensor imaging system and privacy protection imaging method thereof | |
KR100983037B1 (en) | Method for controlling auto white balance | |
JP6685188B2 (en) | Imaging device, image processing device, control method thereof, and program | |
WO2007126707A1 (en) | Varying camera self-determination based on subject motion | |
US11496694B2 (en) | Dual sensor imaging system and imaging method thereof | |
US9489750B2 (en) | Exposure metering based on background pixels | |
CN110047060B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN113691795A (en) | Image processing apparatus, image processing method, and storage medium | |
CN114143420B (en) | Dual-sensor camera system and privacy protection camera method thereof | |
KR101005769B1 (en) | Auto exposure and auto white-balance method for detecting high dynamic range conditions | |
US11496660B2 (en) | Dual sensor imaging system and depth map calculation method thereof | |
US11568526B2 (en) | Dual sensor imaging system and imaging method thereof | |
JP2009063674A (en) | Imaging apparatus and flash control method | |
WO2022192015A1 (en) | Systems and methods for high dynamic range image reconstruction | |
CN114697483A (en) | Device and method for shooting under screen based on compressed sensing white balance algorithm | |
JP2013132065A (en) | Imaging apparatus and flash control method | |
WO2023070412A1 (en) | Imaging method and imaging device for wading scene | |
JP2003259231A (en) | Automatic exposure controller and program thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||